Updates from: 06/17/2022 01:08:59
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Concepts Forest Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/concepts-forest-trust.md
Previously updated : 09/15/2021 Last updated : 06/07/2021
-# How trust relationships work for resource forests in Azure Active Directory Domain Services
+# How trust relationships work for forests in Active Directory
Active Directory Domain Services (AD DS) provides security across multiple domains or forests through domain and forest trust relationships. Before authentication can occur across trusts, Windows must first check if the domain being requested by a user, computer, or service has a trust relationship with the domain of the requesting account.
To check for this trust relationship, the Windows security system computes a tru
The access control mechanisms provided by AD DS and the Windows distributed security model provide an environment for the operation of domain and forest trusts. For these trusts to work properly, every resource or computer must have a direct trust path to a DC in the domain in which it is located.
-The trust path is implemented by the Net Logon service using an authenticated remote procedure call (RPC) connection to the trusted domain authority. A secured channel also extends to other AD DS domains through interdomain trust relationships. This secured channel is used to obtain and verify security information, including security identifiers (SIDs) for users and groups.
+The trust path is implemented by the Net Logon service using an authenticated remote procedure call (RPC) connection to the trusted domain authority. A secured channel also extends to other AD DS domains through interdomain trust relationships. This secured channel is used to obtain and verify security information, including security identifiers (SIDs) for users and groups.
-For an overview of how trusts apply to Azure AD DS, see [Resource forest concepts and features][create-forest-trust].
+>[!NOTE]
+>Azure AD DS supports only one-way transitive trusts, where the managed domain trusts other domains. No other trust directions or trust types are supported.
+
+For an overview of how trusts apply to Azure AD DS, see [Forest concepts and features][create-forest-trust].
To get started using trusts in Azure AD DS, [create a managed domain that uses forest trusts][tutorial-create-advanced].
Before you can create a forest trust, you need to verify you have the correct Do
* When there is no shared root DNS server and the root DNS servers in each forest DNS namespace use DNS conditional forwarders for each DNS namespace to route queries for names in the other namespace. > [!IMPORTANT]
- > Azure AD Domain Services resource forest must use this DNS configuration. Hosting a DNS namespace other than the resource forest DNS namespace is not a feature of Azure AD Domain Services. Conditional forwarders is the proper configuration.
+ > Any Azure AD Domain Services forest with a trust must use this DNS configuration. Hosting a DNS namespace other than the forest DNS namespace is not a feature of Azure AD Domain Services. Conditional forwarders are the proper configuration.
* When there is no shared root DNS server and the root DNS servers in each forest DNS namespace use DNS secondary zones configured in each DNS namespace to route queries for names in the other namespace. To create a forest trust, you must be a member of the Domain Admins group (in the forest root domain) or the Enterprise Admins group in Active Directory. Each trust is assigned a password that the administrators in both forests must know. Members of Enterprise Admins in both forests can create the trusts in both forests at once and, in this scenario, a password that is cryptographically random is automatically generated and written for both forests.
-A managed domain resource forest supports up to five one-way outbound forest trusts to on-premises forests. The outbound forest trust for Azure AD Domain Services is created in the Azure portal. You don't manually create the trust with the managed domain itself. The incoming forest trust must be configured by a user with the privileges previously noted in the on-premises Active Directory.
+A managed domain forest supports up to five one-way outbound forest trusts to on-premises forests. The outbound forest trust for Azure AD Domain Services is created in the Azure portal. You don't manually create the trust with the managed domain itself. The incoming forest trust must be configured by a user with the privileges previously noted in the on-premises Active Directory.
## Trust processes and interactions
Administrators can use *Active Directory Domains and Trusts*, *Netdom* and *Nlte
## Next steps
-To learn more about resource forests, see [How do forest trusts work in Azure AD DS?][concepts-trust]
+To learn more about forest trusts, see [How do forest trusts work in Azure AD DS?][concepts-trust]
-To get started with creating a managed domain with a resource forest, see [Create and configure an Azure AD DS managed domain][tutorial-create-advanced]. You can then [Create an outbound forest trust to an on-premises domain][create-forest-trust].
+To get started with creating a managed domain with a forest trust, see [Create and configure an Azure AD DS managed domain][tutorial-create-advanced]. You can then [Create an outbound forest trust to an on-premises domain][create-forest-trust].
<!-- LINKS - INTERNAL --> [concepts-trust]: concepts-forest-trust.md
active-directory-domain-services Concepts Resource Forest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/concepts-resource-forest.md
Previously updated : 07/06/2020 Last updated : 06/07/2022
A *forest* is a logical construct used by Active Directory Domain Services (AD D
In an Azure AD DS managed domain, the forest only contains one domain. On-premises AD DS forests often contain many domains. In large organizations, especially after mergers and acquisitions, you may end up with multiple on-premises forests that each then contain multiple domains.
-By default, a managed domain is created as a *user* forest. This type of forest synchronizes all objects from Azure AD, including any user accounts created in an on-premises AD DS environment. User accounts can directly authenticate against the managed domain, such as to sign in to a domain-joined VM. A user forest works when the password hashes can be synchronized, and users aren't using exclusive sign-in methods like smart card authentication.
+By default, a managed domain is created as a *user* forest. This type of forest synchronizes all objects from Azure AD, including any user accounts created in an on-premises AD DS environment. User accounts can directly authenticate against the managed domain, such as to sign in to a domain-joined VM. A user forest works when the password hashes can be synchronized, and users aren't using exclusive sign-in methods like smart card authentication. In addition to users who can directly authenticate, users in other on-premises AD DS environments can also authenticate over a one-way forest trust from their on-premises AD DS to access resources in a managed domain user forest.
-In a managed domain *resource* forest, users authenticate over a one-way forest *trust* from their on-premises AD DS. With this approach, the user objects and password hashes aren't synchronized to the managed domain. The user objects and credentials only exist in the on-premises AD DS. This approach lets enterprises host resources and application platforms in Azure that depend on classic authentication such LDAPS, Kerberos, or NTLM, but any authentication issues or concerns are removed.
+In a managed domain *resource* forest, users also authenticate over a one-way forest trust from their on-premises AD DS. With this approach, the user objects and password hashes aren't synchronized to the managed domain. The user objects and credentials only exist in the on-premises AD DS. This approach lets enterprises host resources and application platforms in Azure that depend on classic authentication such as LDAPS, Kerberos, or NTLM, but any authentication issues or concerns are removed.
Resource forests also provide the capability to lift-and-shift your applications one component at a time. Many legacy on-premises applications are multi-tiered, often using a web server or front end and many database-related components. These tiers make it hard to lift-and-shift the entire application to the cloud in one step. With resource forests, you can lift your application to the cloud in a phased approach, which makes it easier to move your application to Azure. + ## What are trusts? Organizations that have more than one domain often need users to access shared resources in a different domain. Access to these shared resources requires that users in one domain authenticate to another domain. To provide these authentication and authorization capabilities between clients and servers in different domains, there must be a *trust* between the two domains.
active-directory-domain-services Tutorial Create Forest Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-forest-trust.md
Previously updated : 03/07/2022 Last updated : 06/07/2022
-#Customer intent: As an identity administrator, I want to create a one-way outbound forest from an Azure Active Directory Domain Services resource forest to an on-premises Active Directory Domain Services forest to provide authentication and resource access between forests.
+#Customer intent: As an identity administrator, I want to create a one-way outbound forest trust from an Azure Active Directory Domain Services forest to an on-premises Active Directory Domain Services forest to provide authentication and resource access between forests.
# Tutorial: Create an outbound forest trust to an on-premises domain in Azure Active Directory Domain Services
-In environments where you can't synchronize password hashes, or where users exclusively sign in using smart cards and don't know their password, you can use a resource forest in Azure Active Directory Domain Services (Azure AD DS). A resource forest uses a one-way outbound trust from Azure AD DS to one or more on-premises AD DS environments. This trust relationship lets users, applications, and computers authenticate against an on-premises domain from the Azure AD DS managed domain. In a resource forest, on-premises password hashes are never synchronized.
+You can create a one-way outbound trust from Azure AD DS to one or more on-premises AD DS environments. This trust relationship lets users, applications, and computers authenticate against an on-premises domain from the Azure AD DS managed domain. A forest trust can help users access resources in scenarios such as:
-![Diagram of forest trust from Azure AD DS to on-premises AD DS](./media/concepts-resource-forest/resource-forest-trust-relationship.png)
+- Environments where you can't synchronize password hashes, or where users exclusively sign in using smart cards and don't know their password.
+- Hybrid scenarios that still require access to on-premises domains.
+
+Trusts can be created in both resource forest and user forest domain types. The resource forest domain type automatically blocks sync for any user accounts that were synchronized to Azure AD DS from an on-premises domain. This is the safest domain type to use for trusts, as it ensures there are no UPN collisions when users authenticate. Trusts created in a user forest don't provide this safeguard, but give you more flexibility in what gets synchronized from Azure AD.
+
+![Diagram of forest trust from Azure AD DS to on-premises AD DS](./media/tutorial-create-forest-trust/forest-trust-relationship.png)
In this tutorial, you learn how to:
To complete this tutorial, you need the following resources and privileges:
* If you don't have an Azure subscription, [create an account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * An Azure Active Directory tenant associated with your subscription, either synchronized with an on-premises directory or a cloud-only directory. * If needed, [create an Azure Active Directory tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant].
-* An Azure Active Directory Domain Services managed domain created using a resource forest and configured in your Azure AD tenant.
+* An Azure Active Directory Domain Services managed domain created using a user or resource forest and configured in your Azure AD tenant.
* If needed, [create and configure an Azure Active Directory Domain Services managed domain][create-azure-ad-ds-instance-advanced]. > [!IMPORTANT]
- > Make sure that you create a managed domain using a *resource* forest. The default option creates a *user* forest. Only resource forests can create trusts to on-prem AD DS environments.
- >
- > You also need to use a minimum of *Enterprise* SKU for your managed domain. If needed, [change the SKU for a managed domain][howto-change-sku].
+ > You need to use a minimum of *Enterprise* SKU for your managed domain. If needed, [change the SKU for a managed domain][howto-change-sku].
## Sign in to the Azure portal
In this tutorial, you create and configure the outbound forest trust from Azure
## Networking considerations
-The virtual network that hosts the Azure AD DS resource forest needs network connectivity to your on-premises Active Directory. Applications and services also need network connectivity to the virtual network hosting the Azure AD DS resource forest. Network connectivity to the Azure AD DS resource forest must be always on and stable otherwise users may fail to authenticate or access resources.
+The virtual network that hosts the Azure AD DS forest needs network connectivity to your on-premises Active Directory. Applications and services also need network connectivity to the virtual network hosting the Azure AD DS forest. Network connectivity to the Azure AD DS forest must always be on and stable; otherwise, users may fail to authenticate or access resources.
Before you configure a forest trust in Azure AD DS, make sure your networking between Azure and on-premises environment meets the following requirements:
Before you configure a forest trust in Azure AD DS, make sure your networking be
* Create subnets with enough IP addresses to support your scenario. * Make sure Azure AD DS has its own subnet, don't share this virtual network subnet with application VMs and services. * Peered virtual networks are NOT transitive.
- * Azure virtual network peerings must be created between all virtual networks you want to use the Azure AD DS resource forest trust to the on-premises AD DS environment.
+ * Azure virtual network peerings must be created between all virtual networks you want to use the Azure AD DS forest trust to the on-premises AD DS environment.
* Provide continuous network connectivity to your on-premises Active Directory forest. Don't use on-demand connections.
-* Make sure there's continuous name resolution (DNS) between your Azure AD DS resource forest name and your on-premises Active Directory forest name.
+* Make sure there's continuous name resolution (DNS) between your Azure AD DS forest name and your on-premises Active Directory forest name.
## Configure DNS in the on-premises domain
To create the outbound trust for the managed domain in the Azure portal, complet
1. In the Azure portal, search for and select **Azure AD Domain Services**, then select your managed domain, such as *aaddscontoso.com*. 1. From the menu on the left-hand side of the managed domain, select **Trusts**, then choose to **+ Add** a trust.-
- > [!NOTE]
- > If you don't see the **Trusts** menu option, check under **Properties** for the *Forest type*. Only *resource* forests can create trusts. If the forest type is *User*, you can't create trusts. There's currently no way to change the forest type of a managed domain. You need to delete and recreate the managed domain as a resource forest.
- 1. Enter a display name that identifies your trust, then enter the on-premises trusted forest DNS name, such as *onprem.contoso.com*. 1. Provide the same trust password that was used to configure the inbound forest trust for the on-premises AD DS domain in the previous section. 1. Provide at least two DNS servers for the on-premises AD DS domain, such as *10.1.1.4* and *10.1.1.5*.
If the forest trust is no longer needed for an environment, complete the followi
The following common scenarios let you validate that forest trust correctly authenticates users and access to resources:
-* [On-premises user authentication from the Azure AD DS resource forest](#on-premises-user-authentication-from-the-azure-ad-ds-resource-forest)
-* [Access resources in the Azure AD DS resource forest using on-premises user](#access-resources-in-the-azure-ad-ds-resource-forest-using-on-premises-user)
+* [On-premises user authentication from the Azure AD DS forest](#on-premises-user-authentication-from-the-azure-ad-ds-forest)
+* [Access resources in the Azure AD DS forest using on-premises user](#access-resources-in-the-azure-ad-ds-forest-using-on-premises-user)
* [Enable file and printer sharing](#enable-file-and-printer-sharing) * [Create a security group and add members](#create-a-security-group-and-add-members) * [Create a file share for cross-forest access](#create-a-file-share-for-cross-forest-access) * [Validate cross-forest authentication to a resource](#validate-cross-forest-authentication-to-a-resource)
-### On-premises user authentication from the Azure AD DS resource forest
+### On-premises user authentication from the Azure AD DS forest
You should have a Windows Server virtual machine joined to the managed domain. Use this virtual machine to test that your on-premises user can authenticate on a virtual machine. If needed, [create a Windows VM and join it to the managed domain][join-windows-vm].
-1. Connect to the Windows Server VM joined to the Azure AD DS resource forest using [Azure Bastion](../bastion/bastion-overview.md) and your Azure AD DS administrator credentials.
+1. Connect to the Windows Server VM joined to the Azure AD DS forest using [Azure Bastion](../bastion/bastion-overview.md) and your Azure AD DS administrator credentials.
1. Open a command prompt and use the `whoami` command to show the distinguished name of the currently authenticated user: ```console
You should have Windows Server virtual machine joined to the managed domain. Use
1. If the authentication is successful, a new command prompt opens. The title of the new command prompt includes `running as userUpn@trusteddomain.com`. 1. Use `whoami /fqdn` in the new command prompt to view the distinguished name of the authenticated user from the on-premises Active Directory.
-### Access resources in the Azure AD DS resource forest using on-premises user
+### Access resources in the Azure AD DS forest using on-premises user
-Using the Windows Server VM joined to the Azure AD DS resource forest, you can test the scenario where users can access resources hosted in the resource forest when they authenticate from computers in the on-premises domain with users from the on-premises domain. The following examples show you how to create and test various common scenarios.
+Using the Windows Server VM joined to the Azure AD DS forest, you can test the scenario where users from the on-premises domain, authenticating from computers in the on-premises domain, can access resources hosted in the forest. The following examples show you how to create and test various common scenarios.
#### Enable file and printer sharing
-1. Connect to the Windows Server VM joined to the Azure AD DS resource forest using [Azure Bastion](../bastion/bastion-overview.md) and your Azure AD DS administrator credentials.
+1. Connect to the Windows Server VM joined to the Azure AD DS forest using [Azure Bastion](../bastion/bastion-overview.md) and your Azure AD DS administrator credentials.
1. Open **Windows Settings**, then search for and select **Network and Sharing Center**. 1. Choose the option for **Change advanced sharing** settings.
Using the Windows Server VM joined to the Azure AD DS resource forest, you can t
#### Create a file share for cross-forest access
-1. On the Windows Server VM joined to the Azure AD DS resource forest, create a folder and provide name such as *CrossForestShare*.
+1. On the Windows Server VM joined to the Azure AD DS forest, create a folder and provide a name such as *CrossForestShare*.
1. Right-select the folder and choose **Properties**. 1. Select the **Security** tab, then choose **Edit**. 1. In the *Permissions for CrossForestShare* dialog box, select **Add**.
In this tutorial, you learned how to:
> * Create a one-way outbound forest trust in Azure AD DS > * Test and validate the trust relationship for authentication and resource access
-For more conceptual information about forest types in Azure AD DS, see [What are resource forests?][concepts-forest] and [How do forest trusts work in Azure AD DS?][concepts-trust]
+For more conceptual information about forest types in Azure AD DS, see [What are resource forests?][concepts-forest] and [How do forest trusts work in Azure AD DS?][concepts-trust].
<!-- INTERNAL LINKS --> [concepts-forest]: concepts-resource-forest.md
active-directory Sap Successfactors Integration Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/sap-successfactors-integration-reference.md
employmentNav/jobInfoNav/employmentTypeNav,employmentNav/jobInfoNav/employeeClas
After full sync, Azure AD provisioning service maintains `LastExecutionTimestamp` and uses it to create delta queries for retrieving incremental changes. The timestamp attributes present in each SuccessFactors entity, such as `lastModifiedDateTime`, `startDate`, `endDate`, and `latestTerminationDate`, are evaluated to see if the change falls between the `LastExecutionTimestamp` and `CurrentExecutionTime`. If yes, then the entry change is considered to be effective and processed for sync.
+Here is the OData API request template that Azure AD uses to query SuccessFactors for incremental changes. You can update the variables `SuccessFactorsAPIEndpoint`, `LastExecutionTimestamp`, and `CurrentExecutionTime` in the request template below and use a tool like [Postman](https://www.postman.com/downloads/) to check what data is returned. Alternatively, you can retrieve the actual request payload from SuccessFactors by [enabling OData API Audit logs](#enabling-odata-api-audit-logs-in-successfactors).
+
+```
+https://[SuccessFactorsAPIEndpoint]/odata/v2/PerPerson/$count?$format=json&$filter=(personEmpTerminationInfoNav/activeEmploymentsCount ne null) and
+((lastModifiedDateTime ge datetimeoffset'<LastExecutionTimestamp>' and lastModifiedDateTime le datetimeoffset'<CurrentExecutionTime>') or
+(personalInfoNav/startDate ge datetimeoffset'<LastExecutionTimestamp>' and personalInfoNav/startDate le datetimeoffset'<CurrentExecutionTime>') or
+((personalInfoNav/lastModifiedDateTime ge datetimeoffset'<LastExecutionTimestamp>' and personalInfoNav/lastModifiedDateTime le datetimeoffset'<CurrentExecutionTime>') and (personalInfoNav/startDate le datetimeoffset'<CurrentExecutionTime>' and (personalInfoNav/endDate ge datetimeoffset'<CurrentExecutionTime>' or personalInfoNav/endDate eq null))) or
+(employmentNav/startDate ge datetimeoffset'<LastExecutionTimestamp>' and employmentNav/startDate le datetimeoffset'<CurrentExecutionTime>') or
+((employmentNav/lastModifiedDateTime ge datetimeoffset'<LastExecutionTimestamp>' and employmentNav/lastModifiedDateTime le datetimeoffset'<CurrentExecutionTime>') and (employmentNav/startDate le datetimeoffset'<CurrentExecutionTime>' and (employmentNav/endDate ge datetimeoffset'<CurrentExecutionTime>' or employmentNav/endDate eq null))) or
+(employmentNav/jobInfoNav/startDate ge datetimeoffset'<LastExecutionTimestamp>' and employmentNav/jobInfoNav/startDate le datetimeoffset'<CurrentExecutionTime>') or
+((employmentNav/jobInfoNav/lastModifiedDateTime ge datetimeoffset'<LastExecutionTimestamp>' and employmentNav/jobInfoNav/lastModifiedDateTime le datetimeoffset'<CurrentExecutionTime>') and (employmentNav/jobInfoNav/startDate le datetimeoffset'<CurrentExecutionTime>' and (employmentNav/jobInfoNav/endDate ge datetimeoffset'<CurrentExecutionTime>' or employmentNav/jobInfoNav/endDate eq null))) or
+(phoneNav/lastModifiedDateTime ge datetimeoffset'<LastExecutionTimestamp>' and phoneNav/lastModifiedDateTime le datetimeoffset'<CurrentExecutionTime>') or
+(emailNav/lastModifiedDateTime ge datetimeoffset'<LastExecutionTimestamp>' and emailNav/lastModifiedDateTime le datetimeoffset'<CurrentExecutionTime>') or
+(personEmpTerminationInfoNav/latestTerminationDate ge datetimeoffset'<previousDayDateStartTime24hrs>' and personEmpTerminationInfoNav/latestTerminationDate le datetimeoffset'<previousDayDateTime24hrs>') or
+(employmentNav/userNav/lastModifiedDateTime ge datetimeoffset'<LastExecutionTimestamp>' and employmentNav/userNav/lastModifiedDateTime le datetimeoffset'<CurrentExecutionTime>'))
+&$expand=employmentNav/userNav,employmentNav/jobInfoNav,personalInfoNav,personEmpTerminationInfoNav,phoneNav,emailNav,employmentNav/userNav/manager/empInfo,employmentNav/jobInfoNav/companyNav,employmentNav/jobInfoNav/departmentNav,employmentNav/jobInfoNav/locationNav,employmentNav/jobInfoNav/locationNav/addressNavDEFLT,employmentNav/jobInfoNav/locationNav/addressNavDEFLT/stateNav&customPageSize=100
+```
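The delta-query window logic described above can be sketched as follows. This is a minimal illustration, not connector code: a record change is processed for sync only when its timestamp falls between `LastExecutionTimestamp` and `CurrentExecutionTime`.

```python
from datetime import datetime, timezone

def is_effective(change_ts: datetime, last_exec: datetime, current_exec: datetime) -> bool:
    """A change is picked up by the delta query only when its timestamp
    falls inside the [LastExecutionTimestamp, CurrentExecutionTime] window."""
    return last_exec <= change_ts <= current_exec

last_exec = datetime(2022, 6, 1, tzinfo=timezone.utc)      # LastExecutionTimestamp
current_exec = datetime(2022, 6, 7, tzinfo=timezone.utc)   # CurrentExecutionTime

# A change made inside the window is synced; one before the window is not.
print(is_effective(datetime(2022, 6, 3, tzinfo=timezone.utc), last_exec, current_exec))   # True
print(is_effective(datetime(2022, 5, 20, tzinfo=timezone.utc), last_exec, current_exec))  # False
```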
+ ## Reading attribute data When Azure AD provisioning service queries SuccessFactors, it retrieves a JSON result set. The JSON result set includes a number of attributes stored in Employee Central. By default, the provisioning schema is configured to retrieve only a subset of those attributes.
By using JSONPath transformation, you can customize the behavior of the Azure AD
This section covers how you can customize the provisioning app for the following HR scenarios: * [Retrieving additional attributes](#retrieving-additional-attributes) * [Retrieving custom attributes](#retrieving-custom-attributes)
+* [Mapping employment status to account status](#mapping-employment-status-to-account-status)
* [Handling worker conversion and rehire scenario](#handling-worker-conversion-and-rehire-scenario)
+* [Retrieving current active employment record](#retrieving-current-active-employment-record)
* [Handling global assignment scenario](#handling-global-assignment-scenario) * [Handling concurrent jobs scenario](#handling-concurrent-jobs-scenario) * [Retrieving position details](#retrieving-position-details) * [Provisioning users in the Onboarding module](#provisioning-users-in-the-onboarding-module)
+* [Enabling OData API Audit logs in SuccessFactors](#enabling-odata-api-audit-logs-in-successfactors)
### Retrieving additional attributes
Extending this scenario:
* If you want to map *custom35* attribute from the *User* entity, then use the JSONPath `$.employmentNav.results[0].userNav.custom35` * If you want to map *customString35* attribute from the *EmpEmployment* entity, then use the JSONPath `$.employmentNav.results[0].customString35`
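To make the two JSONPath expressions above concrete, here is what they resolve to against a hypothetical, heavily trimmed SuccessFactors JSON result set (the attribute values are invented for illustration only):

```python
# Hypothetical, trimmed SuccessFactors result set; values are examples only.
result = {
    "employmentNav": {
        "results": [
            {
                "customString35": "contract-A",            # EmpEmployment entity attribute
                "userNav": {"custom35": "cost-center-7"},  # User entity attribute
            }
        ]
    }
}

# $.employmentNav.results[0].userNav.custom35  (User entity)
print(result["employmentNav"]["results"][0]["userNav"]["custom35"])  # cost-center-7

# $.employmentNav.results[0].customString35  (EmpEmployment entity)
print(result["employmentNav"]["results"][0]["customString35"])       # contract-A
```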
+### Mapping employment status to account status
+
+By default, the Azure AD SuccessFactors connector uses the `activeEmploymentsCount` field of the `PersonEmpTerminationInfo` object to set account status. There is a known SAP SuccessFactors issue, documented in [knowledge base article 3047486](https://userapps.support.sap.com/sap/support/knowledge/en/3047486), where this can at times disable the account of a terminated worker one day early, on the last day of work.
+
+If you are running into this issue or prefer mapping employment status to account status, you can update the mapping to expand the `emplStatus` field and use the employment status code present in the field `emplStatus.externalCode`. Based on [SAP support note 2505526](https://launchpad.support.sap.com/#/notes/2505526), here is a list of employment status codes that you can retrieve in the provisioning app.
+* A = Active
+* D = Dormant
+* U = Unpaid Leave
+* P = Paid Leave
+* S = Suspended
+* F = Furlough
+* O = Discarded
+* R = Retired
+* T = Terminated
+
+Use the steps below to update your mapping to retrieve these codes.
+
+1. Open the attribute-mapping blade of your SuccessFactors provisioning app.
+1. Under **Show advanced options**, click on **Edit SuccessFactors attribute list**.
+1. Find the attribute `emplStatus` and update the JSONPath to `$.employmentNav.results[0].jobInfoNav.results[0].emplStatusNav.externalCode`. This enables the connector to retrieve the employment status codes listed above.
+1. Save the changes.
+1. In the attribute mapping blade, update the expression mapping for the account status flag.
+
+ | Provisioning Job | Account status attribute | Mapping expression |
+ | - | | |
+ | SuccessFactors to Active Directory User Provisioning | accountDisabled | Switch(\[emplStatus\], "True", "A", "False", "U", "False", "P", "False") |
+ | SuccessFactors to Azure AD User Provisioning | accountEnabled | Switch(\[emplStatus\], "False", "A", "True", "U", "True", "P", "True") |
+
+1. Save the changes.
+1. Test the configuration using [provision on demand](provision-on-demand.md).
+1. After confirming that sync works as expected, restart the provisioning job.
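The `Switch` expressions in the table above take the source value, a default, and then key/value pairs. As a sketch of the same logic (this Python function is only an illustration, not the provisioning expression engine), the *accountDisabled* mapping keeps the account enabled for active (A), unpaid leave (U), and paid leave (P), and disables it for any other status code:

```python
def account_disabled(empl_status: str) -> str:
    # Mirrors Switch([emplStatus], "True", "A", "False", "U", "False", "P", "False"):
    # A (Active), U (Unpaid Leave), and P (Paid Leave) keep the AD account enabled;
    # any other employment status code falls through to the default, "True" (disabled).
    mapping = {"A": "False", "U": "False", "P": "False"}
    return mapping.get(empl_status, "True")

print(account_disabled("A"))  # False -> active worker stays enabled
print(account_disabled("T"))  # True  -> terminated worker is disabled
```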
++ ### Handling worker conversion and rehire scenario **About worker conversion scenario:** Worker conversion is the process of converting an existing full-time employee to a contractor or a contractor to full-time. In this scenario, Employee Central adds a new *EmpEmployment* entity along with a new *User* entity for the same *Person* entity. The *User* entity nested under the previous *EmpEmployment* entity is set to null.
To handle both these scenarios so that the new employment data shows up when a c
1. The above process updates all JSONPath expressions as follows: * Old JSONPath: `$.employmentNav.results[0].jobInfoNav.results[0].departmentNav.name_localized` * New JSONPath: `$.employmentNav.results[-1:].jobInfoNav.results[0].departmentNav.name_localized`
-1. Restart provisioning.
+1. Test the configuration using [provision on demand](provision-on-demand.md).
+1. After confirming that sync works as expected, restart the provisioning job.
+
+> [!NOTE]
+> The approach described above only works if SAP SuccessFactors returns the employment objects in ascending order, where the latest employment record is always the last record in the *employmentNav* results array. The order in which multiple employment records are returned is not guaranteed by SuccessFactors. If your SuccessFactors instance has multiple employment records corresponding to a worker and you always want to retrieve attributes associated with the active employment record, use steps described in the next section.
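The difference between the `[0]` and `[-1:]` indexes above can be shown with plain Python list indexing on a hypothetical two-record *employmentNav* array (dates invented for illustration), assuming records arrive oldest first:

```python
# Hypothetical employmentNav.results array, oldest employment first --
# the ordering SuccessFactors usually, but not always, returns.
results = [
    {"startDate": "2015-01-01", "status": "Terminated"},  # previous employment
    {"startDate": "2022-03-01", "status": "Active"},      # rehire / conversion record
]

print(results[0]["startDate"])       # 2015-01-01 -> [0] picks the first (oldest) record
print(results[-1:][0]["startDate"])  # 2022-03-01 -> [-1:] picks the last (latest) record
```

If the ordering assumption doesn't hold for your instance, neither index is reliable, which is why the next section filters on the employment status instead.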
+
+### Retrieving current active employment record
+
+Using the JSONPath root of `$.employmentNav.results[0]` or `$.employmentNav.results[-1:]` to fetch employment records works in most scenarios and keeps the configuration simple. However, depending on how your SuccessFactors instance is configured, there may be a need to update this configuration to ensure that the connector always fetches the latest active employment record.
+
+This section describes how you can update the JSONPath settings to reliably retrieve the current active employment record of the user. It also handles worker conversion and rehire scenarios.
+
+1. Open the attribute-mapping blade of your SuccessFactors provisioning app.
+1. Scroll down and click **Show advanced options**.
+1. Click on the link **Review your schema here** to open the schema editor.
+1. Click on the **Download** link to save a copy of the schema before editing.
+1. In the schema editor, press Ctrl-H to open the find-and-replace control.
+1. Perform the following find-replace operations. Ensure there is no leading or trailing space when performing them. If you are using the `[-1:]` index instead of `[0]`, update the *string-to-find* field accordingly.
+
+ | **String to find** | **String to use for replace** | **Purpose** |
+ | | -- | |
+ | $.employmentNav.results\[0\].<br>jobInfoNav.results\[0\].emplStatus | $.employmentNav..jobInfoNav..results\[?(@.emplStatusNav.externalCode == 'A' \|\| @.emplStatusNav.externalCode == 'U' \|\| @.emplStatusNav.externalCode == 'P' )\].emplStatusNav.externalCode | With this find-replace, we are adding the ability to expand emplStatusNav OData object. |
+ | $.employmentNav.results\[0\].<br>jobInfoNav.results\[0\] | $.employmentNav..jobInfoNav..results\[?(@.emplStatusNav.externalCode == 'A' \|\| @.emplStatusNav.externalCode == 'U' \|\| @.emplStatusNav.externalCode == 'P')\] | With this find-replace, we instruct the connector to always retrieve attributes associated with the active SuccessFactors EmpJobInfo record. Attributes associated with terminated/inactive records in SuccessFactors will be ignored. |
+ | $.employmentNav.results\[0\] | $.employmentNav..results\[?(@.jobInfoNav..results\[?(@.emplStatusNav.externalCode == 'A' \|\| @.emplStatusNav.externalCode == 'U' \|\| @.emplStatusNav.externalCode == 'P')\])\] | With this find-replace, we instruct the connector to always retrieve attributes associated with the active SuccessFactors Employment record. Attributes associated with terminated/inactive records in SuccessFactors will be ignored. |
+
+1. Save the schema.
+1. The above process updates all JSONPath expressions.
+1. For pre-hire processing to work, the JSONPath associated with the `startDate` attribute must use either the `[0]` or `[-1:]` index. Under **Show advanced options**, click **Edit SuccessFactors attribute list**. Find the attribute `startDate` and set it to the value `$.employmentNav.results[-1:].startDate`.
+1. Save the schema.
+1. To ensure that terminations are processed as expected, you can use one of the following settings in the attribute mapping section.
+
+ | Provisioning Job | Account status attribute | Expression to use if account status is based on "activeEmploymentsCount" | Expression to use if account status is based on "emplStatus" value |
+ | -- | | -- | - |
+ | SuccessFactors to Active Directory User Provisioning | accountDisabled | Switch(\[activeEmploymentsCount\], "False", "0", "True") | Switch(\[emplStatus\], "True", "A", "False", "U", "False", "P", "False") |
+ | SuccessFactors to Azure AD User Provisioning | accountEnabled | Switch(\[activeEmploymentsCount\], "True", "0", "False") | Switch(\[emplStatus\], "False", "A", "True", "U", "True", "P", "True") |
+
+1. Save your changes.
+1. Test the configuration using [provision on demand](provision-on-demand.md).
+1. After confirming that sync works as expected, restart the provisioning job.
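The replacement strings above use standard JSONPath filter syntax. As a sanity check, here is a small Python sketch (plain list comprehensions, not the connector's actual code) of what `[?(@.emplStatusNav.externalCode == 'A' || @.emplStatusNav.externalCode == 'U' || @.emplStatusNav.externalCode == 'P')]` keeps, using an invented payload with one terminated and one active employment:

```python
ACTIVE_CODES = {"A", "U", "P"}  # status codes the filter treats as active

# Invented, trimmed-down employment records for illustration only.
employments = [
    {"jobInfoNav": {"results": [
        {"emplStatusNav": {"externalCode": "T"}, "department": "Sales"}]}},
    {"jobInfoNav": {"results": [
        {"emplStatusNav": {"externalCode": "A"}, "department": "Marketing"}]}},
]

def active_job_infos(employments):
    """Mimic the JSONPath filter: keep only jobInfo records whose
    emplStatusNav.externalCode is 'A', 'U' or 'P'."""
    return [
        job
        for emp in employments
        for job in emp["jobInfoNav"]["results"]
        if job["emplStatusNav"]["externalCode"] in ACTIVE_CODES
    ]

active = active_job_infos(employments)
```

Only the record whose status code is in the active set survives, which is why attributes tied to terminated employments are ignored after the schema change.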
### Handling global assignment scenario
To fetch attributes belonging to the standard assignment and global assignment u
* `IIF(IsPresent([globalAssignmentDepartment]),[globalAssignmentDepartment],[department])`
1. Save the mapping.
-1. Restart provisioning.
+1. Test the configuration using [provision on demand](provision-on-demand.md).
+1. After confirming that sync works as expected, restart the provisioning job.
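The `IIF(IsPresent(...))` mapping expression behaves like a simple presence-based fallback. A rough Python analogue (an illustration only, with invented sample values):

```python
def iif_is_present(global_assignment_department, department):
    """Rough analogue of
    IIF(IsPresent([globalAssignmentDepartment]), [globalAssignmentDepartment], [department]):
    prefer the global-assignment value when it is present."""
    if global_assignment_department:  # IsPresent: a non-empty value exists
        return global_assignment_department
    return department

preferred = iif_is_present("Global Ops", "Sales")  # global assignment wins
fallback = iif_is_present(None, "Sales")          # standard assignment used
```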
+ ### Handling concurrent jobs scenario
To fetch attributes belonging to both jobs, use the steps listed below:
1. Let's say you want to pull the department associated with job 1 and job 2. The pre-defined attribute *department* already fetches the value of department for the first job. You can define a new attribute called *secondJobDepartment* and set the JSONPath expression to `$.employmentNav.results[1].jobInfoNav.results[0].departmentNav.name_localized`
1. You can now either flow both department values to Active Directory attributes or selectively flow a value using expression mapping.
1. Save the mapping.
-1. Restart provisioning.
+1. Test the configuration using [provision on demand](provision-on-demand.md).
+1. After confirming that sync works as expected, restart the provisioning job.
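The only difference between the two JSONPath expressions is the employment index. Sketched in plain Python against an invented two-job payload (field values are illustrative, not real data):

```python
# Invented worker payload holding two concurrent jobs (trimmed for illustration).
worker = {"employmentNav": {"results": [
    {"jobInfoNav": {"results": [{"departmentNav": {"name_localized": "Engineering"}}]}},
    {"jobInfoNav": {"results": [{"departmentNav": {"name_localized": "Support"}}]}},
]}}

def department_of(job_index):
    # $.employmentNav.results[<job_index>].jobInfoNav.results[0].departmentNav.name_localized
    emp = worker["employmentNav"]["results"][job_index]
    return emp["jobInfoNav"]["results"][0]["departmentNav"]["name_localized"]

first_job_department = department_of(0)   # pre-defined *department* attribute
second_job_department = department_of(1)  # custom *secondJobDepartment* attribute
```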
+
### Retrieving position details
If you want to exclude processing of pre-hires in the Onboarding module, update
1. Edit the Source Object scope to apply a scoping filter `userStatus NOT EQUALS active_external`
1. Save the mapping and validate that the scoping filter works using provisioning on demand.
+### Enabling OData API Audit logs in SuccessFactors
+
+The Azure AD SuccessFactors connector uses SuccessFactors OData API to retrieve changes and provision users. If you observe issues with the provisioning service and want to confirm what data was retrieved from SuccessFactors, you can enable OData API Audit logs in SuccessFactors by following steps documented in [SAP support note 2680837](https://userapps.support.sap.com/sap/support/knowledge/en/2680837). From these audit logs you can retrieve the request payload sent by Azure AD. To troubleshoot, you can copy this request payload in a tool like "Postman", set it up to use the same API user that is used by the connector and see if it returns the desired changes from SuccessFactors.
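If you prefer a script to Postman, replaying the request can be as simple as rebuilding the logged OData query. The host, entity, and parameter values below are placeholders, not the connector's actual request; substitute the payload captured in the audit log:

```python
from urllib.parse import urlencode

# Hypothetical SuccessFactors OData query; replace base and params with
# the request payload taken from the OData API Audit log.
base = "https://api4.successfactors.com/odata/v2/PerPerson"
params = {
    "$format": "json",
    "$filter": "personIdExternal eq '1001'",
    "$expand": "employmentNav/jobInfoNav",
}
url = base + "?" + urlencode(params)  # urlencode percent-encodes '$' as %24
```

Send `url` with the same API user credentials the connector uses (for example with `urllib.request` plus Basic auth) and compare the response against what the provisioning service retrieved.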
+ ## Writeback scenarios
-This section covers different write-back scenarios. It recommends configuration approaches based on how email and phone number is setup in SuccessFactors.
+This section covers different write-back scenarios. It recommends configuration approaches based on how email and phone number is set up in SuccessFactors.
### Supported scenarios for phone and email write-back
active-directory Concept Certificate Based Authentication Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-limitations.md
Previously updated : 02/18/2022 Last updated : 06/07/2022
The following scenarios aren't supported:
- Public Key Infrastructure for creating client certificates. Customers need to configure their own Public Key Infrastructure (PKI) and provision certificates to their users and devices.
- Certificate Authority hints aren't supported, so the list of certificates that appears for users in the UI isn't scoped.
-- Windows login using smart cards on Windows devices.
- Only one CRL Distribution Point (CDP) for a trusted CA is supported.
- The CDP can be only HTTP URLs. We don't support Online Certificate Status Protocol (OCSP), or Lightweight Directory Access Protocol (LDAP) URLs.
- Configuring other certificate-to-user account bindings, such as using the **subject field**, or **keyid** and **issuer**, aren't available in this release.
active-directory Concept Certificate Based Authentication Mobile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-mobile.md
+
+ Title: Azure Active Directory certificate-based authentication on mobile devices (Android and iOS) - Azure Active Directory
+description: Learn about Azure Active Directory certificate-based authentication on mobile devices (Android and iOS)
+Last updated : 06/07/2022
+# Azure Active Directory certificate-based authentication on mobile devices (Android and iOS) (Preview)
+
+Android and iOS devices can use certificate-based authentication (CBA) to authenticate to Azure Active Directory using a client certificate on their device when connecting to:
+
+- Office mobile applications such as Microsoft Outlook and Microsoft Word
+- Exchange ActiveSync (EAS) clients
+
+Azure AD certificate-based authentication (CBA) is supported for certificates on-device on native browsers as well as on Microsoft first-party applications on both iOS and Android devices.
+
+Azure AD CBA eliminates the need to enter a username and password combination into certain mail and Microsoft Office applications on your mobile device.
+
+## Prerequisites
+
+- For Android devices, the OS version must be Android 5.0 (Lollipop) or later.
+- For iOS devices, the OS version must be iOS 9 or later.
+- Microsoft Authenticator is required for Office applications on iOS.
+
+## Microsoft mobile applications support
+
+| Applications | Support |
+|:|::|
+|Azure Information Protection app| &#x2705; |
+|Company Portal | &#x2705; |
+|Microsoft Teams | &#x2705; |
+|Office (mobile) | &#x2705; |
+|OneNote | &#x2705; |
+|OneDrive | &#x2705; |
+|Outlook | &#x2705; |
+|Power BI | &#x2705; |
+|Skype for Business | &#x2705; |
+|Word / Excel / PowerPoint | &#x2705; |
+|Yammer | &#x2705; |
+
+## Support for Exchange ActiveSync clients
+
+On iOS 9 or later, the native iOS mail client is supported.
+
+Certain Exchange ActiveSync applications on Android 5.0 (Lollipop) or later are supported.
+
+To determine if your email application supports this feature, contact your application developer.
+
+## Known issue
+
+On iOS, users will see a double prompt, where they must click the option to use certificate-based authentication twice. We are working on making the user experience better.
+
+## Next steps
+
+- [Overview of Azure AD CBA](concept-certificate-based-authentication.md)
+- [Technical deep dive for Azure AD CBA](concept-certificate-based-authentication-technical-deep-dive.md)
+- [Limitations with Azure AD CBA](concept-certificate-based-authentication-limitations.md)
+- [Windows SmartCard logon using Azure AD CBA](concept-certificate-based-authentication-smartcard.md)
+- [FAQ](certificate-based-authentication-faq.yml)
+- [Troubleshoot Azure AD CBA](troubleshoot-certificate-based-authentication.md)
++
active-directory Concept Certificate Based Authentication Smartcard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-smartcard.md
+
+ Title: Windows SmartCard logon using Azure Active Directory certificate-based authentication - Azure Active Directory
+description: Learn how to enable Windows SmartCard logon using Azure Active Directory certificate-based authentication
+Last updated : 06/15/2022
+# Windows SmartCard logon using Azure Active Directory certificate-based authentication (Preview)
+
+Azure AD users can authenticate using X.509 certificates on their SmartCards directly against Azure AD at Windows logon. There is no special configuration needed on the Windows client to accept the SmartCard authentication.
+
+## User experience
+
+Follow these steps to set up Windows SmartCard logon:
+
+1. Join the machine to either Azure AD or a hybrid environment (hybrid join).
+1. Configure Azure AD CBA in your tenant as described in [Configure Azure AD CBA](how-to-certificate-based-authentication.md).
+1. Make sure the user is either on managed authentication or using staged rollout.
+1. Present the physical or virtual SmartCard to the test machine.
+1. Select the SmartCard icon, enter the PIN, and authenticate the user.
+
+ :::image type="content" border="false" source="./media/concept-certificate-based-authentication/smartcard.png" alt-text="Screenshot of SmartCard sign in.":::
+
+Users will get a primary refresh token (PRT) from Azure Active Directory after a successful login. Depending on the certificate-based authentication configuration, the PRT will contain the multifactor claim.
+
+## Restrictions and caveats
+
+- The Windows login only works with the latest preview build of Windows 11. We are working to backport the functionality to Windows 10 and Windows Server.
+- Only Windows machines that are joined to either Azure AD or a hybrid environment can test SmartCard logon.
+- Like in the other Azure AD CBA scenarios, the user must be on a managed domain or using staged rollout and cannot use a federated authentication model.
+
+## Next steps
+
+- [Overview of Azure AD CBA](concept-certificate-based-authentication.md)
+- [Technical deep dive for Azure AD CBA](concept-certificate-based-authentication-technical-deep-dive.md)
+- [Limitations with Azure AD CBA](concept-certificate-based-authentication-limitations.md)
+- [Azure AD CBA on mobile devices (Android and iOS)](concept-certificate-based-authentication-mobile.md)
+- [FAQ](certificate-based-authentication-faq.yml)
+- [Troubleshoot Azure AD CBA](troubleshoot-certificate-based-authentication.md)
active-directory Concept Certificate Based Authentication Technical Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-technical-deep-dive.md
Previously updated : 03/11/2022 Last updated : 06/15/2022
Let's cover each step:
1. The user tries to access an application, such as [MyApps portal](https://myapps.microsoft.com/).
1. If the user is not already signed in, the user is redirected to the Azure AD **User Sign-in** page at [https://login.microsoftonline.com/](https://login.microsoftonline.com/).
-1. The user enters their username into the Azure AD sign in page, and then clicks **Next**.
+1. The user enters their username into the Azure AD sign-in page, and then clicks **Next**.
:::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/sign-in.png" alt-text="Screenshot of the Sign-in for MyApps portal.":::
Since multiple authentication binding policy rules can be created with different
1. Exact match is used for strong authentication via policy OID. If you have a certificate A with policy OID **1.2.3.4.5**, a derived credential B based on that certificate has a policy OID **1.2.3.4.5.6**, and the custom rule is defined as **Policy OID** with value **1.2.3.4.5** with MFA, only certificate A will satisfy MFA and credential B will satisfy only single-factor authentication. If the user used the derived credential during sign-in and was configured to have MFA, the user will be asked for a second factor for successful authentication.
1. Policy OID rules will take precedence over certificate issuer rules. If a certificate has both a policy OID and an issuer, the policy OID is always checked first, and if no policy rule is found then the issuer subject bindings are checked. Policy OID has a higher strong authentication binding priority than the issuer.
-1. If one CA binds to MFA, all user certificates that this CA issues qualify as MFA. The same logic applies for single-factor authentication.
+1. If one CA binds to MFA, all user certificates that the CA issues qualify as MFA. The same logic applies for single-factor authentication.
1. If one policy OID binds to MFA, all user certificates that include this policy OID as one of the OIDs (a user certificate could have multiple policy OIDs) qualify as MFA.
1. If there is a conflict between multiple policy OIDs (such as when a certificate has two policy OIDs, where one binds to single-factor authentication and the other binds to MFA), the certificate is treated as single-factor authentication.
1. One certificate can only have one valid strong authentication binding (that is, a certificate cannot bind to both single-factor and MFA).
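Taken together, the binding rules above amount to a small resolution function. A simplified Python sketch of the described behavior (an illustration only, not Azure AD's implementation):

```python
def protection_level(cert_policy_oids, oid_rules, issuer_rule=None):
    """Simplified sketch of strong authentication binding resolution:
    policy-OID rules win over issuer rules; conflicting matched OID
    levels fall back to single-factor.
    oid_rules maps a policy OID to "mfa" or "single"."""
    levels = {oid_rules[oid] for oid in cert_policy_oids if oid in oid_rules}
    if levels:
        # Conflict between matched policy OIDs -> treat as single-factor.
        return "single" if len(levels) > 1 else levels.pop()
    # No policy-OID rule matched: fall back to the issuer rule, if any;
    # the default protection level is single-factor.
    return issuer_rule or "single"

rules = {"1.2.3.4.5": "mfa"}
exact = protection_level(["1.2.3.4.5"], rules)       # exact OID match -> MFA
derived = protection_level(["1.2.3.4.5.6"], rules)   # derived OID, no exact match
conflict = protection_level(["a", "b"], {"a": "mfa", "b": "single"})
```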
Use the highest priority (lowest number) binding.
1. If a unique user is found, authenticate the user.
1. If a unique user is not found, authentication fails.
1. If the X.509 certificate field is not on the presented certificate, move to the next priority binding.
-1. If the specified X.509 certificate field is found on the certificate, but Azure AD does not find a user object in the directory matching that value, the authentication fails. Azure AD does not attempt to use the next binding in the list in this case. Only if the X.509 certificate field is not on the certificate does it tries the next binding, as mentioned in Step 2.
+1. If the specified X.509 certificate field is found on the certificate, but Azure AD does not find a user object in the directory matching that value, the authentication fails. Azure AD does not attempt to use the next binding in the list in this case. Only if the X.509 certificate field is not on the certificate does it try the next binding, as mentioned in Step 2.
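The fallback behavior in the last two steps can be sketched as a priority-ordered walk (a simplified illustration, not Azure AD's implementation; the field and attribute names are examples):

```python
def resolve_user(cert_fields, bindings, directory):
    """Walk the priority-ordered (certificate_field, user_attribute) bindings.
    A missing certificate field means try the next binding; a field that is
    present but matches no unique user fails immediately (no fall-through)."""
    for cert_field, user_attr in bindings:
        if cert_field not in cert_fields:
            continue  # field absent on the certificate: try next binding
        value = cert_fields[cert_field]
        matches = [u for u in directory if u.get(user_attr) == value]
        if len(matches) == 1:
            return matches[0]  # unique user found: authenticate
        return None            # no match or ambiguous: fail, do NOT fall through
    return None                # no binding applied: fail

directory = [{"userPrincipalName": "alice@contoso.com"}]
bindings = [("PrincipalName", "userPrincipalName"), ("RFC822Name", "userPrincipalName")]

# PrincipalName is absent on this certificate, so the next binding is tried:
user = resolve_user({"RFC822Name": "alice@contoso.com"}, bindings, directory)

# PrincipalName is present but matches no user: fail without falling through.
failed = resolve_user({"PrincipalName": "bob@contoso.com"}, bindings, directory)
```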
## Understanding the certificate revocation process
An admin can configure the CRL distribution point during the setup process of th
>[!IMPORTANT]
>If the admin skips the configuration of the CRL, Azure AD will not perform any CRL checks during the certificate-based authentication of the user. This can be helpful for initial troubleshooting but should not be considered for production use.
-As of now, we don't support Online Certificate Status Protocol (OCSP) because of performance and reliability reasons. Instead of downloading the CRL at every connection by the client browser for OCSP, Azure AD downloads once at the first sign in and caches it, thereby improving the performance and reliability of CRL verification. We also index the cache so the search is must faster every time. Customers must publish CRLs for certificate revocation.
+As of now, we don't support Online Certificate Status Protocol (OCSP) because of performance and reliability reasons. Instead of downloading the CRL at every connection by the client browser for OCSP, Azure AD downloads once at the first sign-in and caches it, thereby improving the performance and reliability of CRL verification. We also index the cache so the search is much faster every time. Customers must publish CRLs for certificate revocation.
**Typical flow of the CRL check:**
For the next test scenario, configure the authentication policy where the **poli
:::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/several-entries.png" alt-text="Screenshot of several entries in the sign-in logs." lightbox="./media/concept-certificate-based-authentication-technical-deep-dive/several-entries.png":::
- The entry with **Interrupted** status provides has more diagnostic info in the **Additional Details** tab.
+ The entry with **Interrupted** status has more diagnostic info on the **Additional Details** tab.
:::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/interrupted-user-details.png" alt-text="Screenshot of interrupted attempt details in the sign-in logs." :::
For the next test scenario, configure the authentication policy where the **poli
- [Overview of Azure AD CBA](concept-certificate-based-authentication.md) - [Limitations with Azure AD CBA](concept-certificate-based-authentication-limitations.md) - [How to configure Azure AD CBA](how-to-certificate-based-authentication.md)
+- [Windows SmartCard logon using Azure AD CBA](concept-certificate-based-authentication-smartcard.md)
+- [Azure AD CBA on mobile devices (Android and iOS)](concept-certificate-based-authentication-mobile.md)
- [FAQ](certificate-based-authentication-faq.yml) - [Troubleshoot Azure AD CBA](troubleshoot-certificate-based-authentication.md)
active-directory Concept Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication.md
Previously updated : 02/09/2022 Last updated : 06/07/2022
The following images show how Azure AD CBA simplifies the customer environment b
- [Technical deep dive for Azure AD CBA](concept-certificate-based-authentication-technical-deep-dive.md) - [Limitations with CBA](concept-certificate-based-authentication-limitations.md) - [How to configure CBA](how-to-certificate-based-authentication.md)
+- [Windows SmartCard logon using Azure AD CBA](concept-certificate-based-authentication-smartcard.md)
+- [Azure AD CBA on mobile devices (Android and iOS)](concept-certificate-based-authentication-mobile.md)
- [FAQ](certificate-based-authentication-faq.yml) - [Troubleshoot CBA](troubleshoot-certificate-based-authentication.md)
active-directory How To Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-certificate-based-authentication.md
description: Topic that shows how to configure Azure AD certificate-based authen
Previously updated : 04/21/2022 Last updated : 06/15/2022
Follow these instructions to configure and use Azure AD CBA.
Make sure that the following prerequisites are in place. -- Configure at least one certificate authority (CA) and any intermediate certificate authorities in Azure Active Directory.
+- Configure at least one certification authority (CA) and any intermediate certification authorities in Azure Active Directory.
- The user must have access to a user certificate (issued from a trusted Public Key Infrastructure configured on the tenant) intended for client authentication to authenticate against Azure AD. >[!IMPORTANT]
Make sure that the following prerequisites are in place.
## Steps to configure and test Azure AD CBA
-There are some configuration steps to complete before enabling Azure AD CBA. First, an admin must configure the trusted CAs that issue user certificates. As seen in the following diagram, we use role-based access control to make sure only least-privileged administrators make changes. Configuring the certificate authority is done only by the [Privileged Authentication Administrator](../roles/permissions-reference.md#privileged-authentication-administrator) role.
+There are some configuration steps to complete before enabling Azure AD CBA. First, an admin must configure the trusted CAs that issue user certificates. As seen in the following diagram, we use role-based access control to make sure only least-privileged administrators make changes. Configuring the certification authority is done only by the [Privileged Authentication Administrator](../roles/permissions-reference.md#privileged-authentication-administrator) role.
Optionally, you can also configure authentication bindings to map certificates to single-factor or multifactor and configure username bindings to map certificate field to a user object attribute. Configuring user-related settings can be done by [Authentication Policy Administrators](../roles/permissions-reference.md#authentication-policy-administrator). Once all the configurations are complete, enable Azure AD CBA on the tenant.
-## Step 1: Configure the certificate authorities
+## Step 1: Configure the certification authorities
+
+### Configure certification authorities using the Azure portal
+
+To enable the certificate-based authentication and configure user bindings in the Azure portal, complete the following steps:
+
+1. Sign in to the Azure portal as a Global Administrator.
+1. Select Azure Active Directory, then choose Security from the menu on the left-hand side.
+
+ :::image type="content" border="true" source="./media/how-to-certificate-based-authentication/certificate-authorities.png" alt-text="Screenshot of certification authorities.":::
+
+1. To upload a CA, click **Upload**:
+ 1. Select the CA file.
+ 1. Select **Yes** if the CA is a root certificate, otherwise select **No**.
+    1. Set the internet-facing HTTP URL for the certification authority's base CRL that contains all revoked certificates. If this URL isn't set, authentication with revoked certificates won't fail.
+ 1. Set **Delta CRL URL** - the http internet-facing URL for the CRL that contains all revoked certificates since the last base CRL was published.
+ 1. Click **Add**.
+
+ :::image type="content" border="true" source="./media/how-to-certificate-based-authentication/upload-certificate-authority.png" alt-text="Screenshot of how to upload certification authority file.":::
+
+1. To delete a CA certificate, select the certificate and click **Delete**.
+1. Click **Columns** to add or delete columns.
+
+### Configure certification authorities using PowerShell
Only one CRL Distribution Point (CDP) for a trusted CA is supported. The CDP can be only an HTTP URL. Online Certificate Status Protocol (OCSP) and Lightweight Directory Access Protocol (LDAP) URLs are not supported.

### Connect
Only one CRL Distribution Point (CDP) for a trusted CA is supported. The CDP can
[!INCLUDE [New-AzureAD](../../../includes/active-directory-authentication-new-trusted-azuread.md)]

**AuthorityType**
-- Use 0 to indicate that this is a Root Certificate Authority
-- Use 1 to indicate that this is an Intermediate or Issuing Certificate Authority
+- Use 0 to indicate that this is a Root certification authority
+- Use 1 to indicate that this is an Intermediate or Issuing certification authority
**crlDistributionPoint**
-You can validate the crlDistributionPoint value you provide in the above PowerShell example are valid for the Certificate Authority being added by downloading the CRL and comparing the CA certificate and the CRL Information.
+You can validate that the crlDistributionPoint value you provide in the above PowerShell example is valid for the certification authority being added by downloading the CRL and comparing the CA certificate and the CRL information.
The below table and graphic indicate how to map information from the CA Certificate to the attributes of the downloaded CRL.
The below table and graphic indicate how to map information from the CA Certific
>If Issuing CA is Windows Server > >- On the [Properties](/windows-server/networking/core-network-guide/cncg/server-certs/configure-the-cdp-and-aia-extensions-on-ca1#to-configure-the-cdp-and-aia-extensions-on-ca1)
- of the CA in the Certificate Authority Microsoft Management Console (MMC)
+ of the CA in the certification authority Microsoft Management Console (MMC)
>- On the CA running [certutil](/windows-server/administration/windows-commands/certutil#-cainfo) -cainfo cdp For additional details see: [Understanding the certificate revocation process](./concept-certificate-based-authentication-technical-deep-dive.md#understanding-the-certificate-revocation-process).
To enable the certificate-based authentication and configure user bindings in th
:::image type="content" border="true" source="./media/how-to-certificate-based-authentication/policy.png" alt-text="Screenshot of Authentication policy."::: - 1. Click **Configure** to set up authentication binding and username binding. 1. The protection level attribute has a default value of **Single-factor authentication**. Select **Multi-factor authentication** to change the default value to MFA.
To enable the certificate-based authentication and configure username bindings u
- [Overview of Azure AD CBA](concept-certificate-based-authentication.md) - [Technical deep dive for Azure AD CBA](concept-certificate-based-authentication-technical-deep-dive.md) - [Limitations with Azure AD CBA](concept-certificate-based-authentication-limitations.md)
+- [Windows SmartCard logon using Azure AD CBA](concept-certificate-based-authentication-smartcard.md)
+- [Azure AD CBA on mobile devices (Android and iOS)](concept-certificate-based-authentication-mobile.md)
- [FAQ](certificate-based-authentication-faq.yml) - [Troubleshoot Azure AD CBA](troubleshoot-certificate-based-authentication.md)
active-directory Howto Mfa Nps Extension Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension-errors.md
Previously updated : 11/21/2019 Last updated : 05/12/2022
If you encounter errors with the NPS extension for Azure AD Multi-Factor Authent
| **CONTACT_SUPPORT** | [Contact support](#contact-microsoft-support), and mention the list of steps for collecting logs. Provide as much information as you can about what happened before the error, including tenant ID, and user principal name (UPN). |
| **CLIENT_CERT_INSTALL_ERROR** | There may be an issue with how the client certificate was installed or associated with your tenant. Follow the instructions in [Troubleshooting the MFA NPS extension](howto-mfa-nps-extension.md#troubleshooting) to investigate client cert problems. |
| **ESTS_TOKEN_ERROR** | Follow the instructions in [Troubleshooting the MFA NPS extension](howto-mfa-nps-extension.md#troubleshooting) to investigate client cert and security token problems. |
-| **HTTPS_COMMUNICATION_ERROR** | The NPS server is unable to receive responses from Azure AD MFA. Verify that your firewalls are open bidirectionally for traffic to and from https://adnotifications.windowsazure.com |
+| **HTTPS_COMMUNICATION_ERROR** | The NPS server is unable to receive responses from Azure AD MFA. Verify that your firewalls are open bidirectionally for traffic to and from `https://adnotifications.windowsazure.com` and that TLS 1.2 is enabled (default). If TLS 1.2 is disabled, user authentication will fail and event ID 36871 with source SChannel is entered in the System log in Event Viewer. To verify TLS 1.2 is enabled, see [TLS registry settings](/windows-server/security/tls/tls-registry-settings.md#tls-dtls-and-ssl-protocol-version-settings). |
| **HTTP_CONNECT_ERROR** | On the server that runs the NPS extension, verify that you can reach `https://adnotifications.windowsazure.com` and `https://login.microsoftonline.com/`. If those sites don't load, troubleshoot connectivity on that server. |
| **NPS Extension for Azure AD MFA:** <br> NPS Extension for Azure AD MFA only performs Secondary Auth for Radius requests in AccessAccept State. Request received for User username with response state AccessReject, ignoring request. | This error usually reflects an authentication failure in AD or that the NPS server is unable to receive responses from Azure AD. Verify that your firewalls are open bidirectionally for traffic to and from `https://adnotifications.windowsazure.com` and `https://login.microsoftonline.com` using ports 80 and 443. It is also important to check that on the DIAL-IN tab of Network Access Permissions, the setting is set to "control access through NPS Network Policy". This error can also trigger if the user is not assigned a license. |
| **REGISTRY_CONFIG_ERROR** | A key is missing in the registry for the application, which may be because the [PowerShell script](howto-mfa-nps-extension.md#install-the-nps-extension) wasn't run after installation. The error message should include the missing key. Make sure you have the key under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureMfa. |
active-directory Troubleshoot Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/troubleshoot-certificate-based-authentication.md
Previously updated : 02/09/2022 Last updated : 06/15/2022
This topic covers how to troubleshoot Azure AD certificate-based authentication
## Why don't I see an option to sign in using certificates against Azure Active Directory after I enter my username?
-An administrator needs to enable CBA for the tenant to make the sign in with certificate option available for users. Link to getting started doc step 2
+An administrator needs to enable CBA for the tenant to make the sign-in with certificate option available for users. For more information, see [Step 2: Configure authentication binding policy](how-to-certificate-based-authentication.md#step-2-configure-authentication-binding-policy).
## User-facing sign-in error messages
This error is returned if the user selects the wrong user certificate from the l
Make sure the certificate is valid and works for the user binding and authentication policy configuration.
-### AADSTS50034 - Users sign in fails with "Your account or password is incorrect. If you don't remember your password, reset it now."
+### AADSTS50034 - User sign-in fails with "Your account or password is incorrect. If you don't remember your password, reset it now."
:::image type="content" border="true" source="./media/troubleshoot-certificate-based-authentication/reset.png" alt-text="Screenshot of password reset error." :::
If the user is a federated user moving to Azure AD and if the user binding confi
>[!NOTE] >There is a known issue that this scenario is not logged into the sign-in logs.
-### AADSTS130501 - Users sign in fails with "Sign in was blocked due to User Credential Policy"
+### AADSTS130501 - User sign-in fails with "Sign in was blocked due to User Credential Policy"
:::image type="content" border="true" source="./media/troubleshoot-certificate-based-authentication/policy-failed.png" alt-text="Screenshot of policy error." ::: There is also a known issue when a user who is not in scope for CBA tries to sign in with a certificate to an [Office app](https://office.com) or any portal app, and the sign-in fails with an error: In both cases, the error can be resolved by making sure the user is in scope for Azure AD CBA. For more information, see [Step 4: Enable CBA on the tenant](how-to-certificate-based-authentication.md#step-4-enable-cba-on-the-tenant).
In both cases, the error can be resolved by making sure the user is in scope for
After sign-in fails and I retry sign-in with the correct certificate, I get an error: This is a client behavior where the browser keeps using the original certificate selected. When the sign-in fails, close the existing browser session and retry sign-in from a new browser session.
There is a known issue when the authentication sometimes fails, the failure scre
For example, if a user certificate is revoked and is part of a Certificate Revocation List, then authentication fails correctly. However, instead of the error message, you might see the following screen: To get more diagnostic information, look in **Sign-in logs**. If a user authentication fails due to CRL validation for example, sign-in logs show the error information correctly.
The authentication policy is cached. After a policy update, it may take up to an
## I get an error 'Cannot read properties of undefined' while trying to add a custom authentication rule
-This is a known issue, and we are working on graceful error handling. This error happens when there is no Certificate Authority (CA) on the tenant. To resolve the error, see [Configure the certificate authorities](how-to-certificate-based-authentication.md#step-1-configure-the-certificate-authorities).
+This is a known issue, and we are working on graceful error handling. This error happens when there is no Certification Authority (CA) on the tenant. To resolve the error, see [Configure the certificate authorities](how-to-certificate-based-authentication.md#step-1-configure-the-certification-authorities).
## I see a valid Certificate Revocation List (CRL) endpoint set, but why don't I see any CRL revocation?
This is a known issue, and we are working on graceful error handling. This error
- [Technical deep dive for Azure AD CBA](concept-certificate-based-authentication-technical-deep-dive.md) - [Limitations with Azure AD CBA](concept-certificate-based-authentication-limitations.md) - [How to configure Azure AD CBA](how-to-certificate-based-authentication.md)
+- [Windows SmartCard logon using Azure AD CBA](concept-certificate-based-authentication-smartcard.md)
+- [Azure AD CBA on mobile devices (Android and iOS)](concept-certificate-based-authentication-mobile.md)
- [FAQ](certificate-based-authentication-faq.yml)
active-directory Resilience Defaults https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/resilience-defaults.md
Sample request body:
This patch operation may be deployed using Microsoft PowerShell after installation of the Microsoft.Graph.Authentication module. To install this module, open an elevated PowerShell prompt and execute
-`Install-Module Microsoft.Graph.Authentication`
+```powershell
+Install-Module Microsoft.Graph.Authentication
+```
-Connect to Microsoft Graph, requesting the required scopes ΓÇô
+Connect to Microsoft Graph, requesting the required scopes:
-`Connect-MgGraph -Scopes Policy.Read.All,Policy.ReadWrite.ConditionalAccess,Application.Read.All -TenantId <TenantID>`
+```powershell
+Connect-MgGraph -Scopes Policy.Read.All,Policy.ReadWrite.ConditionalAccess,Application.Read.All -TenantId <TenantID>
+```
Authenticate when prompted.
-Create the JSON body for the PATCH request ΓÇô
+Create the JSON body for the PATCH request:
-`$patchBody = '{"sessionControls": {"disableResilienceDefaults": true}}'`
+```powershell
+$patchBody = '{"sessionControls": {"disableResilienceDefaults": true}}'
+```
-Execute the patch operation ΓÇô
+Execute the patch operation:
-`Invoke-MgGraphRequest -Method PATCH -Uri https://graph.microsoft.com/beta/identity/conditionalAccess/policies/<PolicyID> -Body $patchBody`
+```powershell
+Invoke-MgGraphRequest -Method PATCH -Uri https://graph.microsoft.com/beta/identity/conditionalAccess/policies/<PolicyID> -Body $patchBody
+```
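The sequence above can also be issued from any HTTP client, not only PowerShell. As an illustrative sketch (not from the original article), the following Java snippet composes the same PATCH request with the built-in `java.net.http` client; the policy ID is a dummy placeholder, and the bearer-token acquisition required for a real call is omitted.

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class PatchResilienceDefaults {
    public static void main(String[] args) {
        // Placeholder policy ID; substitute the real Conditional Access policy ID
        String policyId = "00000000-0000-0000-0000-000000000000";

        // JSON body disabling resilience defaults, mirroring the PowerShell $patchBody
        String patchBody = "{\"sessionControls\": {\"disableResilienceDefaults\": true}}";

        // Build (but do not send) the PATCH request; a real call also needs an
        // Authorization: Bearer <token> header carrying the scopes shown above.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://graph.microsoft.com/beta/identity/conditionalAccess/policies/" + policyId))
                .header("Content-Type", "application/json")
                .method("PATCH", HttpRequest.BodyPublishers.ofString(patchBody))
                .build();

        System.out.println(request.method() + " " + request.uri());
    }
}
```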
## Recommendations
active-directory Msal Net Client Assertions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-client-assertions.md
MSAL.NET has four methods to provide either credentials or assertions to the con
- `.WithClientClaims()` > [!NOTE]
-> While it is possible to use the `WithClientAssertion()` API to acquire tokens for the confidential client, we do not recommend using it by default as it is more advanced and is designed to handle very specific scenarios which are not common. Using the `.WithCertificate()` API will allow MSAL.NET to handle this for you. This api offers you the ability to customize your authentication request if needed but the default assertion created by `.WithCertificate()` will suffice for most authentication scenarios. This API can also be used as a workaround in some scenarios where MSAL.NET fails to perform the signing operation internally.
+> While it is possible to use the `WithClientAssertion()` API to acquire tokens for the confidential client, we do not recommend using it by default as it is more advanced and is designed to handle very specific scenarios which are not common. Using the `.WithCertificate()` API will allow MSAL.NET to handle this for you. This API offers you the ability to customize your authentication request if needed, but the default assertion created by `.WithCertificate()` will suffice for most authentication scenarios. This API can also be used as a workaround in some scenarios where MSAL.NET fails to perform the signing operation internally. The difference between the two is that `.WithCertificate()` requires the certificate and private key to be available on the machine creating the assertion, while `WithClientAssertion()` allows you to compute the assertion elsewhere, such as in Azure Key Vault, from a managed identity, or with a hardware security module.
### Signed assertions
You can also use the delegate form, which enables you to compute the assertion j
```csharp string signedClientAssertion = ComputeAssertion(); app = ConfidentialClientApplicationBuilder.Create(config.ClientId)
- .WithClientAssertion(() => { return GetSignedClientAssertion(); } )
- .Build();
-
-// or in async manner
-
-app = ConfidentialClientApplicationBuilder.Create(config.ClientId)
- .WithClientAssertion(async cancellationToken => { return await GetClientAssertionAsync(cancellationToken); })
+ .WithClientAssertion(async (AssertionRequestOptions options) => {
+ // use 'options.ClientID' or 'options.TokenEndpoint' to generate client assertion
+ return await GetClientAssertionAsync(options.ClientID, options.TokenEndpoint, options.CancellationToken);
+ })
.Build(); ```
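The assertion your delegate returns is a signed JWT, which is why the delegate receives `options.TokenEndpoint`. The following is a conceptual sketch only, not MSAL code: it assembles the unsigned header and payload segments of a client assertion (audience set to the token endpoint, issuer and subject set to the client ID) in plain Java. The RSA signature that must be appended as the third segment is omitted, and the placeholder values are hypothetical.

```java
import java.nio.charset.StandardCharsets;
import java.time.Instant;
import java.util.Base64;
import java.util.UUID;

public class ClientAssertionSketch {
    // Base64url-encode without padding, as required for JWT segments
    static String b64url(String s) {
        return Base64.getUrlEncoder().withoutPadding()
                .encodeToString(s.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        String clientId = "<client ID>";           // placeholder
        String tokenEndpoint = "<token endpoint>"; // placeholder; becomes the 'aud' claim
        long now = Instant.now().getEpochSecond();

        // Header: RS256 signature algorithm; a real assertion also carries the
        // certificate thumbprint (x5t) used to validate the signature.
        String header = "{\"alg\":\"RS256\",\"typ\":\"JWT\"}";

        // Payload: audience is the token endpoint, issuer/subject are the client ID,
        // jti is a unique identifier, and exp gives the assertion a short lifetime.
        String payload = String.format(
                "{\"aud\":\"%s\",\"iss\":\"%s\",\"sub\":\"%s\",\"jti\":\"%s\",\"nbf\":%d,\"exp\":%d}",
                tokenEndpoint, clientId, clientId, UUID.randomUUID(), now, now + 600);

        // A signed assertion is b64url(header) + "." + b64url(payload) + "." + signature
        String unsigned = b64url(header) + "." + b64url(payload);
        System.out.println(unsigned);
    }
}
```

In real code, the signing step must be performed with the certificate's private key (or delegated to Key Vault or an HSM), which is exactly the flexibility `WithClientAssertion()` provides.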
active-directory Users Search Enhanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-search-enhanced.md
Previously updated : 01/03/2022 Last updated : 06/15/2022
# User management enhancements in Azure Active Directory
-This article describes how to use the user management enhancements in the Azure Active Directory (Azure AD) portal. The **All users** and **Deleted users** pages have been updated to provide more information and make it easier to find users.
+This article describes how to use the user management enhancements in the Azure Active Directory (Azure AD) portal. The **All users** page and user profile pages have been updated to provide more information and make it easier to find users.
Enhancements include: -- More visible user properties including object ID, directory sync status, creation type, and identity issuer-- Search allows substring search and combined search of names, emails, and object IDs-- Enhanced filtering by user type (member, guest, none), directory sync status, creation type, company name, and domain name-- Sorting capabilities on properties like name and user principal name-- Total users count that updates with searches or filters
+- Infinite scroll so you no longer have to select 'Load more' to view more users
+- More user properties can be added as columns including city, country, employee ID, employee type, and external user state
+- More user properties can be filtered on including custom security attributes, on-premises extension attributes, and manager
+- More ways to customize your view, like using drag-and-drop to reorder columns
+- Copy and share your customized All Users view with others
+- An enhanced User Profile experience that gives you quick insights about a user and lets you view and edit more properties
> [!NOTE] > These enhancements are not currently available for Azure AD B2C tenants.
-## User properties enhanced
+## All users page
-We've made some changes to the columns available on the **All users** and **Deleted users** pages. In addition to the existing columns we provide for managing your list of users, we've added a few more columns.
+We’ve made some changes to the columns and filters available on the **All users** page. In addition to the existing columns for managing your list of users, we've added the option to add more user properties as columns and filters including employee ID, employee hire date, on-premises attributes, and more.
-### All users page
+![new user properties displayed on All users page and user profile pages](./media/users-search-enhanced/user-properties.png)
-The following are the displayed user properties on the **All users** page:
+### Reorder columns
-- Name: The display name of the user.-- User principal name: The user principal name (UPN) of the user.-- User Type: Member, guest, none.-- Creation time: The date and time the user was created.-- Job Title: The job title of the user.-- Department: The department the user works in.-- Directory synced: Indicates whether the user is synced from an on-premises directory.-- Identity issuer: The issuers of the identity used to sign into a user account.-- Object ID: The object ID of the user.-- Creation type: Indicates how the user account was created.-- Company name: The company name which the user is associated.-- Invitation state: The status of the invitation for a guest user.-- Mail: The email of the user.
+You can customize your list view by reordering the columns on the page in one of two ways. One way is to directly drag and drop the columns on the page. Another way is to select **Columns** to open the column picker and then drag and drop the three-dot "handle" next to any given column.
-![new user properties displayed on All users and Deleted users pages](./media/users-search-enhanced/user-properties.png)
+### Share views
-### Deleted users page
+If you want to share your customized list view with another person, you can select **Copy link to current view** in the upper right corner to share a link to the view.
-The **Deleted users** page includes all the columns that are available on the **All users** page, and a few additional columns, namely:
+## User Profile enhancements
-- Deletion date: The date the user was first deleted from the organization (the user is restorable).-- Permanent deletion date: The date after which the process of permanently deleting the user from the organization automatically begins.-- Original user principal name: The original UPN of the user before their object ID was added as a prefix to their deleted UPN.
+The user profile page is now organized into three tabs: **Overview**, **Monitoring**, and **Properties**.
-> [!NOTE]
-> Deletion dates are displayed in Coordinated Universal Time (UTC).
-
-Some columns are displayed by default. To add other columns, select **Columns** on the page, select the column names you'd like to add, and select **OK** to save your preferences.
-
-### Identity issuers
-
-Select an entry in the **Identity issuer** column for any user to view additional details about the issuer including the sign-in type and the issuer assigned ID. The entries in the **Identity issuer** column can be multi-valued. If there are multiple issuers of the user's identity, you'll see the word Multiple in the **Identity issuer** column on **All users** and **Deleted users** pages, and the details pane lists all issuers.
-
-> [!NOTE]
-> The **Source** column is replaced by multiple columns including **Creation type**, **Directory synced**, and **Identity issuer** for more granular filtering.
-
-## User list search
-
-When you enter a search string, the search now uses "starts with" and substring search to match names, emails, or object IDs in a single search. You can enter any of these attributes into the search box, and the search automatically looks across all these properties to return any matching results. The substring search is performed only on whole words. You can perform the same search on both the **All users** and **Deleted users** pages.
-
-## User list filtering
-
-Filtering capabilities have been enhanced to provide more filtering options for the **All users** and **Deleted users** pages. You can now filter by multiple properties simultaneously, and can filter by more properties.
-
-### Filtering All users list
+### Overview tab
-The following are the filterable properties on the **All users** page:
+The overview tab contains key properties and insights about a user, such as:
-- User type: Member, guest, none-- Directory synced status: Yes, no-- Creation type: Invitation, Email verified, Local account-- Creation time: Last 7, 14, 30, 90, 360 or >360 days ago-- Job Title: Enter a job title-- Department: Enter a department name-- Group: Search for a group-- Invitation state: Pending acceptance, Accepted-- Domain name: Enter a domain name-- Company name: Enter a company name-- Administrative unit: Select this option to restrict the scope of the users you view to a single administrative unit. For more information, see [Administrative units management preview](../roles/administrative-units.md).
+- Properties like user principal name, object ID, created date/time and user type
+- Selectable aggregate values such as the number of groups that the user is a member of, the number of apps to which they have access, and the number of licenses that are assigned to them
+- Quick alerts and insights about a user such as their current account enabled status, the last time they signed in, whether they can use multifactor authentication, and B2B collaboration options
-### Filtering Deleted users list
+![new user profile displaying the Overview tab contents](./media/users-search-enhanced/user-profile-overview.png)
-The **Deleted users** page has additional filters not in the **All users** page. The following are the filterable properties on the **Deleted users** page:
+> [!NOTE]
+> Some insights about a user may not be visible to you unless you have sufficient role permissions.
-- User type: Member, guest, none-- Directory synced status: Yes, no-- Creation type: Invitation, Email verified, Local account-- Creation time: Last 7, 14, 30, 90, 360 or > 360 days ago-- Job Title: Enter a job title-- Department: Enter a department name-- Invitation state: Pending acceptance, Accepted-- Deletion date: Last 7, 14, or 30 days-- Domain name: Enter a domain name-- Company name: Enter a company name-- Permanent deletion date: Last 7, 14, or 30 days
+### Monitoring tab
-## User list sorting
+The monitoring tab is the new home for the chart showing user sign-ins over the past 30 days.
-You can now sort by name and user principal name in the **All users** and **Deleted users** pages. You can also sort by deletion date in the **Deleted Users** list.
+### Properties tab
-## User list counts
+The properties tab now contains more user properties. Properties are broken up into categories including Identity, Job information, Contact information, Parental controls, Settings, and On-premises.
-You can view the total number of users in the **All users** and **Deleted users** pages. As you search or filter the lists, the count is updated to reflect the total number of users found.
+![new user profile displaying the Properties tab contents](./media/users-search-enhanced/user-profile-properties.png)
-![Illustration of user list counts on the All users page](./media/users-search-enhanced/user-list-sorting.png)
+You can edit properties by selecting the pencil icon next to any category, which will then redirect you to a new editing experience. Here, you can search for specific properties or scroll through property categories. You can edit one or many properties, across categories, before selecting **Save**.
-## Frequently Asked Questions (FAQ)
+![user profile properties open for editing](./media/users-search-enhanced/user-properties-edit.png)
-Question | Answer
|
-Why is the deleted user still displayed when the permanent deletion date has passed? | The permanent deletion date is displayed in the UTC time zone, so this may not match your current time zone. Also, this date is the earliest date after which the user will be permanently deleted from the organization, so it may still be processing. Permanently deleted users will automatically be removed from the list.
What happened to the bulk capabilities for users and guests? | The bulk operations are all still available for users and guests, including bulk create, bulk invite, bulk delete, and download users. We've just merged them into a menu called **Bulk operations**. You can find the **Bulk operations** options at the top of the **All users** page.
-What happened to the Source column? | The **Source** column has been replaced with other columns that provide similar information, while allowing you to filter on those values independently. Examples include **Creation type**, **Directory synced** and **Identity issuer**.
What happened to the User Name column? | The **User Name** column is still there, but it's been renamed to **User Principal Name**. This better reflects the information contained in that column. You'll also notice that the full User Principal Name is now displayed for B2B guests. This matches what you'd get in MS Graph.
+> [!NOTE]
+> Some properties will not be visible or editable if they are read-only or if you don't have sufficient role permissions to edit them.
+
## Next steps User operations
active-directory Invite Internal Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/invite-internal-users.md
You can use the Azure portal, PowerShell, or the invitation API to send a B2B in
1. Select the **Azure Active Directory** service. 1. Select **Users**. 1. Find the user in the list or use the search box. Then select the user.
-1. On the user's profile page, in the **Identity** section, select **Manage B2B collaboration**.
+1. In the **Overview** tab, under **My Feed**, select **B2B collaboration**.
- ![Screenshot of the user profile](media/invite-internal-users/manage-b2b-collaboration-link.png)
+ ![Screenshot of user profile Overview tab with B2B collaboration card](media/invite-internal-users/manage-b2b-collaboration-link.png)
- > [!NOTE]
- > If you see **Invitation accepted** instead of **Manage B2B collaboration**, the user has already been invited to use external credentials for B2B collaboration.
+ > [!NOTE]
+ > If the card says "Resend this B2B user's invitation or reset their redemption status", the user has already been invited to use external credentials for B2B collaboration.
1. Next to **Invite internal user to B2B collaboration?** select **Yes**, and then select **Done**.
active-directory Reset Redemption Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/reset-redemption-status.md
Previously updated : 10/12/2021 Last updated : 06/16/2022
To manage these scenarios previously, you had to manually delete the guest user
1. Sign in to the [Azure portal](https://portal.azure.com/) using a Global administrator or User administrator account for the directory. 1. Search for and select **Azure Active Directory**. 1. Select **Users**.
-1. In the list, select the user's name to open the user's profile.
+1. In the list, select the user's name to open their user profile.
1. If the user wants to sign in using a different email:
- - Select the **Edit** icon at the top of the page.
- - In the **Contact info** section, under **Email**, type the new email.
- - Next to **Alternate email**, select **Edit**. Update the alternate email In the list with the new email, and then select **Update**.
- - Select the **Save** icon at the top of the page.
-1. In the **Identity** section, under **Invitation accepted**, select **(manage)**.
+ - Select the **Properties** tab.
+ - Select the **Edit** icon next to **Contact information**.
+ - Next to **Email**, type the new email.
+ - Update **Other emails** to also include the new email.
+ - Select the **Save** button at the bottom of the page.
+
+1. In the **Overview** tab, under **My Feed**, select **B2B collaboration**.
+ ![new user profile page displaying the B2B Collaboration tile](./media/reset-redemption-status/user-profile-b2b-collaboration.png)
1. Under **Redemption status**, next to **Reset invitation status? (Preview)**, select **Yes**. 1. Select **Yes** to confirm.
active-directory Active Directory Users Reset Password Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-users-reset-password-azure-portal.md
Previously updated : 09/05/2018 Last updated : 06/07/2022
active-directory Overview For Developers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/overview-for-developers.md
+
+ Title: Developer introduction and guidelines
+description: An overview how developers can use managed identities for Azure resources.
+
+documentationcenter:
++
+editor:
+ms.assetid: 0232041d-b8f5-4bd2-8d11-27999ad69370
++
+ms.devlang:
++ Last updated : 06/15/2022+++
+#Customer intent: As a developer, I'd like to securely manage the credentials that my application uses for authenticating to cloud services without having the credentials in my code or checked into source control.
++
+# Connecting from your application to resources without handling credentials
+
+Azure resources that support managed identities always provide an option to specify a managed identity when connecting to Azure resources that support Azure Active Directory authentication. Managed identity support makes it unnecessary for developers to handle credentials in code. Managed identities are the recommended authentication option when working with Azure resources that support them. [Read an overview of managed identities](overview.md).
+
+This page demonstrates how to configure an App Service so it can connect to Azure Key Vault, Azure Storage, and Microsoft SQL Server. The same principles can be used for any Azure resource that supports managed identities and that will connect to resources that support Azure Active Directory authentication.
+
+The code samples use the Azure Identity client library, which is the recommended method as it automatically handles many of the steps for you, including acquiring an access token used in the connection.
+
+### What resources can managed identities connect to?
+A managed identity can connect to any resource that supports Azure Active Directory authentication. In general, there's no special support required for the resource to allow managed identities to connect to it.
+
+Some resources don't support Azure Active Directory authentication, or their client library doesn't support authenticating with a token. Keep reading to see our guidance on how to use a Managed identity to securely access the credentials without needing to store them in your code or application configuration.
+
+## Creating a managed identity
+
+There are two types of managed identity: system-assigned and user-assigned. System-assigned identities are directly linked to a single Azure resource. When the Azure resource is deleted, so is the identity. A user-assigned managed identity can be associated with multiple Azure resources, and its lifecycle is independent of those resources.
+
+This article will explain how to create and configure a user-assigned managed identity, which is [recommended for most scenarios](managed-identity-best-practice-recommendations.md). If the source resource you're using doesn't support user-assigned managed identities, then you should refer to that resource provider's documentation to learn how to configure it to have a system-assigned managed identity.
+
+### Creating a user-assigned managed identity
+
+> [!NOTE]
+> You'll need a role such as "Managed Identity Contributor" to create a new user-assigned managed identity.
+
+#### [Portal](#tab/portal)
+
+1. Search for "Managed Identities" from the search bar at the top of the Portal and select the matching result.
++
+2. Select the "Create" button.
++
+3. Select the Subscription and Resource group, and enter a name for the Managed identity.
++
+4. Select "Review + create" to run the validation test, and then select the "Create" button.
+
+5. When the identity has been created, a confirmation screen will appear.
++
+#### [Azure CLI](#tab/cli)
+```azurecli
+az identity create --name <name of the identity> --resource-group <name of the resource group>
+```
+
+Take a note of the `clientId` and the `principalId` values that are returned when the managed identity is created. You'll use `principalId` while adding permissions, and `clientId` in your application's code.
+++
+You now have an identity that can be associated with an Azure source resource. [Read more about managing user-assigned managed identities.](how-manage-user-assigned-managed-identities.md).
+
+#### Configuring your source resource to use a user-assigned managed identity
+
+Follow these steps to configure your Azure resource to have a managed identity through the Portal. Refer to the documentation for the specific resource type to learn how to configure the resource's identity using the Command Line Interface, PowerShell or ARM template.
+
+> [!NOTE]
+> You'll need "Write" permissions to configure an Azure resource to have a system-assigned identity. You'll need a role such as "Managed Identity Operator" to associate a user-assigned identity with an Azure resource.
+
+1. Locate the resource using the search bar at the top of the Portal
++
+2. Select the Identity link in the navigation
++
+3. Select the "User-assigned" tab
+
+4. Select the "Add" button
++
+5. Select the user-assigned identity that you created earlier and select "Add"
++
+6. The identity will be associated with the resource, and the list will update.
++
+Your source resource now has a user-assigned identity that it can use to connect to target resources.
+
+## Adding permissions to the identity
+
+> [!NOTE]
+> You'll need a role such as "User Access Administrator" or "Owner" for the target resource to add Role assignments. Ensure you're granting the least privilege required for the application to run.
+
+Now that your App Service has a managed identity, you'll need to give the identity the correct permissions. Because you're using this identity to interact with Azure Storage, you'll use the [Azure Role Based Access Control (RBAC) system](../../role-based-access-control/overview.md).
+
+### [Portal](#tab/portal)
+
+1. Locate the resource you want to connect to using the search bar at the top of the Portal
+2. Select the "Access Control (IAM)" link in the left hand navigation.
++
+3. Select the "Add" button near the top of the screen and select "Add role assignment".
++
+4. A list of Roles will be displayed. You can see the specific permissions that a role has by selecting the "View" link. Select the role that you want to grant to the identity and select the "Next" button.
++
+5. You'll be prompted to select who the role should be granted to. Select the "Managed identity" option and then the "Add members" link.
++
+6. A context pane will appear on the right where you can search by the type of the managed identity. Select "User-assigned managed identity" from the "Managed identity" option.
++
+7. Select the identity that you created earlier and the "Select" button. The context pane will close, and the identity will be added to the list.
++
+8. Select the "Review + assign" button to view the summary of the role assignment, and then once more to confirm.
+9. Select the "Role assignments" option, and a list of the role assignments for the resource will be displayed.
++
+### [Azure CLI](#tab/cli)
+```azurecli
+az role assignment create --assignee "<Object/Principal ID of the managed identity>" \
+--role "<Role name or Role ID>" \
+--scope "/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/{providerName}/{resourceType}/{resourceSubType}/{resourceName}"
+```
+
+[Read more about adding role assignments using the Command Line Interface](../../role-based-access-control/role-assignments-cli.md).
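The `--scope` value above is simply the target resource's full resource ID, built from its subscription, resource group, and provider segments. As a small sketch with hypothetical names (the subscription ID, resource group, and storage-account name below are placeholders), a storage-account scope can be composed like this:

```java
public class ScopeExample {
    public static void main(String[] args) {
        // All values below are hypothetical placeholders
        String subscriptionId = "00000000-0000-0000-0000-000000000000";
        String resourceGroup  = "my-resource-group";

        // For a storage account, the provider segments are
        // Microsoft.Storage/storageAccounts/<account name>
        String scope = String.format(
                "/subscriptions/%s/resourcegroups/%s/providers/Microsoft.Storage/storageAccounts/%s",
                subscriptionId, resourceGroup, "mystorageaccount");

        System.out.println(scope);
    }
}
```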
+++
+Your managed identity now has the correct permissions to access the Azure target resource. [Read more about Azure Role Based Access Control](../../role-based-access-control/overview.md).
+
+## Using the managed identity in your code
+
+Your App Service now has a managed identity with permissions. You can use the managed identity in your code to interact with target resources, instead of storing credentials in your code.
+
+The recommended method is to use the Azure Identity library for your preferred programming language. The supported languages include [.NET](/dotnet/api/overview/azure/identity-readme), [Java](/jav). The library acquires access tokens for you, making it simple to connect to target resources.
+
+### Using the Azure Identity library in your development environment
+
+Except for the C++ library, the Azure Identity libraries support a `DefaultAzureCredential` type. `DefaultAzureCredential` automatically attempts to authenticate via multiple mechanisms, including environment variables or an interactive sign-in. The credential type can be used in your development environment using your own credentials. It can also be used in your production Azure environment using a managed identity. No code changes are required when you deploy your application.
+
+If you're using user-assigned managed identities, you should also explicitly specify the user-assigned managed identity you wish to authenticate with by passing in the identity's client ID as a parameter. You can retrieve the client ID by browsing to the identity in the Portal.
++
+Read more about the Azure Identity libraries below:
+
+* [Azure Identity library for .NET](/dotnet/api/overview/azure/identity-readme)
+* [Azure Identity library for Java](/java/api/overview/azure/identity-readme?view=azure-java-stable&preserve-view=true)
+* [Azure Identity library for JavaScript](/javascript/api/overview/azure/identity-readme?view=azure-node-latest&preserve-view=true)
+* [Azure Identity library for Python](/python/api/overview/azure/identity-readme?view=azure-python&preserve-view=true)
+* [Azure Identity module for Go](/azure/developer/go/azure-sdk-authentication)
+* [Azure Identity library for C++](https://github.com/Azure/azure-sdk-for-cpp/blob/main/sdk/identity/azure-identity/README.md)
+
+### Accessing a Blob in Azure Storage
+
+#### [.NET](#tab/dotnet)
+
+```csharp
+using Azure.Identity;
+using Azure.Storage.Blobs;
+
+// code omitted for brevity
+
+// Specify the Client ID if using user-assigned managed identities
+var clientID = Environment.GetEnvironmentVariable("Managed_Identity_Client_ID");
+var credentialOptions = new DefaultAzureCredentialOptions
+{
+ ManagedIdentityClientId = clientID
+};
+var credential = new DefaultAzureCredential(credentialOptions);
+
+var blobServiceClient1 = new BlobServiceClient(new Uri("<URI of Storage account>"), credential);
+BlobContainerClient containerClient1 = blobServiceClient1.GetBlobContainerClient("<name of blob container>");
+BlobClient blobClient1 = containerClient1.GetBlobClient("<name of file>");
+
+if (blobClient1.Exists())
+{
+ var downloadedBlob = blobClient1.DownloadContent();
+ string blobContents = downloadedBlob.Value.Content.ToString();
+}
+```
+
+#### [Java](#tab/java)
+
+```java
+import com.azure.identity.DefaultAzureCredential;
+import com.azure.identity.DefaultAzureCredentialBuilder;
+import com.azure.storage.blob.BlobClient;
+import com.azure.storage.blob.BlobContainerClient;
+import com.azure.storage.blob.BlobServiceClient;
+import com.azure.storage.blob.BlobServiceClientBuilder;
+
+// read the Client ID from your environment variables
+String clientID = System.getenv("Client_ID");
+DefaultAzureCredential credential = new DefaultAzureCredentialBuilder()
+ .managedIdentityClientId(clientID)
+ .build();
+
+BlobServiceClient blobStorageClient = new BlobServiceClientBuilder()
+ .endpoint("<URI of Storage account>")
+ .credential(credential)
+ .buildClient();
+
+BlobContainerClient blobContainerClient = blobStorageClient.getBlobContainerClient("<name of blob container>");
+BlobClient blobClient = blobContainerClient.getBlobClient("<name of blob/file>");
+if (blobClient.exists()) {
+ String blobContent = blobClient.downloadContent().toString();
+}
+```
+
+### Accessing a secret stored in Azure Key Vault
+
+#### [.NET](#tab/dotnet)
+
+```csharp
+using Azure.Identity;
+using Azure.Security.KeyVault.Secrets;
+using Azure.Core;
+
+// code omitted for brevity
+
+// Specify the Client ID if using user-assigned managed identities
+var clientID = Environment.GetEnvironmentVariable("Managed_Identity_Client_ID");
+var credentialOptions = new DefaultAzureCredentialOptions
+{
+ ManagedIdentityClientId = clientID
+};
+var credential = new DefaultAzureCredential(credentialOptions);
+
+var client = new SecretClient(
+ new Uri("https://<your-unique-key-vault-name>.vault.azure.net/"),
+ credential);
+
+KeyVaultSecret secret = client.GetSecret("<my secret>");
+string secretValue = secret.Value;
+```
+
+#### [Java](#tab/java)
+
+```java
+import com.azure.identity.DefaultAzureCredential;
+import com.azure.identity.DefaultAzureCredentialBuilder;
+
+import com.azure.security.keyvault.secrets.SecretClient;
+import com.azure.security.keyvault.secrets.SecretClientBuilder;
+import com.azure.security.keyvault.secrets.models.KeyVaultSecret;
+
+String keyVaultName = "mykeyvault";
+String keyVaultUri = "https://" + keyVaultName + ".vault.azure.net";
+String secretName = "mysecret";
+
+// read the user-assigned managed identity Client ID from your environment variables
+String clientID = System.getenv("Managed_Identity_Client_ID");
+DefaultAzureCredential credential = new DefaultAzureCredentialBuilder()
+ .managedIdentityClientId(clientID)
+ .build();
+
+SecretClient secretClient = new SecretClientBuilder()
+ .vaultUrl(keyVaultUri)
+ .credential(credential)
+ .buildClient();
+
+KeyVaultSecret retrievedSecret = secretClient.getSecret(secretName);
+```
++
+### Accessing Azure SQL Database
+
+#### [.NET](#tab/dotnet)
+
+```csharp
+using Azure.Core;
+using Azure.Identity;
+using Microsoft.Data.SqlClient;
+
+// code omitted for brevity
+
+// Specify the Client ID if using user-assigned managed identities
+var clientID = Environment.GetEnvironmentVariable("Managed_Identity_Client_ID");
+var credentialOptions = new DefaultAzureCredentialOptions
+{
+ ManagedIdentityClientId = clientID
+};
+
+AccessToken accessToken = await new DefaultAzureCredential(credentialOptions).GetTokenAsync(
+ new TokenRequestContext(new string[] { "https://database.windows.net//.default" }));
+
+using var connection = new SqlConnection("Server=<DB Server>; Database=<DB Name>;")
+{
+ AccessToken = accessToken.Token
+};
+var cmd = new SqlCommand("select top 1 ColumnName from TableName", connection);
+await connection.OpenAsync();
+SqlDataReader dr = await cmd.ExecuteReaderAsync();
+while(dr.Read())
+{
+ Console.WriteLine(dr.GetValue(0).ToString());
+}
+dr.Close();
+```
+
+#### [Java](#tab/java)
+
+If you use [Azure Spring Apps](/azure/spring-cloud/), you can connect to Azure SQL Database with a managed identity without needing to make any changes to your code.
+
+Open the `src/main/resources/application.properties` file, and add `Authentication=ActiveDirectoryMSI;` at the end of the following line. Be sure to use the correct value for the `$AZ_DATABASE_NAME` variable.
+
+```properties
+spring.datasource.url=jdbc:sqlserver://$AZ_DATABASE_NAME.database.windows.net:1433;database=demo;encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;Authentication=ActiveDirectoryMSI;
+```
+
+Read more about how to [use a managed identity to connect Azure SQL Database to an Azure Spring Apps app](/azure/spring-cloud/connect-managed-identity-to-azure-sql/).
+++
+## Connecting to resources that don't support Azure Active Directory or token-based authentication in libraries
+
+Some Azure resources either don't yet support Azure Active Directory authentication, or their client libraries don't support authenticating with a token. Typically these resources are open-source technologies that expect a username and password or an access key in a connection string.
+
+To avoid storing credentials in your code or your application configuration, you can store them as a secret in Azure Key Vault. Using the Key Vault example shown earlier, you can retrieve the secret from Azure Key Vault with a managed identity and pass the credentials into your connection string. This approach means that no credentials are handled directly in your code or environment.
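
As a sketch of that pattern (hypothetical names; it assumes the password has already been retrieved from Key Vault with a `SecretClient`, as in the earlier example), the secret is spliced into the connection string at runtime instead of being stored in configuration:

```java
// Hypothetical helper: builds a connection string at runtime from a
// password retrieved from Key Vault, so the credential never appears
// in code or configuration. The "{password}" placeholder is an assumption.
class ConnectionStrings {
    static String withPassword(String template, String password) {
        return template.replace("{password}", password);
    }
}
```

Keeping a placeholder in the template means the only place the real value ever exists is in memory, after the Key Vault call.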
+
+## Guidelines if you're handling tokens directly
+
+In some scenarios, you may want to acquire tokens for managed identities manually instead of using a built-in method to connect to the target resource. These scenarios include cases where there's no client library for the programming language you're using or for the target resource you're connecting to, or where you're connecting to resources that aren't running on Azure. When acquiring tokens manually, follow these guidelines:
+
+### Cache the tokens you acquire
+For performance and reliability, we recommend that your application cache tokens in local memory, or encrypted if you want to save them to disk. Managed identity tokens are valid for 24 hours, so there's no benefit in requesting new tokens regularly; the token-issuing endpoint returns a cached token anyway. If you exceed the request limits, you'll be rate limited and receive an HTTP 429 error.
+
+When you acquire a token, set your token cache to expire 5 minutes before the time given by the `expires_on` (or equivalent) property returned when the token is generated.
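
The two caching guidelines above can be sketched as a minimal in-memory cache (hypothetical names; the supplier stands in for whatever call your application makes to the token-issuing endpoint):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.Supplier;

// A fetched token together with its expires_on timestamp.
record CachedToken(String value, Instant expiresOn) {}

// Minimal in-memory token cache: reuses the cached token until 5 minutes
// before its expires_on time, then asks the supplier for a fresh one.
class TokenCache {
    private static final Duration REFRESH_SKEW = Duration.ofMinutes(5);
    private CachedToken cached;

    synchronized String getToken(Supplier<CachedToken> fetch, Instant now) {
        if (cached == null || now.isAfter(cached.expiresOn().minus(REFRESH_SKEW))) {
            cached = fetch.get(); // e.g., a request to the managed identity endpoint
        }
        return cached.value();
    }
}
```

Passing the clock in as a parameter keeps the cache testable; in production you'd pass `Instant.now()`.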
+
+### Token inspection
+Your application shouldn't rely on the contents of a token. The token's content is intended only for the audience (target resource) that is being accessed, not the client that's requesting the token. The token content may change or be encrypted in the future.
+
+### Don't expose or move tokens
+Tokens should be treated like credentials. Don't expose them to users or other services, such as logging or monitoring solutions. Tokens shouldn't be moved from the source resource that's using them, other than to authenticate against the target resource.
+
+## Next steps
+
+* [How to use managed identities for App Service and Azure Functions](../../app-service/overview-managed-identity.md)
+* [How to use managed identities with Azure Container Instances](../../container-instances/container-instances-managed-identity.md)
+* [Implementing managed identities for Microsoft Azure Resources](https://www.pluralsight.com/courses/microsoft-azure-resources-managed-identities-implementing)
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/overview.md
Operations on managed identities can be performed by using an Azure Resource Man
## Next steps
+* [Developer introduction and guidelines](overview-for-developers.md)
* [Use a Windows VM system-assigned managed identity to access Resource Manager](tutorial-windows-vm-access-arm.md) * [Use a Linux VM system-assigned managed identity to access Resource Manager](tutorial-linux-vm-access-arm.md) * [How to use managed identities for App Service and Azure Functions](../../app-service/overview-managed-identity.md)
active-directory Qs Configure Cli Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vm.md
To create an Azure VM with the system-assigned managed identity enabled, your ac
az group create --name myResourceGroup --location westus ```
-1. Create a VM using [az vm create](/cli/azure/vm/#az-vm-create). The following example creates a VM named *myVM* with a system-assigned managed identity, as requested by the `--assign-identity` parameter. The `--admin-username` and `--admin-password` parameters specify the administrative user name and password account for virtual machine sign-in. Update these values as appropriate for your environment:
+1. Create a VM using [az vm create](/cli/azure/vm/#az-vm-create). The following example creates a VM named *myVM* with a system-assigned managed identity, as requested by the `--assign-identity` parameter, with the specified `--role` and `--scope`. The `--admin-username` and `--admin-password` parameters specify the administrative user name and password account for virtual machine sign-in. Update these values as appropriate for your environment:
```azurecli-interactive
- az vm create --resource-group myResourceGroup --name myVM --image win2016datacenter --generate-ssh-keys --assign-identity --admin-username azureuser --admin-password myPassword12
+ az vm create --resource-group myResourceGroup --name myVM --image win2016datacenter --generate-ssh-keys --assign-identity --role contributor --scope mySubscription --admin-username azureuser --admin-password myPassword12
``` ### Enable system-assigned managed identity on an existing Azure VM
To assign a user-assigned identity to a VM during its creation, your account nee
} ```
-3. Create a VM using [az vm create](/cli/azure/vm/#az-vm-create). The following example creates a VM associated with the new user-assigned identity, as specified by the `--assign-identity` parameter. Be sure to replace the `<RESOURCE GROUP>`, `<VM NAME>`, `<USER NAME>`, `<PASSWORD>`, and `<USER ASSIGNED IDENTITY NAME>` parameter values with your own values.
+3. Create a VM using [az vm create](/cli/azure/vm/#az-vm-create). The following example creates a VM associated with the new user-assigned identity, as specified by the `--assign-identity` parameter, with the specified `--role` and `--scope`. Be sure to replace the `<RESOURCE GROUP>`, `<VM NAME>`, `<USER NAME>`, `<PASSWORD>`, `<USER ASSIGNED IDENTITY NAME>`, `<ROLE>`, and `<SUBSCRIPTION>` parameter values with your own values.
```azurecli-interactive
- az vm create --resource-group <RESOURCE GROUP> --name <VM NAME> --image UbuntuLTS --admin-username <USER NAME> --admin-password <PASSWORD> --assign-identity <USER ASSIGNED IDENTITY NAME>
+ az vm create --resource-group <RESOURCE GROUP> --name <VM NAME> --image UbuntuLTS --admin-username <USER NAME> --admin-password <PASSWORD> --assign-identity <USER ASSIGNED IDENTITY NAME> --role <ROLE> --scope <SUBSCRIPTION>
``` ### Assign a user-assigned managed identity to an existing Azure VM
active-directory Qs Configure Cli Windows Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vmss.md
To create a virtual machine scale set with the system-assigned managed identity
az group create --name myResourceGroup --location westus ```
-1. [Create](/cli/azure/vmss/#az-vmss-create) a virtual machine scale set. The following example creates a virtual machine scale set named *myVMSS* with a system-assigned managed identity, as requested by the `--assign-identity` parameter. The `--admin-username` and `--admin-password` parameters specify the administrative user name and password account for virtual machine sign-in. Update these values as appropriate for your environment:
+1. [Create](/cli/azure/vmss/#az-vmss-create) a virtual machine scale set. The following example creates a virtual machine scale set named *myVMSS* with a system-assigned managed identity, as requested by the `--assign-identity` parameter, with the specified `--role` and `--scope`. The `--admin-username` and `--admin-password` parameters specify the administrative user name and password account for virtual machine sign-in. Update these values as appropriate for your environment:
```azurecli-interactive
- az vmss create --resource-group myResourceGroup --name myVMSS --image win2016datacenter --upgrade-policy-mode automatic --custom-data cloud-init.txt --admin-username azureuser --admin-password myPassword12 --assign-identity --generate-ssh-keys --role contributor
+ az vmss create --resource-group myResourceGroup --name myVMSS --image win2016datacenter --upgrade-policy-mode automatic --custom-data cloud-init.txt --admin-username azureuser --admin-password myPassword12 --assign-identity --generate-ssh-keys --role contributor --scope mySubscription
``` ### Enable system-assigned managed identity on an existing Azure virtual machine scale set
This section walks you through creation of a virtual machine scale set and assig
} ```
-3. [Create](/cli/azure/vmss/#az-vmss-create) a virtual machine scale set. The following example creates a virtual machine scale set associated with the new user-assigned managed identity, as specified by the `--assign-identity` parameter. Be sure to replace the `<RESOURCE GROUP>`, `<VMSS NAME>`, `<USER NAME>`, `<PASSWORD>`, `<USER ASSIGNED IDENTITY>`, and `<ROLE>` parameter values with your own values.
+3. [Create](/cli/azure/vmss/#az-vmss-create) a virtual machine scale set. The following example creates a virtual machine scale set associated with the new user-assigned managed identity, as specified by the `--assign-identity` parameter, with the specified `--role` and `--scope`. Be sure to replace the `<RESOURCE GROUP>`, `<VMSS NAME>`, `<USER NAME>`, `<PASSWORD>`, `<USER ASSIGNED IDENTITY>`, `<ROLE>`, and `<SUBSCRIPTION>` parameter values with your own values.
```azurecli-interactive
- az vmss create --resource-group <RESOURCE GROUP> --name <VMSS NAME> --image UbuntuLTS --admin-username <USER NAME> --admin-password <PASSWORD> --assign-identity <USER ASSIGNED IDENTITY> --role <ROLE>
+ az vmss create --resource-group <RESOURCE GROUP> --name <VMSS NAME> --image UbuntuLTS --admin-username <USER NAME> --admin-password <PASSWORD> --assign-identity <USER ASSIGNED IDENTITY> --role <ROLE> --scope <SUBSCRIPTION>
``` ### Assign a user-assigned managed identity to an existing virtual machine scale set
active-directory Howto Use Azure Monitor Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-use-azure-monitor-workbooks.md
When working with workbooks, you can either start with an empty workbook, or use
There are: -- **Public templates** published to a [gallery](../../azure-monitor/visualize/workbooks-overview.md#gallery) that serve as a good starting point when you are just getting started with workbooks.
+- **Public templates** published to a [gallery](../../azure-monitor/visualize/workbooks-overview.md#the-gallery) that serve as a good starting point when you are just getting started with workbooks.
- **Private templates** when you start building your own workbooks and want to save one as a template to serve as the foundation for multiple workbooks in your tenant.
active-directory Cwt Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cwt-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with CWT'
+description: Learn how to configure single sign-on between Azure Active Directory and CWT.
++++++++ Last updated : 06/08/2022++
+# Tutorial: Azure AD SSO integration with CWT
+
+In this tutorial, you'll learn how to integrate CWT with Azure Active Directory (Azure AD). When you integrate CWT with Azure AD, you can:
+
+* Control in Azure AD who has access to CWT.
+* Enable your users to be automatically signed-in to CWT with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* CWT single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD single sign-on in a test environment.
+
+* CWT supports **IDP** initiated SSO.
+
+## Add CWT from the gallery
+
+To configure the integration of CWT into Azure AD, you need to add CWT from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **CWT** in the search box.
+1. Select **CWT** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for CWT
+
+Configure and test Azure AD SSO with CWT using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in CWT.
+
+To configure and test Azure AD SSO with CWT, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure CWT SSO](#configure-cwt-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create CWT test user](#create-cwt-test-user)** - to have a counterpart of B.Simon in CWT that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **CWT** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, the application is pre-configured and the necessary URLs are already pre-populated in Azure. Save the configuration by clicking the **Save** button.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up CWT** section, copy the appropriate URL(s) as per your requirement.
+
+ ![Screenshot shows to copy appropriate configuration U R L.](common/copy-configuration-urls.png "Metadata")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to CWT.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **CWT**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure CWT SSO
+
+To configure single sign-on on the **CWT** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [CWT support team](https://www.mycwt.com/traveler-help/). The support team uses this information to configure the SAML SSO connection properly on both sides.
+
+### Create CWT test user
+
+In this section, you create a user called Britta Simon in CWT. Work with [CWT support team](https://www.mycwt.com/traveler-help/) to add the users in the CWT platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the CWT for which you set up the SSO.
+
+* You can use Microsoft My Apps. When you click the CWT tile in the My Apps, you should be automatically signed in to the CWT for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+
+## Next steps
+
+Once you configure CWT you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Momenta Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/momenta-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Momenta | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Momenta'
description: Learn how to configure single sign-on between Azure Active Directory and Momenta.
Previously updated : 07/13/2020 Last updated : 06/15/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Momenta
+# Tutorial: Azure AD SSO integration with Momenta
In this tutorial, you'll learn how to integrate Momenta with Azure Active Directory (Azure AD). When you integrate Momenta with Azure AD, you can:
In this tutorial, you'll learn how to integrate Momenta with Azure Active Direct
* Enable your users to be automatically signed-in to Momenta with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * Momenta single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description
In this tutorial, you configure and test Azure AD SSO in a test environment.
* Momenta supports **SP and IDP** initiated SSO.
-* Once you configure Momenta you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real-time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
-
-## Adding Momenta from the gallery
+## Add Momenta from the gallery
To configure the integration of Momenta into Azure AD, you need to add Momenta from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **Momenta** in the search box. 1. Select **Momenta** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. - ## Configure and test Azure AD SSO for Momenta Configure and test Azure AD SSO with Momenta using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Momenta.
-To configure and test Azure AD SSO with Momenta, complete the following building blocks:
+To configure and test Azure AD SSO with Momenta, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with Momenta, complete the following building
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Momenta** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Momenta** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
-1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following steps:
a. In the **Identifier** text box, type a URL using the following pattern: `https://<SUBDOMAIN>.btsmomenta.com/sso/<CUSTOMID>-federationmetadata.xml`
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
- ![The Certificate download link](common/copy-metadataurl.png)
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **Momenta**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen. 1. In the **Add Assignment** dialog, click the **Assign** button.
In this section, you create a user called B.Simon in Momenta. Work with [Moment
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Momenta tile in the Access Panel, you should be automatically signed in to the Momenta for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+#### SP initiated:
-## Additional resources
+* Click on **Test this application** in the Azure portal. This will redirect to the Momenta Sign-on URL where you can initiate the login flow.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Go to the Momenta Sign-on URL directly and initiate the login flow from there.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+#### IDP initiated:
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Momenta for which you set up the SSO.
-- [Try Momenta with Azure AD](https://aad.portal.azure.com/)
+You can also use Microsoft My Apps to test the application in any mode. When you click the Momenta tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Momenta for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is session control in Microsoft Defender for Cloud Apps?](/cloud-app-security/proxy-intro-aad)
+## Next steps
-- [How to protect Momenta with advanced visibility and controls](/cloud-app-security/proxy-intro-aad)
+Once you configure Momenta, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Motus Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/motus-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Motus | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Motus'
description: Learn how to configure single sign-on between Azure Active Directory and Motus.
Previously updated : 11/19/2019 Last updated : 06/15/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Motus
+# Tutorial: Azure AD SSO integration with Motus
In this tutorial, you'll learn how to integrate Motus with Azure Active Directory (Azure AD). When you integrate Motus with Azure AD, you can:
In this tutorial, you'll learn how to integrate Motus with Azure Active Director
* Enable your users to be automatically signed-in to Motus with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * Motus single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment. --
-* Motus supports **SP and IDP** initiated SSO
+* Motus supports **SP and IDP** initiated SSO.
> [!NOTE] > Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-## Adding Motus from the gallery
+## Add Motus from the gallery
To configure the integration of Motus into Azure AD, you need to add Motus from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **Motus** in the search box. 1. Select **Motus** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. -
-## Configure and test Azure AD single sign-on for Motus
+## Configure and test Azure AD SSO for Motus
Configure and test Azure AD SSO with Motus using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Motus.
-To configure and test Azure AD SSO with Motus, complete the following building blocks:
+To configure and test Azure AD SSO with Motus, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with Motus, complete the following building b
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Motus** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Motus** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
-1. On the **Basic SAML Configuration** section the application is pre-configured in **IDP** initiated mode and the necessary URLs are already pre-populated with Azure. The user needs to save the configuration by clicking the **Save** button.
+1. On the **Basic SAML Configuration** section, the application is pre-configured in **IDP** initiated mode and the necessary URLs are already pre-populated with Azure. The user needs to save the configuration by clicking the **Save** button.
1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- In the **Sign-on URL** text box, type a URL:
+ In the **Sign-on URL** text box, type the URL:
`https://app.motus.com/` 1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
- ![The Certificate download link](common/metadataxml.png)
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
1. On the **Set up Motus** section, copy the appropriate URL(s) based on your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Metadata")
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **Motus**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen. 1. In the **Add Assignment** dialog, click the **Assign** button.
In this section, you create a user called B.Simon in Motus. Work with [Motus su
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to the Motus Sign-on URL, where you can initiate the login flow.
-When you click the Motus tile in the Access Panel, you should be automatically signed in to the Motus for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Go to Motus Sign-on URL directly and initiate the login flow from there.
-## Additional resources
+#### IDP initiated:
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Motus for which you set up the SSO.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the Motus tile in the My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode you are automatically signed in to the Motus for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try Motus with Azure AD](https://aad.portal.azure.com/)
+Once you configure Motus, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Myaryaka Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/myaryaka-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with MyAryaka | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with MyAryaka'
description: Learn how to configure single sign-on between Azure Active Directory and MyAryaka.
Previously updated : 11/19/2019 Last updated : 06/15/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with MyAryaka
+# Tutorial: Azure AD SSO integration with MyAryaka
In this tutorial, you'll learn how to integrate MyAryaka with Azure Active Directory (Azure AD). When you integrate MyAryaka with Azure AD, you can:
In this tutorial, you'll learn how to integrate MyAryaka with Azure Active Direc
* Enable your users to be automatically signed-in to MyAryaka with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * MyAryaka single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* MyAryaka supports **SP** initiated SSO
+* MyAryaka supports **SP** initiated SSO.
-## Adding MyAryaka from the gallery
+## Add MyAryaka from the gallery
To configure the integration of MyAryaka into Azure AD, you need to add MyAryaka from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **MyAryaka** in the search box. 1. Select **MyAryaka** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for MyAryaka
+## Configure and test Azure AD SSO for MyAryaka
Configure and test Azure AD SSO with MyAryaka using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in MyAryaka.
-To configure and test Azure AD SSO with MyAryaka, complete the following building blocks:
+To configure and test Azure AD SSO with MyAryaka, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- * **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- * **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
1. **[Configure MyAryaka SSO](#configure-myaryaka-sso)** - to configure the single sign-on settings on application side.
- * **[Create MyAryaka test user](#create-myaryaka-test-user)** - to have a counterpart of B.Simon in MyAryaka that is linked to the Azure AD representation of user.
+ 1. **[Create MyAryaka test user](#create-myaryaka-test-user)** - to have a counterpart of B.Simon in MyAryaka that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **MyAryaka** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **MyAryaka** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
-1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following steps:
- a. In the **Sign on URL** text box, use one of the following pattern:
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ `https://kso.aryaka.com/auth/realms/<CUSTOMERID>`
- ```https
- https://my.aryaka.com/
- https://kso.aryaka.com/auth/realms/<CUSTOMERID>
- ```
+ b. In the **Sign on URL** text box, type a URL using one of the following patterns:
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
- `https://kso.aryaka.com/auth/realms/<CUSTOMERID>`
+ | **Sign on URL** |
+ ||
+ | `https://my.aryaka.com/` |
+ | `https://kso.aryaka.com/auth/realms/<CUSTOMERID>` |
> [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [MyAryaka Client support team](mailto:support@aryaka.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [MyAryaka Client support team](mailto:support@aryaka.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
- ![The Certificate download link](common/copy-metadataurl.png)
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
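The **App Federation Metadata Url** points at an XML document describing the Azure AD identity provider. As a minimal sketch, you can parse such a document to confirm the entity ID and signing certificate before handing it to the application team; the sample metadata, tenant value, and certificate text below are placeholders, not real tenant data.

```python
# Sketch: extract the entity ID and signing certificate from a SAML
# federation metadata document (such as the one served from the
# App Federation Metadata Url). Sample values below are placeholders.
import xml.etree.ElementTree as ET

NS = {
    "md": "urn:oasis:names:tc:SAML:2.0:metadata",
    "ds": "http://www.w3.org/2000/09/xmldsig#",
}

def parse_federation_metadata(xml_text: str) -> dict:
    root = ET.fromstring(xml_text)
    cert = root.find(".//ds:X509Certificate", NS)
    return {
        "entity_id": root.get("entityID"),
        "signing_cert": cert.text.strip() if cert is not None else None,
    }

sample = """<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
    entityID="https://sts.windows.net/example-tenant/">
  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <KeyDescriptor use="signing">
      <KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
        <X509Data><X509Certificate>MIIC...base64...</X509Certificate></X509Data>
      </KeyInfo>
    </KeyDescriptor>
  </IDPSSODescriptor>
</EntityDescriptor>"""

info = parse_federation_metadata(sample)
print(info["entity_id"])  # prints the placeholder entityID above
```

In a real check you would fetch the copied metadata URL and pass its response body to `parse_federation_metadata` instead of the inline sample.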
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **MyAryaka**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen. 1. In the **Add Assignment** dialog, click the **Assign** button.
In this section, you create a user called B.Simon in MyAryaka. Work with [MyAry
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the MyAryaka tile in the Access Panel, you should be automatically signed in to the MyAryaka for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+In this section, you test your Azure AD single sign-on configuration with the following options.
-## Additional resources
+* Click on **Test this application** in Azure portal. This will redirect to the MyAryaka Sign-On URL, where you can initiate the login flow.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Go to MyAryaka Sign-On URL directly and initiate the login flow from there.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* You can use Microsoft My Apps. When you click the MyAryaka tile in the My Apps, you will be redirected to the MyAryaka Sign-On URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try MyAryaka with Azure AD](https://aad.portal.azure.com/)
+Once you configure MyAryaka, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Opal Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/opal-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Opal | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Opal'
description: Learn how to configure single sign-on between Azure Active Directory and Opal.
Previously updated : 10/24/2019 Last updated : 06/15/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Opal
+# Tutorial: Azure AD SSO integration with Opal
In this tutorial, you'll learn how to integrate Opal with Azure Active Directory (Azure AD). When you integrate Opal with Azure AD, you can:
In this tutorial, you'll learn how to integrate Opal with Azure Active Directory
* Enable your users to be automatically signed-in to Opal with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * Opal single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Opal supports **IDP** initiated SSO
+* Opal supports **IDP** initiated SSO.
> [!NOTE] > Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-## Adding Opal from the gallery
+## Add Opal from the gallery
To configure the integration of Opal into Azure AD, you need to add Opal from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **Opal** in the search box. 1. Select **Opal** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Opal
+## Configure and test Azure AD SSO for Opal
Configure and test Azure AD SSO with Opal using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Opal.
-To configure and test Azure AD SSO with Opal, complete the following building blocks:
+To configure and test Azure AD SSO with Opal, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with Opal, complete the following building bl
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Opal** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Opal** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
-1. On the **Set up single sign-on with SAML** page, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following steps:
- 1. In the **Identifier** text box, type a URL:
+ a. In the **Identifier** text box, type the value:
`Opal`
- 1. In the **Reply URL** text box, type a URL using the following pattern:
-
- `https://<subdomain>.ouropal.com/auth/saml/callback`
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<subdomain>.ouropal.com/auth/saml/callback`
- > [!NOTE]
- > The Reply URL value is not real. Update the value with the actual Reply URL. Contact [Opal Client support team](mailto:support@workwithopal.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > [!NOTE]
+ > The Reply URL value is not real. Update the value with the actual Reply URL. Contact [Opal Client support team](mailto:support@workwithopal.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
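Before saving, it can help to sanity-check that the Reply URL you received follows the documented pattern. The check below is an illustrative sketch, not part of the Azure portal; the regex and the sample subdomain `contoso` are assumptions.

```python
# Illustrative sketch: check that a Reply URL matches the
# https://<subdomain>.ouropal.com/auth/saml/callback pattern documented above.
# The subdomain "contoso" is a placeholder.
import re

REPLY_URL_RE = re.compile(r"^https://[a-z0-9-]+\.ouropal\.com/auth/saml/callback$")

def is_valid_opal_reply_url(url: str) -> bool:
    """Return True when the URL fits the documented Reply URL pattern."""
    return REPLY_URL_RE.match(url) is not None

print(is_valid_opal_reply_url("https://contoso.ouropal.com/auth/saml/callback"))
```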
1. The Opal application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
- ![image](common/edit-attribute.png)
+ ![Screenshot shows the image of Opal application.](common/edit-attribute.png "Image")
1. In addition to the above, the Opal application expects a few more attributes to be passed back in the SAML response, which are shown below. These attributes are also pre-populated, but you can review them per your requirement.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
- ![The Certificate download link](common/metadataxml.png)
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
1. On the **Set up Opal** section, copy the appropriate URL(s) based on your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Metadata")
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **Opal**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen. 1. In the **Add Assignment** dialog, click the **Assign** button.
In this section, you create a user called Britta Simon in Opal. Work with [Opal
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the Opal tile in the Access Panel, you should be automatically signed in to the Opal for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-
-## Additional resources
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Opal for which you set up the SSO.
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* You can use Microsoft My Apps. When you click the Opal tile in the My Apps, you should be automatically signed in to the Opal for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try Opal with Azure AD](https://aad.portal.azure.com/)
+Once you configure Opal, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Servicenow Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/servicenow-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the **Basic SAML Configuration** section, perform the following steps:
- a. In **Sign on URL**, enter a URL that uses the following pattern:
- `https://<instance-name>.service-now.com/login_with_sso.do?glide_sso_id=<sys_id of the sso configuration>`
+ a. In **Sign on URL**, enter one of the following URL patterns:
+
+ | Sign on URL|
+ |-|
+ | `https://<instancename>.service-now.com/navpage.do` |
+ | `https://<instance-name>.service-now.com/login_with_sso.do?glide_sso_id=<sys_id of the sso configuration>` |
> [!NOTE]
- > Please copy the sys_id value from step 5.d.iii in **Configure ServiceNow** section.
+ > Please copy the sys_id value from the **Configure ServiceNow** section, which is explained later in this tutorial.
b. In **Identifier (Entity ID)**, enter a URL that uses the following pattern: `https://<instance-name>.service-now.com`
Follow these steps to enable Azure AD SSO in the Azure portal.
| Reply URL| |-| | `https://<instancename>.service-now.com/navpage.do` |
- | `https://<instancename>.service-now.com/customer.do` |
+ | `https://<instancename>.service-now.com/customer.do` |
d. In **Logout URL**, enter a URL that uses the following pattern: `https://<instancename>.service-now.com/navpage.do`
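Put together, the SP-initiated sign-on URL is just the instance host plus the `glide_sso_id` query parameter. As a small illustrative helper (the instance name and sys_id below are placeholder values, not a real instance):

```python
# Hypothetical helper: build the ServiceNow SP-initiated sign-on URL from an
# instance name and the sys_id of the SSO configuration record.
# "dev12345" and the sys_id string below are placeholders.
def servicenow_sso_url(instance_name: str, sso_sys_id: str) -> str:
    return (
        f"https://{instance_name}.service-now.com/"
        f"login_with_sso.do?glide_sso_id={sso_sys_id}"
    )

print(servicenow_sso_url("dev12345", "0a1b2c3d4e5f"))
# https://dev12345.service-now.com/login_with_sso.do?glide_sso_id=0a1b2c3d4e5f
```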
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
4. In the **Basic SAML Configuration** section, perform the following steps:
- a. For **Sign on URL**, enter a URL that uses the following pattern:
- `https://<instance-name>.service-now.com/login_with_sso.do?glide_sso_id=<sys_id of the sso configuration>` please copy the sys_id value from step 5.d.iii in **Configure ServiceNow** section.
+ a. For **Sign on URL**, enter one of the following URL patterns:
+
+ | Sign on URL |
+ |--|
+ | `https://<instance-name>.service-now.com/login_with_sso.do?glide_sso_id=<sys_id of the sso configuration>` |
+ | `https://<instancename>.service-now.com/customer.do` |
b. For **Identifier (Entity ID)**, enter a URL that uses the following pattern: `https://<instance-name>.service-now.com`
- c. For **Reply URL**, enter one of the following URL:
+ c. For **Reply URL**, enter one of the following URL patterns:
| Reply URL | |--| | `https://<instancename>.service-now.com/navpage.do` | | `https://<instancename>.service-now.com/customer.do` |
d. In **Logout URL**, enter a URL that uses the following pattern: `https://<instancename>.service-now.com/navpage.do`
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
![Screenshot of System Definition section, with System Definition and Plugins highlighted](./media/servicenow-tutorial/tutorial-servicenow-03.png "Activate plugin")
- b. Search for **Integration - Multiple Provider single sign-on Installer**.
+ b. Search for **Integration - Multiple Provider single sign-on Installer**, then **Install** and **Activate** it.
![Screenshot of System Plugins page, with Integration - Multiple Provider Single Sign-On Installer highlighted](./media/servicenow-tutorial/tutorial-servicenow-04.png "Activate plugin")
- c. Select the plug-in. Right-click, and select **Activate/Upgrade**.
-
- ![Screenshot of plug-in right-click menu, with Activate/Upgrade highlighted](./media/servicenow-tutorial/tutorial-activate.png "Activate plugin")
-
- d. Select **Activate**.
-
- ![Screenshot of Activate Plugin dialog box, with Activate highlighted](./media/servicenow-tutorial/tutorial-activate-1.png "Activate plugin")
-
-1. In the left pane, search for the **Multi-Provider SSO** section from the search bar, and then select **Properties**.
+1. In the left pane, search for the **Multi-Provider SSO** section from the search bar, and then select **Properties** under **Administration**.
![Screenshot of Multi-Provider SSO section, with Multi-Provider SSO and Properties highlighted](./media/servicenow-tutorial/tutorial-servicenow-06.png "Configure app URL")

1. In the **Multiple Provider SSO Properties** dialog box, perform the following steps:
- ![Screenshot of Multiple Provider SSO Properties dialog box](./media/servicenow-tutorial/ic7694981.png "Configure app URL")
+ ![Screenshot of Multiple Provider SSO Properties dialog box](./media/servicenow-tutorial/multi-provider.png "Configure app URL")
* For **Enable multiple provider SSO**, select **Yes**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the **Identity Providers** dialog box, select **New**.
- ![Screenshot of Identity Providers dialog box, with New highlighted](./media/servicenow-tutorial/ic7694977.png "Configure single sign-on")
+ ![Screenshot of Identity Providers dialog box, with New highlighted](./media/servicenow-tutorial/new-button.png "Configure single sign-on")
1. In the **Identity Providers** dialog box, select **SAML**.
- ![Screenshot of Identity Providers dialog box, with SAML highlighted](./media/servicenow-tutorial/ic7694978.png "Configure single sign-on")
+ ![Screenshot of Identity Providers dialog box, with SAML highlighted](./media/servicenow-tutorial/kind-of-sso.png "Configure single sign-on")
1. In **Import Identity Provider Metadata**, perform the following steps:
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
![Screenshot of Identity Provider](./media/servicenow-tutorial/identity-provider.png "Configure single sign-on")
- a. For **Name**, enter a name for your configuration (for example, **Microsoft Azure Federated single sign-on**).
+ a. Right-click the grey bar at the top of the screen, select **Copy sys_id**, and use this value for the **Sign on URL** in the **Basic SAML Configuration** section.
+
+ b. For **Name**, enter a name for your configuration (for example, **Microsoft Azure Federated single sign-on**).
- b. Copy the **ServiceNow Homepage** value. Paste it in **Sign-on URL** in the **ServiceNow Basic SAML Configuration** section of the Azure portal.
+ c. Copy the **ServiceNow Homepage** value. Paste it in **Sign-on URL** in the **ServiceNow Basic SAML Configuration** section of the Azure portal.
> [!NOTE]
> The ServiceNow instance homepage is a concatenation of your **ServiceNow tenant URL** and **/navpage.do** (for example: `https://fabrikam.service-now.com/navpage.do`).
- c. Copy the **Entity ID / Issuer** value. Paste it in **Identifier** in **ServiceNow Basic SAML Configuration** section of the Azure portal.
+ d. Copy the **Entity ID / Issuer** value. Paste it in **Identifier** in **ServiceNow Basic SAML Configuration** section of the Azure portal.
- d. Confirm that **NameID Policy** is set to `urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified` value.
+ e. Confirm that **NameID Policy** is set to the `urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified` value.
- e. Select **Advanced**. In **User Field**, enter **email**.
+ f. Select **Advanced**. In **User Field**, enter **email**.
> [!NOTE]
> You can configure Azure AD to emit either the Azure AD user ID (user principal name) or the email address as the unique identifier in the SAML token. Do this by going to the **ServiceNow** > **Attributes** > **Single sign-on** section of the Azure portal, and mapping the desired field to the **nameidentifier** attribute. The value stored for the selected attribute in Azure AD (for example, user principal name) must match the value stored in ServiceNow for the entered field (for example, user_name).
active-directory Yodeck Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/yodeck-tutorial.md
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In a different web browser window, sign in to your Yodeck company site as an administrator.
-1. Click on **User Settings** option form the top right corner of the page and select **Account Settings**.
+1. Click on **User Settings** option from the top right corner of the page and select **Account Settings**.
![Screenshot shows with Account Settings selected for the user.](./media/yodeck-tutorial/account.png)
aks Limit Egress Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/limit-egress-traffic.md
description: Learn what ports and addresses are required to control egress traff
Previously updated : 03/7/2022 Last updated : 06/15/2022 #Customer intent: As a cluster operator, I want to restrict egress traffic for nodes to only access defined ports and addresses and improve cluster security.
The following FQDN / application rules are required for AKS clusters that have M
| **`*.ods.opinsights.azure.com`** | **`HTTPS:443`** | Required for Microsoft Defender to upload security events to the cloud.|
| **`*.oms.opinsights.azure.com`** | **`HTTPS:443`** | Required to authenticate with Log Analytics workspaces.|
+### CSI Secret Store
+
+#### Required FQDN / application rules
+
+The following FQDN / application rules are required for AKS clusters that have CSI Secret Store enabled.
+
+| FQDN | Port | Use |
+|--|--|-|
+| **`vault.azure.net`** | **`HTTPS:443`** | Required for CSI Secret Store add-on pods to talk to the Azure Key Vault server.|
### Azure Monitor for containers

There are two options to provide access to Azure Monitor for containers: you may allow the Azure Monitor [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) **or** provide access to the required FQDN/application rules.
aks Load Balancer Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/load-balancer-standard.md
This document covers the integration with Public Load balancer. For internal Loa
## Before you begin
-Azure Load Balancer is available in two SKUs - *Basic* and *Standard*. By default, *Standard* SKU is used when you create an AKS cluster. Use the *Standard* SKU to have access to added functionality, such as a larger backend pool, [**multiple node pools**](use-multiple-node-pools.md), and [**Availability Zones**](availability-zones.md). It's the recommended Load Balancer SKU for AKS.
+Azure Load Balancer is available in two SKUs - *Basic* and *Standard*. By default, *Standard* SKU is used when you create an AKS cluster. The *Standard* SKU gives you access to added functionality, such as a larger backend pool, [multiple node pools](use-multiple-node-pools.md), and [Availability Zones](availability-zones.md), and it's [secure by default][azure-lb]. It's the recommended Load Balancer SKU for AKS.
For more information on the *Basic* and *Standard* SKUs, see [Azure load balancer SKU comparison][azure-lb-comparison].
Learn more about using Internal Load Balancer for Inbound traffic at the [AKS In
[az-network-public-ip-show]: /cli/azure/network/public-ip#az_network_public_ip_show
[az-network-public-ip-prefix-show]: /cli/azure/network/public-ip/prefix#az_network_public_ip_prefix_show
[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create
-[azure-lb]: ../load-balancer/load-balancer-overview.md
+[azure-lb]: ../load-balancer/load-balancer-overview.md#securebydefault
[azure-lb-comparison]: ../load-balancer/skus.md
[azure-lb-outbound-rules]: ../load-balancer/load-balancer-outbound-connections.md#outboundrules
[azure-lb-outbound-connections]: ../load-balancer/load-balancer-outbound-connections.md
aks Spot Node Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/spot-node-pool.md
Title: Add a spot node pool to an Azure Kubernetes Service (AKS) cluster
-description: Learn how to add a spot node pool to an Azure Kubernetes Service (AKS) cluster.
+ Title: Add an Azure Spot node pool to an Azure Kubernetes Service (AKS) cluster
+description: Learn how to add an Azure Spot node pool to an Azure Kubernetes Service (AKS) cluster.
Last updated 01/21/2022
-#Customer intent: As a cluster operator or developer, I want to learn how to add a spot node pool to an AKS Cluster.
+#Customer intent: As a cluster operator or developer, I want to learn how to add an Azure Spot node pool to an AKS Cluster.
-# Add a spot node pool to an Azure Kubernetes Service (AKS) cluster
+# Add an Azure Spot node pool to an Azure Kubernetes Service (AKS) cluster
-A spot node pool is a node pool backed by a [spot virtual machine scale set][vmss-spot]. Using spot VMs for nodes with your AKS cluster allows you to take advantage of unutilized capacity in Azure at a significant cost savings. The amount of available unutilized capacity will vary based on many factors, including node size, region, and time of day.
+A Spot node pool is a node pool backed by an [Azure Spot virtual machine scale set][vmss-spot]. Using Spot VMs for nodes with your AKS cluster allows you to take advantage of unutilized capacity in Azure at significant cost savings. The amount of available unutilized capacity will vary based on many factors, including node size, region, and time of day.
-When deploying a spot node pool, Azure will allocate the spot nodes if there's capacity available. But there's no SLA for the spot nodes. A spot scale set that backs the spot node pool is deployed in a single fault domain and offers no high availability guarantees. At any time when Azure needs the capacity back, the Azure infrastructure will evict spot nodes.
+When you deploy a Spot node pool, Azure will allocate the Spot nodes if there's capacity available. There's no SLA for the Spot nodes. A Spot scale set that backs the Spot node pool is deployed in a single fault domain and offers no high availability guarantees. At any time when Azure needs the capacity back, the Azure infrastructure will evict Spot nodes.
-Spot nodes are great for workloads that can handle interruptions, early terminations, or evictions. For example, workloads such as batch processing jobs, development and testing environments, and large compute workloads may be good candidates to be scheduled on a spot node pool.
+Spot nodes are great for workloads that can handle interruptions, early terminations, or evictions. For example, workloads such as batch processing jobs, development and testing environments, and large compute workloads may be good candidates to schedule on a Spot node pool.
-In this article, you add a secondary spot node pool to an existing Azure Kubernetes Service (AKS) cluster.
+In this article, you add a secondary Spot node pool to an existing Azure Kubernetes Service (AKS) cluster.
This article assumes a basic understanding of Kubernetes and Azure Load Balancer concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
If you don't have an Azure subscription, create a [free account](https://azure.m
## Before you begin
-When you create a cluster to use a spot node pool, that cluster must also use Virtual Machine Scale Sets for node pools and the *Standard* SKU load balancer. You must also add an additional node pool after you create your cluster to use a spot node pool. Adding an additional node pool is covered in a later step.
+When you create a cluster to use a Spot node pool, that cluster must use Virtual Machine Scale Sets for node pools and the *Standard* SKU load balancer. You must also add another node pool after you create your cluster, which is covered in a later step.
-This article requires that you are running the Azure CLI version 2.14 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+This article requires that you're running the Azure CLI version 2.14 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
### Limitations
-The following limitations apply when you create and manage AKS clusters with a spot node pool:
+The following limitations apply when you create and manage AKS clusters with a Spot node pool:
-* A spot node pool can't be the cluster's default node pool. A spot node pool can only be used for a secondary pool.
-* You can't upgrade a spot node pool since spot node pools can't guarantee cordon and drain. You must replace your existing spot node pool with a new one to do operations such as upgrading the Kubernetes version. To replace a spot node pool, create a new spot node pool with a different version of Kubernetes, wait until its status is *Ready*, then remove the old node pool.
-* The control plane and node pools cannot be upgraded at the same time. You must upgrade them separately or remove the spot node pool to upgrade the control plane and remaining node pools at the same time.
-* A spot node pool must use Virtual Machine Scale Sets.
-* You cannot change ScaleSetPriority or SpotMaxPrice after creation.
+* A Spot node pool can't be the cluster's default node pool. A Spot node pool can only be used for a secondary pool.
+* The control plane and node pools can't be upgraded at the same time. You must upgrade them separately or remove the Spot node pool to upgrade the control plane and remaining node pools at the same time.
+* A Spot node pool must use Virtual Machine Scale Sets.
+* You can't change ScaleSetPriority or SpotMaxPrice after creation.
* When setting SpotMaxPrice, the value must be -1 or a positive value with up to five decimal places.
-* A spot node pool will have the label *kubernetes.azure.com/scalesetpriority:spot*, the taint *kubernetes.azure.com/scalesetpriority=spot:NoSchedule*, and system pods will have anti-affinity.
-* You must add a [corresponding toleration][spot-toleration] and affinity to schedule workloads on a spot node pool.
+* A Spot node pool will have the label *kubernetes.azure.com/scalesetpriority:spot*, the taint *kubernetes.azure.com/scalesetpriority=spot:NoSchedule*, and system pods will have anti-affinity.
+* You must add a [corresponding toleration][spot-toleration] and affinity to schedule workloads on a Spot node pool.
-## Add a spot node pool to an AKS cluster
+## Add a Spot node pool to an AKS cluster
-You must add a spot node pool to an existing cluster that has multiple node pools enabled. More details on creating an AKS cluster with multiple node pools are available [here][use-multiple-node-pools].
+You must add a Spot node pool to an existing cluster that has multiple node pools enabled. For more details on creating an AKS cluster with multiple node pools, see [use multiple node pools][use-multiple-node-pools].
+
+Create a node pool using the [az aks nodepool add][az-aks-nodepool-add] command:
-Create a node pool using the [az aks nodepool add][az-aks-nodepool-add].
```azurecli-interactive
az aks nodepool add \
    --resource-group myResourceGroup \
az aks nodepool add \
    --no-wait
```
-By default, you create a node pool with a *priority* of *Regular* in your AKS cluster when you create a cluster with multiple node pools. The above command adds an auxiliary node pool to an existing AKS cluster with a *priority* of *Spot*. The *priority* of *Spot* makes the node pool a spot node pool. The *eviction-policy* parameter is set to *Delete* in the above example, which is the default value. When you set the [eviction policy][eviction-policy] to *Delete*, nodes in the underlying scale set of the node pool are deleted when they're evicted. You can also set the eviction policy to *Deallocate*. When you set the eviction policy to *Deallocate*, nodes in the underlying scale set are set to the stopped-deallocated state upon eviction. Nodes in the stopped-deallocated state count against your compute quota and can cause issues with cluster scaling or upgrading. The *priority* and *eviction-policy* values can only be set during node pool creation. Those values can't be updated later.
+By default, you create a node pool with a *priority* of *Regular* in your AKS cluster when you create a cluster with multiple node pools. The above command adds an auxiliary node pool to an existing AKS cluster with a *priority* of *Spot*. The *priority* of *Spot* makes the node pool a Spot node pool. The *eviction-policy* parameter is set to *Delete* in the above example, which is the default value. When you set the [eviction policy][eviction-policy] to *Delete*, nodes in the underlying scale set of the node pool are deleted when they're evicted. You can also set the eviction policy to *Deallocate*. When you set the eviction policy to *Deallocate*, nodes in the underlying scale set are set to the stopped-deallocated state upon eviction. Nodes in the stopped-deallocated state count against your compute quota and can cause issues with cluster scaling or upgrading. The *priority* and *eviction-policy* values can only be set during node pool creation. Those values can't be updated later.
-The command also enables the [cluster autoscaler][cluster-autoscaler], which is recommended to use with spot node pools. Based on the workloads running in your cluster, the cluster autoscaler scales up and scales down the number of nodes in the node pool. For spot node pools, the cluster autoscaler will scale up the number of nodes after an eviction if additional nodes are still needed. If you change the maximum number of nodes a node pool can have, you also need to adjust the `maxCount` value associated with the cluster autoscaler. If you do not use a cluster autoscaler, upon eviction, the spot pool will eventually decrease to zero and require a manual operation to receive any additional spot nodes.
+The command also enables the [cluster autoscaler][cluster-autoscaler], which is recommended to use with Spot node pools. Based on the workloads running in your cluster, the cluster autoscaler scales up and scales down the number of nodes in the node pool. For Spot node pools, the cluster autoscaler will scale up the number of nodes after an eviction if more nodes are still needed. If you change the maximum number of nodes a node pool can have, you also need to adjust the `maxCount` value associated with the cluster autoscaler. If you don't use a cluster autoscaler, upon eviction, the Spot pool will eventually decrease to zero and require a manual operation to receive any additional Spot nodes.
-> [!Important]
-> Only schedule workloads on spot node pools that can handle interruptions, such as batch processing jobs and testing environments. It is recommended that you set up [taints and tolerations][taints-tolerations] on your spot node pool to ensure that only workloads that can handle node evictions are scheduled on a spot node pool. For example, the above command by default adds a taint of *kubernetes.azure.com/scalesetpriority=spot:NoSchedule* so only pods with a corresponding toleration are scheduled on this node.
+> [!IMPORTANT]
+> Only schedule workloads on Spot node pools that can handle interruptions, such as batch processing jobs and testing environments. It is recommended that you set up [taints and tolerations][taints-tolerations] on your Spot node pool to ensure that only workloads that can handle node evictions are scheduled on a Spot node pool. For example, the above command by default adds a taint of *kubernetes.azure.com/scalesetpriority=spot:NoSchedule* so only pods with a corresponding toleration are scheduled on this node.
-## Verify the spot node pool
+## Verify the Spot node pool
-To verify your node pool has been added as a spot node pool:
+To verify your node pool has been added as a Spot node pool:
```azurecli
az aks nodepool show --resource-group myResourceGroup --cluster-name myAKSCluster --name spotnodepool
```
az aks nodepool show --resource-group myResourceGroup --cluster-name myAKSCluste
Confirm *scaleSetPriority* is *Spot*.
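To confirm the field from a script, you could parse the JSON the command returns. A hedged sketch using a hard-coded sample of the output shape (the real values come from `az aks nodepool show`):

```shell
# Sample of the JSON shape returned by `az aks nodepool show`; the values
# here are placeholders -- in practice, pipe the real command output instead.
sample_output='{"name": "spotnodepool", "scaleSetPriority": "Spot", "scaleSetEvictionPolicy": "Delete"}'

# Extract scaleSetPriority from the JSON (jq would work equally well).
priority=$(printf '%s' "$sample_output" \
  | python3 -c 'import json, sys; print(json.load(sys.stdin)["scaleSetPriority"])')

echo "$priority"
```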
-To schedule a pod to run on a spot node, add a toleration and node affinity that corresponds to the taint applied to your spot node. The following example shows a portion of a yaml file that defines a toleration that corresponds to the *kubernetes.azure.com/scalesetpriority=spot:NoSchedule* taint and a node affinity that corresponds to the *kubernetes.azure.com/scalesetpriority=spot* label used in the previous step.
+To schedule a pod to run on a Spot node, add a toleration and node affinity that corresponds to the taint applied to your Spot node. The following example shows a portion of a yaml file that defines a toleration that corresponds to the *kubernetes.azure.com/scalesetpriority=spot:NoSchedule* taint and a node affinity that corresponds to the *kubernetes.azure.com/scalesetpriority=spot* label used in the previous step.
```yaml
spec:
```
spec:
When a pod with this toleration and node affinity is deployed, Kubernetes will successfully schedule the pod on the nodes with the taint and label applied.
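For reference, a complete pod-spec fragment with that toleration and node affinity might look like the following sketch. The container name and image are placeholders; only the toleration key, value, and effect, and the affinity label, come from the text above:

```yaml
spec:
  containers:
  - name: app          # placeholder container name
    image: mcr.microsoft.com/oss/nginx/nginx:1.21.4   # placeholder image
  tolerations:
  - key: "kubernetes.azure.com/scalesetpriority"
    operator: "Equal"
    value: "spot"
    effect: "NoSchedule"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: "kubernetes.azure.com/scalesetpriority"
            operator: In
            values:
            - "spot"
```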
-## Max price for a spot pool
-[Pricing for spot instances is variable][pricing-spot], based on region and SKU. For more information, see pricing for [Linux][pricing-linux] and [Windows][pricing-windows].
+## Upgrade a Spot node pool
+
+Upgrading Spot node pools was previously unsupported, but is now an available operation. When upgrading a Spot node pool, AKS internally issues a cordon and an eviction notice, but no drain is applied. There are no surge nodes available for Spot node pool upgrades. Outside of these changes, behavior when upgrading Spot node pools is consistent with other node pool types.
+
+For more information on upgrading, see [Upgrade an AKS cluster][upgrade-cluster] and the Azure CLI command [az aks upgrade][az-aks-upgrade].
+
+## Max price for a Spot pool
+
+[Pricing for Spot instances is variable][pricing-spot], based on region and SKU. For more information, see pricing for [Linux][pricing-linux] and [Windows][pricing-windows].
-With variable pricing, you have option to set a max price, in US dollars (USD), using up to 5 decimal places. For example, the value *0.98765* would be a max price of $0.98765 USD per hour. If you set the max price to *-1*, the instance won't be evicted based on price. The price for the instance will be the current price for Spot or the price for a standard instance, whichever is less, as long as there is capacity and quota available.
+With variable pricing, you have the option to set a max price, in US dollars (USD), using up to five decimal places. For example, the value *0.98765* would be a max price of $0.98765 USD per hour. If you set the max price to *-1*, the instance won't be evicted based on price. The price for the instance will be the current price for Spot or the price for a standard instance, whichever is less, as long as there's capacity and quota available.
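The format rule for max price can be checked mechanically. A small shell sketch (the helper name is made up for illustration, and the regex only covers the "-1 or a positive value with up to five decimal places" rule described above):

```shell
# Hypothetical helper: succeeds when a value is -1 or a positive number
# with at most five decimal places, per the rule described above.
is_valid_spot_max_price() {
  [ "$1" = "-1" ] && return 0
  printf '%s' "$1" | grep -Eq '^[0-9]+(\.[0-9]{1,5})?$'
}

is_valid_spot_max_price 0.98765 && echo "0.98765 is valid"
is_valid_spot_max_price 0.987654 || echo "0.987654 has too many decimal places"
```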
## Next steps
-In this article, you learned how to add a spot node pool to an AKS cluster. For more information about how to control pods across node pools, see [Best practices for advanced scheduler features in AKS][operator-best-practices-advanced-scheduler].
+In this article, you learned how to add a Spot node pool to an AKS cluster. For more information about how to control pods across node pools, see [Best practices for advanced scheduler features in AKS][operator-best-practices-advanced-scheduler].
<!-- LINKS - External --> [kubernetes-services]: https://kubernetes.io/docs/concepts/services-networking/service/
In this article, you learned how to add a spot node pool to an AKS cluster. For
[taints-tolerations]: operator-best-practices-advanced-scheduler.md#provide-dedicated-nodes-using-taints-and-tolerations [use-multiple-node-pools]: use-multiple-node-pools.md [vmss-spot]: ../virtual-machine-scale-sets/use-spot.md
+[upgrade-cluster]: upgrade-cluster.md
+[az-aks-upgrade]: /cli/azure/aks#az_aks_upgrade
applied-ai-services Form Recognizer Container Install Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-install-run.md
http {
* Gather a set of at least six forms of the same type. You'll use this data to train the model and test a form. You can use a [sample data set](https://go.microsoft.com/fwlink/?linkid=2090451) (download and extract *sample_data.zip*). Download the training files to the **shared** folder you created above.
-* If you want to label your data, download the [Form Recognizer Sample Labeling tool for Windows](https://github.com/microsoft/OCR-Form-Tools/releases/tag/v2.1-ga). The download will import the labeling tool .exe file that you'll use to label the data present on your local file system. You can ignore any warnings that occur during the download process.
+* If you want to label your data, download the [Form Recognizer Sample Labeling tool for Windows](https://github.com/microsoft/OCR-Form-Tools/releases). The download will import the labeling tool .exe file that you'll use to label the data present on your local file system. You can ignore any warnings that occur during the download process.
#### Create a new Sample Labeling tool project
availability-zones Az Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/az-region.md
In the Product Catalog, always-available services are listed as "non-regional" s
| [Azure Route Server](../route-server/route-server-faq.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | Azure Stream Analytics | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [SQL Server on Azure Virtual Machines](/azure/azure-sql/database/high-availability-sla) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
-| Azure Storage:ΓÇ»[Files Storage](../storage/files/storage-files-planning.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| Azure Storage:ΓÇ»[Files Storage](migrate-storage.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| [Azure Virtual WAN](../virtual-wan/virtual-wan-faq.md#how-are-availability-zones-and-resiliency-handled-in-virtual-wan) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure Web Application Firewall](../firewall/deploy-availability-zone-powershell.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Power BI Embedded](/power-bi/admin/service-admin-failover#what-does-high-availability) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
-| Virtual Machines:ΓÇ»[Azure Dedicated Host](../virtual-machines/windows/create-powershell-availability-zone.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
-| Virtual Machines:ΓÇ»[Ddsv4-Series](../virtual-machines/windows/create-powershell-availability-zone.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
-| Virtual Machines:ΓÇ»[Ddv4-Series](../virtual-machines/windows/create-powershell-availability-zone.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
-| Virtual Machines:ΓÇ»[Dsv4-Series](../virtual-machines/windows/create-powershell-availability-zone.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
-| Virtual Machines:ΓÇ»[Dv4-Series](../virtual-machines/windows/create-powershell-availability-zone.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
-| Virtual Machines:ΓÇ»[Edsv4-Series](../virtual-machines/windows/create-powershell-availability-zone.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
-| Virtual Machines:ΓÇ»[Edv4-Series](../virtual-machines/windows/create-powershell-availability-zone.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
-| Virtual Machines:ΓÇ»[Esv4-Series](../virtual-machines/windows/create-powershell-availability-zone.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
-| Virtual Machines:ΓÇ»[Ev4-Series](../virtual-machines/windows/create-powershell-availability-zone.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
-| Virtual Machines:ΓÇ»[Fsv2-Series](../virtual-machines/windows/create-powershell-availability-zone.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
-| Virtual Machines:ΓÇ»[M-Series](../virtual-machines/windows/create-powershell-availability-zone.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines:ΓÇ»[Azure Dedicated Host](migrate-vm.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines:ΓÇ»[Ddsv4-Series](migrate-vm.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines:ΓÇ»[Ddv4-Series](migrate-vm.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines:ΓÇ»[Dsv4-Series](migrate-vm.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines:ΓÇ»[Dv4-Series](migrate-vm.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines:ΓÇ»[Edsv4-Series](migrate-vm.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines:ΓÇ»[Edv4-Series](migrate-vm.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines:ΓÇ»[Esv4-Series](migrate-vm.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines:ΓÇ»[Ev4-Series](migrate-vm.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines:ΓÇ»[Fsv2-Series](migrate-vm.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines:ΓÇ»[M-Series](migrate-vm.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
| Virtual WAN:ΓÇ»[Azure ExpressRoute](../virtual-wan/virtual-wan-faq.md#how-are-availability-zones-and-resiliency-handled-in-virtual-wan) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | Virtual WAN:ΓÇ»[Point-to-site VPN Gateway](../vpn-gateway/about-zone-redundant-vnet-gateways.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | Virtual WAN:ΓÇ»[Site-to-site VPN Gateway](../vpn-gateway/about-zone-redundant-vnet-gateways.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
azure-arc Create Complete Managed Instance Directly Connected https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-complete-managed-instance-directly-connected.md
NAME STATE
<namespace>   Ready
```
-## Create an Azure Arc-enabled SQL Managed Instance
+## Create an instance of Azure Arc-enabled SQL Managed Instance
1. In the portal, locate the resource group. 1. In the resource group, select **Create**.
azure-arc Upgrade Data Controller Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-data-controller-direct-cli.md
Previously updated : 05/24/2022 Last updated : 05/27/2022
This article describes how to upgrade a directly connected Azure Arc-enabled data controller using the Azure CLI (`az`).
-During a data controller upgrade, portions of the data control plane such as Custom Resource Definitions (CRDs) and containers may be upgraded. An upgrade of the data controller will not cause downtime for the data services (SQL Managed Instance or PostgreSQL Hyperscale server).
+During a data controller upgrade, portions of the data control plane such as Custom Resource Definitions (CRDs) and containers may be upgraded. An upgrade of the data controller won't cause downtime for the data services (SQL Managed Instance or PostgreSQL server).
## Prerequisites
-You will need a directly connected data controller with the imageTag v1.0.0_2021-07-30 or later.
+You'll need a directly connected data controller with the imageTag v1.0.0_2021-07-30 or later.
To check the version, run:
kubectl get datacontrollers -n <namespace> -o custom-columns=BUILD:.spec.docker.
## Install tools
-Before you can proceed with the tasks in this article you need to install:
+Before you can proceed with the tasks in this article, you need to install:
-- The [Azure CLI (az)](/cli/azure/install-azure-cli)
+- The [Azure CLI (`az`)](/cli/azure/install-azure-cli)
- The [`arcdata` extension for Azure CLI](install-arcdata-extension.md)

[!INCLUDE [azure-arc-angle-bracket-example](../../../includes/azure-arc-angle-bracket-example.md)]
+The `arcdata` extension version and the image version are related. Check that you have the correct `arcdata` extension version that corresponds to the image version you want to upgrade to in the [Version log](version-log.md).
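To find the matching entry in the Version log, you can read the release off the image tag. A minimal shell sketch, assuming the `<release>_<date>` tag format used by the examples in this article (the tag value is illustrative, not an official mapping):

```shell
# Extract the release prefix from an Arc data services image tag.
# Tag format seen in this article: <release>_<date>, e.g. v1.7.0_2022-05-24.
image_tag="v1.7.0_2022-05-24"

# Strip everything from the first underscore onward.
release="${image_tag%%_*}"

echo "$release"   # -> v1.7.0, the release to look up in the Version log
```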
+## View available images and choose a version

Pull the list of available images for the data controller with the following command:

```azurecli
- az arcdata dc list-upgrades --k8s-namespace <custom location>
+ az arcdata dc list-upgrades --k8s-namespace <namespace>
```

The command above returns output like the following example:
This section shows how to upgrade a directly connected data controller.
> To upgrade, delete all non-GA database instances. You can find the list of generally available
> and preview services in the [Release Notes](./release-notes.md).
-### Upgrade
+### Authenticate
-You will need to connect and authenticate to a Kubernetes cluster and have an existing Kubernetes context selected prior to beginning the upgrade of the Azure Arc data controller.
+You'll need to connect and authenticate to a Kubernetes cluster and have an existing Kubernetes context selected prior to beginning the upgrade of the Azure Arc data controller.
```kubectl
kubectl config use-context <Kubernetes cluster name>
```
-You can perform a dry run first. The dry run validates the registry exists, the version schema, and the private repository authorization token (if used). To perform a dry run, use the `--dry-run` parameter in the `az arcdata dc upgrade` command. For example:
+### Upgrade Arc data controller extension
+
+Upgrade the Arc data controller extension first.
+
+Retrieve the name of your extension and its version:
+
+1. Go to the Azure portal.
+1. Select **Overview** for your Azure Arc-enabled Kubernetes cluster.
+1. Select the **Extensions** tab on the left.
+
+Alternatively, you can use the `az` CLI to get the name of your extension and its version by running:
```azurecli
-az arcdata dc upgrade --resource-group <resource group> --name <data controller name> --desired-version <version> [--no-wait]
+az k8s-extension list --resource-group <resource-group> --cluster-name <connected cluster name> --cluster-type connectedClusters
```
-The output for the preceding command is:
+Example:
-```output
-Preparing to upgrade dc arcdc in namespace arc to version <version-tag>.
-Preparing to upgrade dc arcdc in namespace arc to version <version-tag>.
-****Dry Run****
-Arcdata Control Plane would be upgraded to: <version-tag>
+```azurecli
+az k8s-extension list --resource-group rg-arcds --cluster-name aks-arc --cluster-type connectedClusters
```
-Upgrade the data controller by running an upgrade on the Arc data controller extension first. This can be done as follows:
+After you retrieve the extension name and its version, upgrade the extension.
```azurecli
az k8s-extension update --resource-group <resource-group> --cluster-name <connected cluster name> --cluster-type connectedClusters --name <name of extension> --version <extension version> --release-train stable --config systemDefaultValues.image="<registry>/<repository>/arc-bootstrapper:<imageTag>"
```
-You can retrieve the name of your extension and its version, by browsing to the Overview blade of your Arc enabled kubernetes cluster and select Extensions tab on the left. You can also retrieve the name of your extension and its version running `az` CLI As follows:
+
+Example:
```azurecli
-az k8s-extension list --resource-group <resource-group> --cluster-name <connected cluster name> --cluster-type connectedClusters
+az k8s-extension update --resource-group rg-arcds --cluster-name aks-arc --cluster-type connectedClusters --name aks-arc-ext --version 1.2.19581002 --release-train stable --config systemDefaultValues.image="mcr.microsoft.com/arcdata/arc-bootstrapper:v1.7.0_2022-05-24"
```
-For example:
+### Upgrade data controller
+
+You can perform a dry run first. The dry run validates the registry exists, the version schema, and the private repository authorization token (if used). To perform a dry run, use the `--dry-run` parameter in the `az arcdata dc upgrade` command. For example:
```azurecli
-az k8s-extension list --resource-group myresource-group --cluster-name myconnected-cluster --cluster-type connectedClusters
+az arcdata dc upgrade --resource-group <resource group> --name <data controller name> --desired-version <version> --dry-run [--no-wait]
```
-After retrieving the Arc data controller extension name and its version, the extension can be upgraded as follows:
-
-For example:
+The output for the preceding command is:
-```azurecli
-az k8s-extension update --resource-group myresource-group --cluster-name myconnected-cluster --cluster-type connectedClusters --name arcdc-ext --version 1.2.19481002 --release-train stable --config systemDefaultValues.image="mcr.microsoft.com/arcdata/arc-bootstrapper:v1.6.0_2022-05-02"
+```output
+Preparing to upgrade dc arcdc in namespace arc to version <version-tag>.
+****Dry Run****
+Arcdata Control Plane would be upgraded to: <version-tag>
```
-Once the extension is upgraded, run the `az arcdata dc upgrade` command to upgrade the data controller. If you don't specify a target image, the data controller will be upgraded to the latest version.
+After the Arc data controller extension has been upgraded, run the `az arcdata dc upgrade` command, specifying the image tag with `--desired-version`.
```azurecli
-az arcdata dc upgrade --resource-group <resource group> --name <data controller name> [--no-wait]
+az arcdata dc upgrade --resource-group <resource group> --name <data controller name> --desired-version <version> [--no-wait]
```
-In example above, you can include `--desired-version <version>` to specify a version if you do not want the latest version.
-
-> [!NOTE]
-> Currently upgrade is only supported to the next immediate version. Hence, if you are more than one version behind, specify the `--desired-version` to avoid compatibility issues.
+Example:
+```azurecli
+az arcdata dc upgrade --resource-group rg-arcds --name dc01 --desired-version v1.7.0_2022-05-24 [--no-wait]
+```
## Monitor the upgrade status
azure-arc Upgrade Data Controller Direct Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-data-controller-direct-portal.md
Previously updated : 01/18/2022 Last updated : 05/31/2022
This article describes how to upgrade a directly connected Azure Arc-enabled data controller using the Azure portal.
-During a data controller upgrade, portions of the data control plane such as Custom Resource Definitions (CRDs) and containers may be upgraded. An upgrade of the data controller will not cause downtime for the data services (SQL Managed Instance or PostgreSQL Hyperscale server).
+During a data controller upgrade, portions of the data control plane such as Custom Resource Definitions (CRDs) and containers may be upgraded. An upgrade of the data controller will not cause downtime for the data services (SQL Managed Instance or PostgreSQL server).
## Prerequisites
This section shows how to upgrade a directly connected data controller.
> To upgrade, delete all non-GA database instances. You can find the list of generally available
> and preview services in the [Release Notes](./release-notes.md).

### Upgrade

Open your data controller resource. If an upgrade is available, you will see a notification on the **Overview** blade that says, "One or more upgrades are available for this data controller."
azure-arc Upgrade Data Controller Indirect Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-data-controller-indirect-cli.md
Previously updated : 11/03/2021 Last updated : 05/27/2022
This article describes how to upgrade an indirectly connected Azure Arc-enabled data controller using the Azure CLI (`az`).
-During a data controller upgrade, portions of the data control plane such as Custom Resource Definitions (CRDs) and containers may be upgraded. An upgrade of the data controller will not cause downtime for the data services (SQL Managed Instance or PostgreSQL Hyperscale server).
+During a data controller upgrade, portions of the data control plane such as Custom Resource Definitions (CRDs) and containers may be upgraded. An upgrade of the data controller won't cause downtime for the data services (SQL Managed Instance or PostgreSQL server).
## Prerequisites
-You will need an indirectly connected data controller with the imageTag v1.0.0_2021-07-30 or later.
+You'll need an indirectly connected data controller with the imageTag v1.0.0_2021-07-30 or later.
To check the version, run:
kubectl get datacontrollers -n <namespace> -o custom-columns=BUILD:.spec.docker.
## Install tools
-Before you can proceed with the tasks in this article you need to install:
+Before you can proceed with the tasks in this article, you need to install:
-- The [Azure CLI (az)](/cli/azure/install-azure-cli)
+- The [Azure CLI (`az`)](/cli/azure/install-azure-cli)
- The [`arcdata` extension for Azure CLI](install-arcdata-extension.md)

[!INCLUDE [azure-arc-angle-bracket-example](../../../includes/azure-arc-angle-bracket-example.md)]
+The `arcdata` extension version and the image version are related. Check that you have the correct `arcdata` extension version that corresponds to the image version you want to upgrade to in the [Version log](version-log.md).
+## View available images and choose a version

Pull the list of available images for the data controller with the following command:
This section shows how to upgrade an indirectly connected data controller.
> To upgrade, delete all non-GA database instances. You can find the list of generally available
> and preview services in the [Release Notes](./release-notes.md).

### Upgrade
-You will need to connect and authenticate to a Kubernetes cluster and have an existing Kubernetes context selected prior to beginning the upgrade of the Azure Arc data controller.
+You'll need to connect and authenticate to a Kubernetes cluster and have an existing Kubernetes context selected prior to beginning the upgrade of the Azure Arc data controller.
You can perform a dry run first. The dry run validates the registry exists, the version schema, and the private repository authorization token (if used). To perform a dry run, use the `--dry-run` parameter in the `az arcdata dc upgrade` command. For example:
Preparing to upgrade dc arcdc in namespace arc to version <version-tag>.
Arcdata Control Plane would be upgraded to: <version-tag> ```
-To upgrade the data controller, run the `az arcdata dc upgrade` command. If you don't specify a target image, the data controller will be upgraded to the latest version.
+To upgrade the data controller, run the `az arcdata dc upgrade` command, specifying the image tag with `--desired-version`.
+
+```azurecli
+az arcdata dc upgrade --name <data controller name> --desired-version <image tag> --k8s-namespace <namespace> --use-k8s
+```
+
+Example:
```azurecli
-az arcdata dc upgrade --k8s-namespace <namespace> --use-k8s
+az arcdata dc upgrade --name arcdc --desired-version v1.7.0_2022-05-24 --k8s-namespace arc --use-k8s
```

The output for the preceding command shows the status of the steps:
Service account arc:cr-upgrade-worker has been created successfully.
Creating privileged job arc-elevated-bootstrapper-job
```
-In example above, you can include `--desired-version <version>` to specify a version if you do not want the latest version.
- ## Monitor the upgrade status
-You can monitor the progress of the upgrade with kubectl or CLI.
-
-### kubectl
-
-```console
-kubectl get datacontrollers --namespace <namespace>
-kubectl get monitors --namespace <namespace>
-```
-
-The upgrade is a two-part process. First the controller is upgraded, then the monitoring stack is upgraded. During the upgrade, use ```kubectl get monitors -n <namespace> -w``` to view the status. The output will be:
-
-```output
-NAME STATUS AGE
-monitorstack Updating 36m
-monitorstack Updating 36m
-monitorstack Updating 39m
-monitorstack Updating 39m
-monitorstack Updating 41m
-monitorstack Ready 41m
-```
+The upgrade is a two-part process. First the controller is upgraded, then the monitoring stack is upgraded. You can monitor the progress of the upgrade with the CLI.
### CLI

```azurecli
- az arcdata dc status show --k8s-namespace <namespace> --use-k8s
+ az arcdata dc status show --name <data controller name> --k8s-namespace <namespace> --use-k8s
```
-The upgrade is a two-part process. First the controller is upgraded, then the monitoring stack is upgraded. When the upgrade is complete, the output will be:
+When the upgrade is complete, the output will be:
```output
Ready
azure-arc Upgrade Data Controller Indirect Kubernetes Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-data-controller-indirect-kubernetes-tools.md
Previously updated : 12/09/2021 Last updated : 05/27/2022
This article explains how to upgrade an indirectly connected Azure Arc-enabled data controller with Kubernetes tools.
-During a data controller upgrade, portions of the data control plane such as Custom Resource Definitions (CRDs) and containers may be upgraded. An upgrade of the data controller will not cause downtime for the data services (SQL Managed Instance or PostgreSQL Hyperscale server).
+During a data controller upgrade, portions of the data control plane such as Custom Resource Definitions (CRDs) and containers may be upgraded. An upgrade of the data controller will not cause downtime for the data services (SQL Managed Instance or PostgreSQL server).
-In this article, you will apply a .yaml file to:
+In this article, you'll apply a .yaml file to:
1. Specify a service account.
1. Set the cluster roles.
In this article, you will apply a .yaml file to:
## Prerequisites
-Prior to beginning the upgrade of the Azure Arc data controller, you will need:
+Prior to beginning the upgrade of the Azure Arc data controller, you'll need:
- To connect and authenticate to a Kubernetes cluster
- An existing Kubernetes context selected

You need an indirectly connected data controller with the `imageTag: v1.0.0_2021-07-30` or greater.
-### Install tools
+## Install tools
-To upgrade the Azure Arc data controller using Kubernetes tools you need to have the Kubernetes tools installed.
+To upgrade the Azure Arc data controller using Kubernetes tools, you need to have the Kubernetes tools installed.
-The examples in this article use kubectl, but similar approaches could be used with other Kubernetes tools
-such as the Kubernetes dashboard, oc, or helm if you are familiar with those tools and Kubernetes yaml/json.
+The examples in this article use `kubectl`, but similar approaches could be used with other Kubernetes tools
+such as the Kubernetes dashboard, `oc`, or helm if you're familiar with those tools and Kubernetes yaml/json.
[Install the kubectl tool](https://kubernetes.io/docs/tasks/tools/)
Found 2 valid versions. The current datacontroller version is <current-version>
## Create or download .yaml file
-To upgrade the data controller, you will apply a yaml file to the Kubernetes cluster. The example file for the upgrade is available in GitHub at <https://github.com/microsoft/azure_arc/tree/main/arc_data_services/upgrade/yaml>.
+To upgrade the data controller, you'll apply a yaml file to the Kubernetes cluster. The example file for the upgrade is available in GitHub at <https://github.com/microsoft/azure_arc/blob/main/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml>.
You can download the file - and other Azure Arc related demonstration files - by cloning the repository. For example:
For more information, see [Cloning a repository](https://docs.github.com/en/repo
The following steps use files from the repository.
-In the yaml file, you will replace ```{{namespace}}``` with your namespace.
+In the yaml file, you'll replace ```{{namespace}}``` with your namespace.
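One way to make the substitution is with `sed`. A minimal, self-contained sketch using a stand-in file (the real yaml comes from the azure_arc repository; the file name and the namespace `arc` are illustrative):

```shell
# Stand-in template containing the placeholder used by the example yaml.
printf 'metadata:\n  namespace: {{namespace}}\n' > upgrade-indirect-k8s.template.yaml

# Substitute your namespace (here the hypothetical namespace `arc`).
# Writing to a new file keeps this portable across GNU and BSD sed.
sed 's/{{namespace}}/arc/g' upgrade-indirect-k8s.template.yaml > upgrade-indirect-k8s.yaml

cat upgrade-indirect-k8s.yaml
```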
+
+## Upgrade data controller
+
+This section shows how to upgrade an indirectly connected data controller.
+
+> [!NOTE]
+> Some of the data services tiers and modes are generally available and some are in preview.
+> If you install GA and preview services on the same data controller, you can't upgrade in place.
+> To upgrade, delete all non-GA database instances. You can find the list of generally available
+> and preview services in the [Release Notes](./release-notes.md).
+### Upgrade
+
+You'll need to connect and authenticate to a Kubernetes cluster and have an existing Kubernetes context selected prior to beginning the upgrade of the Azure Arc data controller.
### Specify the service account
To specify the service account:
### Set the cluster roles
-A cluster role (`ClusterRole`) grants the service account permission to perform the upgrade.
+A cluster role (`ClusterRole`) grants the service account permission to perform the upgrade.
-1. Describe the cluster role and rules in a .yaml file. The following example defines a cluster role for `arc:cr-upgrade-worker` and allows all API groups, resources, and verbs.
+1. Describe the cluster role and rules in a .yaml file. The following example defines a cluster role for `arc:cr-upgrade-worker` and allows all API groups, resources, and verbs.
:::code language="yaml" source="~/azure_arc_sample/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml" range="7-9":::
-1. Edit the file as needed.
+1. Edit the file as needed.
### Set the cluster role binding
A cluster role binding (`ClusterRoleBinding`) links the service account and the
:::code language="yaml" source="~/azure_arc_sample/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml" range="20-21":::
-1. Edit the file as needed.
+1. Edit the file as needed.
### Specify the job
A job creates a pod to execute the upgrade.
1. Edit the file for your environment.
+### Upgrade the data controller
+
+Specify the image tag to upgrade the data controller to.
+
+ :::code language="yaml" source="~/azure_arc_sample/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml" range="50-56":::

### Apply the resources

Run the following kubectl command to apply the resources to your cluster.
azure-arc Upgrade Sql Managed Instance Auto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-sql-managed-instance-auto.md
Title: Enable automatic upgrades - Azure Arc enabled SQL Managed Instance
-description: Article describes how to enable automatic upgrades of SQL Managed Instance for Azure Arc
+ Title: Enable automatic upgrades - Azure SQL Managed Instance for Azure Arc
+description: Article describes how to enable automatic upgrades for Azure SQL Managed Instance deployed for Azure Arc
Previously updated : 01/24/2022 Last updated : 05/27/2022
-# Enable automatic upgrades of a SQL Managed Instance
+# Enable automatic upgrades of an Azure SQL Managed Instance for Azure Arc
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
-You can set the `--desired-version` parameter of the `spec.update.desiredVersion` property of an Azure Arc-enabled SQL Managed Instance to `auto` to ensure that your Managed Instance will be upgraded after a data controller upgrade, with no interaction from a user. This allows for ease of management, as you do not need to manually upgrade every instance for every release.
+You can set the `--desired-version` parameter of the `spec.update.desiredVersion` property of an Azure Arc-enabled SQL Managed Instance to `auto` to ensure that your managed instance will be upgraded after a data controller upgrade, with no interaction from a user. This setting simplifies management, as you don't need to manually upgrade every instance for every release.
-After setting the `--desired-version` parameter of the `spec.update.desiredVersion` property to `auto` the first time, Azure Arc-enabled data service will begin an upgrade to the newest image version within five minutes for the Managed Instance. Thereafter, within five minutes of a data controller being upgraded, the Managed Instance will begin the upgrade process. This works for both directly connected and indirectly connected modes.
+After setting the `--desired-version` parameter of the `spec.update.desiredVersion` property to `auto` the first time, the Azure Arc-enabled data service will begin an upgrade of the managed instance to the newest image version within five minutes, or within the next [Maintenance Window](maintenance-window.md). Thereafter, within five minutes of a data controller being upgraded, or within the next maintenance window, the managed instance will begin the upgrade process. This setting works for both directly connected and indirectly connected modes.
-If the `spec.update.desiredVersion` property is pinned to a specific version, automatic upgrades will not take place. This allows you to let most instances automatically upgrade, while manually managing instances that need a more hands-on approach.
+If the `spec.update.desiredVersion` property is pinned to a specific version, automatic upgrades won't take place. This property allows you to let most instances automatically upgrade, while manually managing instances that need a more hands-on approach.
-## Enable with with Kubernetes tools (kubectl)
+## Prerequisites
-Use kubectl to view the existing spec in yaml.
+Your managed instance version must be equal to the data controller version before enabling auto mode.
+
+## Enable with Kubernetes tools (kubectl)
+
+Use kubectl to view the existing spec in yaml.
```console
kubectl --namespace <namespace> get sqlmi <sqlmi-name> --output yaml
```
kubectl patch sqlmi <sqlmi-name> --namespace <namespace> --type merge --patch '{
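The patch body truncated above targets the `spec.update.desiredVersion` property named in this article. A hedged sketch of what the full merge patch might look like — the instance name `sqlmi1` and namespace `arc` are hypothetical, and the kubectl call is commented out because it requires a live cluster:

```shell
# Merge patch that sets spec.update.desiredVersion to `auto`.
patch='{"spec": {"update": {"desiredVersion": "auto"}}}'

# Sanity-check the JSON and the field path before applying it.
echo "$patch" | python3 -c 'import json,sys; print(json.load(sys.stdin)["spec"]["update"]["desiredVersion"])'

# Hypothetical application against a live cluster:
# kubectl patch sqlmi sqlmi1 --namespace arc --type merge --patch "$patch"
```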
To set the `--desired-version` to `auto`, use the following command:
-Indirectly connected:
+Indirectly connected:
````cli
az sql mi-arc upgrade --name <instance name> --desired-version auto --k8s-namespace <namespace> --use-k8s
````

Example:

````cli
az sql mi-arc upgrade --name instance1 --desired-version auto --k8s-namespace arc1 --use-k8s
````
-Directly connected:
+Directly connected:
````cli
az sql mi-arc upgrade --resource-group <resource group> --name <instance name> --desired-version auto [--no-wait]
````

Example:

````cli
az sql mi-arc upgrade --resource-group rgarc --name instance1 --desired-version auto
-````
+````
azure-arc Upgrade Sql Managed Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-sql-managed-instance-cli.md
Title: Upgrade an indirectly connected Azure Arc-enabled Managed Instance using the CLI
-description: Article describes how to upgrade an indirectly connected Azure Arc-enabled Managed Instance using the CLI
+ Title: Upgrade Azure SQL Managed Instance indirectly connected to Azure Arc using the CLI
+description: Article describes how to upgrade an Azure SQL Managed Instance indirectly connected to Azure Arc using the CLI
Last updated 11/03/2021
-# Upgrade an indirectly connected Azure Arc-enabled Managed Instance using the CLI
+# Upgrade Azure SQL Managed Instance indirectly connected to Azure Arc using the CLI
This article describes how to upgrade a SQL Managed Instance deployed on an indirectly connected Azure Arc-enabled data controller using the Azure CLI (`az`).
This article describes how to upgrade a SQL Managed Instance deployed on an indi
### Install tools
-Before you can proceed with the tasks in this article you need to install:
+Before you can proceed with the tasks in this article, install:
-- The [Azure CLI (az)](/cli/azure/install-azure-cli)
+- The [Azure CLI (`az`)](/cli/azure/install-azure-cli)
- The [`arcdata` extension for Azure CLI](install-arcdata-extension.md)
+The `arcdata` extension version and the image version are related. Check that you have the correct `arcdata` extension version that corresponds to the image version you want to upgrade to in the [Version log](version-log.md).
+ ## Limitations
-The Azure Arc Data Controller must be upgraded to the new version before the Managed Instance can be upgraded.
+The Azure Arc Data Controller must be upgraded to the new version before the managed instance can be upgraded.
+
+The managed instance must be at the same version as the data controller before a data controller is upgraded.
+
+There's no batch upgrade process available at this time.
-Currently, only one Managed Instance can be upgraded at a time.
+## Upgrade the managed instance
-## Upgrade the Managed Instance
+A dry run can be performed first. The dry run validates the version schema and lists which instance(s) will be upgraded.
-A dry run can be performed first. This will validate the version schema and list which instance(s) will be upgraded.
+For example:
-````cli
+```azurecli
az sql mi-arc upgrade --name <instance name> --k8s-namespace <namespace> --dry-run --use-k8s
-````
+```
The output will be:
Preparing to upgrade sql sqlmi-1 in namespace arc to data controller version.
****Dry Run****
1 instance(s) would be upgraded by this command
sqlmi-1 would be upgraded to <version-tag>.
```
-### General Purpose
-
-During a SQL Managed Instance General Purpose upgrade, the containers in the pod will be upgraded and will be reprovisioned. This will cause a short amount of downtime as the new pod is created. You will need to build resiliency into your application, such as connection retry logic, to ensure minimal disruption. Read [Overview of the reliability pillar](/azure/architecture/framework/resiliency/overview) for more information on architecting resiliency and [Retry Guidance for Azure Services](/azure/architecture/best-practices/retry-service-specific#sql-database-using-adonet).
-### Business Critical
### Upgrade
-To upgrade the Managed Instance, use the following command:
+To upgrade the managed instance, use the following command:
-````cli
+```azurecli
az sql mi-arc upgrade --name <instance name> --desired-version <version> --k8s-namespace <namespace> --use-k8s
-````
+```
Example:
-````cli
+```azurecli
az sql mi-arc upgrade --name instance1 --desired-version v1.0.0.20211028 --k8s-namespace arc1 --use-k8s
-````
+```
## Monitor
-You can monitor the progress of the upgrade with kubectl or CLI.
-
-### kubectl
-
-```console
-kubectl describe sqlmi --namespace <namespace>
-```
- ### CLI
+You can monitor the progress of the upgrade with the `show` command.
+
```cli
az sql mi-arc show --name <instance name> --k8s-namespace <namespace> --use-k8s
```
azure-arc Upgrade Sql Managed Instance Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-sql-managed-instance-direct-cli.md
Title: Upgrade a directly connected Azure Arc-enabled Managed Instance using the CLI
-description: Article describes how to upgrade a directly connected Azure Arc-enabled Managed Instance using the CLI
+ Title: Upgrade a directly connected Azure SQL Managed Instance for Azure Arc using the CLI
+description: Article describes how to upgrade a directly connected Azure SQL Managed Instance for Azure Arc using the CLI
Previously updated : 05/21/2022 Last updated : 05/27/2022
-# Upgrade a directly connected Azure Arc-enabled Managed Instance using the CLI
+# Upgrade an Azure SQL Managed Instance directly connected to Azure Arc using the CLI
-This article describes how to upgrade a SQL Managed Instance deployed on a directly connected Azure Arc-enabled data controller using the Azure CLI (`az`).
+This article describes how to upgrade an Azure SQL Managed Instance deployed on a directly connected Azure Arc-enabled data controller using the Azure CLI (`az`).
## Prerequisites ### Install tools
-Before you can proceed with the tasks in this article you need to install:
+Before you can proceed with the tasks in this article, install:
-- The [Azure CLI (az)](/cli/azure/install-azure-cli)
+- The [Azure CLI (`az`)](/cli/azure/install-azure-cli)
- The [`arcdata` extension for Azure CLI](install-arcdata-extension.md)
+The `arcdata` extension version and the image version are related. Check that you have the correct `arcdata` extension version that corresponds to the image version you want to upgrade to in the [Version log](version-log.md).
+ ## Limitations
-The Azure Arc Data Controller must be upgraded to the new version before the Managed Instance can be upgraded.
+The Azure Arc data controller must be upgraded to the new version before the managed instance can be upgraded.
+
+The managed instance must be at the same version as the data controller before a data controller is upgraded.
-Currently, only one Managed Instance can be upgraded at a time.
+There's no batch upgrade process available at this time.
-## Upgrade the Managed Instance
+## Upgrade the managed instance
-A dry run can be performed first. This will validate the version schema and list which instance(s) will be upgraded.
+You can perform a dry run first. The dry run validates the version schema and lists which instance(s) will be upgraded. Use `--dry-run`. For example:
-````cli
+```azurecli
az sql mi-arc upgrade --resource-group <resource group> --name <instance name> --dry-run
-````
+```
The output will be:
Preparing to upgrade sql sqlmi-1 in namespace arc to data controller version.
****Dry Run****
1 instance(s) would be upgraded by this command
sqlmi-1 would be upgraded to <version-tag>.
```
-### General Purpose
-During a SQL Managed Instance General Purpose upgrade, the containers in the pod will be upgraded and will be reprovisioned. This will cause a short amount of downtime as the new pod is created. You will need to build resiliency into your application, such as connection retry logic, to ensure minimal disruption. Read [Overview of the reliability pillar](/azure/architecture/framework/resiliency/overview) for more information on architecting resiliency and [retry guidance for Azure Services](/azure/architecture/best-practices/retry-service-specific#sql-database-using-adonet).
-
-### Business Critical
### Upgrade
-To upgrade the Managed Instance, use the following command:
+To upgrade the managed instance, use the following command:
-````cli
+```azurecli
az sql mi-arc upgrade --resource-group <resource group> --name <instance name> --desired-version <imageTag> [--no-wait]
-````
+```
Example:
-````cli
+```azurecli
az sql mi-arc upgrade --resource-group myresource-group --name sql1 --desired-version v1.6.0_2022-05-02 [--no-wait]
-````
+```
## Monitor

You can monitor the progress of the upgrade with the CLI.
-### CLI
+### CLI example
```cli
az sql mi-arc show --resource-group <resource group> --name <instance name>
azure-arc Upgrade Sql Managed Instance Direct Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-sql-managed-instance-direct-portal.md
+
+ Title: Upgrade Azure SQL Managed Instance directly connected to Azure Arc using the portal
+description: Article describes how to upgrade Azure SQL Managed Instance directly connected to Azure Arc using the Azure portal
+ Last updated : 05/27/2022
+# Upgrade Azure SQL Managed Instance directly connected to Azure Arc using the portal
+
+This article describes how to upgrade Azure SQL Managed Instance deployed on a directly connected Azure Arc-enabled data controller using the portal.
+
+## Limitations
+
+The Azure Arc data controller must be upgraded to the new version before the managed instance can be upgraded.
+
+The managed instance must be at the same version as the data controller before a data controller is upgraded.
+
+There's no batch upgrade process available at this time.
+
+## Upgrade the managed instance
+++
+### Upgrade
+
+Open your SQL Managed Instance - Azure Arc resource.
+
+Under **Settings**, select **Upgrade Management**.
+
+In the table of available versions, choose the version you want to upgrade to and select **Upgrade Now**.
+
+In the confirmation dialog box, select **Upgrade**.
+
+## Monitor the upgrade status
+
+To view the status of your upgrade in the portal, go to the resource group of the SQL Managed Instance and select **Activity log**.
+
+A **Validate Deploy** option shows the status.
+
+## Troubleshoot upgrade problems
+
+If you encounter any problems during the upgrade, see the [troubleshooting guide](troubleshoot-guide.md).
azure-arc Upgrade Sql Managed Instance Indirect Kubernetes Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-sql-managed-instance-indirect-kubernetes-tools.md
Title: Upgrade an indirectly connected Azure Arc-enabled Managed Instance using Kubernetes tools
-description: Article describes how to upgrade an indirectly connected Azure Arc-enabled Managed Instance using Kubernetes tools
+ Title: Upgrade Azure SQL Managed Instance indirectly connected to Azure Arc using Kubernetes tools
+description: Article describes how to upgrade an indirectly connected Azure Arc-enabled SQL Managed Instance using Kubernetes tools
Last updated 11/08/2021
-# Upgrade an an indirectly connected Azure Arc-enabled Managed Instance using Kubernetes tools
+# Upgrade Azure SQL Managed Instance indirectly connected to Azure Arc using Kubernetes tools
-This article describes how to upgrade a SQL Managed Instance deployed on an indirectly connected Azure Arc-enabled data controller using Kubernetes tools.
+This article describes how to upgrade Azure SQL Managed Instance deployed on an indirectly connected Azure Arc-enabled data controller using Kubernetes tools.
## Prerequisites

### Install tools
-Before you can proceed with the tasks in this article you need:
+Before you can proceed with the tasks in this article, you need:
- To connect and authenticate to a Kubernetes cluster
- An existing Kubernetes context selected
You need an indirectly connected data controller with the `imageTag v1.0.0_2021-
## Limitations
-The Azure Arc Data Controller must be upgraded to the new version before the Managed Instance can be upgraded.
+The Azure Arc Data Controller must be upgraded to the new version before the managed instance can be upgraded.
-Currently, only one Managed Instance can be upgraded at a time.
+The managed instance must be at the same version as the data controller before a data controller is upgraded.
-## Upgrade the Managed Instance
+There's no batch upgrade process available at this time.
-### General Purpose
+## Upgrade the managed instance
-During a SQL Managed Instance General Purpose upgrade, the containers in the pod will be upgraded and will be reprovisioned. This will cause a short amount of downtime as the new pod is created. You will need to build resiliency into your application, such as connection retry logic, to ensure minimal disruption. Read [Overview of the reliability pillar](/azure/architecture/framework/resiliency/overview) for more information on architecting resiliency.
-### Business Critical
### Upgrade
-Use a kubectl command to view the existing spec in yaml.
+Use a kubectl command to view the existing spec in yaml.
```console
kubectl --namespace <namespace> get sqlmi <sqlmi-name> --output yaml
```
-Run kubectl patch to update the desired version.
+Run kubectl patch to update the desired version.
```console
kubectl patch sqlmi <sqlmi-name> --namespace <namespace> --type merge --patch '{"spec": {"update": {"desiredVersion": "v1.1.0_2021-11-02"}}}'
```
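The merge patch only sets `spec.update.desiredVersion`, so for scripted upgrades the payload can be generated from a single tag value. A small sketch; the `make_patch` helper is hypothetical:

```shell
# Build the JSON merge patch that kubectl applies to request a new version.
make_patch() {
  printf '{"spec": {"update": {"desiredVersion": "%s"}}}' "$1"
}

make_patch "v1.1.0_2021-11-02"
# prints: {"spec": {"update": {"desiredVersion": "v1.1.0_2021-11-02"}}}
```

The output is what would be passed as the `--patch` argument.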
## Monitor
-You can monitor the progress of the upgrade with kubectl.
+You can monitor the progress of the upgrade with kubectl.
### kubectl
azure-arc Manage Vm Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions.md
The minimum version of the Connected Machine agent that is supported with this f
To upgrade your machine to the version of the agent required, see [Upgrade agent](manage-agent.md#upgrade-the-agent).
+## Operating system extension availability
+
+The following extensions are available for Windows and Linux machines:
+
+### Windows extension availability
+
+|Operating system |Azure Monitor agent |Log Analytics agent |Dependency VM Insights |Qualys |Custom Script |Key Vault |Hybrid Runbook |Antimalware Extension |Connected Machine agent |
+|--|--|--|--|--|--|--|--|--|--|
+|Windows Server 2019 |X |X |X |X |X |X | |X |
+|Windows Server 2019 Core |X | | |X |X |X |X | |X |
+|Windows Server 2016 |X |X |X |X |X |X |X |Built-in |X |
+|Windows Server 2016 Core |X | | |X |X |X | |Built-in |X |
+|Windows Server 2012 R2 |X |X |X |X |X | |X |X |X |
+|Windows Server 2012 |X |X |X |X |X |X |X |X |X |
+|Windows Server 2008 R2 SP1 |X |X |X |X |X | |X |X | |
+|Windows Server 2008 R2 | | | |X |X | |X |X |X |
+|Windows Server 2008 SP2 | |X | |X |X | |X | | |
+|Windows 11 client OS |X | | |X | | | | | |
+|Windows 10 1803 (RS4) and higher |X | | |X |X | | | |X |
+|Windows 10 Enterprise (including multi-session) and Pro (Server scenarios only) |X |X |X |X |X | |X | |X |
+|Windows 8 Enterprise and Pro (Server scenarios only) | |X |X |X | | |X | | |
+|Windows 7 SP1 (Server scenarios only) | |X |X |X | | |X | | |
+|Azure Stack HCI (Server scenarios only) | |X | |X | | |X | |X |
+
+### Linux extension availability
+
+|Operating system |Azure Monitor agent |Log Analytics agent |Dependency VM Insights |Qualys |Custom Script |Key Vault |Hybrid Runbook |Antimalware Extension |Connected Machine agent |
+|--|--|--|--|--|--|--|--|--|--|
+|Amazon Linux 2 | |X | |X | | |X |X |
+|CentOS Linux 8 |X |X |X |X |X | |X |X |
+|CentOS Linux 7 |X |X |X |X |X | |X |X |
+|CentOS Linux 6 | |X | |X |X | |X | |
+|Debian 10 |X | | |X |X | |X | |
+|Debian 9 |X |X |X |X |X | | | |
+|Debian 8 | |X |X |X | | |X | |
+|Debian 7 | | | |X | | |X | |
+|OpenSUSE 13.1+ | | | |X |X | | | |
+|Oracle Linux 8 |X |X | |X |X | |X |X |
+|Oracle Linux 7 |X |X | |X |X | |X |X |
+|Oracle Linux 6 | |X | |X |X | |X |X |
+|Red Hat Enterprise Linux Server 8 |X |X | |X |X | |X |X |
+|Red Hat Enterprise Linux Server 7 |X |X |X |X |X | |X |X |
+|Red Hat Enterprise Linux Server 6 | |X |X |X | | |X | |
+|SUSE Linux Enterprise Server 15.2 |X | | |X |X |X | |X |
+|SUSE Linux Enterprise Server 15.1 |X |X | |X |X |X |X |X |
+|SUSE Linux Enterprise Server 15 SP1 |X |X |X |X |X |X |X |X |
+|SUSE Linux Enterprise Server 15 |X |X |X |X |X |X |X |X |
+|SUSE Linux Enterprise Server 15 SP5 |X |X |X |X |X | |X |X |
+|SUSE Linux Enterprise Server 12 SP5 |X |X |X |X |X | |X |X |
+|Ubuntu 20.04 LTS |X |X |X |X |X | |X |X |
+|Ubuntu 18.04 LTS |X |X |X |X |X |X |X |X |
+|Ubuntu 16.04 LTS |X |X |X |X | | |X |X |
+|Ubuntu 14.04 LTS | |X | |X | | |X | |
+
+For the regional availabilities of different Azure services and VM extensions available for Azure Arc-enabled servers, [refer to Azure Global's Product Availability Roadmap](https://global.azure.com/product-availability/roadmap).
+ ## Next steps You can deploy, manage, and remove VM extensions using the [Azure CLI](manage-vm-extensions-cli.md), [Azure PowerShell](manage-vm-extensions-powershell.md), from the [Azure portal](manage-vm-extensions-portal.md), or [Azure Resource Manager templates](manage-vm-extensions-template.md).
azure-cache-for-redis Cache How To Active Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-active-geo-replication.md
Title: Configure active geo-replication for Enterprise Azure Cache for Redis instances
-description: Learn how to replicate your Azure Cache for Redis Enterprise instances across Azure regions
+description: Learn how to replicate your Azure Cache for Redis Enterprise instances across Azure regions.
Previously updated : 02/02/2022 Last updated : 06/15/2022
+
# Configure active geo-replication for Enterprise Azure Cache for Redis instances
-In this article, you'll learn how to configure an active geo-replicated Azure Cache using the Azure portal.
+In this article, you learn how to configure an active geo-replicated cache using the Azure portal.
-Active geo-replication groups up to five Enterprise Azure Cache for Redis instances into a single cache that spans across Azure regions. All instances act as the local primaries. An application decides which instance or instances to use for read and write requests.
+Active geo-replication groups up to five instances of Enterprise Azure Cache for Redis into a single cache that spans across Azure regions. All instances act as the local, primary caches. An application decides which instance or instances to use for read and write requests.
> [!NOTE]
> Data transfer between Azure regions is charged at standard [bandwidth rates](https://azure.microsoft.com/pricing/details/bandwidth/).
-## Create or join an active geo-replication group
-
-> [!IMPORTANT]
-> Active geo-replication must be enabled at the time an Azure Cache for Redis is created.
->
>
-1. In the **Advanced** tab of **New Redis Cache** creation UI, select **Enterprise** for **Clustering Policy**.
-
- For more information on choosing **Clustering policy**, see [Clustering Policy](quickstart-create-redis-enterprise.md#clustering-policy).
+## Create or join an active geo-replication group
- :::image type="content" source="media/cache-how-to-active-geo-replication/cache-clustering-policy.png" alt-text="Configure active geo-replication":::
+1. When creating a new Azure Cache for Redis resource, select the **Advanced** tab. Complete the first part of the form including clustering policy. For more information on choosing **Clustering policy**, see [Clustering Policy](quickstart-create-redis-enterprise.md#clustering-policy).
1. Select **Configure** to set up **Active geo-replication**.
-1. Create a new replication group, for a first cache instance, or select an existing one from the list.
+ :::image type="content" source="media/cache-how-to-active-geo-replication/cache-active-geo-replication-configure.png" alt-text="Screenshot of advanced tab of create new Redis cache page.":::
+
+1. Create a new replication group for a first cache instance. Or, select an existing one from the list.
- :::image type="content" source="media/cache-how-to-active-geo-replication/cache-active-geo-replication-new-group.png" alt-text="Link caches":::
+ :::image type="content" source="media/cache-how-to-active-geo-replication/cache-active-geo-replication-new-group.png" alt-text="Screenshot showing replication groups.":::
1. Select **Configure** to finish.
- :::image type="content" source="media/cache-how-to-active-geo-replication/cache-active-geo-replication-configured.png" alt-text="Active geo-replication configured":::
+1. Wait for the first cache to be created successfully. When complete, you see **Configured** set for **Active geo-replication**. Repeat the above steps for each cache instance in the geo-replication group.
-1. Wait for the first cache to be created successfully. Repeat the above steps for each cache instance in the geo-replication group.
+ :::image type="content" source="media/cache-how-to-active-geo-replication/cache-active-geo-replication-configured.png" alt-text="Screenshot showing active geo-replication is configured.":::
## Remove from an active geo-replication group
You should remove the unavailable cache because the remaining caches in the repl
1. Go to the Azure portal and select one of the caches in the replication group that is still available.
1. Select **Active geo-replication** in the Resource menu on the left to see the settings in the working pane.
- :::image type="content" source="media/cache-how-to-active-geo-replication/cache-active-geo-replication-group.png" alt-text="screenshot of active geo-replication group":::
+
+ :::image type="content" source="media/cache-how-to-active-geo-replication/cache-active-geo-replication-group.png" alt-text="Screenshot of active geo-replication group.":::
1. Select the cache that you need to force-unlink by checking the box.
1. Select **Force unlink** and then **OK** to confirm.
- :::image type="content" source="media/cache-how-to-active-geo-replication/cache-cache-active-geo-replication-unlink.png" alt-text="screenshot of unlinking in active geo-replication":::
+
+ :::image type="content" source="media/cache-how-to-active-geo-replication/cache-cache-active-geo-replication-unlink.png" alt-text="Screenshot of unlinking in active geo-replication.":::
1. Once the affected region's availability is restored, you need to delete the affected cache and recreate it to add it back to your replication group.
azure-functions Functions Create Your First Function Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-your-first-function-visual-studio.md
Title: "Quickstart: Create your first C# function in Azure using Visual Studio"
description: "In this quickstart, you learn how to use Visual Studio to create and publish a C# HTTP triggered function to Azure Functions that runs on .NET Core 3.1." ms.assetid: 82db1177-2295-4e39-bd42-763f6082e796 Previously updated : 11/03/2021 Last updated : 06/13/2022 ms.devlang: csharp adobe-target: true
adobe-target-content: ./functions-create-your-first-function-visual-studio-uiex
# Quickstart: Create your first C# function in Azure using Visual Studio
-Azure Functions lets you run your C# code in a serverless environment in Azure.
+Azure Functions lets you use Visual Studio to create local C# function projects and then easily publish this project to run in a scalable serverless environment in Azure. If you prefer to develop your C# apps locally using Visual Studio Code, you should instead consider the [Visual Studio Code-based version](create-first-function-vs-code-csharp.md) of this article.
+
+By default, this article shows you how to create C# functions that run on .NET 6 [in the same process as the Functions host](functions-dotnet-class-library.md). These _in-process_ C# functions are only supported on a Long Term Support (LTS) version of .NET, such as .NET 6. To create C# functions on .NET 6 that can also run on .NET 5.0 and .NET Framework 4.8 (in preview) [in an isolated process](dotnet-isolated-process-guide.md), see the [alternate version of this article](functions-create-your-first-function-visual-studio.md?tabs=isolated-process).
In this article, you learn how to:
In this article, you learn how to:
> * Run your code locally to verify function behavior.
> * Deploy your code project to Azure Functions.

Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
-There is also a [Visual Studio Code-based version](create-first-function-vs-code-csharp.md) of this article.
## Prerequisites

+ [Visual Studio 2022](https://visualstudio.microsoft.com/vs/), which supports .NET 6.0. Make sure to select the **Azure development** workload during installation.
The Azure Functions project template in Visual Studio creates a C# class library
1. In **Configure your new project**, enter a **Project name** for your project, and then select **Create**. The function app name must be valid as a C# namespace, so don't use underscores, hyphens, or any other nonalphanumeric characters.
1. For the **Additional information** settings, use the values in the following table:
+
+ # [.NET 6](#tab/in-process)
| Setting | Value | Description |
| --- | --- | --- |
- | **Functions worker** | **.NET 6** or **.NET 6 Isolated** | When you choose **.NET 6**, you create a project that runs in-process with version 4.x of the Azure Functions runtime. When you choose **.NET 6 Isolated**, you create a project that runs in a separate worker process. Azure Functions 1.x supports the .NET Framework. For more information, see [Azure Functions runtime versions overview](./functions-versions.md). |
+ | **Functions worker** | **.NET 6** | When you choose **.NET 6**, you create a project that runs in-process with the Azure Functions runtime. Use in-process unless you need to run your function app on .NET 5.0 or on .NET Framework 4.8 (preview). To learn more, see [Supported versions](functions-dotnet-class-library.md#supported-versions). |
| **Function** | **HTTP trigger** | This value creates a function triggered by an HTTP request. |
- | **Use Azurite for runtime storage account (AzureWebJobsStorage)** | Enable | Because a function app in Azure requires a storage account, one is assigned or created when you publish your project to Azure. An HTTP trigger doesn't use an Azure Storage account connection string; all other trigger types require a valid Azure Storage account connection string. When you select this option, the Azurite emulator is used. |
+ | **Use Azurite for runtime storage account (AzureWebJobsStorage)** | Enable | Because a function app in Azure requires a storage account, one is assigned or created when you publish your project to Azure. An HTTP trigger doesn't use an Azure Storage account connection string; all other trigger types require a valid Azure Storage account connection string. When you select this option, the [Azurite emulator](../storage/common/storage-use-azurite.md?tabs=visual-studio) is used. |
| **Authorization level** | **Anonymous** | The created function can be triggered by any client without providing a key. This authorization setting makes it easy to test your new function. For more information about keys and authorization, see [Authorization keys](./functions-bindings-http-webhook-trigger.md#authorization-keys) and [HTTP and webhook bindings](./functions-bindings-http-webhook.md). |
- :::image type="content" source="../../includes/media/functions-vs-tools-create/functions-project-settings-v4.png" alt-text="Azure Functions project settings":::
+ :::image type="content" source="../../includes/media/functions-vs-tools-create/functions-project-settings-v4.png" alt-text="Screenshot of Azure Functions project settings.":::
+
+ # [.NET 6 Isolated](#tab/isolated-process)
+
+ | Setting | Value | Description |
+ | --- | --- | --- |
+ | **Functions worker** | **.NET 6 Isolated** | When you choose **.NET 6 Isolated**, you create a project that runs in a separate worker process. Choose isolated process when you need to run your function app on .NET 5.0 or on .NET Framework 4.8 (preview). To learn more, see [Supported versions](dotnet-isolated-process-guide.md#supported-versions). |
+ | **Function** | **HTTP trigger** | This value creates a function triggered by an HTTP request. |
+ | **Use Azurite for runtime storage account (AzureWebJobsStorage)** | Enable | Because a function app in Azure requires a storage account, one is assigned or created when you publish your project to Azure. An HTTP trigger doesn't use an Azure Storage account connection string; all other trigger types require a valid Azure Storage account connection string. When you select this option, the [Azurite emulator](../storage/common/storage-use-azurite.md?tabs=visual-studio) is used. |
+ | **Authorization level** | **Anonymous** | The created function can be triggered by any client without providing a key. This authorization setting makes it easy to test your new function. For more information about keys and authorization, see [Authorization keys](./functions-bindings-http-webhook-trigger.md#authorization-keys) and [HTTP and webhook bindings](./functions-bindings-http-webhook.md). |
+
+ :::image type="content" source="../../includes/media/functions-vs-tools-create/functions-project-settings-v4-isolated.png" alt-text="Screenshot of Azure Functions project settings.":::
+
+
Make sure you set the **Authorization level** to **Anonymous**. If you choose the default level of **Function**, you're required to present the [function key](./functions-bindings-http-webhook-trigger.md#authorization-keys) in requests to access your function endpoint.

2. Select **Create** to create the function project and HTTP trigger function.
The `FunctionName` method attribute sets the name of the function, which by defa
1. In the `HttpTrigger` method named `Run`, rename the `FunctionName` method attribute to `HttpExample`.
-Your function definition should now look like the following code, depending on mode:
+Your function definition should now look like the following code:
-# [In-process](#tab/in-process)
+# [.NET 6](#tab/in-process)
:::code language="csharp" source="~/functions-docs-csharp/http-trigger-template/HttpExample.cs" range="15-18":::
-# [Isolated process](#tab/isolated-process)
+# [.NET 6 Isolated](#tab/isolated-process)
:::code language="csharp" source="~/functions-docs-csharp/http-trigger-isolated/HttpExample.cs" range="11-13":::
After you've verified that the function runs correctly on your local computer, i
## Publish the project to Azure
-Before you can publish your project, you must have a function app in your Azure subscription. Visual Studio publishing creates a function app for you the first time you publish your project.
+Visual Studio can publish your local project to Azure. Before you can publish your project, you must have a function app in your Azure subscription. If you don't already have a function app in Azure, Visual Studio publishing creates one for you the first time you publish your project. In this article, you create a function app and related Azure resources.
[!INCLUDE [Publish the project to Azure](../../includes/functions-vstools-publish.md)]
Before you can publish your project, you must have a function app in your Azure
## Clean up resources
-Other quickstarts in this collection build upon this quickstart. If you plan to work with subsequent quickstarts, tutorials, or with any of the services you have created in this quickstart, do not clean up the resources.
- *Resources* in Azure refer to function apps, functions, storage accounts, and so forth. They're grouped into *resource groups*, and you can delete everything in a group by deleting the group.
-You created resources to complete these quickstarts. You may be billed for these resources, depending on your [account status](https://azure.microsoft.com/account/) and [service pricing](https://azure.microsoft.com/pricing/).
+You created Azure resources to complete this quickstart. You may be billed for these resources, depending on your [account status](https://azure.microsoft.com/account/) and [service pricing](https://azure.microsoft.com/pricing/). Other quickstarts in this collection build upon this quickstart. If you plan to work with subsequent quickstarts, tutorials, or with any of the services you have created in this quickstart, don't clean up the resources.
[!INCLUDE [functions-vstools-cleanup](../../includes/functions-vstools-cleanup.md)]
You created resources to complete these quickstarts. You may be billed for these
In this quickstart, you used Visual Studio to create and publish a C# function app in Azure with a simple HTTP trigger function.
+# [.NET 6](#tab/in-process)
+
+To learn more about working with C# functions that run in-process with the Functions host, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
Advance to the next article to learn how to add an Azure Storage queue binding to your function:

> [!div class="nextstepaction"]
-> [Add an Azure Storage queue binding to your function](functions-add-output-binding-storage-queue-vs.md)
+> [Add an Azure Storage queue binding to your function](functions-add-output-binding-storage-queue-vs.md?tabs=in-process)
+
+# [.NET 6 Isolated](#tab/isolated-process)
+
+To learn more about working with C# functions that run in an isolated process, see the [Guide for running C# Azure Functions in an isolated process](dotnet-isolated-process-guide.md).
+
+Advance to the next article to learn how to add an Azure Storage queue binding to your function:
+> [!div class="nextstepaction"]
+> [Add an Azure Storage queue binding to your function](functions-add-output-binding-storage-queue-vs.md?tabs=isolated-process)
+++
azure-government Documentation Government Plan Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-plan-identity.md
description: Microsoft Azure Government provides the same ways to build applicat
recommendations: false Previously updated : 01/28/2022 Last updated : 06/15/2022

# Planning identity for Azure Government applications
Before determining the identity approach for your application, you need to know
When building any Azure application, you must first decide on the authentication technology:

-- **Applications using modern authentication** – Applications using OAuth, OpenID Connect, and/or other modern authentication protocols supported by Azure AD such as newly developed application built using PaaS technologies (for example, Web Apps, Azure SQL Database, and so on).
-- **Applications using legacy authentication protocols (Kerberos/NTLM)** – Applications typically migrated from on-premises (for example, lift-and-shift applications).
+- **Applications using modern authentication** – Applications using OAuth, OpenID Connect, and/or other modern authentication protocols supported by Azure AD, such as newly developed applications built using PaaS technologies, for example, Web Apps, Azure SQL Database, and so on.
+- **Applications using legacy authentication protocols (Kerberos/NTLM)** – Applications typically migrated from on-premises, for example, lift-and-shift applications.
-Based on this decision there are different considerations when building and deploying on Azure Government.
+Based on this decision, there are different considerations when building and deploying on Azure Government.
### Applications using modern authentication in Azure Government
azure-monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-manage.md
The Log Analytics agent for Windows and Linux can be upgraded to the latest rele
| Environment | Installation Method | Upgrade method |
|--|--|--|
-| Azure VM | Log Analytics agent VM extension for Windows/Linux | Agent is automatically upgraded by default [after the VM model changes](../../virtual-machines/extensions/features-linux.md#how-agents-and-extensions-are-updated), unless you configured your Azure Resource Manager template to opt out by setting the property *autoUpgradeMinorVersion* to **false**. |
+| Azure VM | Log Analytics agent VM extension for Windows/Linux | Agent is automatically upgraded [after the VM model changes](../../virtual-machines/extensions/features-linux.md#how-agents-and-extensions-are-updated), unless you configured your Azure Resource Manager template to opt out by setting the property _autoUpgradeMinorVersion_ to **false**. Once deployed, however, the extension will not upgrade minor versions unless redeployed, even with this property set to true. Major version upgrade is always manual. See [VirtualMachineExtensionInner.AutoUpgradeMinorVersion Property](https://docs.azure.cn/dotnet/api/microsoft.azure.management.compute.fluent.models.virtualmachineextensioninner.autoupgrademinorversion?view=azure-dotnet). |
| Custom Azure VM images | Manual install of Log Analytics agent for Windows/Linux | Updating VMs to the newest version of the agent needs to be performed from the command line running the Windows installer package or Linux self-extracting and installable shell script bundle. |
| Non-Azure VMs | Manual install of Log Analytics agent for Windows/Linux | Updating VMs to the newest version of the agent needs to be performed from the command line running the Windows installer package or Linux self-extracting and installable shell script bundle. |
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
The following tables list the operating systems that are supported by the Azure
| Operating system | Azure Monitor agent <sup>1</sup> | Log Analytics agent <sup>1</sup> | Dependency agent | Diagnostics extension <sup>2</sup>|
|:--|:--:|:--:|:--:|:--:|
-| AlmaLinux | X | | | |
+| AlmaLinux | X | X | | |
| Amazon Linux 2017.09 | | X | | |
| Amazon Linux 2 | | X | | |
| CentOS Linux 8 | X <sup>3</sup> | X | X | |
The following tables list the operating systems that are supported by the Azure
| Red Hat Enterprise Linux Server 7 | X | X | X | X | | Red Hat Enterprise Linux Server 6 | | X | X | | | Red Hat Enterprise Linux Server 6.7+ | | X | X | X |
-| Rocky Linux | X | | | |
+| Rocky Linux | X | X | | |
| SUSE Linux Enterprise Server 15.2 | X <sup>3</sup> | | | |
| SUSE Linux Enterprise Server 15.1 | X <sup>3</sup> | X | | |
| SUSE Linux Enterprise Server 15 SP1 | X | X | X | |
azure-monitor Action Groups Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups-logic-app.md
Last updated 2/23/2022+

# How to trigger complex actions with Azure Monitor alerts
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md
Last updated 06/06/2022 -
- - references_regions
- - kr2b-contr-experiment
+++

# Create and manage action groups in the Azure portal
azure-monitor Alerts Enable Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-enable-template.md
Last updated 03/30/2022+

# Create a classic metric alert rule with a Resource Manager template
azure-monitor Alerts Manage Alerts Previous Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-manage-alerts-previous-version.md
Last updated 2/23/2022 +

# Manage alert rules created in previous versions
azure-monitor Alerts Metric Create Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-metric-create-templates.md
Last updated 4/4/2022 +

# Create a metric alert with a Resource Manager template
azure-monitor Alerts Metric Near Real Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-metric-near-real-time.md
Last updated 5/18/2022
+ms.reviewer: harelbr
# Supported resources for metric alerts in Azure Monitor
azure-monitor Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-overview.md
Last updated 06/09/2022 -+

# What are Azure Monitor Alerts?
See [this article](alerts-types.md) for detailed information about each alert ty
|Alert type|Description|
|:--|:--|
-|[Metric alerts](alerts-types.md#metric-alerts)|Metric alerts evaluate resource metrics at regular intervals. Metrics can be platform metrics, custom metrics, logs from Azure Monitor converted to metrics or Application Insights metrics. Metric alerts have several additional features (link), such as the ability to apply multiple conditions and dynamic thresholds.|
+|[Metric alerts](alerts-types.md#metric-alerts)|Metric alerts evaluate resource metrics at regular intervals. Metrics can be platform metrics, custom metrics, logs from Azure Monitor converted to metrics or Application Insights metrics. Metric alerts have several additional features, such as the ability to apply multiple conditions and dynamic thresholds.|
|[Log alerts](alerts-types.md#log-alerts)|Log alerts allow users to use a Log Analytics query to evaluate resource logs at a predefined frequency.|
|[Activity log alerts](alerts-types.md#activity-log-alerts)|Activity log alerts are triggered when a new activity log event occurs that matches the defined conditions.|
|[Smart detection alerts](alerts-types.md#smart-detection-alerts)|Smart detection on an Application Insights resource automatically warns you of potential performance problems and failure anomalies in your web application. You can migrate smart detection on your Application Insights resource to create alert rules for the different smart detection modules.|
azure-monitor Alerts Rate Limiting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-rate-limiting.md
Last updated 2/23/2022+ # Rate limiting for Voice, SMS, emails, Azure App push notifications and webhook posts
azure-monitor Alerts Resource Move https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-resource-move.md
Last updated 2/23/2022+ # How to update alert rules or alert processing rules when their target resource moves to a different Azure region
azure-monitor Alerts Sms Behavior https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-sms-behavior.md
Last updated 2/23/2022+ # SMS Alert Behavior in Action Groups
azure-monitor Alerts Troubleshoot Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot-metric.md
Last updated 5/25/2022
+ms.reviewer: harelbr
# Troubleshooting problems in Azure Monitor metric alerts
azure-monitor Alerts Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-types.md
Last updated 04/26/2022 -+ # Types of Azure Monitor alerts
This table can help you decide when to use what type of alert. For more detailed
||||
|Metric alert|Metric alerts are useful when you want to be alerted about data that requires little or no manipulation. Metric data is stored in the system already pre-computed, so metric alerts are less expensive than log alerts. If the data you want to monitor is available in metric data, you would want to use metric alerts.|Each metric alert rule is charged based on the number of time series that are monitored.|
|Log alert|Log alerts allow you to perform advanced logic operations on your data. If the data you want to monitor is available in logs, or requires advanced logic, you can use the robust features of KQL for data manipulation using log alerts. Log alerts are more expensive than metric alerts.|Each log alert rule is billed based on the interval at which the log query is evaluated (more frequent query evaluation results in a higher cost). Additionally, for log alerts configured for [at scale monitoring](#splitting-by-dimensions-in-log-alert-rules), the cost will also depend on the number of time series created by the dimensions resulting from your query.|
-|Activity Log alert|Activity logs provide auditing of all actions that occurred on resources. Use activity log alerts if you want to be alerted when a specific event happens to a resource, for example, a restart, a shutdown, or the creation or deletion of a resource.|For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/).|
+|Activity Log alert|Activity logs provide auditing of all actions that occurred on resources. Use activity log alerts to be alerted when a specific event happens to a resource, for example, a restart, a shutdown, or the creation or deletion of a resource.|For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/).|
## Metric alerts
azure-monitor Itsm Connector Secure Webhook Connections Azure Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsm-connector-secure-webhook-connections-azure-configuration.md
Title: IT Service Management Connector - Secure Webhook in Azure Monitor - Azure
description: This article shows you how to configure Azure in order to connect your ITSM products/services with Secure Webhook in Azure Monitor to centrally monitor and manage ITSM work items. Last updated 04/28/2022
+ms.reviewer:
azure-monitor Resource Manager Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/resource-manager-action-groups.md
Last updated 04/27/2022+
azure-monitor Resource Manager Alerts Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/resource-manager-alerts-metric.md
Last updated 04/27/2022 + # Resource Manager template samples for metric alert rules in Azure Monitor
azure-monitor Asp Net Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-dependencies.md
# Dependency Tracking in Azure Application Insights
-A *dependency* is a component that is called by your application. It's typically a service called using HTTP, or a database, or a file system. [Application Insights](./app-insights-overview.md) measures the duration of dependency calls, whether its failing or not, along with additional information like name of dependency and so on. You can investigate specific dependency calls, and correlate them to requests and exceptions.
+A *dependency* is a component that is called by your application. It's typically a service called using HTTP, or a database, or a file system. [Application Insights](./app-insights-overview.md) measures the duration of dependency calls, whether they succeed or fail, along with additional information such as the name of the dependency. You can investigate specific dependency calls, and correlate them to requests and exceptions.
## Automatically tracked dependencies
Application Insights SDKs for .NET and .NET Core ships with `DependencyTrackingT
||-|
|Http/Https | Local or Remote http/https calls |
|WCF calls| Only tracked automatically if Http-based bindings are used.|
-|SQL | Calls made with `SqlClient`. See [this](#advanced-sql-tracking-to-get-full-sql-query) for capturing SQL query. |
+|SQL | Calls made with `SqlClient`. See [this documentation](#advanced-sql-tracking-to-get-full-sql-query) for capturing SQL query. |
|[Azure storage (Blob, Table, Queue )](https://www.nuget.org/packages/WindowsAzure.Storage/) | Calls made with Azure Storage Client. |
-|[EventHub Client SDK](https://www.nuget.org/packages/Microsoft.Azure.EventHubs) | Version 1.1.0 and above. |
-|[ServiceBus Client SDK](https://www.nuget.org/packages/Microsoft.Azure.ServiceBus)| Version 3.0.0 and above. |
+|[EventHub Client SDK](https://nuget.org/packages/Azure.Messaging.EventHubs) | Use the latest package. |
+|[ServiceBus Client SDK](https://nuget.org/packages/Azure.Messaging.ServiceBus)| Use the latest package. |
|Azure Cosmos DB | Only tracked automatically if HTTP/HTTPS is used. TCP mode won't be captured by Application Insights. |

If you're missing a dependency, or using a different SDK, make sure it's in the list of [auto-collected dependencies](./auto-collect-dependencies.md). If the dependency isn't auto-collected, you can still track it manually with a [track dependency call](./api-custom-events-metrics.md#trackdependency).
To automatically track dependencies from .NET console apps, install the NuGet pa
depModule.Initialize(TelemetryConfiguration.Active); ```
-For .NET Core console apps TelemetryConfiguration.Active is obsolete. Refer to the guidance in the [worker service documentation](./worker-service.md) and the [ASP.NET Core monitoring documentation](./asp-net-core.md)
+For .NET Core console apps, `TelemetryConfiguration.Active` is obsolete. Refer to the guidance in the [worker service documentation](./worker-service.md) and the [ASP.NET Core monitoring documentation](./asp-net-core.md)
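As a hedged sketch of the recommended replacement for a .NET Core console or worker app (this assumes the Microsoft.ApplicationInsights.WorkerService NuGet package and the generic host; see the linked worker service documentation for the authoritative setup):

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

var host = Host.CreateDefaultBuilder(args)
    .ConfigureServices(services =>
    {
        // Registers Application Insights for worker/console apps,
        // which includes automatic dependency tracking.
        services.AddApplicationInsightsTelemetryWorkerService();
    })
    .Build();

await host.RunAsync();
```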
### How does automatic dependency monitoring work?
The following are some examples of dependencies, which aren't automatically coll
For those dependencies not automatically collected by SDK, you can track them manually using the [TrackDependency API](api-custom-events-metrics.md#trackdependency) that is used by the standard auto collection modules.
-For example, if you build your code with an assembly that you didn't write yourself, you could time all the calls to it, to find out what contribution it makes to your response times. To have this data displayed in the dependency charts in Application Insights, send it using `TrackDependency`.
+**Example**
+If you build your code with an assembly that you didn't write yourself, you could time all the calls to it. This scenario would allow you to find out what contribution it makes to your response times.
+
+To have this data displayed in the dependency charts in Application Insights, send it using `TrackDependency`.
```csharp
// Sketch only: assumes an initialized TelemetryClient named telemetryClient
// and a dependency exposed through a hypothetical dependency.Call() method.
var startTime = DateTime.UtcNow;
var timer = System.Diagnostics.Stopwatch.StartNew();
var success = false;
try
{
    // Time the call to the external assembly or service.
    success = dependency.Call();
}
finally
{
    timer.Stop();
    telemetryClient.TrackDependency("MyDependencyType", "myDependency", "myCall", startTime, timer.Elapsed, success);
}
```
-Alternatively, `TelemetryClient` provides extension methods `StartOperation` and `StopOperation` which can be used to manually track dependencies, as shown [here](custom-operations-tracking.md#outgoing-dependencies-tracking)
+Alternatively, `TelemetryClient` provides extension methods `StartOperation` and `StopOperation`, which can be used to manually track dependencies as shown [here](custom-operations-tracking.md#outgoing-dependencies-tracking).
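A minimal sketch of the `StartOperation`/`StopOperation` pattern, assuming an already-initialized `TelemetryClient` named `telemetryClient` (the operation name `"CallExternalService"` is illustrative):

```csharp
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

// Sketch only: telemetryClient is assumed to be configured elsewhere.
using (var operation = telemetryClient.StartOperation<DependencyTelemetry>("CallExternalService"))
{
    try
    {
        // ... make the dependency call here ...
        operation.Telemetry.Success = true;
    }
    catch (Exception)
    {
        operation.Telemetry.Success = false;
        throw;
    }
} // Disposing the operation stops the timer and sends the telemetry.
```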
If you want to switch off the standard dependency tracking module, remove the reference to DependencyTrackingTelemetryModule in [ApplicationInsights.config](../../azure-monitor/app/configuration-with-applicationinsights-config.md) for ASP.NET applications. For ASP.NET Core applications, follow instructions [here](asp-net-core.md#configuring-or-removing-default-telemetrymodules).
For web pages, Application Insights JavaScript SDK automatically collects AJAX c
> [!NOTE] > Azure Functions requires separate settings to enable SQL text collection: within [host.json](../../azure-functions/functions-host-json.md#applicationinsights) set `"EnableDependencyTracking": true,` and `"DependencyTrackingOptions": { "enableSqlCommandTextInstrumentation": true }` in `applicationInsights`.
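A minimal `host.json` fragment illustrating the settings quoted in the note above (key placement under `logging.applicationInsights` is an assumption here; verify against your Functions host version):

```json
{
  "logging": {
    "applicationInsights": {
      "EnableDependencyTracking": true,
      "DependencyTrackingOptions": {
        "enableSqlCommandTextInstrumentation": true
      }
    }
  }
}
```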
-For SQL calls, the name of the server and database is always collected and stored as name of the collected `DependencyTelemetry`. There's an additional field called 'data', which can contain the full SQL query text.
+For SQL calls, the name of the server and database is always collected and stored as name of the collected `DependencyTelemetry`. There's another field called 'data', which can contain the full SQL query text.
-For ASP.NET Core applications, It is now required to opt-in to SQL Text collection by using
+For ASP.NET Core applications, it's now required to opt in to SQL text collection by using
```csharp
services.ConfigureTelemetryModule<DependencyTrackingTelemetryModule>((module, o) =>
{
    module.EnableSqlCommandTextInstrumentation = true;
});
```
For ASP.NET applications, full SQL query text is collected with the help of byte
| Platform | Step(s) Needed to get full SQL Query | | | |
-| Azure Web App |In your web app control panel, [open the Application Insights blade](../../azure-monitor/app/azure-web-apps.md) and enable SQL Commands under .NET |
-| IIS Server (Azure VM, on-prem, and so on.) | Either use the [Microsoft.Data.SqlClient](https://www.nuget.org/packages/Microsoft.Data.SqlClient) NuGet package or use the Status Monitor PowerShell Module to [install the Instrumentation Engine](../../azure-monitor/app/status-monitor-v2-api-reference.md#enable-instrumentationengine) and restart IIS. |
+| Azure Web App |In your web app control panel, [open the Application Insights pane](../../azure-monitor/app/azure-web-apps.md) and enable SQL Commands under .NET |
+| IIS Server (Azure VM, on-premises, and so on.) | Either use the [Microsoft.Data.SqlClient](https://www.nuget.org/packages/Microsoft.Data.SqlClient) NuGet package or use the Status Monitor PowerShell Module to [install the Instrumentation Engine](../../azure-monitor/app/status-monitor-v2-api-reference.md#enable-instrumentationengine) and restart IIS. |
| Azure Cloud Service | Add [startup task to install StatusMonitor](../../azure-monitor/app/cloudservices.md#set-up-status-monitor-to-collect-full-sql-queries-optional) <br> Your app should be onboarded to ApplicationInsights SDK at build time by installing NuGet packages for [ASP.NET](./asp-net.md) or [ASP.NET Core applications](./asp-net-core.md) |
| IIS Express | Use the [Microsoft.Data.SqlClient](https://www.nuget.org/packages/Microsoft.Data.SqlClient) NuGet package. |
| Azure Web Jobs | Use the [Microsoft.Data.SqlClient](https://www.nuget.org/packages/Microsoft.Data.SqlClient) NuGet package. |
In the above cases, the correct way of validating that instrumentation engine is
* [Application Map](app-map.md) visualizes dependencies between your app and neighboring components. * [Transaction Diagnostics](transaction-diagnostics.md) shows unified, correlated server data. * [Browsers tab](javascript.md) shows AJAX calls from your users' browsers.
-* Click through from slow or failed requests to check their dependency calls.
+* Select from slow or failed requests to check their dependency calls.
* [Analytics](#logs-analytics) can be used to query dependency data. ## <a name="diagnosis"></a> Diagnose slow requests
-Each request event is associated with the dependency calls, exceptions, and other events that are tracked while your app is processing the request. So if some requests are doing badly, you can find out whether it's because of slow responses from a dependency.
+Each request event is associated with the dependency calls, exceptions, and other events tracked while processing the request. So if some requests are doing badly, you can find out whether it's because of slow responses from a dependency.
### Tracing from requests to dependencies Open the **Performance** tab and navigate to the **Dependencies** tab at the top next to operations.
-Click on a **Dependency Name** under overall. After you select a dependency a graph of that dependency's distribution of durations will show up on the right.
+Select a **Dependency Name** under overall. After you select a dependency, a graph of that dependency's distribution of durations will show up on the right.
![In the performance tab click on the Dependency tab at the top then a Dependency name in the chart](./media/asp-net-dependencies/2-perf-dependencies.png)
-Click on the blue **Samples** button on the bottom right and then on a sample to see the end-to-end transaction details.
+Select the blue **Samples** button on the bottom right and then on a sample to see the end-to-end transaction details.
![Click on a sample to see the end-to-end transaction details](./media/asp-net-dependencies/3-end-to-end.png)
No idea where the time goes? The [Application Insights profiler](../../azure-mon
Failed requests might also be associated with failed calls to dependencies.
-We can go to the **Failures** tab on the left and then click on the **dependencies** tab at the top.
+We can go to the **Failures** tab on the left and then select the **dependencies** tab at the top.
![Click the failed requests chart](./media/asp-net-dependencies/4-fail.png)
-Here you will be able to see the failed dependency count. To get more details about a failed occurrence trying clicking on a dependency name in the bottom table. You can click on the blue **Dependencies** button at the bottom right to get the end-to-end transaction details.
+Here you'll be able to see the failed dependency count. To get more details about a failed occurrence, try selecting a dependency name in the bottom table. You can select the blue **Dependencies** button at the bottom right to get the end-to-end transaction details.
## Logs (Analytics)
You can track dependencies in the [Kusto query language](/azure/kusto/query/). H
### *How does automatic dependency collector report failed calls to dependencies?*
-* Failed dependency calls will have 'success' field set to False. `DependencyTrackingTelemetryModule` does not report `ExceptionTelemetry`. The full data model for dependency is described [here](data-model-dependency-telemetry.md).
+* Failed dependency calls will have the 'success' field set to False. `DependencyTrackingTelemetryModule` doesn't report `ExceptionTelemetry`. The full data model for dependency is described [here](data-model-dependency-telemetry.md).
### *How do I calculate ingestion latency for my dependency telemetry?*
dependencies
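As a sketch, such a latency query could look like the following (assuming the standard `dependencies` table and the Kusto `ingestion_time()` function):

```kusto
dependencies
| extend ingestionLatency = ingestion_time() - timestamp
| summarize avg(ingestionLatency), percentile(ingestionLatency, 95)
```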
### *How do I determine the time the dependency call was initiated?*
-In the Log Analytics query view `timestamp` represents the moment the TrackDependency() call was initiated which occurs immediately after the dependency call response is received. To calculate the time when the dependency call began, you would take `timestamp` and subtract the recorded `duration` of the dependency call.
+In the Log Analytics query view, `timestamp` represents the moment the TrackDependency() call was initiated, which occurs immediately after the dependency call response is received. To calculate the time when the dependency call began, you would take `timestamp` and subtract the recorded `duration` of the dependency call.
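The subtraction described above can be sketched as a query (assuming `duration` is recorded in milliseconds, as in the standard `dependencies` schema):

```kusto
dependencies
| extend callStartTime = timestamp - (duration * 1ms)
| project name, callStartTime, timestamp, duration
```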
## Open-source SDK Like every Application Insights SDK, dependency collection module is also open-source. Read and contribute to the code, or report issues at [the official GitHub repo](https://github.com/Microsoft/ApplicationInsights-dotnet).
azure-monitor Usage Heart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-heart.md
To view your saved workbook, navigate to the 'Workbooks' section under 'Monitori
:::image type="content" source="media/usage-overview/workbook-view-faq.png" alt-text="Screenshot highlighting the 'Workbooks' button next to the 'Public templates' tab, where the edited copy of the workbook will be found.":::
-For more on editing workbook templates, refer to the [Exploring a Workbook Template](../visualize/workbooks-overview.md#exploring-a-workbook-template) page.
+For more on editing workbook templates, refer to the [Azure Workbook templates](../visualize/workbooks-templates.md) page.
azure-monitor Container Insights Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost.md
This workbook helps you to visualize the source of your data without having to b
[![Data usage workbook](media/container-insights-cost/data-usage-workbook.png)](media/container-insights-cost/data-usage-workbook.png#lightbox)
-To learn about managing rights and permissions to the workbook, review [Access control](../visualize/workbooks-access-control.md).
+To learn about managing rights and permissions to the workbook, review [Access control](../visualize/workbooks-overview.md#access-control).
After completing your analysis to determine which source or sources are generating the most data or more data that are exceeding your requirements, you can reconfigure data collection. Details on configuring collection of stdout, stderr, and environmental variables is described in the [Configure agent data collection settings](container-insights-agent-config.md) article.
azure-monitor Azure Networking Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/azure-networking-analytics.md
Azure Monitor offers the following solutions for monitoring your networks:
* Azure Application Gateway metrics * Solutions to monitor and audit network activity on your cloud network * [Traffic Analytics](../../networking/network-monitoring-overview.md#traffic-analytics)
- * Azure Network Security Group Analytics
## Network Performance Monitor (NPM)
The [Network Performance Monitor](../../networking/network-monitoring-overview.m
For more information, see [Network Performance Monitor](../../networking/network-monitoring-overview.md).
-## Network Security Group analytics
-
-1. Add the management solution to Azure Monitor, and
-2. Enable diagnostics to direct the diagnostics to a Log Analytics workspace in Azure Monitor. It is not necessary to write the logs to Azure Blob storage.
-
-If diagnostic logs are not enabled, the dashboard blades for that resource are blank and display an error message.
## Azure Application Gateway analytics
If diagnostic logs are not enabled, the dashboard blades for that resource are b
If diagnostic logs are not enabled for Application Gateway, only the default metric data would be populated within the workbook.
-> [!NOTE]
-> In January 2017, the supported way of sending logs from Application Gateways and Network Security Groups to a Log Analytics workspace changed. If you see the **Azure Networking Analytics (deprecated)** solution, refer to [migrating from the old Networking Analytics solution](#migrating-from-the-old-networking-analytics-solution) for steps you need to follow.
->
->
- ## Review Azure networking data collection details The Azure Application Gateway analytics and the Network Security Group analytics management solutions collect diagnostics logs directly from Azure Application Gateways and Network Security Groups. It is not necessary to write the logs to Azure Blob storage and no agent is required for data collection.
The Network Insights workbook allows you to take advantage of the latest capabil
* Flexible canvas to support creation of custom rich [visualizations](../visualize/workbooks-overview.md#visualizations).
-* Ability to consume and [share workbook templates](../visualize/workbooks-overview.md#workbooks-versus-workbook-templates) with wider community.
+* Ability to consume and [share workbook templates](../visualize/workbooks-templates.md) with wider community.
To find more information about the capabilities of the new workbook solution check out [Workbooks-overview](../visualize/workbooks-overview.md)
To find more information about the capabilities of the new workbook solution che
[ ![Screenshot of the delete option for Azure Application Gateway analytics solution.](media/azure-networking-analytics/azure-appgw-analytics-delete.png)](media/azure-networking-analytics/application-gateway-analytics-delete.png#lightbox)
-## Azure Network Security Group analytics solution in Azure Monitor
-
-![Azure Network Security Group Analytics symbol](media/azure-networking-analytics/azure-analytics-symbol.png)
-
-> [!NOTE]
-> The Network Security Group analytics solution is moving to community support since its functionality has been replaced by [Traffic Analytics](../../network-watcher/traffic-analytics.md).
-> - The solution is now available in [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/oms-azurensg-solution/) and will soon no longer be available in the Azure Marketplace.
-> - For existing customers who already added the solution to their workspace, it will continue to function with no changes.
-> - Microsoft will continue to support sending NSG resource logs to your workspace using Diagnostics Settings.
-
-The following logs are supported for network security groups:
-
-* NetworkSecurityGroupEvent
-* NetworkSecurityGroupRuleCounter
-
-### Install and configure the solution
-Use the following instructions to install and configure the Azure Networking Analytics solution:
-
-1. Enable the Azure Network Security Group analytics solution by using the process described in [Add Azure Monitor solutions from the Solutions Gallery](./solutions.md).
-2. Enable diagnostics logging for the [Network Security Group](../../virtual-network/virtual-network-nsg-manage-log.md) resources you want to monitor.
-
-### Enable Azure network security group diagnostics in the portal
-
-1. In the Azure portal, navigate to the Network Security Group resource to monitor
-2. Select *Diagnostics logs* to open the following page
-
- ![Screenshot of the Diagnostics logs page for a Network Security Group resource showing the option to Turn on diagnostics.](media/azure-networking-analytics/log-analytics-nsg-enable-diagnostics01.png)
-3. Click *Turn on diagnostics* to open the following page
-
- ![Screenshot of the page for configuring Diagnostics settings. Status is set to On, Send to Log Analytics is selected and two Log types are selected.](media/azure-networking-analytics/log-analytics-nsg-enable-diagnostics02.png)
-4. To turn on diagnostics, click *On* under *Status*
-5. Click the checkbox for *Send to Log Analytics*
-6. Select an existing Log Analytics workspace, or create a workspace
-7. Click the checkbox under **Log** for each of the log types to collect
-8. Click *Save* to enable the logging of diagnostics to Log Analytics
-
-### Enable Azure network diagnostics using PowerShell
-
-The following PowerShell script provides an example of how to enable resource logging for network security groups
-```powershell
-$workspaceId = "/subscriptions/d2e37fee-1234-40b2-5678-0b2199de3b50/resourcegroups/oi-default-east-us/providers/microsoft.operationalinsights/workspaces/rollingbaskets"
-
-$nsg = Get-AzNetworkSecurityGroup -Name 'ContosoNSG'
-
-Set-AzDiagnosticSetting -ResourceId $nsg.ResourceId -WorkspaceId $workspaceId -Enabled $true
-```
-
-### Use Azure Network Security Group analytics
-After you click the **Azure Network Security Group analytics** tile on the Overview, you can view summaries of your logs and then drill in to details for the following categories:
-
-* Network security group blocked flows
- * Network security group rules with blocked flows
- * MAC addresses with blocked flows
-* Network security group allowed flows
- * Network security group rules with allowed flows
- * MAC addresses with allowed flows
-
-![Screenshot of tiles with data for Network security group blocked flows, including Rules with blocked flows and MAC addresses with blocked flows.](media/azure-networking-analytics/log-analytics-nsg01.png)
-
-![Screenshot of tiles with data for Network security group allowed flows, including Rules with allowed flows and MAC addresses with allowed flows.](media/azure-networking-analytics/log-analytics-nsg02.png)
-
-On the **Azure Network Security Group analytics** dashboard, review the summary information in one of the blades, and then click one to view detailed information on the log search page.
-
-On any of the log search pages, you can view results by time, detailed results, and your log search history. You can also filter by facets to narrow the results.
-
-## Migrating from the old Networking Analytics solution
-In January 2017, the supported way of sending logs from Azure Application Gateways and Azure Network Security Groups to a Log Analytics workspace changed. These changes provide the following advantages:
-+ Logs are written directly to Azure Monitor without the need to use a storage account
-+ Less latency from the time when logs are generated to them being available in Azure Monitor
-+ Fewer configuration steps
-+ A common format for all types of Azure diagnostics
-
-To use the updated solutions:
-
-1. [Configure diagnostics to be sent directly to Azure Monitor from Azure Application Gateways](#enable-azure-application-gateway-diagnostics-in-the-portal)
-2. [Configure diagnostics to be sent directly to Azure Monitor from Azure Network Security Groups](#enable-azure-network-security-group-diagnostics-in-the-portal)
-2. Enable the *Azure Application Gateway Analytics* and the *Azure Network Security Group Analytics* solution by using the process described in [Add Azure Monitor solutions from the Solutions Gallery](solutions.md)
-3. Update any saved queries, dashboards, or alerts to use the new data type
- + Type is to AzureDiagnostics. You can use the ResourceType to filter to Azure networking logs.
-
- | Instead of: | Use: |
- | | |
- | NetworkApplicationgateways &#124; where OperationName=="ApplicationGatewayAccess" | AzureDiagnostics &#124; where ResourceType=="APPLICATIONGATEWAYS" and OperationName=="ApplicationGatewayAccess" |
- | NetworkApplicationgateways &#124; where OperationName=="ApplicationGatewayPerformance" | AzureDiagnostics &#124; where ResourceType=="APPLICATIONGATEWAYS" and OperationName=="ApplicationGatewayPerformance" |
- | NetworkSecuritygroups | AzureDiagnostics &#124; where ResourceType=="NETWORKSECURITYGROUPS" |
-
- + For any field that has a suffix of \_s, \_d, or \_g in the name, change the first character to lower case
- + For any field that has a suffix of \_o in name, the data is split into individual fields based on the nested field names.
-4. Remove the *Azure Networking Analytics (Deprecated)* solution.
- + If you are using PowerShell, use `Set-AzureOperationalInsightsIntelligencePack -ResourceGroupName <resource group that the workspace is in> -WorkspaceName <name of the log analytics workspace> -IntelligencePackName "AzureNetwork" -Enabled $false`
-
-Data collected before the change is not visible in the new solution. You can continue to query for this data using the old Type and field names.
## Troubleshooting [!INCLUDE [log-analytics-troubleshoot-azure-diagnostics](../../../includes/log-analytics-troubleshoot-azure-diagnostics.md)]
azure-monitor Oms Portal Transition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/oms-portal-transition.md
The OMS mobile app will be sunsetted along with the OMS portal. Instead of the O
As such, Application Insights Connector was deprecated and removed from Azure Marketplace along with OMS portal deprecation on March 30, 2019. Existing connections will continue to work until June 30, 2019. With OMS portal deprecation, there is no way to configure and remove existing connections from the portal. This will be supported using the REST API that will be made available in January, 2019 and a notification will be posted on [Azure updates](https://azure.microsoft.com/updates/). ## Azure Network Security Group Analytics
-The [Azure Network Security Group Analytics solution](../insights/azure-networking-analytics.md#azure-network-security-group-analytics-solution-in-azure-monitor) will be replaced with the recently launched [Traffic Analytics](https://azure.microsoft.com/blog/traffic-analytics-in-preview/) which provides visibility into user and application activity on cloud networks. Traffic Analytics helps you audit your organization's network activity, secure applications and data, optimize workload performance and stay compliant.
+The [Azure Network Security Group Analytics solution](../insights/azure-networking-analytics.md) will be replaced with the recently launched [Traffic Analytics](https://azure.microsoft.com/blog/traffic-analytics-in-preview/) which provides visibility into user and application activity on cloud networks. Traffic Analytics helps you audit your organization's network activity, secure applications and data, optimize workload performance and stay compliant.
This solution analyzes NSG Flow logs and provides insights into the following.
This solution analyzes NSG Flow logs and provides insights into the following.
- Security including malicious traffic, ports open to the Internet, applications or VMs attempting Internet access. - Capacity utilization, which helps you eliminate issues of over provisioning or underutilization.
-You can continue to rely on Diagnostics Settings to send NSG logs to Log Analytics so your existing saved searches, alerts, dashboards will continue to work. Customers who have already installed the solution can continue to use it until further notice. Starting September 5, the Network Security Group Analytics solution will be removed from the marketplace and made available through the community as a [Azure QuickStart Template](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Operationalinsights).
+You can continue to rely on Diagnostics Settings to send NSG logs to Log Analytics so your existing saved searches, alerts, dashboards will continue to work. Customers who have already installed the solution can continue to use it until further notice. Starting September 5, the Network Security Group Analytics solution will be removed from the marketplace and made available through the community as an [Azure Quickstart Template](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Operationalinsights).
## System Center Operations Manager If you've [connected your Operations Manager management group to Log Analytics](../agents/om-agents.md), then it will continue to work with no changes. For new connections though, you must follow the guidance in [Microsoft System Center Operations Manager Management Pack to configure Operations Management Suite](https://techcommunity.microsoft.com/t5/system-center-blog/bg-p/SystemCenterBlog). ## Next steps-- See [Common questions for transition from OMS portal to Azure portal for Log Analytics users](../overview.md) for guidance on moving from the OMS portal to the Azure portal.
+- See [Common questions for transition from OMS portal to Azure portal for Log Analytics users](../overview.md) for guidance on moving from the OMS portal to the Azure portal.
azure-monitor View Designer Conversion Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/view-designer-conversion-overview.md
While this guide offers simple steps to directly recreate several of the commonl
![Example of workbooks application](media/view-designer-conversion-overview/workbook-template-example.jpg)
-## How to start using workbooks
-Open workbooks from the Workbooks tile under your Log Analytics workspace.
-![Workbooks navigation](media/view-designer-conversion-overview/workbooks-nav.png)
-
-Once selected, a gallery will be displayed listing out all the saved workbooks and templates for your workspace.
-
-![Workbooks gallery](media/view-designer-conversion-overview/workbooks-gallery.png)
-
-To start a new workbook, you may select the **Empty** template under **Quick start**, or the **New** icon in the top navigation bar. To view templates or return to saved workbooks, select the item from the gallery or search for the name in the search bar.
-
-To save a workbook, you will need to save the report with a specific title, subscription, resource group, and location.
-The workbook will autofill to the same settings as the LA workspace, with the same subscription, resource group, however, users may change these report settings. Workbooks are shared resources that require write access to the parent resource group to be saved.
## Next steps
azure-monitor Workbooks Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-access-control.md
- Title: Azure Monitor Workbooks access control
-description: Simplify complex reporting with prebuilt and custom parameterized workbooks with role based access control
--- Previously updated : 07/16/2021--
-# Access control
-
-Access control in workbooks refers to two things:
-
-* Access required to read data in a workbook. This access is controlled by standard [Azure roles](../../role-based-access-control/overview.md) on the resources used in the workbook. Workbooks do not specify or configure access to those resources. Users would usually get this access to those resources using the [Monitoring Reader](../../role-based-access-control/built-in-roles.md#monitoring-reader) role on those resources.
-
-* Access required to save workbooks
-
- - Saving workbooks requires write privileges in a resource group to save the workbook. These privileges are usually specified by the [Monitoring Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) role, but can also be set via the *Workbooks Contributor* role.
-
-## Standard roles with workbook-related privileges
-
-[Monitoring Reader](../../role-based-access-control/built-in-roles.md#monitoring-reader) includes standard /read privileges that would be used by monitoring tools (including workbooks) to read data from resources.
-
-[Monitoring Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) includes general `/write` privileges used by various monitoring tools for saving items (including `workbooks/write` privilege to save shared workbooks).
-"Workbooks Contributor" adds "workbooks/write" privileges to an object to save shared workbooks.
-
-For custom roles:
-
-Add `microsoft.insights/workbooks/write` to save workbooks. For more details, see the [Workbook Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) role.
-
-## Next steps
-
-* [Get started](./workbooks-overview.md#visualizations) learning more about workbooks many rich visualizations options.
azure-monitor Workbooks Add Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-add-text.md
+
+ Title: Azure Workbooks text parameters
+description: Learn about adding text parameters to your Azure workbook.
++++ Last updated : 05/30/2022+++
+# Adding text to your workbook
+
+Workbooks allow authors to include text blocks in their workbooks. The text can be human analysis of the telemetry, information to help users interpret the data, section headings, etc.
+
+ :::image type="content" source="media/workbooks-add-text/workbooks-text-example.png" alt-text="Screenshot of adding text to a workbook.":::
+
+Text is added through a markdown control into which an author can add content. An author can use the full formatting capabilities of markdown to make their documents appear just the way they want. These include different heading and font styles, hyperlinks, tables, and more. This allows authors to create rich Word- or portal-like reports or analytic narratives. Text steps can contain parameter values in the markdown text, and those parameter references will be updated as the parameters change.
+
+**Edit mode**:
+ :::image type="content" source="media/workbooks-add-text/workbooks-text-control-edit-mode.png" alt-text="Screenshot showing adding text to a workbook in edit mode.":::
+
+**Preview mode**:
+ :::image type="content" source="media/workbooks-add-text/workbooks-text-control-edit-mode-preview.png" alt-text="Screenshot showing adding text to a workbook in preview mode.":::
+
+## Add text
+1. Switch the workbook to edit mode by clicking on the _Edit_ toolbar item.
+1. Use the _Add_ button below a step or at the bottom of the workbook, and choose "Add Text" to add a text control to the workbook.
+1. Enter markdown text into the editor field.
+1. Use the _Text Style_ option to switch between plain markdown, and markdown wrapped with the Azure portal's standard info/warning/success/error styling.
+
+ > [!TIP]
+ > Use this [markdown cheat sheet](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet) to see the different formatting options.
+
+1. Use the Preview tab to see how your content will look. While editing, the preview will show the content inside a scrollable area to limit its size, but when displayed at runtime, the markdown content will expand to fill whatever space it needs, with no scrollbars.
+1. Select the _Done Editing_ button to finish editing the step.
+
+## Text styles
+The following text styles are available for text steps:
+
+| Style | Description |
+| | |
+| `plain` | No other formatting is applied |
+| `info` | The portal's "info" style, with a `ℹ` or similar icon and blue background |
+| `error` | The portal's "error" style, with a `❌` or similar icon and red background |
+| `success` | The portal's "success" style, with a `✔` or similar icon and green background |
+| `upsell` | The portal's "upsell" style, with a `🚀` or similar icon and purple background |
+| `warning` | The portal's "warning" style, with a `⚠` or similar icon and blue background |
++
+Instead of picking a specific style, you may also choose a text parameter as the source of the style. The parameter value must be one of the above style names. The absence of a value, or any unrecognized value, will be treated as the `plain` style.
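This fallback behavior can be sketched as follows (illustrative Python, assuming only the documented style names; `effective_style` is a hypothetical helper, not part of Workbooks):

```python
# Hypothetical sketch: a style parameter value outside the known set of
# styles, or no value at all, falls back to the "plain" style.
KNOWN_STYLES = {"plain", "info", "error", "success", "upsell", "warning"}

def effective_style(value):
    # Any missing or unrecognized value is treated as "plain".
    return value if value in KNOWN_STYLES else "plain"

print(effective_style("warning"))  # warning
print(effective_style(None))       # plain
print(effective_style("banner"))   # plain
```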
+
+### info style example:
+ :::image type="content" source="media/workbooks-add-text/workbooks-text-control-edit-mode-preview.png" alt-text="Screenshot of adding text to a workbook in preview mode showing info style.":::
+
+### warning style example:
+ :::image type="content" source="media/workbooks-add-text/workbooks-text-example-warning.png" alt-text="Screenshot of a text visualization in warning style.":::
+
+## Next Steps
+- [Add Workbook parameters](workbooks-parameters.md)
azure-monitor Workbooks Combine Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-combine-data.md
+
+ Title: Combine data from different sources in your Azure Workbook
+description: Learn how to combine data from different sources in your Azure Workbook.
++++ Last updated : 05/30/2022+++
+# Combine data from different sources
+
+It is often necessary to bring together data from different sources that enhance the insights experience. An example is augmenting active alert information with related metric data. This allows users to see not just the effect (an active alert), but also potential causes (for example, high CPU usage). The monitoring domain has numerous such correlatable data sources that are often critical to the triage and diagnostic workflow.
+
+Workbooks not only let you query different data sources, but also provide simple controls that let you merge or join the data to provide rich insights. The `merge` control is the way to achieve this.
+
+## Combining alerting data with Log Analytics VM performance data
+
+The example below combines alerting data with Log Analytics VM performance data to get a rich insights grid.
+
+![Screenshot of a workbook with a merge control that combines alert and log analytics data.](./media/workbooks-data-sources/merge-control.png)
+
+## Using merge control to combine Azure Resource Graph and Log Analytics data
+
+Here is a tutorial on using the merge control to combine Azure Resource Graph and Log Analytics data:
+
+[![Combining data from different sources in workbooks](https://img.youtube.com/vi/7nWP_YRzxHg/0.jpg)](https://www.youtube.com/watch?v=7nWP_YRzxHg "Video showing how to combine data from different sources in workbooks.")
+
+Workbooks support these merges:
+
+* Inner unique join
+* Full inner join
+* Full outer join
+* Left outer join
+* Right outer join
+* Left semi-join
+* Right semi-join
+* Left anti-join
+* Right anti-join
+* Union
+* Duplicate table
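As an illustration of these merge semantics, here is a minimal sketch (hypothetical Python, not Workbooks code; the grids and column names are made up) of how a left outer join combines an alerts grid with a performance grid on a shared key column — every left-hand row is kept, whether or not it has a match:

```python
# Hypothetical sketch of "left outer join" merge semantics.
def left_outer_join(left, right, key):
    # Index the right-hand rows by their key value for O(1) lookup.
    index = {row[key]: row for row in right}
    merged = []
    for row in left:
        match = index.get(row[key], {})
        # Keep every left row; append matching right columns when present.
        combined = {**row, **{k: v for k, v in match.items() if k != key}}
        merged.append(combined)
    return merged

alerts = [{"vm": "vm1", "alert": "High CPU"}, {"vm": "vm2", "alert": "Low disk"}]
perf = [{"vm": "vm1", "cpu_pct": 97.0}]
print(left_outer_join(alerts, perf, "vm"))
# vm2 keeps its alert row even though no performance row matches it
```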
+
+## Next steps
+ - [Getting started with Azure Workbooks](workbooks-getting-started.md).
+ - [Azure workbooks data sources](workbooks-data-sources.md).
azure-monitor Workbooks Composite Bar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-composite-bar.md
The Composite bar view for Graph with the above settings will look like this:
## Next steps
-* [Deploy](../visualize/workbooks-automate.md) workbooks with Azure Resource Manager.
-* [Control](./workbooks-access-control.md) and share access to your workbook resources.
+ - [Getting started with Azure Workbooks](workbooks-getting-started.md).
azure-monitor Workbooks Create Workbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-create-workbook.md
+
+ Title: Create an Azure Workbook
+description: Learn how to create an Azure Workbook.
++++ Last updated : 05/30/2022+++
+# Create an Azure Workbook
+
+This video provides a walkthrough of creating workbooks.
+
+> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4B4Ap]
+
+## To create a new Azure Workbook
+To create a new Azure workbook:
+1. From the Azure Workbooks page, select an empty template or select **New** in the top toolbar.
+1. Combine any of these steps to include the elements you want in the workbook:
+ - [Add text to your workbook](workbooks-add-text.md)
+ - [Add parameters to your workbook](workbooks-parameters.md)
+ - Add queries to your workbook
+ - [Combine data from different sources](workbooks-combine-data.md)
+ - Add Metrics to your workbook
+ - Add Links to your workbook
+ - Add Groups to your workbook
+ - Add more configuration options to your workbook
++
+## Next steps
+- [Getting started with Azure Workbooks](workbooks-getting-started.md).
+- [Azure workbooks data sources](workbooks-data-sources.md).
azure-monitor Workbooks Criteria https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-criteria.md
+
+ Title: Azure Workbooks criteria parameters.
+description: Learn about adding criteria parameters to your Azure workbook.
++++ Last updated : 05/30/2022+++
+# Text parameter criteria
+
+When a query depends on many parameters, the query is stalled until each of its parameters has been resolved. Sometimes a parameter could have a simple query that concatenates a string or performs a conditional evaluation. However, these queries still make network calls to services to perform these basic operations, which increases the time it takes for a parameter to resolve a value. This results in long load times for complex workbooks.
+
+Criteria text parameters solve this issue: an author can define a set of criteria based on previously specified parameters, which is evaluated to provide a dynamic value. The main benefit of criteria parameters is that they can resolve the values of previously specified parameters and perform simple conditional operations without making any network calls. Below is an example of such a use case.
+
+## Example
+Consider the conditional query below:
++
+```
+let metric = dynamic({Counter});
+print tostring((metric.object == 'Network Adapter' and (metric.counter == 'Bytes Received/sec' or metric.counter == 'Bytes Sent/sec')) or (metric.object == 'Network' and (metric.counter == 'Total Bytes Received' or metric.counter == 'Total Bytes Transmitted')))
+```
+
+If the user is focused on the `metric.counter` object, the value of the parameter `isNetworkCounter` should be true if the parameter `Counter` has `Bytes Received/sec`, `Bytes Sent/sec`, `Total Bytes Received`, or `Total Bytes Transmitted`.
+
+This can be translated to a criteria text parameter like so:
++
+In the image above, the conditions are evaluated from top to bottom, and the parameter `isNetworkCounter` takes the value of whichever condition evaluates to true first. All conditions except the default condition (the 'else' condition) can be reordered to get the desired outcome.
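The top-to-bottom, first-match-wins evaluation can be modeled with a small sketch (illustrative Python, not the actual Workbooks engine; the condition and result strings mirror the `rand` walkthrough in this article):

```python
# Hypothetical model: conditions are evaluated in order and the first
# matching one supplies the parameter's value; otherwise the default applies.
def resolve_criteria(conditions, default, params):
    for predicate, result in conditions:
        if predicate(params):
            return result
    return default

conditions = [
    (lambda p: float(p["rand"]) > 0.25, "is over 0.25"),
]
print(resolve_criteria(conditions, "is 0.25 or under", {"rand": "0.7"}))  # is over 0.25
print(resolve_criteria(conditions, "is 0.25 or under", {"rand": "0.1"}))  # is 0.25 or under
```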
+
+## Setting up criteria
+1. Start with a workbook with at least one existing parameter in edit mode.
+ 1. Choose Add parameters from the links within the workbook.
+ 1. Select the blue Add Parameter button.
+ 1. In the new parameter pane that pops up enter:
+ - Parameter name: rand
+ - Parameter type: Text
+ - Required: checked
+ - Get data from: Query
+ - Enter `print rand(0-1)` into the query editor. This parameter will output a value between 0 and 1.
+ 1. Choose 'Save' from the toolbar to create the parameter.
+
+ > [!NOTE]
+ > The first parameter in the workbook will not show the `Criteria` tab
+
+ :::image type="content" source="media/workbooks-criteria/workbooks-criteria-first-param.png" alt-text="Screenshot showing the first parameter.":::
+
+1. In the table with the 'rand' parameter, select the blue Add Parameter button.
+1. In the new parameter pane that pops up enter:
+ - Parameter name: randCriteria
+ - Parameter type: Text
+ - Required: checked
+ - Get data from: Criteria
+1. A grid should appear. Select 'Edit' next to the blank text box to bring up the 'Criteria Settings' form. Refer to [Criteria Settings form](#criteria-settings-form) for the description of each field.
+
+ :::image type="content" source="media/workbooks-criteria/workbooks-criteria-setting.png" alt-text="Screenshot showing the criteria settings form.":::
+
+1. Enter the data below to populate the first Criteria, then select 'OK'.
+ - First operand: rand
+ - Operator: >
+ - Value from: Static Value
+ - Second Operand: 0.25
+ - Value from: Static Value
+ - Result is: is over 0.25
+
+ :::image type="content" source="media/workbooks-criteria/workbooks-criteria-setting-filled.png" alt-text="Screenshot showing the criteria settings form filled.":::
+
+1. Select 'Edit' next to the condition `Click edit to specify a result for the default condition.` to edit the default condition.
+
+ > [!NOTE]
+> For the default condition, everything should be disabled except for the last `Value from` and `Result is` fields.
+
+1. Enter the data below to populate the default condition, then select 'OK'.
+ - Value from: Static Value
+ - Result is: is 0.25 or under
+
+ :::image type="content" source="media/workbooks-criteria/workbooks-criteria-default.png" alt-text="Screenshot showing the criteria settings default form filled.":::
+
+1. Save the parameter.
+1. Select the refresh button on the workbook to see the `randCriteria` parameter in action. Its value will be based on the value of `rand`!
+
+## Criteria settings form
+|Form fields|Description|
+|--|-|
+|First operand| This is a dropdown consisting of parameter names that have already been created. The value of the parameter will be used on the left hand side of the comparison |
+|Operator|The operator used to compare the first and the second operands. Can be a numerical or string evaluation. The operator `is empty` will disable the `Second operand` as only the `First operand` is required.|
+|Value from|If set to `Parameter`, a dropdown consisting of parameters that have already been created will be shown. The value of that parameter will be used on the right hand side of the comparison.<br/> If set to `Static Value`, a text box will be shown where an author can enter a value for the right hand side of the comparison.|
+|Second Operand| Will be either a dropdown menu consisting of created parameters, or a textbox depending on the above `Value from` selection.|
+|Value from|If set to `Parameter`, a dropdown consisting of parameters that have already been created will be shown. The value of that parameter will be used for the return value of the current parameter.<br/>If set to `Static Value`, a text box will be shown where an author can enter a value for the result. An author can also dereference other parameters by using curly braces around the parameter name, and can concatenate multiple parameters into a custom string, for example: "`{paramA}`, `{paramB}`, and some string".<br/><br/>If set to `Expression`, a text box will be shown where an author can enter a mathematical expression that will be evaluated as the result. As in the `Static Value` case, multiple parameters may be dereferenced in this text box. If a parameter value referenced in the text box is not a number, it will be treated as the value `0`.|
+|Result is| Will be either a dropdown menu consisting of created parameters, or a textbox, depending on the above `Value from` selection. The textbox will be evaluated as the final result of this Criteria Settings form.|
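The curly-brace dereferencing described for `Static Value` results can be sketched like this (illustrative Python; `substitute` is a hypothetical helper, not the Workbooks implementation):

```python
import re

# Hypothetical sketch of {parameter} dereferencing: each curly-brace name
# is replaced by its parameter value to build a custom result string.
def substitute(template, params):
    # Unknown parameter names resolve to an empty string in this sketch.
    return re.sub(r"\{(\w+)\}", lambda m: str(params.get(m.group(1), "")), template)

print(substitute("{paramA}, {paramB}, and some string",
                 {"paramA": "one", "paramB": "two"}))
# one, two, and some string
```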
azure-monitor Workbooks Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-data-sources.md
Title: Azure Monitor workbooks data sources | Microsoft docs
-description: Simplify complex reporting with prebuilt and custom parameterized Azure Monitor Workbooks built from multiple data sources
+ Title: Azure Workbooks data sources | Microsoft docs
+description: Simplify complex reporting with prebuilt and custom parameterized Azure Workbooks built from multiple data sources.
---++ Previously updated : 06/29/2020 Last updated : 05/30/2022+
-# Azure Monitor workbooks data sources
+# Azure Workbooks data sources
+
+Workbooks can extract data from these data sources:
-Workbooks are compatible with a large number of data sources. This article will walk you through data sources which are currently available for Azure Monitor workbooks.
+ - [Logs](#logs)
+ - [Metrics](#metrics)
+ - [Azure Resource Graph](#azure-resource-graph)
+ - [Azure Resource Manager](#azure-resource-manager)
+ - [Azure Data Explorer](../visualize/workbooks-data-sources.md#azure-data-explorer)
+ - [Workload health](#workload-health)
+ - [Azure resource health](#azure-resource-health)
+ - [Change Analysis (preview)](#change-analysis-preview)
+ - [JSON](#json)
+ - [Custom endpoint](#custom-endpoint)
+ - [Azure RBAC](#azure-rbac)
## Logs
To make a query control use this data source, use the Data source drop-down to c
## Azure Data Explorer Workbooks now have support for querying from [Azure Data Explorer](/azure/data-explorer/) clusters with the powerful [Kusto](/azure/kusto/query/index) query language.
-For the **Cluster Name** field, you should add ther region name following the cluster name. For example: *mycluster.westeurope*.
+For the **Cluster Name** field, you should add the region name following the cluster name. For example: *mycluster.westeurope*.
![Screenshot of Kusto query window](./media/workbooks-data-sources/data-explorer.png)
To make a query control use this data source, use the **Query type** drop-down t
## Change Analysis (preview)
-To make a query control using [Application Change Analysis](../app/change-analysis.md) as the data source, use the *Data source* drop down and choose *Change Analysis (preview)* and select a single resource. Changes for up to the last 14 days can be shown. The *Level* drop down can be used to filter between "Important", "Normal", and "Noisy" changes, and this drop down supports workbook parameters of type [drop down](workbooks-dropdowns.md).
+To make a query control using [Application Change Analysis](../app/change-analysis.md) as the data source, use the **Data source** drop-down, choose *Change Analysis (preview)*, and select a single resource. Changes for up to the last 14 days can be shown. The *Level* drop-down can be used to filter between "Important", "Normal", and "Noisy" changes, and this drop-down supports workbook parameters of type [drop down](workbooks-dropdowns.md).
> [!div class="mx-imgBorder"] > ![A screenshot of a workbook with Change Analysis](./media/workbooks-data-sources/change-analysis-data-source.png)
-## Merge data from different sources
-
-It is often necessary to bring together data from different sources that enhance the insights experience. An example is augmenting active alert information with related metric data. This allows users to see not just the effect (an active alert), but also potential causes (for example, high CPU usage). The monitoring domain has numerous such correlatable data sources that are often critical to the triage and diagnostic workflow.
-
-Workbooks allows not just the querying of different data sources, but also provides simple controls that allow you to merge or join the data to provide rich insights. The `merge` control is the way to achieve it.
-
-The example below combines alerting data with log analytics VM performance data to get a rich insights grid.
-
-> [!div class="mx-imgBorder"]
-> ![A screenshot of a workbook with a merge control that combines alert and log analytics data](./media/workbooks-data-sources/merge-control.png)
-
-Workbooks support a variety of merges:
-
-* Inner unique join
-* Full inner join
-* Full outer join
-* Left outer join
-* Right outer join
-* Left semi-join
-* Right semi-join
-* Left anti-join
-* Right anti-join
-* Union
-* Duplicate table
- ## JSON The JSON provider allows you to create a query result from static JSON content. It is most commonly used in Parameters to create dropdown parameters of static values. Simple JSON arrays or objects will automatically be converted into grid rows and columns. For more specific behaviors, you can use the Results tab and JSONPath settings to configure columns.
-This provider supports [JSONPath](workbooks-jsonpath.md).
-
-## Alerts (preview)
- > [!NOTE]
-> The suggested way to query for Azure Alert information is by using the [Azure Resource Graph](#azure-resource-graph) data source, by querying the `AlertsManagementResources` table.
->
-> See the [Azure Resource Graph table reference](../../governance/resource-graph/reference/supported-tables-resources.md), or the [Alerts template](https://github.com/microsoft/Application-Insights-Workbooks/blob/master/Workbooks/Azure%20Resources/Alerts/Alerts.workbook) for examples.
->
-> The Alerts data source will remain available for a period of time while authors transition to using ARG. Use of this data source in templates is discouraged.
+> Do not include any sensitive information in any fields (`headers`, `parameters`, `body`, `url`), since they will be visible to all of the Workbook users.
-Workbooks allow users to visualize the active alerts related to their resources.
-Limitations: the alerts data source requires read access to the Subscription in order to query resources, and may not show newer kinds of alerts.
-
-To make a query control use this data source, use the _Data source_ drop-down to choose _Alerts (preview)_ and select the subscriptions, resource groups, or resources to target. Use the alert filter drop downs to select an interesting subset of alerts for your analytic needs.
+This provider supports [JSONPath](workbooks-jsonpath.md).
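To illustrate the grid conversion described above, here is a small standalone sketch (plain Python, not the Workbooks implementation; the JSON content is made up) showing how a simple JSON array of objects maps to columns and rows:

```python
import json

# Hypothetical sketch: a static JSON array of objects becomes a grid,
# with object keys as columns and each object as a row.
source = '[{"env": "prod", "count": 12}, {"env": "dev", "count": 3}]'
rows = json.loads(source)

# Collect all keys across rows as columns (sorted for a stable order).
columns = sorted({key for row in rows for key in row})
grid = [[row.get(col) for col in columns] for row in rows]

print(columns)  # ['count', 'env']
print(grid)     # [[12, 'prod'], [3, 'dev']]
```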
## Custom endpoint
To make a query control use this data source, use the _Data source_ drop-down to
To avoid automatically making calls to untrusted hosts when using templates, the user needs to mark the used hosts as trusted. This can be done either by clicking the _Add as trusted_ button, or by adding the host as a trusted host in Workbook settings. These settings will be saved in [browsers that support IndexedDB with web workers](https://caniuse.com/#feat=indexeddb).
-> [!NOTE]
-> Do not write any secrets in any of the fields (`headers`, `parameters`, `body`, `url`), since they will be visible to all of the Workbook users.
- This provider supports [JSONPath](workbooks-jsonpath.md).
+## Azure RBAC
+The Azure RBAC provider allows you to check permissions on resources. It is most commonly used in a parameter, to check whether the correct RBAC roles are set up. A use case would be creating a parameter that checks deployment permission and notifies the user if they don't have it. Simple JSON arrays or objects will automatically be converted into grid rows and columns, or text, with a 'hasPermission' column that is either true or false. The permission is checked on each resource, and the results are then combined with either 'or' or 'and'. The [operations or actions](../../role-based-access-control/resource-provider-operations.md) can be a string or an array.
+
+ **String:**
+ ```
+ "Microsoft.Resources/deployments/validate/action"
+ ```
+
+ **Array:**
+ ```
+ ["Microsoft.Resources/deployments/read","Microsoft.Resources/deployments/write","Microsoft.Resources/deployments/validate/action","Microsoft.Resources/operations/read"]
+ ```
## Next steps
-* [Get started](./workbooks-overview.md#visualizations) learning more about workbooks many rich visualizations options.
-* [Control](./workbooks-access-control.md) and share access to your workbook resources.
-* [Log Analytics query optimization tips](../logs/query-optimization.md)
+ - [Getting started with Azure Workbooks](workbooks-getting-started.md)
+ - [Create an Azure Workbook](workbooks-create-workbook.md).
azure-monitor Workbooks Dropdowns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-dropdowns.md
Here is an example for multi-select drop-down at work:
## Next steps
-* [Get started](./workbooks-overview.md#visualizations) learning more about workbooks many rich visualizations options.
-* [Control](./workbooks-access-control.md) and share access to your workbook resources.
+ - [Getting started with Azure Workbooks](workbooks-getting-started.md).
azure-monitor Workbooks Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-getting-started.md
+
+ Title: Common Workbooks tasks
+description: Learn how to perform the commonly used tasks in Workbooks.
+++ Last updated : 05/30/2022+++
+# Getting started with Azure Workbooks
+
+This article describes how to access Azure Workbooks and the common tasks used to work with Workbooks.
+
+You can access Workbooks in a few ways:
+- In the [Azure portal](https://portal.azure.com), click on **Monitor**, and then select **Workbooks** from the menu bar on the left.
+
+ :::image type="content" source="./media/workbooks-overview/workbooks-menu.png" alt-text="Screenshot of Workbooks icon in the menu.":::
+
+- From a **Log Analytics workspace** page, select the **Workbooks** icon at the top of the page.
+
+ :::image type="content" source="media/workbooks-overview/workbooks-log-analytics-icon.png" alt-text="Screenshot of Workbooks icon on Log analytics workspace page.":::
+
+The gallery opens. Select a saved workbook or a template from the gallery, or search for the name in the search bar.
+
+## Start a new workbook
+To start a new workbook, select the **Empty** template under **Quick start**, or the **New** icon in the top navigation bar. For more information on creating new workbooks, see [Create a workbook](workbooks-create-workbook.md).
+
+## Save a workbook
+To save a workbook, save the report with a specific title, subscription, resource group, and location.
+The workbook autofills the same settings as the Log Analytics workspace, with the same subscription and resource group; however, users may change these report settings. Workbooks are shared resources that require write access to the parent resource group to be saved.
+
+## Share a workbook template
+
+Once you start creating your own workbook template, you may want to share it with the wider community. To learn more, and to explore other templates that aren't part of the default Azure Monitor gallery, visit our [GitHub repository](https://github.com/Microsoft/Application-Insights-Workbooks/blob/master/README.md). To browse existing workbooks, visit the [Workbook library](https://github.com/microsoft/Application-Insights-Workbooks/tree/master/Workbooks) on GitHub.
+
+## Pin a visualization
+
+Text, query, or metrics steps in a workbook can be pinned by using the pin button on those items while the workbook is in pin mode, or if the workbook author has enabled settings for that element to make the pin icon visible.
+
+To access pin mode, select **Edit** to enter editing mode, and select the blue pin icon in the top bar. An individual pin icon will then appear above each corresponding workbook part's *Edit* box on the right-hand side of your screen.
++
+> [!NOTE]
+> The state of the workbook is saved at the time of the pin, and pinned workbooks on a dashboard will not update if the underlying workbook is modified. In order to update a pinned workbook part, you will need to delete and re-pin that part.
+
+### Time ranges for pinned queries
+
+Pinned workbook query parts will respect the dashboard's time range if the pinned item is configured to use a *Time Range* parameter. The dashboard's time range value will be used as the time range parameter's value, and any change of the dashboard time range will cause the pinned item to update. If a pinned part is using the dashboard's time range, you will see the subtitle of the pinned part update to show the dashboard's time range whenever the time range changes.
+
+Additionally, pinned workbook parts using a time range parameter will auto refresh at a rate determined by the dashboard's time range. The last time the query ran will appear in the subtitle of the pinned part.
+
+If a pinned step has an explicitly set time range (does not use a time range parameter), that time range will always be used for the dashboard, regardless of the dashboard's settings. The subtitle of the pinned part will not show the dashboard's time range, and the query will not auto-refresh on the dashboard. The subtitle will show the last time the query executed.
+
+> [!NOTE]
+> Queries using the *merge* data source are not currently supported when pinning to dashboards.
+
+## Auto-Refresh
+Clicking the Auto-Refresh button opens a list of intervals so the user can pick one. The workbook keeps refreshing at the selected interval.
+* Auto-refresh only runs when the workbook is in read mode. If a user sets an interval of, say, 5 minutes and after 4 minutes switches to edit mode, no refresh occurs while the user is still in edit mode. When the user returns to read mode, the 5-minute interval resets and the workbook refreshes after 5 minutes.
+* Clicking the Refresh button in read mode also resets the interval. If a user sets the interval to 5 minutes and after 3 minutes clicks the Refresh button to manually refresh the workbook, the auto-refresh interval resets and the workbook auto-refreshes after 5 minutes.
+* This setting is not saved with the workbook. Every time a user opens a workbook, auto-refresh is initially off and needs to be set again.
+* Switching workbooks or leaving the gallery clears the auto-refresh interval.
+++
+## Next Steps
+ - [Azure workbooks data sources](workbooks-data-sources.md).
azure-monitor Workbooks Interactive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-interactive.md
While the default behavior is to export a parameter as text, if you know that th
## Next steps
-* [Get started](./workbooks-overview.md#visualizations) learning more about workbooks many rich visualizations options.
-* [Control](./workbooks-access-control.md) and share access to your workbook resources.
+ - [Getting started with Azure Workbooks](workbooks-getting-started.md).
azure-monitor Workbooks Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-limits.md
+
+ Title: Azure Workbooks data source limits | Microsoft docs
+description: Learn about the limits of each type of workbook data source.
++++ Last updated : 05/30/2022+++
+# Workbooks result limits
+
+- In general, Workbooks limits the results of queries to no more than 10,000 results; anything beyond that point is truncated.
+- Each data source may have its own specific limits based on the limits of the service it queries.
+- Those limits may apply to the number of resources, regions, or results returned, or to time ranges. Consult the documentation for each service to find those limits.
+
+## Data Source limits
+
+This table lists the limits of specific data sources.
+
+|Data Source|Limits |
+|||
+|Log-based queries|Log Analytics [has limits](../service-limits.md#log-queries-and-language) for the number of resources, workspaces, and regions involved in queries.|
+|Metrics|Metrics grids are limited to querying 200 resources at a time. |
+|Azure Resource Graph|Resource Graph limits queries to 1000 subscriptions at a time.|
+
+## Visualization limits
+
+This table lists the limits of specific data visualizations.
+
+|Visualization|Limits |
+|||
+|Grid|By default, grids only display the first 250 rows of data. This setting can be changed in the query step's advanced settings to display up to 10,000 rows. Any further items will be ignored, and a warning will be displayed.|
+|Charts|Charts are limited to 100 series.<br>Charts are limited to 10000 data points. |
+|Tiles|The tiles visualization is limited to displaying 100 tiles. Any further items will be ignored, and a warning will be displayed.|
+|Maps|Maps are limited to displaying 100 points. Any further items will be ignored, and a warning will be displayed.|
+|Text|Text visualization only displays the first cell of data returned by a query. Any other data is ignored.|
+
+
+## Parameter limits
+
+This table lists the limits of specific data parameters.
+
+|Parameter|Limits |
+|||
+|Drop Down|Drop-down parameters are limited to 1000 items. Any further items returned by a query are ignored.<br>When based on a query, only the first four columns of data produced by the query are used; any other columns are ignored.|
+|Multi-value|Multi-value parameters are limited to 100 items. Any further items returned by a query are ignored.<br>When based on a query, only the first column of data produced by the query is used; any other columns are ignored. |
+|Options Group|Options group parameters are limited to 1000 items. Any further items returned by a query are ignored. <br>When based on a query, only the first column of data produced by the query is used; any other columns are ignored.|
+|Text|Text parameters that retrieve their value from a query only display the first cell returned by the query (row 1, column 1). Any other data is ignored.|
+
azure-monitor Workbooks Link Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-link-actions.md
When the workbook link is opened, the new workbook view will be passed all of th
## Next steps -- [Control](../visualize/workbooks-access-control.md) and share access to your workbook resources.-- Learn how to use [groups in workbooks](../visualize/workbooks-groups.md).
+ - [Getting started with Azure Workbooks](workbooks-getting-started.md).
azure-monitor Workbooks Multi Value https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-multi-value.md
+
+ Title: Azure Workbooks multi value parameters.
+description: Learn about adding multi value parameters to your Azure workbook.
++++ Last updated : 05/30/2022+++
+# Multi-value Parameters
+
+A multi-value parameter allows the user to set one or more arbitrary text values. Multi-value parameters are commonly used for filtering, often when a drop-down control may contain too many values to be useful.
++
+## Creating a static multi-value parameter
+1. Start with an empty workbook in edit mode.
+1. Select **Add parameters** from the links within the workbook.
+1. Select the blue _Add Parameter_ button.
+1. In the new parameter pane that pops up enter:
+ - Parameter name: `Filter`
+ - Parameter type: `Multi-value`
+ - Required: `unchecked`
+ - Get data from: `None`
+1. Select **Save** from the toolbar to create the parameter.
+1. The Filter parameter will be a multi-value parameter, initially with no values:
+
+ :::image type="content" source="media/workbooks-multi-value/workbooks-multi-value-create.png" alt-text="Screenshot showing the creation of a multi-value parameter in workbooks.":::
+
+1. You can then add multiple values:
+
+ :::image type="content" source="media/workbooks-multi-value/workbooks-multi-value-third-value.png" alt-text="Screenshot showing the user adding a third value in workbooks.":::
++
+A multi-value parameter behaves similarly to a multi-select [drop down parameter](workbooks-dropdowns.md). As such, it is commonly used in an "in"-style filtering scenario:
+
+```kusto
+ let computerFilter = dynamic([{Computer}]);
+ Heartbeat
+ | where array_length(computerFilter) == 0 or Computer in (computerFilter)
+ | summarize Heartbeats = count() by Computer
+ | order by Heartbeats desc
+```
+
+## Parameter field style
+The multi-value parameter supports the following field styles:
+1. Standard: Allows a user to add or remove arbitrary text items
+
+ :::image type="content" source="media/workbooks-multi-value/workbooks-multi-value-standard.png" alt-text="Screenshot showing standard workbooks multi-value field.":::
+
+1. Password: Allows a user to add or remove arbitrary password fields. The password values are only hidden in the UI as the user types. The values are still fully accessible as a parameter value when referenced, and they are stored unencrypted when the workbook is saved.
+
+ :::image type="content" source="media/workbooks-multi-value/workbooks-multi-value-password.png" alt-text="Screenshot showing a workbooks password multi-value field.":::
+
+## Creating a multi-value with initial values
+You can use a query to seed the multi-value parameter with initial values. The user can then manually remove values, or add more values. If a query is used to populate the multi-value parameter, a restore defaults button will appear on the parameter to restore back to the originally queried values.
+
+1. Start with an empty workbook in edit mode.
+1. Select **add parameters** from the links within the workbook.
+1. Select **Add Parameter**.
+1. In the new parameter pane that pops up enter:
+ - Parameter name: `Filter`
+ - Parameter type: `Multi-value`
+ - Required: `unchecked`
+ - Get data from: `JSON`
+1. In the JSON Input text block, insert this json snippet:
+ ```json
+ ["apple", "banana", "carrot" ]
+ ```
+ All of the items returned by the query will be shown as multi-value items.
+ (You are not limited to JSON; you can use any query provider to supply the initial values, but you will be limited to the first 100 results.)
+1. Select **Run Query**.
+1. Select **Save** from the toolbar to create the parameter.
+1. The Filter parameter will be a multi-value parameter with three initial values.
+
+ :::image type="content" source="media/workbooks-multi-value/workbooks-multi-value-initial-values.png" alt-text="Screenshot showing a multi-value parameter with initial values in workbooks.":::
+
+## Next steps
+
+- [Workbook parameters](workbooks-parameters.md).
+- [Workbook drop down parameters](workbooks-dropdowns.md)
azure-monitor Workbooks Options Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-options-group.md
+
+ Title: Azure Workbooks options group parameters.
+description: Learn about adding options group parameters to your Azure workbook.
++++ Last updated : 05/30/2022+++
+# Options group parameters
+
+An options group parameter allows the user to select one value from a known set (for example, select one of your app's requests). When there is a small number of values, an options group can be a better choice than a [drop-down parameter](workbooks-dropdowns.md), since the user can see all the possible values and see which one is selected. Options groups are commonly used for yes/no or on/off style choices. When there is a large number of possible values, using a drop-down is a better choice. Unlike drop-down parameters, an options group always allows only one selected value.
+
+You can specify the list by:
+- providing a static list in the parameter setting
+- using a KQL query to retrieve the list dynamically
+
+## Creating a static options group parameter
+1. Start with an empty workbook in edit mode.
+1. Choose **Add parameters** from the links within the workbook.
+1. Select **Add Parameter**.
+1. In the new parameter pane that pops up enter:
+ - Parameter name: `Environment`
+ - Parameter type: `Options Group`
+ - Required: `checked`
+ - Get data from: `JSON`
+1. In the JSON Input text block, insert this json snippet:
+ ```json
+ [
+ { "value":"dev", "label":"Development" },
+ { "value":"ppe", "label":"Pre-production" },
+ { "value":"prod", "label":"Production", "selected":true }
+ ]
+ ```
+ (You are not limited to JSON; you can use any query provider to supply the initial values, but you will be limited to the first 100 results.)
+1. Select **Update**.
+1. Select **Save** from the toolbar to create the parameter.
+1. The Environment parameter will be an options group control with the three values.
+
+ :::image type="content" source="media/workbooks-options-group/workbooks-options-group-create.png" alt-text="Screenshot showing the creation of a static options group in a workbook.":::
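+
+Once saved, the selected value can be referenced elsewhere in the workbook. For example, a hypothetical log query step could scope results by the selection (the `customDimensions.environment` field is an assumption for illustration):
+
+```kusto
+// Hypothetical use of the Environment options group value in a query step.
+// {Environment} expands to the selected value, for example "prod".
+requests
+| where tostring(customDimensions.environment) == "{Environment}"
+| summarize requestCount = count() by name
+```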
+
+## Next steps
+
+- [Workbook parameters](workbooks-parameters.md).
+- [Workbook drop down parameters](workbooks-dropdowns.md)
azure-monitor Workbooks Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-overview.md
Title: Azure Monitor Workbooks Overview
+ Title: Azure Workbooks Overview
description: Learn how workbooks provide a flexible canvas for data analysis and the creation of rich visual reports within the Azure portal. -++ Previously updated : 07/23/2020 Last updated : 05/30/2022+
-# Azure Monitor Workbooks
-
-Workbooks provide a flexible canvas for data analysis and the creation of rich visual reports within the Azure portal. They allow you to tap into multiple data sources from across Azure, and combine them into unified interactive experiences.
-
-Here is a video walkthrough on creating workbooks.
-
-> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4B4Ap]
-
-> [!NOTE]
-> Legacy and private workbooks have been removed. Use the the [workbook retrieval tool](https://github.com/microsoft/Application-Insights-Workbooks/blob/master/Documentation/LegacyAI/DeprecatedWorkbookRetrievalTool.md) to retrieve the contents of your old workbook.
-
-## Data sources
-
-Workbooks can query data from multiple sources within Azure. Authors of workbooks can transform this data to provide insights into the availability, performance, usage, and overall health of the underlying components. For instance, analyzing performance logs from virtual machines to identify high CPU or low memory instances and displaying the results as a grid in an interactive report.
-
-But the real power of workbooks is the ability to combine data from disparate sources within a single report. This allows for the creation of composite resource views or joins across resources enabling richer data and insights that would otherwise be impossible.
-
-Workbooks are currently compatible with the following data sources:
-
-* [Logs](../visualize/workbooks-data-sources.md#logs)
-* [Metrics](../visualize/workbooks-data-sources.md#metrics)
-* [Azure Resource Graph](../visualize/workbooks-data-sources.md#azure-resource-graph)
-* [Alerts (Preview)](../visualize/workbooks-data-sources.md#alerts-preview)
-* [Workload Health](../visualize/workbooks-data-sources.md#workload-health)
-* [Azure Resource Health](../visualize/workbooks-data-sources.md#azure-resource-health)
-* [Azure Data Explorer](../visualize/workbooks-data-sources.md#azure-data-explorer)
-
-## Visualizations
-
-Workbooks provide a rich set of capabilities for visualizing your data. For detailed examples of each visualization type, you can consult the links below:
-
-* [Text](../visualize/workbooks-text-visualizations.md)
-* [Charts](../visualize/workbooks-chart-visualizations.md)
-* [Grids](../visualize/workbooks-grid-visualizations.md)
-* [Tiles](../visualize/workbooks-tile-visualizations.md)
-* [Trees](../visualize/workbooks-tree-visualizations.md)
-* [Graphs](../visualize/workbooks-graph-visualizations.md)
-* [Composite bar](../visualize/workbooks-composite-bar.md)
-* [Honey comb](workbooks-honey-comb.md)
-* [Map](workbooks-map-visualizations.md)
--
-### Pinning Visualizations
-
-Text, query, and metrics steps in a workbook can be pinned by using the pin button on those items while the workbook is in pin mode, or if the workbook author has enabled settings for that element to make the pin icon visible.
-
-To access pin mode, select **Edit** to enter editing mode, and select the blue pin icon in the top bar. An individual pin icon will then appear above each corresponding workbook part's *Edit* box on the right-hand side of your screen.
--
-> [!NOTE]
-> The state of the workbook is saved at the time of the pin, and pinned workbooks on a dashboard will not update if the underlying workbook is modified. In order to update a pinned workbook part, you will need to delete and re-pin that part.
-
-## Getting started
+# Azure Workbooks
-To explore the workbooks experience, first navigate to the Azure Monitor service. This can be done by typing **Monitor** into the search box in the Azure portal.
+Workbooks provide a flexible canvas for data analysis and the creation of rich visual reports within the Azure portal. They allow you to tap into multiple data sources from across Azure, and combine them into unified interactive experiences. Workbooks let you combine multiple kinds of visualizations and analyses, making them great for free-form exploration.
-Then select **Workbooks**.
+Workbooks combine text, [log queries](/azure/data-explorer/kusto/query/), metrics, and parameters into rich interactive reports.
+Workbooks are helpful for scenarios such as:
-### Gallery
+- Exploring the usage of your virtual machine when you don't know the metrics of interest in advance: CPU utilization, disk space, memory, network dependencies, etc.
+- Explaining to your team how a recently provisioned VM is performing, by showing metrics for key counters and other log events.
+- Sharing the results of a resizing experiment of your VM with other members of your team. You can explain the goals for the experiment with text, then show each usage metric and analytics queries used to evaluate the experiment, along with clear call-outs for whether each metric was above or below target.
+- Reporting the impact of an outage on the usage of your VM, combining data, text explanation, and a discussion of next steps to prevent outages in the future.
-The gallery makes it convenient to organize, sort, and manage workbooks of all types.
-
+## The Gallery
+The gallery opens with a list of all the saved workbooks and templates for your workspace, making it easy to organize, sort, and manage workbooks of all types.
#### Gallery tabs There are four tabs in the gallery to help organize workbook types.
There are four tabs in the gallery to help organize workbook types.
| Public Templates | Shows the list of all the available ready to use, get started functional workbook templates published by Microsoft. Grouped by category. | | My Templates | Shows the list of all the available deployed workbook templates that you created or are shared with you. Grouped by category. |
-#### Features
-
-* In each tab, there is a grid with info on the workbooks. It includes description, last modified date, tags, subscription, resource group, region, and shared state. You can also sort the workbooks by this information.
-* Filter by resource group, subscriptions, workbook/template name, or template category.
-* Select multiple workbooks to delete or bulk delete.
-* Each Workbook has a context menu (ellipsis/three dots at the end), selecting it will open a list of quick actions.
- * View resource - Access workbook resource tab to see the resource ID of the workbook, add tags, manage locks etc.
- * Delete or rename workbook.
- * Pin workbook to dashboard.
-
-### Workbooks versus workbook templates
-
-You can see a _workbook_ in green and a number of _workbook templates_ in purple. Templates serve as curated reports that are designed for flexible reuse by multiple users and teams. Opening a template creates a transient workbook populated with the content of the template.
-
-You can adjust the template-based workbook's parameters and perform analysis without fear of breaking the future reporting experience for colleagues. If you open a template, make some adjustments, and then select the save icon you will be saving the template as a workbook which would then show in green leaving the original template untouched.
-
-Under the hood, templates also differ from saved workbooks. Saving a workbook creates an associated Azure Resource Manager resource, whereas the transient workbook created when just opening a template has no unique resource associated with it. To learn more about how access control is managed in workbooks consult the [workbooks access control article](../visualize/workbooks-access-control.md).
-
-### Exploring a workbook template
-
-Select **Application Failure Analysis** to see one of the default application workbook templates.
--
-As stated previously, opening the template creates a temporary workbook for you to be able to interact with. By default, the workbook opens in reading mode which displays only the information for the intended analysis experience that was created by the original template author.
-
-In the case of this particular workbook, the experience is interactive. You can adjust the subscription, targeted apps, and the time range of the data you want to display. Once you have made those selections the grid of HTTP Requests is also interactive whereby selecting an individual row will change what data is rendered in the two charts at the bottom of the report.
-
-### Editing mode
-
-To understand how this workbook template is put together you need to swap to editing mode by selecting **Edit**.
--
-Once you have switched to editing mode you will notice a number of **Edit** boxes appear to the right corresponding with each individual aspect of your workbook.
--
-If we select the edit button immediately under the grid of request data we can see that this part of our workbook consists of a Kusto query against data from an Application Insights resource.
--
-Selecting the other **Edit** buttons on the right will reveal a number of the core components that make up workbooks like markdown-based [text boxes](../visualize/workbooks-text-visualizations.md), [parameter selection](../visualize/workbooks-parameters.md) UI elements, and other [chart/visualization types](#visualizations).
+## Data sources
-Exploring the pre-built templates in edit-mode and then modifying them to fit your needs and save your own custom workbook is an excellent way to start to learn about what is possible with Azure Monitor workbooks.
+Workbooks can query data from multiple Azure sources. You can transform this data to provide insights into the availability, performance, usage, and overall health of the underlying components. For example:
+- You can analyze performance logs from virtual machines to identify high CPU or low memory instances and display the results as a grid in an interactive report.
+- You can combine data from several different sources within a single report. This allows you to create composite resource views or joins across resources enabling richer data and insights that would otherwise be impossible.
-## Dashboard time ranges
+See [this article](workbooks-data-sources.md) for detailed information about the supported data sources.
+## Visualizations
-Pinned workbook query parts will respect the dashboard's time range if the pinned item is configured to use a *Time Range* parameter. The dashboard's time range value will be used as the time range parameter's value, and any change of the dashboard time range will cause the pinned item to update. If a pinned part is using the dashboard's time range, you will see the subtitle of the pinned part update to show the dashboard's time range whenever the time range changes.
+Workbooks provide a rich set of capabilities for visualizing your data. Each data source and result set support visualizations that are most useful for that data. See [this article](workbooks-visualizations.md) for detailed information about the visualizations.
-Additionally, pinned workbook parts using a time range parameter will auto refresh at a rate determined by the dashboard's time range. The last time the query ran will appear in the subtitle of the pinned part.
-If a pinned step has an explicitly set time range (does not use a time range parameter), that time range will always be used for the dashboard, regardless of the dashboard's settings. The subtitle of the pinned part will not show the dashboard's time range, and the query will not auto-refresh on the dashboard. The subtitle will show the last time the query executed.
+## Access control
-> [!NOTE]
-> Queries using the *merge* data source are not currently supported when pinning to dashboards.
+Users must have the appropriate permissions to view or edit a workbook. Workbook permissions are based on the permissions the user has for the resources included in the workbooks.
-## Sharing workbook templates
+Standard Azure roles that provide the access to workbooks are:
+
+- [Monitoring Reader](../../role-based-access-control/built-in-roles.md#monitoring-reader) includes standard `/read` privileges that would be used by monitoring tools (including workbooks) to read data from resources.
+- [Monitoring Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) includes general `/write` privileges used by various monitoring tools for saving items (including `workbooks/write` privilege to save shared workbooks).
+- "Workbooks Contributor" adds `workbooks/write` privileges to an object to save shared workbooks.
+
+For custom roles, you must add `microsoft.insights/workbooks/write` to the user's permissions to be able to edit and save a workbook. For more details, see the [Workbook Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) role.
-## Next step
+## Next steps
-* [Get started](#visualizations) learning more about workbooks many rich visualizations options.
-* [Control](../visualize/workbooks-access-control.md) and share access to your workbook resources.
+ - [Getting started with Azure Workbooks](workbooks-getting-started.md).
azure-monitor Workbooks Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-parameters.md
Title: Azure Monitor workbooks creating parameters
-description: Learn how parameters allow workbook authors to collect input from the consumers and reference it in other parts of the workbook.
+ Title: Creating Workbook parameters
+description: Learn how to add parameters to your workbook to collect input from the consumers and reference it in other parts of the workbook.
Last updated 10/23/2019
-# Workbook parameters
+# Creating Workbook parameters
Parameters allow workbook authors to collect input from the consumers and reference it in other parts of the workbook - usually to scope the result set or to set the right visual. This is a key capability that allows authors to build interactive reports and experiences. Workbooks allow you to control how your parameter controls are presented to consumers - text box vs. drop down, single- vs. multi-select, values from text, JSON, KQL, or Azure Resource Graph, etc. Supported parameter types include:
-* [Time](workbooks-time.md) - allows a user to select from prepopulated time ranges or select a custom range
+* [Time](workbooks-time.md) - allows a user to select from pre-populated time ranges or select a custom range
* [Drop down](workbooks-dropdowns.md) - allows a user to select from a value or set of values
+* [Options group](workbooks-options-group.md) - allows a user to select one value from a known set
* [Text](workbooks-text.md) - allows a user to enter arbitrary text
+* [Criteria](workbooks-criteria.md)
* [Resource](workbooks-resources.md) - allows a user to select one or more Azure resources * [Subscription](workbooks-resources.md) - allows a user to select one or more Azure subscription resources
+* [Multi-value](workbooks-multi-value.md) - allows a user to set one or more arbitrary text values
* Resource Type - allows a user to select one or more Azure resource type values * Location - allows a user to select one or more Azure location values These parameter values can be referenced in other parts of workbooks either via bindings or value expansions.
-## Creating a parameter
+## Create a parameter
+ 1. Start with an empty workbook in edit mode.
-2. Choose _Add parameters_ from the links within the workbook.
-3. Click on the blue _Add Parameter_ button.
-4. In the new parameter pane that pops up enter:
- 1. Parameter name: `TimeRange` *(note that parameter __names__ **cannot** include spaces or special characters)*
- 2. Display name: `Time Range` *(however, __display names__ can include spaces, special characters, emoji, etc.)*
- 2. Parameter type: `Time range picker`
- 3. Required: `checked`
- 4. Available time ranges: Last hour, Last 12 hours, Last 24 hours, Last 48 hours, Last 3 days, Last 7 days and Allow custom time range selection
-5. Choose 'Save' from the toolbar to create the parameter.
-
- ![Image showing the creation of a time range parameter](./media/workbooks-parameters/time-settings.png)
+1. Choose _Add parameters_ from the links within the workbook.
+1. Select the blue _Add Parameter_ button.
+1. In the new parameter pane that pops up enter:
+
+ - Parameter name: `TimeRange` (parameter names cannot include spaces or special characters)
+ - Display name: `Time Range` (display names can include spaces, special characters, emoji, etc.)
+ - Parameter type: `Time range picker`
+ - Required: `checked`
+ - Available time ranges: Last hour, Last 12 hours, Last 24 hours, Last 48 hours, Last 3 days, Last 7 days and Allow custom time range selection
+
+1. Choose 'Save' from the toolbar to create the parameter.
+
+ :::image type="content" source="media/workbooks-parameters/workbooks-time-settings.png" alt-text="Screenshot showing the creation of a time range parameter.":::
This is how the workbook will look in read mode, in the "Pills" style.
- ![Image showing a time range parameter in read mode](./media/workbooks-parameters/parameters-time.png)
+ :::image type="content" source="media/workbooks-parameters/workbooks-time-settings.png" alt-text="Screenshot showing a time range parameter in read mode.":::
-## Referencing a parameter
-### Via Bindings
+## Reference a parameter
+### Reference a parameter with Bindings
1. Add a query control to the workbook and select an Application Insights resource.
-2. Open the _Time Range_ drop down and select the `Time Range` option from the Parameters section at the bottom.
+2. Open the _Time Range_ drop-down and select the `Time Range` option from the Parameters section at the bottom.
3. This binds the time range parameter to the time range of the chart. The time scope of the sample query is now Last 24 hours. 4. Run query to see the results
- ![Image showing a time range parameter referenced via bindings](./media/workbooks-parameters/time-binding.png)
+ :::image type="content" source="media/workbooks-parameters/workbooks-time-binding.png" alt-text="Screenshot showing a time range parameter referenced via bindings.":::
-### In KQL
+### Reference a parameter with KQL
1. Add a query control to the workbook and select an Application Insights resource. 2. In the KQL, enter a time scope filter using the parameter: `| where timestamp {TimeRange}` 3. This expands on query evaluation time to `| where timestamp > ago(1d)`, which is the time range value of the parameter. 4. Run query to see the results
- ![Image showing a time range referenced in KQL](./media/workbooks-parameters/time-in-code.png)
+ :::image type="content" source="media/workbooks-parameters/workbooks-time-in-code.png" alt-text="Screenshot showing a time range referenced in the K Q L query.":::
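+
+Putting the steps above together, a sketch of the full query with the parameter in place (the `requests` table is assumed from an Application Insights resource):
+
+```kusto
+// With TimeRange set to "Last 24 hours", the filter line below expands
+// at query time to: | where timestamp > ago(1d)
+requests
+| where timestamp {TimeRange}
+| summarize requestCount = count() by bin(timestamp, 1h)
+```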
-### In Text
+### Reference a parameter with Text
1. Add a text control to the workbook. 2. In the markdown, enter `The chosen time range is {TimeRange:label}` 3. Choose _Done Editing_ 4. The text control will show text: _The chosen time range is Last 24 hours_ ## Parameter options
-The _In Text_ section used the `label` of the parameter instead of its value. Parameters expose various such options depending on its type - e.g. time range pickers allow value, label, query, start, end, and grain.
+
+Each parameter type has its own formatting options. Use the `Previews` section of the _Edit Parameter_ pane to see the formatting expansion options for your parameter:
+
+ :::image type="content" source="media/workbooks-parameters/workbooks-time-settings.png" alt-text="Screenshot showing time range parameter options.":::
+### Format your parameters
+
+You can use these options to format all parameter types except for the time range picker. For examples of formatting times, see [Time parameter options](workbooks-time.md#time-parameter-options).
+
+- For the Resource picker, resource IDs are formatted.
+- For the Subscription picker, subscription values are formatted.
+
+**Format**: Convert TOML to JSON
+
+**Syntax**: `{param:tomltojson}`
+
+**Original Value**:
+
+```toml
+name = "Sam Green"
+
+[address]
+state = "New York"
+country = "USA"
+```
+
+**Formatted Value**:
+
+```json
+{
+ "name": "Sam Green",
+ "address": {
+ "state": "New York",
+ "country": "USA"
+ }
+}
+```
+**Format**: Escape JSON
+
+**Syntax**: `{param:escapejson}`
+
+**Original Value**:
+
+```json
+{
+ "name": "Sam Green",
+ "address": {
+ "state": "New York",
+ "country": "USA"
+ }
+}
+```
+
+**Formatted Value**:
+
+```
+{\r\n\t\"name\": \"Sam Green\",\r\n\t\"address\": {\r\n\t\t\"state\": \"New York\",\r\n\t\t\"country\": \"USA\"\r\n }\r\n}
+```
+
+**Format**: Encode text to base64
+
+**Syntax**: `{param:base64}`
+
+**Original Value**:
+
+```
+Sample text to test base64 encoding
+```
+
+**Formatted Value**:
+
+```
+U2FtcGxlIHRleHQgdG8gdGVzdCBiYXNlNjQgZW5jb2Rpbmc=
+```
+
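To sanity-check formatted values outside of workbooks, the escape and encode behaviors above can be approximated in a few lines. This is a minimal Python sketch of roughly what `{param:escapejson}` and `{param:base64}` produce, not the workbooks implementation (workbooks also emits `\r` in the escaped output):

```python
import base64
import json

value = {"name": "Sam Green", "address": {"state": "New York", "country": "USA"}}

# escapejson (approximation): pretty-print the object, then backslash-escape
# quotes, tabs, and newlines. json.dumps of the pretty string does exactly that;
# stripping the outer quotes leaves the escaped payload.
pretty = json.dumps(value, indent="\t")
escaped = json.dumps(pretty)[1:-1]
print(escaped)

# base64: encode the parameter text as UTF-8, then base64.
text = "Sample text to test base64 encoding"
encoded = base64.b64encode(text.encode("utf-8")).decode("ascii")
print(encoded)  # U2FtcGxlIHRleHQgdG8gdGVzdCBiYXNlNjQgZW5jb2Rpbmc=
```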
+## Formatting parameters using JSONPath
+For string parameters that are JSON content, you can use [JSONPath](workbooks-jsonpath.md) in the parameter format string.
+
+For example, you may have a string parameter named `selection` that was the result of a query or selection in a visualization, with the following value:
+```json
+{ "series":"Failures", "x": 5, "y": 10 }
+```
+
+Using JSONPath, you could get individual values from that object:
+
+format | result
+--- | ---
+`{selection:$.series}` | `Failures`
+`{selection:$.x}` | `5`
+`{selection:$.y}`| `10`
+
+> [!NOTE]
+> If the parameter value is not valid JSON, the result of the format will be an empty value.
+
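The table and the empty-value behavior in the note can be mimicked with a tiny resolver. This is a minimal Python sketch (`resolve` is a hypothetical helper, not a workbooks API), handling only the simple `$.field` paths used above:

```python
import json

def resolve(value_json: str, path: str) -> str:
    """Resolve a simple '$.a.b' JSONPath against a JSON string.

    Returns '' when the parameter value is not valid JSON, mirroring
    the empty-value behavior of the workbooks format string.
    """
    try:
        obj = json.loads(value_json)
    except json.JSONDecodeError:
        return ""
    for part in path.lstrip("$.").split("."):
        obj = obj[part]
    return str(obj)

selection = '{ "series":"Failures", "x": 5, "y": 10 }'
print(resolve(selection, "$.series"))  # Failures
print(resolve(selection, "$.x"))       # 5
print(resolve("not json", "$.x"))      # (empty string)
```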
+## Parameter style
+The following styles are available for laying out the parameters in a parameters step:
+#### Pills
+In the default pills style, the parameters look like text, and require the user to select them once to enter edit mode.
+
+ :::image type="content" source="media/workbooks-parameters/workbooks-pills-read-mode.png" alt-text="Screenshot showing Workbooks pill style read mode.":::
+
+ :::image type="content" source="media/workbooks-parameters/workbooks-pills-edit-mode.png" alt-text="Screenshot that shows Workbooks pill style edit mode.":::
+
+#### Standard
+In standard style, the controls are always visible, with a label above the control.
+
+ :::image type="content" source="media/workbooks-parameters/workbooks-standard.png" alt-text="Screenshot that shows Workbooks standard style.":::
+
+#### Form Horizontal
+In the horizontal form style, the controls are always visible, with the label on the left side of the control.
+
+ :::image type="content" source="media/workbooks-parameters/workbooks-form-horizontal.png" alt-text="Screenshot that shows Workbooks form horizontal style.":::
+
+#### Form Vertical
+In the vertical form style, the controls are always visible, with the label above the control. Unlike standard style, there is only one label or control in each row.
+
+ :::image type="content" source="media/workbooks-parameters/workbooks-form-vertical.png" alt-text="Screenshot that shows Workbooks form vertical style.":::
+
+> [!NOTE]
+> In standard, form horizontal, and form vertical layouts, there's no concept of inline editing; the controls are always in edit mode.
+
+## Global parameters
+Now that you've learned how parameters work, and that a parameter can normally only be used "downstream" of where it's set, it's time to learn about global parameters, which change those rules.
+
+With a global parameter, the parameter must still be declared before it can be used, but any step that sets a value to that parameter will affect all instances of that parameter in the workbook.
+
+> [!NOTE]
+> Because changing a global parameter has this "update all" behavior, the global setting should only be turned on for parameters that require it. A combination of global parameters that depend on each other can create a cycle or oscillation where the competing globals change each other over and over. To avoid cycles, you can't "redeclare" a parameter that's been declared as global. Any subsequent declaration of a parameter with the same name will create a read-only parameter that can't be edited in that place.
+
+Common uses of global parameters:
+
+1. Synchronizing time ranges between many charts.
+ - without a global parameter, any time range brush in a chart will only be exported after that chart, so selecting a time range in the third chart will only update the fourth chart
+ - with a global parameter, you can create a global `timeRange` parameter up front, give it a default value, have all the other charts use that as their bound time range and as their time brush output (additionally setting the "only export the parameter when the range is brushed" setting). Now, any change of time range in *any* chart will update the global `timeRange` parameter at the top of the workbook. This can be used to make a workbook act like a dashboard.
+
+1. Allowing the selected tab in a links step to be changed via links or buttons
+ - without a global parameter, the links step only outputs a parameter for the selected tab
+ - with a global parameter, you can create a global `selectedTab` parameter, and use that parameter name in the tab selections in the links step. This allows you to pass that parameter value into the workbook from a link, or by using another button or link to change the selected tab. Using buttons from a links step in this way can make a wizard-like experience, where buttons at the bottom of a step can affect the visible sections above it.
++
+### Create a global parameter
+When creating the parameter in a parameters step, use the "Treat this parameter as a global" option in advanced settings. The only way to make a global parameter is to declare it with a parameters step. The other methods of creating parameters (via selections, brushing, links, buttons, tabs) can only update a global parameter, they cannot themselves declare one.
+
+ :::image type="content" source="media/workbooks-parameters/workbooks-parameters-global-setting.png" alt-text="Screenshot of setting global parameters in Workbooks.":::
+
+The parameter will be available and function as normal parameters do.
+
+### Updating the value of an existing global parameter
+For the chart example above, the most common way to update a global parameter is by using Time Brushing.
+
+In this example, the `timerange` parameter above is declared as a global. In a query step below it, create and run a query that uses the `timerange` parameter and returns a time chart result. In the advanced settings for the query step, enable the time range brushing setting, use the same parameter name as the output for the time brush parameter, and also set the _only export the parameter when brushed_ option.
+
+ :::image type="content" source="media/workbooks-parameters/workbooks-global-time-range-brush.png" alt-text="Screenshot of global time brush setting in Workbooks.":::
+
+Whenever a time range is brushed in this chart, it will also update the `timerange` parameter above this query, and the query step itself (since it also depends on `timerange`!):
+
+ 1. Before brushing:
+
+ - The time range is shown as "last hour".
+ - The chart shows the last hour of data.
+
+ :::image type="content" source="media/workbooks-parameters/workbooks-global-before-brush.png" alt-text="Screenshot of setting global parameters before brushing.":::
+
+
+
+ 1. During brushing:
+
+ - The time range is still last hour, and the brushing outlines are drawn.
+ - No parameters have changed yet. Once you let go of the brush, the time range will be updated.
+
+ :::image type="content" source="media/workbooks-parameters/workbooks-global-during-brush.png" alt-text="Screenshot of setting global parameters during brushing.":::
+
+
++
+ 1. After brushing:
+
+ - The time range specified by the time brush will be set by this step, overriding the global value (the timerange dropdown now displays that custom time range).
+ - Because the global value at the top has changed, and because this chart depends on `timerange` as an input, the time range of the query used in the chart will also update, causing the query and the chart to update.
+ - Any other steps in the workbook that depend on `timerange` will also update.
+
+ :::image type="content" source="media/workbooks-parameters/workbooks-global-after-brush.png" alt-text="Screenshot of setting global parameters after brushing.":::
-Use the `Previews` section of the _Edit Parameter_ pane to see the expansion options for your parameter:
+
-![Image showing a time range parameter options](./media/workbooks-parameters/time-previews.png)
+ > [!NOTE]
+ > If you do not use a global parameter, the `timerange` parameter value will only change below this query step; steps above it, and this item itself, would not update.
## Next steps
-* [Get started](./workbooks-overview.md#visualizations) learning more about workbooks many rich visualizations options.
-* [Control](./workbooks-access-control.md) and share access to your workbook resources.
+- [Workbook time parameters](workbooks-time.md)
azure-monitor Workbooks Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-resources.md
Resource parameters allow picking of resources in workbooks. This is useful in s
Values from resource pickers can come from the workbook context, static list or from Azure Resource Graph queries.
-## Creating a resource parameter (workbook resources)
+## Create a resource parameter (workbook resources)
1. Start with an empty workbook in edit mode. 2. Choose _Add parameters_ from the links within the workbook. 3. Click on the blue _Add Parameter_ button.
Values from resource pickers can come from the workbook context, static list or
![Image showing the creation of a resource parameter using workbook resources](./media/workbooks-resources/resource-create.png)
-## Creating a resource parameter (Azure Resource Graph)
+## Create an Azure Resource Graph resource parameter
1. Start with an empty workbook in edit mode. 2. Choose _Add parameters_ from the links within the workbook. 3. Click on the blue _Add Parameter_ button.
Values from resource pickers can come from the workbook context, static list or
[Azure Resource Graph documentation](../../governance/resource-graph/overview.md)
-## Creating a resource parameter (JSON list)
+## Create a JSON list resource parameter
1. Start with an empty workbook in edit mode. 2. Choose _Add parameters_ from the links within the workbook. 3. Click on the blue _Add Parameter_ button.
Values from resource pickers can come from the workbook context, static list or
6. Optionally set the `Include only resource types` to _Application Insights_ 7. Choose 'Save' from the toolbar to create the parameter.
-## Referencing a resource parameter
+## Reference a resource parameter
1. Add a query control to the workbook and select an Application Insights resource.
2. Use the _Application Insights_ drop down to bind the parameter to the control. Doing this sets the scope of the query to the resources returned by the parameter at run time.
4. In the KQL control, add this snippet
Values from resource pickers can come from the workbook context, static list or
## Next steps
-* [Get started](./workbooks-overview.md#visualizations) learning more about workbooks many rich visualizations options.
-* [Control](./workbooks-access-control.md) and share access to your workbook resources.
+ - [Getting started with Azure Workbooks](workbooks-getting-started.md).
azure-monitor Workbooks Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-templates.md
+
+ Title: Azure Workbooks templates
+description: Learn how to use workbooks templates.
++++ Last updated : 05/30/2022+++
+# Azure Workbook templates
+
+Workbook templates are curated reports designed for flexible reuse by multiple users and teams. Opening a template creates a transient workbook populated with the content of the template. Workbooks are visible in green and Workbook templates are visible in purple.
+
+You can adjust the template-based workbook parameters and perform analysis without fear of breaking the future reporting experience for colleagues. If you open a template, make some adjustments, and then select the save icon, you save the template as a workbook, which then shows in green, leaving the original template untouched.
+
+The design and architecture of templates is also different from saved workbooks. Saving a workbook creates an associated Azure Resource Manager resource, whereas the transient workbook created when opening a template doesn't have a unique resource associated with it. The resources associated with a workbook affect who has access to that workbook. Learn more about [Azure workbooks access control](workbooks-overview.md#access-control).
+
+## Explore a workbook template
+
+Select **Application Failure Analysis** to see one of the default application workbook templates.
+
+ :::image type="content" source="./media/workbooks-overview/failure-analysis.png" alt-text="Screenshot of application failure analysis template." border="false" lightbox="./media/workbooks-overview/failure-analysis.png":::
+
+Opening the template creates a temporary workbook that you can interact with. By default, the workbook opens in reading mode, which displays only the information for the intended analysis experience created by the original template author.
+
+You can adjust the subscription, targeted apps, and the time range of the data you want to display. Once you have made those selections, the grid of HTTP Requests is also interactive, and selecting an individual row changes the data rendered in the two charts at the bottom of the report.
+
+## Editing a template
+
+To understand how this workbook template is put together, you need to swap to editing mode by selecting **Edit**.
+
+ :::image type="content" source="./media/workbooks-overview/edit.png" alt-text="Screenshot of edit button in workbooks." border="false" :::
+
+Once you have switched to editing mode, you will notice **Edit** boxes to the right, corresponding with each individual aspect of your workbook.
+
+ :::image type="content" source="./media/workbooks-overview/edit-mode.png" alt-text="Screenshot of Edit button." border="false" lightbox="./media/workbooks-overview/edit-mode.png":::
+
+If you select the **Edit** button immediately under the grid of request data, you can see that this part of the workbook consists of a Kusto query against data from an Application Insights resource.
+
+ :::image type="content" source="./media/workbooks-overview/kusto.png" alt-text="Screenshot of underlying Kusto query." border="false" lightbox="./media/workbooks-overview/kusto.png":::
+
+Selecting the other **Edit** buttons on the right will reveal some of the core components that make up workbooks like markdown-based [text boxes](../visualize/workbooks-text-visualizations.md), [parameter selection](../visualize/workbooks-parameters.md) UI elements, and other [chart/visualization types](workbooks-visualizations.md).
+
+Exploring the pre-built templates in edit mode, modifying them to fit your needs, and saving your own custom workbook is an excellent way to learn what is possible with Azure Monitor workbooks.
azure-monitor Workbooks Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-text.md
If data is coming from a query, user can select the option to pre-format the JSO
## Next steps
-* [Get started](./workbooks-overview.md#visualizations) learning more about workbooks many rich visualizations options.
-* [Control](./workbooks-access-control.md) and share access to your workbook resources.
+ - [Getting started with Azure Workbooks](workbooks-getting-started.md).
azure-monitor Workbooks Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-time.md
Time parameters allow users to set the time context of analysis and are used by a
4. Available time ranges: Last hour, Last 12 hours, Last 24 hours, Last 48 hours, Last 3 days, Last 7 days and Allow custom time range selection 5. Choose 'Save' from the toolbar to create the parameter.
- ![Image showing the creation of a time range parameter](./media/workbooks-time/time-settings.png)
+ :::image type="content" source="media/workbooks-time/time-settings.png" alt-text="Screenshot showing the creation of a workbooks time range parameter.":::
This is how the workbook will look in read mode.
-![Image showing a time range parameter in read mode](./media/workbooks-time/parameters-time.png)
## Referencing a time parameter ### Via Bindings
This is how the workbook will look in read mode.
3. This binds the time range parameter to the time range of the chart. The time scope of the sample query is now Last 24 hours. 4. Run query to see the results
- ![Image showing a time range parameter referenced via bindings](./media/workbooks-time/time-binding.png)
+ :::image type="content" source="media/workbooks-time/time-binding.png" alt-text="Screenshot showing a workbooks time range parameter referenced via bindings.":::
### In KQL 1. Add a query control to the workbook and select an Application Insights resource.
This is how the workbook will look in read mode.
3. This expands on query evaluation time to `| where timestamp > ago(1d)`, which is the time range value of the parameter. 4. Run query to see the results
- ![Image showing a time range referenced in KQL](./media/workbooks-time/time-in-code.png)
+ :::image type="content" source="media/workbooks-time/time-in-code.png" alt-text="Screenshot showing a time range referenced in KQL.":::
### In Text 1. Add a text control to the workbook.
requests
## Next steps
-* [Get started](./workbooks-overview.md#visualizations) learning more about workbooks many rich visualizations options.
-* [Control](./workbooks-access-control.md) and share access to your workbook resources.
+ - [Getting started with Azure Workbooks](workbooks-getting-started.md).
azure-monitor Workbooks Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-visualizations.md
+
+ Title: Workbook visualizations
+description: Learn about the types of visualizations you can use to create rich visual reports with Azure workbooks.
++++ Last updated : 05/30/2022++++
+# Workbook visualizations
+
+Workbooks provide a rich set of capabilities for visualizing Azure Monitor data. The exact set of capabilities depends on the data source and result set, but authors can expect them to converge over time. These controls allow authors to present their analysis in rich, interactive reports.
+
+Workbooks support these kinds of visual components:
+* [Text parameters](#text-parameters)
+* Using queries:
+ * [Charts](#charts)
+ * [Grids](#grids)
+ * [Tiles](#tiles)
+ * [Trees](#trees)
+ * [Honey comb](#honey-comb)
+ * [Graphs](#graphs)
+ * [Maps](#maps)
+ * [Text visualization](#text-visualizations)
+
+> [!NOTE]
+> Each visualization and data source may have its own [Limits](workbooks-limits.md).
+
+## Examples
+
+### [Text parameters](workbooks-text.md)
++
+### [Charts](workbooks-chart-visualizations.md)
++
+### [Grids](workbooks-grid-visualizations.md)
++
+### [Tiles](workbooks-tile-visualizations.md)
++
+### [Trees](workbooks-tree-visualizations.md)
++
+### [Honey comb](workbooks-honey-comb.md)
++
+### [Graphs](workbooks-graph-visualizations.md)
++
+### [Maps](workbooks-map-visualizations.md)
++
+### [Text visualizations](workbooks-text-visualizations.md)
+
+## Next steps
+
+ - [Getting started with Azure Workbooks](workbooks-getting-started.md).
azure-vmware Ecosystem App Monitoring Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/ecosystem-app-monitoring-solutions.md
Last updated 04/11/2022
A key objective of Azure VMware Solution is to maintain the performance and security of applications and services across VMware on Azure and on-premises. Getting there requires visibility into complex infrastructures and quickly pinpointing the root cause of service disruptions across the hybrid cloud.
+## Microsoft solutions
+
+Microsoft recommends [Application Insights](../azure-monitor/app/app-insights-overview.md#application-insights-overview), a feature of [Azure Monitor](../azure-monitor/overview.md#azure-monitor-overview), to maximize the availability and performance of your applications and services.
+
+Learn how modern monitoring with Azure Monitor can transform your business by reviewing the [product overview, features, getting started guide, and more](https://azure.microsoft.com/services/monitor).
+
+## Third-party solutions
Our application performance monitoring and troubleshooting partners have industry-leading solutions in VMware-based environments that assure the availability, reliability, and responsiveness of applications and services. Our customers have adopted many of these solutions integrated with VMware NSX-T Data Center for their on-premises deployments. As one of our key principles, we want to enable them to continue to use their investments and VMware solutions running on Azure. Many of these Independent Software Vendors (ISV) have validated their solutions with Azure VMware Solution. You can find more information about these solutions here:
azure-vmware Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/introduction.md
Title: Introduction
description: Learn the features and benefits of Azure VMware Solution to deploy and manage VMware-based workloads in Azure. Azure VMware Solution SLA guarantees that Azure VMware management tools (vCenter Server and NSX Manager) will be available at least 99.9% of the time. Previously updated : 04/20/2021 Last updated : 06/15/2022 # What is Azure VMware Solution?
Azure VMware Solution private clouds use vSphere role-based access control for e
vSAN data-at-rest encryption, by default, is enabled and is used to provide vSAN datastore security. For more information, see [Storage concepts](concepts-storage.md).
+## VMware software versions
++ ## Host and software lifecycle maintenance Regular upgrades of the Azure VMware Solution private cloud and VMware software ensure the latest security, stability, and feature sets are running in your private clouds. For more information, see [Host maintenance and lifecycle management](concepts-private-clouds-clusters.md#host-maintenance-and-lifecycle-management).
backup Backup Azure Vms Enhanced Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-enhanced-policy.md
Follow these steps:
>- We support Enhanced policy configuration through [Recovery Services vault](./backup-azure-arm-vms-prepare.md) and [VM Manage blade](./backup-during-vm-creation.md#start-a-backup-after-creating-the-vm) only. Configuration through Backup center is currently not supported. >- For hourly backups, the last backup of the day is transferred to vault. If backup fails, the first backup of the next day is transferred to vault. >- Enhanced policy is only available to unprotected VMs that are new to Azure Backup. Note that Azure VMs that are protected with existing policy can't be moved to Enhanced policy.
>- Backing up an Azure VM that has disks with public network access disabled isn't supported with Enhanced policy.
## Next steps
backup Backup Azure Vms Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-troubleshoot.md
Title: Troubleshoot backup errors with Azure VMs
description: In this article, learn how to troubleshoot errors encountered with backup and restore of Azure virtual machines. Previously updated : 05/17/2022 Last updated : 06/16/2022
Error message: VM creation failed due to Market Place purchase request being not
Azure Backup supports backup and restore of VMs which are available in Azure Marketplace. This error occurs when you are trying to restore a VM (with a specific Plan/Publisher setting) which is no longer available in Azure Marketplace, [Learn more here](/legal/marketplace/participation-policy#offering-suspension-and-removal).
-* To resolve this issue, use the [restore disks](./backup-azure-arm-restore-vms.md#restore-disks) option during the restore operation and then use [PowerShell](./backup-azure-vms-automation.md#create-a-vm-from-restored-disks) or [Azure CLI](./tutorial-restore-disk.md) cmdlets to create the VM with the latest marketplace information corresponding to the VM.
-* If the publisher does not have any Marketplace information, you can use the data disks to retrieve your data and you can attach them to an existing VM.
+In this scenario, it may not be possible to create the VM from the restored disks.
+
+If the publisher doesn't have any Marketplace information, you can use the data disks to retrieve your data and you can attach them to an existing VM.
### ExtensionConfigParsingFailure - Failure in parsing the config for the backup extension
If you see permissions in the **MachineKeys** directory that are different than
### ExtensionStuckInDeletionState - Extension state is not supportive to backup operation Error code: ExtensionStuckInDeletionState <br/>
-Error message: Extension state is not supportive to backup operation
+Error message: Extension state is not supportive to the backup operation
The Backup operation failed due to inconsistent state of Backup Extension. To resolve this issue, follow these steps:
The snapshot operation failed as the snapshot limit has exceeded for some of the
Error code: ExtensionFailedTimeoutVMNetworkUnresponsive<br/> Error message: Snapshot operation failed due to inadequate VM resources.
-Backup operation on the VM failed due to delay in network calls while performing the snapshot operation. To resolve this issue, perform Step 1. If the issue persists, try steps 2 and 3.
+The backup operation on the VM failed due to a delay in network calls while performing the snapshot operation. To resolve this issue, perform Step 1. If the issue persists, try steps 2 and 3.
**Step 1**: Create snapshot through Host
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md
Title: Support matrix for Azure VM backup description: Provides a summary of support settings and limitations when backing up Azure VMs with the Azure Backup service. Previously updated : 05/24/2022 Last updated : 06/13/2022
chaos-studio Chaos Studio Fault Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-library.md
description: Understand the available actions you can use with Chaos Studio incl
Previously updated : 04/21/2022 Last updated : 06/16/2022
The following faults are available for use today. Visit the [Fault Providers](./
| Supported OS Types | N/A | | Description | Adds a time delay before, between, or after other actions. Useful for waiting for the impact of a fault to appear in a service or for waiting for an activity outside of the experiment to complete (for example, waiting for autohealing to occur before injecting another fault). | | Prerequisites | N/A |
-| Urn | urn:provider:Azure-chaosStudio:Microsoft.Azure.Chaos.Delay.Timed |
+| Urn | urn:csci:microsoft:chaosStudio:timedDelay/1.0 |
| duration | The duration of the delay in ISO 8601 format (Example: PT10M) | ### Sample JSON
The following faults are available for use today. Visit the [Fault Providers](./
"actions": [ { "type": "delay",
- "name": "urn:provider:Azure-chaosStudio:Microsoft.Azure.Chaos.Delay.Timed",
+ "name": "urn:csci:microsoft:chaosStudio:timedDelay/1.0",
"duration": "PT10M" } ]
cognitive-services Backwards Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/concepts/backwards-compatibility.md
When you import the LUIS JSON application into conversational language understan
|**Feature**|**Notes**| |: - |: - | |Intents|All of your intents will be transferred as conversational language understanding intents with the same names.|
-|ML entities|All of your ML entities will be transferred as conversational language understanding entities with the same names. The labels will be persisted and used to train the Learned component of the entity. Structured ML entities will transfer over the leaf nodes of the structure as different entities and apply their labels accordingly.|
-|Utterances|All of your LUIS utterances will be transferred as conversational language understanding utterances with their intent and entity labels. Structured ML entity labels will only consider the top-level entity labels, and the individual subentity labels will be ignored.|
+|ML entities|All of your ML entities will be transferred as conversational language understanding entities with the same names. The labels will be persisted and used to train the Learned component of the entity. Structured ML entities will transfer over the lowest level subentities of the structure as different entities and apply their labels accordingly.|
+|Utterances|All of your LUIS utterances will be transferred as conversational language understanding utterances with their intent and entity labels. Structured ML entity labels will only consider the lowest level subentity labels, and all the top level entity labels will be ignored.|
|Culture|The primary language of the Conversation project will be the LUIS app culture. If the culture is not supported, the import will fail. | |List entities|All of your list entities will be transferred as conversational language understanding entities with the same names. The normalized values and synonyms of each list will be transferred as keys and synonyms in the list component for the conversational language understanding entity.| |Prebuilt entities|All of your prebuilt entities will be transferred as conversational language understanding entities with the same names. The conversational language understanding entity will have the relevant [prebuilt entities](entity-components.md#prebuilt-component) enabled if they are supported. |
container-instances Monitor Azure Container Instances Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/monitor-azure-container-instances-reference.md
+
+ Title: Monitoring Azure Container Instances data reference
+description: Important reference material needed when you monitor Azure Container Instances
+++++ Last updated : 06/06/2022++
+# Monitoring Azure Container Instances data reference
+
+See [Monitoring Azure Container Instances](monitor-azure-container-instances.md) for details on collecting and analyzing monitoring data for Azure Container Instances.
+
+## Metrics
+
+This section lists the platform metrics automatically collected for Azure Container Instances.
+
+|Metric Type | Resource Provider / Type Namespace<br/> and link to individual metrics |
+|-|--|
+| Container Instances | [Microsoft.ContainerInstance/containerGroups](/azure/azure-monitor/platform/metrics-supported#microsoftcontainerinstancecontainergroups) |
+
+## Metric dimensions
+
+For more information on what metric dimensions are, see [Multi-dimensional metrics](/azure/azure-monitor/platform/data-platform-metrics#multi-dimensional-metrics).
+
+Azure Container Instances has the following dimension associated with its metrics.
+
+| Dimension Name | Description |
+| - | -- |
| **containerName** | The name of the container. The name must be between 1 and 63 characters long. It can contain only lowercase letters, numbers, and dashes. Dashes can't begin or end the name, and dashes can't be consecutive. The name must be unique in its resource group. |
+
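The **containerName** constraints above can be captured in a single pattern. This is a minimal sketch (a hypothetical helper, not part of any Azure SDK) that checks the syntax rules only; uniqueness within the resource group can't be validated locally:

```python
import re

# 1-63 chars; lowercase letters, digits, and dashes; a dash can't begin or
# end the name, and dashes can't be consecutive. Each dash must be followed
# by a letter or digit (the lookahead), which rules out trailing and
# consecutive dashes.
_NAME = re.compile(r"^[a-z0-9](?:[a-z0-9]|-(?=[a-z0-9])){0,62}$")

def is_valid_container_name(name: str) -> bool:
    return _NAME.fullmatch(name) is not None

print(is_valid_container_name("my-container-1"))  # True
print(is_valid_container_name("a--b"))            # False (consecutive dashes)
print(is_valid_container_name("-edge"))           # False (leading dash)
```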
+## Activity log
+
+The following table lists the operations that Azure Container Instances may record in the Activity log. This is a subset of the possible entries you might find in the activity log. You can also find this information in the [Azure role-based access control (RBAC) Resource provider operations documentation](/azure/role-based-access-control/resource-provider-operations#microsoftcontainerinstance).
+
+| Operation | Description |
+|:|:|
+| Microsoft.ContainerInstance/register/action | Registers the subscription for the container instance resource provider and enables the creation of container groups. |
+| Microsoft.ContainerInstance/containerGroupProfiles/read | Get all container group profiles. |
+| Microsoft.ContainerInstance/containerGroupProfiles/write | Create or update a specific container group profile. |
+| Microsoft.ContainerInstance/containerGroupProfiles/delete | Delete the specific container group profile. |
+| Microsoft.ContainerInstance/containerGroups/read | Get all container groups. |
+| Microsoft.ContainerInstance/containerGroups/write | Create or update a specific container group. |
+| Microsoft.ContainerInstance/containerGroups/delete | Delete the specific container group. |
+| Microsoft.ContainerInstance/containerGroups/restart/action | Restarts a specific container group. This log only captures customer-initiated restarts, not restarts initiated by Azure Container Instances infrastructure. |
+| Microsoft.ContainerInstance/containerGroups/stop/action | Stops a specific container group. Compute resources will be deallocated and billing will stop. |
+| Microsoft.ContainerInstance/containerGroups/start/action | Starts a specific container group. |
+| Microsoft.ContainerInstance/containerGroups/containers/exec/action | Exec into a specific container. |
+| Microsoft.ContainerInstance/containerGroups/containers/attach/action | Attach to the output stream of a container. |
+| Microsoft.ContainerInstance/containerGroups/containers/buildlogs/read | Get build logs for a specific container. |
+| Microsoft.ContainerInstance/containerGroups/containers/logs/read | Get logs for a specific container. |
+| Microsoft.ContainerInstance/containerGroups/detectors/read | List Container Group Detectors |
+| Microsoft.ContainerInstance/containerGroups/operationResults/read | Get async operation result |
+| Microsoft.ContainerInstance/containerGroups/outboundNetworkDependenciesEndpoints/read | List the outbound network dependency endpoints for the container group. |
+| Microsoft.ContainerInstance/containerGroups/providers/Microsoft.Insights/diagnosticSettings/read | Gets the diagnostic setting for the container group. |
+| Microsoft.ContainerInstance/containerGroups/providers/Microsoft.Insights/diagnosticSettings/write | Creates or updates the diagnostic setting for the container group. |
+| Microsoft.ContainerInstance/containerGroups/providers/Microsoft.Insights/metricDefinitions/read | Gets the available metrics for container group. |
+| Microsoft.ContainerInstance/locations/deleteVirtualNetworkOrSubnets/action | Notifies Microsoft.ContainerInstance that virtual network or subnet is being deleted. |
+| Microsoft.ContainerInstance/locations/cachedImages/read | Gets the cached images for the subscription in a region. |
+| Microsoft.ContainerInstance/locations/capabilities/read | Get the capabilities for a region. |
+| Microsoft.ContainerInstance/locations/operationResults/read | Get async operation result |
+| Microsoft.ContainerInstance/locations/operations/read | List the operations for Azure Container Instance service. |
+| Microsoft.ContainerInstance/locations/usages/read | Get the usage for a specific region. |
+| Microsoft.ContainerInstance/operations/read | List the operations for Azure Container Instance service. |
+| Microsoft.ContainerInstance/serviceassociationlinks/delete | Delete the service association link created by Azure Container Instance resource provider on a subnet. |
+
+See [all the possible resource provider operations in the activity log](/azure/role-based-access-control/resource-provider-operations).
+
+For more information on the schema of Activity Log entries, see [Activity Log schema](/azure/azure-monitor/essentials/activity-log-schema).
+
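As a sketch of how you might pull these entries from the command line (the resource group name is a placeholder), the Azure CLI can list recent activity log records:

```azurecli-interactive
# List the last hour of activity log entries for a resource group
az monitor activity-log list \
    --resource-group myResourceGroup \
    --offset 1h \
    --query "[].{operation:operationName.value, status:status.value}" \
    --output table
```
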
+## Schemas
+
+The following schemas are in use by Azure Container Instances.
+
+> [!NOTE]
+> Some of the columns listed below only exist as part of the schema, and won't have any data emitted in logs. These columns are denoted below with a description of 'Empty'.
+
+### ContainerInstanceLog_CL
+
+| Column | Type | Description |
+|-|-|-|
+|Computer|string|Empty|
+|ContainerGroup_s|string|The name of the container group associated with the record|
+|ContainerID_s|string|A unique identifier for the container associated with the record|
+|ContainerImage_s|string|The name of the container image associated with the record|
+|Location_s|string|The location of the resource associated with the record|
+|Message|string|If applicable, the message from the container|
+|OSType_s|string|The name of the operating system the container is based on|
+|RawData|string|Empty|
+|ResourceGroup|string|Name of the resource group that the record is associated with|
+|Source_s|string|Name of the logging component, "LoggingAgent"|
+|SubscriptionId|string|A unique identifier for the subscription that the record is associated with|
+|TimeGenerated|datetime|Timestamp when the event was generated by the Azure service processing the request corresponding to the event|
+|Type|string|The name of the table|
+|_ResourceId|string|A unique identifier for the resource that the record is associated with|
+|_SubscriptionId|string|A unique identifier for the subscription that the record is associated with|
+
+### ContainerEvent_CL
+
+|Column|Type|Description|
+|-|-|-|
+|Computer|string|Empty|
+|ContainerGroupInstanceId_g|string|A unique identifier for the container group associated with the record|
+|ContainerGroup_s|string|The name of the container group associated with the record|
+|ContainerName_s|string|The name of the container associated with the record|
+|Count_d|real|How many times the event has occurred since the last poll|
+|FirstTimestamp_t|datetime|The timestamp of the first time the event occurred|
+|Location_s|string|The location of the resource associated with the record|
+|Message|string|If applicable, the message from the container|
+|OSType_s|string|The name of the operating system the container is based on|
+|RawData|string|Empty|
+|Reason_s|string|Empty|
+|ResourceGroup|string|The name of the resource group that the record is associated with|
+|SubscriptionId|string|A unique identifier for the subscription that the record is associated with|
+|TimeGenerated|datetime|Timestamp when the event was generated by the Azure service processing the request corresponding to the event|
+|Type|string|The name of the table|
+|_ResourceId|string|A unique identifier for the resource that the record is associated with|
+|_SubscriptionId|string|A unique identifier for the subscription that the record is associated with|
+
+## See also
+
+- See [Monitoring Azure Container Instances](monitor-azure-container-instances.md) for a description of monitoring Azure Container Instances.
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/insights/monitor-azure-resources) for details on monitoring Azure resources.
container-instances Monitor Azure Container Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/monitor-azure-container-instances.md
+
+ Title: Monitoring Azure Container Instances
+description: Start here to learn how to monitor Azure Container Instances
+++++ Last updated : 06/06/2022++
+# Monitoring Azure Container Instances
+
+When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation.
+
+This article describes the monitoring data generated by Azure Container Instances. Azure Container Instances includes built-in support for [Azure Monitor](/azure/azure-monitor/overview). If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource).
+
+## Monitoring overview page in Azure portal
+
+The **Overview** page in the Azure portal for each container instance includes a brief view of resource usage and telemetry.
+
+ ![Graphs of resource usage displayed on Container Instance overview page, PNG.](./media/monitor-azure-container-instances/overview-monitoring-data.png)
+
+## Monitoring data
+
+Azure Container Instances collects the same kinds of monitoring data as other Azure resources, as described in [Monitoring data from Azure resources](/azure/azure-monitor/essentials/monitor-azure-resource#monitoring-data-from-azure-resources).
+
+See [Monitoring *Azure Container Instances* data reference](monitor-azure-container-instances-reference.md) for detailed information on the metrics and logs created by Azure Container Instances.
+
+## Collection and routing
+
+Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
+
+Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
+
+See [Create diagnostic setting to collect platform logs and metrics in Azure](/azure/azure-monitor/platform/diagnostic-settings) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect.
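For example, a diagnostic setting that routes platform metrics to a Log Analytics workspace might look like the following Azure CLI sketch; the setting name is arbitrary, and the resource and workspace IDs are placeholders:

```azurecli-interactive
az monitor diagnostic-settings create \
    --name route-to-workspace \
    --resource <container-group-resource-id> \
    --workspace <log-analytics-workspace-resource-id> \
    --metrics '[{"category": "AllMetrics", "enabled": true}]'
```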
+
+The metrics and logs you can collect are discussed in the following sections.
+
+## Analyzing metrics
+
+You can analyze metrics for *Azure Container Instances* with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](/azure/azure-monitor/essentials/metrics-getting-started) for details on using this tool.
+
+For a list of the platform metrics collected for Azure Container Instances, see [Monitoring Azure Container Instances data reference metrics](monitor-azure-container-instances-reference.md#metrics).
+
+All metrics for Azure Container Instances are in the namespace **Container group standard metrics**. In a container group with multiple containers, you can additionally filter on the [dimension](monitor-azure-container-instances-reference.md#metric-dimensions) **containerName** to acquire metrics from a specific container within the group.
+
+For reference, you can see a list of [all resource metrics supported in Azure Monitor](/azure/azure-monitor/essentials/metrics-supported).
+
+### View operation level metrics for Azure Container Instances
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Select **Monitor** from the left-hand navigation bar, and select **Metrics**.
+
+ ![Screenshot of metrics tab under Monitor on the Azure portal, PNG.](./media/monitor-azure-container-instances/azure-monitor-metrics-pane.png)
+
+1. On the **Select a scope** page, choose your **subscription** and **resource group**. Under **Refine scope**, choose **Container instances** for **Resource type**. Pick one of your container instances from the list and select **Apply**.
+
+ ![Screenshot of selecting scope for metrics analysis on the Azure portal, PNG.](./media/monitor-azure-container-instances/select-a-scope.png)
+
+1. Next, you can pick a metric to view from the list of available metrics. Here, we choose **CPU Usage** and use **Avg** as the aggregation value.
+
+ ![Screenshot of selecting CPU Usage metric, PNG.](./media/monitor-azure-container-instances/select-a-metric.png)
+
+### Add filters to metrics
+
+In a scenario where you have a container group with multiple containers, you may find it useful to apply a filter on the metric dimension **containerName**. This lets you view metrics for each container rather than an aggregate of the group as a whole.
+
+ ![Screenshot of filtering metrics by container name in a container group, PNG.](./media/monitor-azure-container-instances/apply-a-filter.png)
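The same per-container breakdown is available outside the portal. A hedged Azure CLI sketch (the resource ID is a placeholder; check the metric name against the data reference for your account):

```azurecli-interactive
az monitor metrics list \
    --resource <container-group-resource-id> \
    --metric CpuUsage \
    --dimension containerName \
    --aggregation Average
```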
+
+## Analyzing logs
+
+Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
+
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](/azure/azure-monitor/essentials/resource-logs-schema). The schema for Azure Container Instances resource logs is found in the [Azure Container Instances Data Reference](monitor-azure-container-instances-reference.md#schemas).
+
+The [Activity log](/azure/azure-monitor/essentials/activity-log) is a type of Azure platform log that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics. You can see a list of the kinds of operations that will be logged in the [Azure Container Instances Data Reference](monitor-azure-container-instances-reference.md#activity-log).
+
+### Sample Kusto queries
+
+Azure Monitor logs includes an extensive [query language][query_lang] for pulling information from potentially thousands of lines of log output.
+
+The basic structure of a query is the source table (in this article, `ContainerInstanceLog_CL` or `ContainerEvent_CL`) followed by a series of operators separated by the pipe character (`|`). You can chain several operators to refine the results and perform advanced functions.
+
+To see example query results, paste the following query into the query text box, and select the **Run** button to execute the query. This query displays all log entries whose "Message" field contains the word "warn":
+
+```query
+ContainerInstanceLog_CL
+| where Message contains "warn"
+```
+
+More complex queries are also supported. For example, this query displays only those log entries for the "mycontainergroup001" container group generated within the last hour:
+
+```query
+ContainerInstanceLog_CL
+| where (ContainerGroup_s == "mycontainergroup001")
+| where (TimeGenerated > ago(1h))
+```
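Operators can also aggregate. As a further sketch using the same table and columns from the data reference, this query counts warning messages per container group over the last day:

```query
ContainerInstanceLog_CL
| where TimeGenerated > ago(1d)
| where Message contains "warn"
| summarize WarningCount = count() by ContainerGroup_s
| order by WarningCount desc
```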
+
+> [!IMPORTANT]
+> When you select **Logs** from the Azure Container Instances menu, Log Analytics is opened with the query scope set to the current Azure Container Instances. This means that log queries will only include data from that resource. If you want to run a query that includes data from other resources or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](/azure/azure-monitor/logs/scope) for details.
+
+For a list of common queries for Azure Container Instances, see the [Log Analytics queries interface](/azure/azure-monitor/logs/queries).
+
+## Alerts
+
+Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](/azure/azure-monitor/alerts/alerts-metric-overview), [logs](/azure/azure-monitor/alerts/alerts-unified-log), and the [activity log](/azure/azure-monitor/alerts/activity-log-alerts). Different types of alerts have benefits and drawbacks.
+
+For Azure Container Instances, there are three categories for alerting:
+
+* **Activity logs** - You can set alerts for Azure Container Instances operations like create, update, and delete. See the [Monitoring Azure Container Instances data reference](monitor-azure-container-instances-reference.md#activity-log) for a list of activities you can track.
+* **Metrics** - You can set alerts for vCPU usage, memory usage, and network input and output utilization. Depending on the function of the container you deploy, you may want to monitor different metrics. For example, if you don't expect your container's memory usage to exceed a certain threshold, setting an alert for when memory usage exceeds it may be useful.
+* **Custom log search** - You can set alerts for specific outputs in logs. For example, you can use these alerts to robustly capture stdout and stderr by setting alerts for when those outputs appear in the logs.
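
For instance, a metric alert for the memory scenario above might be created with the Azure CLI as in this sketch; the alert name and 1-GiB threshold are examples, the scope is a placeholder, and the metric name should be checked against the data reference:

```azurecli-interactive
az monitor metrics alert create \
    --name high-memory-usage \
    --resource-group myResourceGroup \
    --scopes <container-group-resource-id> \
    --condition "avg MemoryUsage > 1073741824" \
    --description "Average memory usage exceeded 1 GiB"
```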
+
+## Next steps
+
+* See the [Monitoring Azure Container Instances data reference](monitor-azure-container-instances-reference.md) for a reference of the metrics, logs, and other important values created by Azure Container Instances.
+* See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
container-registry Container Registry Auth Aci https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-auth-aci.md
az container create \
--registry-password <service-principal-password> ```
+> [!NOTE]
+> We recommend running the commands in the most recent version of the Azure Cloud Shell. Set `export MSYS_NO_PATHCONV=1` when running in a local, on-premises bash environment.
+ ## Sample scripts You can find the preceding sample scripts for Azure CLI on GitHub, as well as versions for Azure PowerShell:
container-registry Container Registry Tutorial Sign Build Push https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-sign-build-push.md
+
+ Title: Build, Sign and Verify a container image using notation and certificate in Azure Key Vault
+description: In this tutorial you'll learn how to create a signing certificate, build a container image, remotely sign the image with notation and Azure Key Vault, and then verify the container image using Azure Container Registry.
++++ Last updated : 05/08/2022++
+# Build, sign, and verify container images using Notary and Azure Key Vault (Preview)
+
+Azure Key Vault (AKV) is used to store a signing key that can be used by **notation** with the notation AKV plugin (azure-kv) to sign and verify container images and other artifacts. Azure Container Registry (ACR) lets you attach these signatures using the **az** or **oras** CLI commands.
+
+Signed container images enable users to assure that deployments are built from a trusted entity and to verify that an artifact hasn't been tampered with since its creation. Signing ensures the integrity and authenticity of an artifact before it's pulled into any environment, which helps avoid attacks.
++
+In this tutorial:
+
+> [!div class="checklist"]
+> * Store a signing certificate in Azure Key Vault
+> * Sign a container image with notation
+> * Verify a container image signature with notation
+
+## Prerequisites
+
+> * Install the ORAS CLI, then create and sign in to an [ORAS artifact-enabled registry](/articles/container-registry/container-registry-oras-artifacts#sign-in-with-oras-1)
+> * Create or use an [Azure Key Vault](/azure/key-vault/general/quick-create-cli)
+> * This tutorial can be run in the [Azure Cloud Shell](https://portal.azure.com/#cloudshell/)
+
+## Install the notation CLI and AKV plugin
+
+> [!NOTE]
+> The tutorial uses early-release versions of notation and its plugins.
+
+1. Install notation with plugin support from the [release version](https://github.com/notaryproject/notation/releases/)
+
+ ```bash
+ # Download, extract and install
+ curl -Lo notation.tar.gz https://github.com/notaryproject/notation/releases/download/v0.9.0-alpha.1/notation_0.9.0-alpha.1_linux_amd64.tar.gz
+ tar xvzf notation.tar.gz
+
+ # Copy the notation cli to the desired bin directory in your PATH
+ cp ./notation /usr/local/bin
+ ```
+
+2. Install the notation Azure Key Vault plugin for remote signing and verification
+
+ > [!NOTE]
+ > The plugin directory varies depending upon the operating system being used. The directory path below assumes Ubuntu.
+ > Please read the [notation config article](https://github.com/notaryproject/notation/blob/main/specs/notation-config.md) for more information.
+
+ ```bash
+ # Create a directory for the plugin
+ mkdir -p ~/.config/notation/plugins/azure-kv
+
+ # Download the plugin
+ curl -Lo notation-azure-kv.tar.gz \
+ https://github.com/Azure/notation-azure-kv/releases/download/v0.3.0-alpha.1/notation-azure-kv_0.3.0-alpha.1_Linux_amd64.tar.gz
+
+ # Extract to the plugin directory
+ tar xvzf notation-azure-kv.tar.gz -C ~/.config/notation/plugins/azure-kv notation-azure-kv
+ ```
+
+3. List the available plugins and verify that the plugin is available
+
+ ```bash
+ notation plugin ls
+ ```
+
+## Configure environment variables
+
+> [!NOTE]
+> For easy execution of commands in the tutorial, provide values for the Azure resources to match the existing ACR and AKV resources.
+
+1. Configure AKV resource names
+
+ ```bash
+ # Name of the existing AKV Resource Group
+ AKV_RG=myResourceGroup
+ # Name of the existing Azure Key Vault used to store the signing keys
+ AKV_NAME=<your-unique-keyvault-name>
+ # New desired key name used to sign and verify
+ KEY_NAME=wabbit-networks-io
+ KEY_SUBJECT_NAME=wabbit-networks.io
+ ```
+
+2. Configure ACR and image resource names
+
+ ```bash
+ # Name of the existing registry example: myregistry.azurecr.io
+ ACR_NAME=myregistry
+ # Existing full domain of the ACR
+ REGISTRY=$ACR_NAME.azurecr.io
+ # Container name inside ACR where image will be stored
+ REPO=net-monitor
+ TAG=v1
+ IMAGE=$REGISTRY/${REPO}:$TAG
+ # Source code directory containing Dockerfile to build
+ IMAGE_SOURCE=https://github.com/wabbit-networks/net-monitor.git#main
+ ```
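
As a quick sanity check (using the sample values above; your registry name will differ), the composed `$IMAGE` variable expands to a fully qualified image reference:

```bash
# Recompute the reference with the sample values to confirm its shape
ACR_NAME=myregistry
REGISTRY=$ACR_NAME.azurecr.io
REPO=net-monitor
TAG=v1
IMAGE=$REGISTRY/${REPO}:$TAG

echo "$IMAGE"   # myregistry.azurecr.io/net-monitor:v1
```
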
+
+## Store the signing certificate in AKV
+
+If you have an existing certificate, upload it to AKV. For more information on using your own signing key, see the [signing certificate requirements](https://github.com/notaryproject/notaryproject/blob/main/signature-specification.md#certificate-requirements).
+Otherwise, create an x509 self-signed certificate and store it in AKV for remote signing by using the following steps.
+
+### Create a self-signed certificate (Azure CLI)
+
+1. Create a certificate policy file
+
+    When used with the `az keyvault certificate create` command in the next step, this policy creates a signing certificate in AKV that's compatible with **notation**.
+
+ ```bash
+ cat <<EOF > ./my_policy.json
+ {
+ "issuerParameters": {
+ "certificateTransparency": null,
+ "name": "Self"
+ },
+ "x509CertificateProperties": {
+ "ekus": [
+ "1.3.6.1.5.5.7.3.1",
+ "1.3.6.1.5.5.7.3.2",
+ "1.3.6.1.5.5.7.3.3"
+ ],
+ "subject": "CN=${KEY_SUBJECT_NAME}",
+ "validityInMonths": 12
+ }
+ }
+ EOF
+ ```
+
+1. Create the certificate
+
+ ```azure-cli
+ az keyvault certificate create -n $KEY_NAME --vault-name $AKV_NAME -p @my_policy.json
+ ```
+
+1. Get the Key ID for the certificate
+
+ ```bash
+ KEY_ID=$(az keyvault certificate show -n $KEY_NAME --vault-name $AKV_NAME --query 'id' -otsv)
+ ```
+1. Download the public certificate
+
+    ```bash
+    # CERT_PATH isn't defined earlier in the tutorial; choose a local file
+    # name for the downloaded certificate before using it below
+    CERT_PATH=./${KEY_NAME}.pem
+
+    az keyvault certificate download --file $CERT_PATH --id $KEY_ID --encoding PEM
+    ```
+
+1. Add the Key ID to the keys and certs
+
+ ```bash
+ notation key add --name $KEY_NAME --plugin azure-kv --id $KEY_ID
+ notation cert add --name $KEY_NAME $CERT_PATH
+ ```
+
+1. List the keys and certs to confirm
+
+ ```bash
+ notation key ls
+ notation cert ls
+ ```
+
+## Build and sign a container image
+
+1. Build and push a new image with ACR Tasks
+
+ ```azure-cli
+ az acr build -r $ACR_NAME -t $IMAGE $IMAGE_SOURCE
+ ```
+
+2. Authenticate with your individual Azure AD identity to use an ACR token
+
+ ```azure-cli
+ export USER_NAME="00000000-0000-0000-0000-000000000000"
+ export PASSWORD=$(az acr login --name $ACR_NAME --expose-token --output tsv --query accessToken)
+ export NOTATION_PASSWORD=$PASSWORD
+ ```
+
+3. Sign the container image
+
+ ```bash
+ notation sign --key $KEY_NAME $IMAGE
+ ```
+
+## View the graph of artifacts with the ORAS CLI
+
+ACR support for ORAS artifacts enables a linked graph of supply chain artifacts that can be viewed through the ORAS CLI or the Azure CLI.
+
+1. Signed images can be viewed with the ORAS CLI
+
+ ```bash
+ oras login -u $USER_NAME -p $PASSWORD $REGISTRY
+ oras discover -o tree $IMAGE
+ ```
+
+## View the graph of artifacts with the Azure CLI
+
+1. List the manifest details for the container image
+
+ ```azure-cli
+ az acr manifest show-metadata $IMAGE -o jsonc
+ ```
+
+2. The output shows the `digest` that represents the Notary v2 signature:
+
+ ```json
+ {
+ "changeableAttributes": {
+ "deleteEnabled": true,
+ "listEnabled": true,
+ "readEnabled": true,
+ "writeEnabled": true
+ },
+ "createdTime": "2022-05-13T23:15:54.3478293Z",
+ "digest": "sha256:effba96d9b7092a0de4fa6710f6e73bf8c838e4fbd536e95de94915777b18613",
+ "lastUpdateTime": "2022-05-13T23:15:54.3478293Z",
+ "name": "v1",
+ "quarantineState": "Passed",
+ "signed": false
+ }
+ ```
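
If you want to reuse the signature digest in later scripting, you can extract it from the metadata JSON. A sketch using portable shell tools; the file name and trimmed JSON below are assumptions for illustration (in practice you'd capture the output of `az acr manifest show-metadata`):

```bash
# Save a trimmed copy of the sample metadata shown above (assumed file name)
cat > metadata.json <<'EOF'
{
  "digest": "sha256:effba96d9b7092a0de4fa6710f6e73bf8c838e4fbd536e95de94915777b18613",
  "name": "v1"
}
EOF

# Pull out the digest value with grep and cut (jq would also work if installed)
DIGEST=$(grep -o '"digest": "[^"]*"' metadata.json | cut -d'"' -f4)
echo "$DIGEST"   # sha256:effba96d...
```
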
+
+## Verify the container image
+
+1. The notation command can also help ensure that the container image hasn't been tampered with since build time by comparing its `sha256` digest with the one in the registry.
+
+```bash
+notation verify $IMAGE
+sha256:effba96d9b7092a0de4fa6710f6e73bf8c838e4fbd536e95de94915777b18613
+```
+The sha256 digest in the output indicates successful verification of the image using the trusted certificate.
+
+2. We can add a different local signing certificate to show how multiple certificates and verification failures work.
+
+```bash
+notation cert generate-test -n localcert --trust true
+notation verify $IMAGE
+sha256:effba96d9b7092a0de4fa6710f6e73bf8c838e4fbd536e95de94915777b18613
+```
+
+We can see that verification still passes because `notation verify` implicitly passes with _any_ certificate in its trust store. To get a verification failure, we'll remove the certificate that was used to sign the image.
+
+```azure-cli
+notation cert rm $KEY_NAME
+notation verify $IMAGE
+2022/06/10 11:24:30 verification failure: x509: certificate signed by unknown authority
+```
+
+## Next steps
+
+[Enforce policy to only deploy signed container images to Azure Kubernetes Service (AKS) utilizing **ratify** and **gatekeeper**.](https://github.com/Azure/notation-azure-kv/blob/main/docs/nv2-sign-verify-aks.md)
cosmos-db Managed Identity Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/managed-identity-based-authentication.md
# Use system-assigned managed identities to access Azure Cosmos DB data+ [!INCLUDE [appliesto-sql-api](includes/appliesto-sql-api.md)]
-In this article, you'll set up a *robust, key rotation agnostic* solution to access Azure Cosmos DB keys by using [managed identities](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md) and [data plane role-based access control](how-to-setup-rbac.md). The example in this article uses Azure Functions, but you can use any service that supports managed identities.
+In this article, you'll set up a *robust, key rotation agnostic* solution to access Azure Cosmos DB keys by using [managed identities](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md) and [data plane role-based access control](how-to-setup-rbac.md). The example in this article uses Azure Functions, but you can use any service that supports managed identities.
You'll learn how to create a function app that can access Azure Cosmos DB data without needing to copy any Azure Cosmos DB keys. The function app will trigger when an HTTP request is made and then list all of the existing databases.
In this step, you'll create two databases.
In this step, you'll query the document endpoint for the SQL API account.
-1. Use ``az cosmosdb show`` with the **query** parameter set to ``documentEndpoint``. Record the result. You'll use this value in a later step.
+1. Use ``az cosmosdb show`` with the **query** parameter set to ``documentEndpoint``. Record the result. You'll use this value in a later step.
```azurecli-interactive az cosmosdb show \
In this step, you'll query the document endpoint for the SQL API account.
In this step, you'll assign a role to the function app's system-assigned managed identity. Azure Cosmos DB has multiple built-in roles that you can assign to the managed identity. For this solution, you'll use the [Cosmos DB Built-in Data Reader](how-to-setup-rbac.md#built-in-role-definitions) role.
-> [!TIP]
+> [!TIP]
> When you assign roles, assign only the needed access. If your service requires only reading data, then assign the **Cosmos DB Built-in Data Reader** role to the managed identity. For more information about the importance of least privilege access, see the [Lower exposure of privileged accounts](../security/fundamentals/identity-management-best-practices.md#lower-exposure-of-privileged-accounts) article. 1. Use ``az cosmosdb show`` with the **query** parameter set to ``id``. Store the result in a shell variable named ``scope``.
In this step, you'll assign a role to the function app's system-assigned managed
} ```
-1. Use [``az role assignment create``](/cli/azure/cosmosdb/sql/role/assignment#az-cosmosdb-sql-role-assignment-create) to assign the ``Cosmos DB Built-in Data Reader`` role to the system-assigned managed identity.
+1. Use [``az cosmosdb sql role definition create``](/cli/azure/cosmosdb/sql/role/definition#az-cosmosdb-sql-role-definition-create) to create a new role definition named ``Read Cosmos Metadata`` using the custom JSON object.
+
+ ```azurecli-interactive
+ az cosmosdb sql role definition create \
+ --resource-group $resourceGroupName \
+ --account-name $cosmosName \
+ --body @definition.json
+ ```
+
+ > [!NOTE]
+ > In this example, the role definition is defined in a file named **definition.json**.
+
+1. Use [``az role assignment create``](/cli/azure/cosmosdb/sql/role/assignment#az-cosmosdb-sql-role-assignment-create) to assign the ``Read Cosmos Metadata`` role to the system-assigned managed identity.
```azurecli-interactive az cosmosdb sql role assignment create \
We now have a function app that has a system-assigned managed identity with the
} ```
-## (Optional) Run the function locally
+## (Optional) Run the function locally
In a local environment, the [``DefaultAzureCredential``](/dotnet/api/azure.identity.defaultazurecredential) class will use various local credentials to determine the current identity. While running locally isn't required for the how-to, you can develop locally using your own identity or a service principal.
In a local environment, the [``DefaultAzureCredential``](/dotnet/api/azure.ident
> [!NOTE] > This JSON object has been shortened for brevity. This JSON object also includes a sample value that assumes your account name is ``msdocs-cosmos-app``.
-1. Run the function app
+1. Run the function app
```azurecli func start
Once published, the ``DefaultAzureCredential`` class will use credentials from t
## Next steps
-* [Certificate-based authentication with Azure Cosmos DB and Azure Active Directory](certificate-based-authentication.md)
-* [Secure Azure Cosmos DB keys using Azure Key Vault](access-secrets-from-keyvault.md)
-* [Security baseline for Azure Cosmos DB](security-baseline.md)
+- [Certificate-based authentication with Azure Cosmos DB and Azure Active Directory](certificate-based-authentication.md)
+- [Secure Azure Cosmos DB keys using Azure Key Vault](access-secrets-from-keyvault.md)
+- [Security baseline for Azure Cosmos DB](security-baseline.md)
cosmos-db Migrate Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/migrate-continuous-backup.md
To perform the migration, you need `Microsoft.DocumentDB/databaseAccounts/write`
## Pricing after migration
-After you migrate your account to continuous backup mode, the cost with this mode is different when compared to the periodic backup mode. The continuous mode backup cost is cheaper than periodic mode. To learn more, see the [continuous backup mode pricing](continuous-backup-restore-introduction.md#continuous-backup-pricing) example.
+After you migrate your account to continuous backup mode, the cost with this mode is different when compared to the periodic backup mode. The continuous mode backup cost can vary from periodic mode. To learn more, see [continuous backup mode pricing](continuous-backup-restore-introduction.md#continuous-backup-pricing).
## <a id="portal"></a> Migrate using portal
cosmos-db Keys Connection Strings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/common/keys-connection-strings.md
# Connection string and account key operations for an Azure Cosmos DB account using PowerShell+ [!INCLUDE[appliesto-all-apis](../../../includes/appliesto-all-apis.md)] [!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
-This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+This sample requires the Az PowerShell module 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
cosmos-db Migrate Dotnet V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/migrate-dotnet-v3.md
The following are some of the main class name changes:
| .NET v2 SDK | .NET v3 SDK | |-|-|
-|`Microsoft.Azure.Documents.Client.DocumentClient`|`Microsoft.Azure.CosmosClient`|
+|`Microsoft.Azure.Documents.Client.DocumentClient`|`Microsoft.Azure.Cosmos.CosmosClient`|
|`Microsoft.Azure.Documents.Client.ConnectionPolicy`|`Microsoft.Azure.Cosmos.CosmosClientOptions`| |`Microsoft.Azure.Documents.Client.DocumentClientException` |`Microsoft.Azure.Cosmos.CosmosException`| |`Microsoft.Azure.Documents.Client.Database`|`Microsoft.Azure.Cosmos.DatabaseProperties`|
cosmos-db How To Use Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-java.md
public final String connectionString =
"DefaultEndpointsProtocol=http;" + "AccountName=your_storage_account;" + "AccountKey=your_storage_account_key;" +
- "EndpointSuffix=core.windows.net;
+ "EndpointSuffix=core.windows.net";
``` ### Add an Azure Cosmos DB Table API connection string
public final String connectionString =
"DefaultEndpointsProtocol=https;" + "AccountName=your_cosmosdb_account;" + "AccountKey=your_account_key;" +
- "TableEndpoint=https://your_endpoint;" ;
+ "TableEndpoint=https://your_endpoint;";
``` In an app running within a role in Azure, you can store this string in the service configuration file, *ServiceConfiguration.cscfg*. You can access it with a call to the `System.getenv` method. Here's an example of getting the connection string from a **Setting** element named *ConnectionString* in the service configuration file:
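The article's own snippet is not included in this excerpt; as a rough sketch of the pattern (placeholder account name, key, and endpoint — not real credentials), reading the *ConnectionString* setting via `System.getenv` with a local-development fallback could look like:

```java
// Hedged sketch, not the article's original sample: read the Table API
// connection string from an environment variable named "ConnectionString"
// (matching the article's Setting element), falling back to placeholder
// values for local development. Never hard-code real account keys.
public class ConnectionStringExample {

    // Returns the connection string from the environment, or a local
    // placeholder when the variable is not set.
    static String getConnectionString() {
        String fromEnv = System.getenv("ConnectionString");
        if (fromEnv != null && !fromEnv.isEmpty()) {
            return fromEnv;
        }
        return "DefaultEndpointsProtocol=https;"
             + "AccountName=your_cosmosdb_account;"
             + "AccountKey=your_account_key;"
             + "TableEndpoint=https://your_endpoint;";
    }

    public static void main(String[] args) {
        System.out.println(getConnectionString());
    }
}
```

In a deployed role, the environment variable (or the *ServiceConfiguration.cscfg* setting surfaced as one) takes precedence over the fallback.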
cost-management-billing Assign Roles Azure Service Principals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/assign-roles-azure-service-principals.md
tags: billing
Previously updated : 10/22/2021 Last updated : 06/16/2022
Later in this article, you'll give permission to the Azure AD app to act by usin
- An EnrollmentReader role can be assigned to an SPN only by a user who has an enrollment writer role. - A DepartmentReader role can be assigned to an SPN only by a user who has an enrollment writer or department writer role.-- A SubscriptionCreator role can be assigned to an SPN only by a user who is the owner of the enrollment account. The role isn't shown in the EA portal. It's created by programmatic means and is only for programmatic use.
+- A SubscriptionCreator role can be assigned to an SPN only by a user who is the owner of the enrollment account (EA administrator). The role isn't shown in the EA portal. It's created by programmatic means and is only for programmatic use.
- The EA purchaser role isn't shown in the EA portal. It's created by programmatic means and is only for programmatic use. ## Assign enrollment account role permission to the SPN
cost-management-billing Direct Ea Azure Usage Charges Invoices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-azure-usage-charges-invoices.md
Enterprise administrators can also view an overall summary of the charges for th
## Download or view your Azure billing invoice
-An EA administrator can download the invoice from the [Azure portal](https://portal.azure.com) or have it sent in email. Invoices are sent to whoever is set up to receive invoices for the enrollment.
+An EA administrator can download the invoice from the [Azure portal](https://portal.azure.com) or have it sent in email. Invoices are sent to whoever is set up to receive invoices for the enrollment. If someone other than an EA administrator needs an email copy of the invoice, an EA administrator can send them a copy.
Only an Enterprise Administrator has permission to view and download the billing invoice. To learn more about getting access to billing information, see [Manage access to Azure billing using roles](manage-billing-access.md).
The following table lists the terms and descriptions shown on the Reservation tr
| Billing frequency | Billing frequency of the reservation | | Type | Type of the transaction. For example, Purchase or Refund. | | Purchase Month | Month of the Purchase |
-| MC (USD) | Indicates the Monetary Committment value |
+| MC (USD) | Indicates the Monetary Commitment value |
| Overage (USD) | Indicates the Service Overage value | | Quantity | Reservation quantity that was purchased | | Amount (USD) | Reservation cost |
cost-management-billing Download Azure Invoice Daily Usage Date https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/download-azure-invoice-daily-usage-date.md
For most subscriptions you can download your invoice from the Azure portal. If y
![Screenshot that shows billing periods, the download option, and total charges for each billing period](./media/download-azure-invoice-daily-usage-date/downloadinvoice.png)
-4. You can also download your a daily breakdown of consumed quantities and estimated charges by selecting **Download csv**.
+4. You can also download a daily breakdown of consumed quantities and estimated charges by selecting **Download csv**.
![Screenshot that shows Download invoice and usage page](./media/download-azure-invoice-daily-usage-date/usageandinvoice.png)
-For more information about your invoice, see [Understand your bill for Microsoft Azure](../understand/review-individual-bill.md). For help managing your costs, see [Analyze unexpected charges](../understand/analyze-unexpected-charges.md).
+For more information about your invoice, see [Understand your bill for Microsoft Azure](../understand/review-individual-bill.md). For help with managing your costs, see [Analyze unexpected charges](../understand/analyze-unexpected-charges.md).
### Download invoices for a Microsoft Customer Agreement
There could be several reasons that you don't see an invoice:
## Get your invoice in email (.pdf)
-You can opt in and configure additional recipients to receive your Azure invoice in an email. This feature may not be available for certain subscriptions such as support offers, Enterprise Agreements, or Azure in Open. If you have a Microsoft Customer agreement, see [Get your billing profile invoices in email](../understand/download-azure-invoice.md#get-your-billing-profiles-invoice-in-email).
+You can opt in and configure additional recipients to receive your Azure invoice in an email. This feature is not available for certain subscriptions such as support offers, Enterprise Agreements, or Azure in Open. If you have a Microsoft Customer agreement, see [Get your billing profile invoices in email](../understand/download-azure-invoice.md#get-your-billing-profiles-invoice-in-email).
### Get your subscription's invoices in email
cost-management-billing Link Partner Id Power Apps Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/link-partner-id-power-apps-accounts.md
Previously updated : 12/17/2021 Last updated : 06/13/2021 ms.devlang: azurecli
# Link a partner ID to your Power Platform and Dynamics Customer Insights accounts
-Microsoft partners who are Power Platform and Dynamics Customer Insights service providers can associate their service to customers on Microsoft Power Apps, Power Automate, Power BI and Dynamics Customer Insights. You have access to your customer's environment when you, the Microsoft partner, manage, configure, and support Power Platform and Customer Insights resources for your customer. You can use your Azure credentials and a Partner Admin Link (PAL) to associate your partner network ID with the account credentials used for service delivery.
+Microsoft partners who are Power Platform and Dynamics 365 Customer Insights service providers work with their customers to manage, configure, and support Power Platform and Customer Insights resources. To get credit for the services, you can associate your partner network ID with the Azure credential used for service delivery that's in your customers' production environments using the Partner Admin Link (PAL).
-The PAL allows Microsoft to identify and recognize partners that have Power Platform and Customer Insights customers. Microsoft attributes usage to a partner's organization based on the account's permissions (user role) and scope (tenant, resource, and so on). This attribution can be used for Advanced Specializations, such as the [Microsoft Low Code Advanced Specializations](https://partner.microsoft.com/membership/advanced-specialization#tab-content-2), and [Partner Incentives](https://partner.microsoft.com/asset/collection/microsoft-commerce-incentive-resources#/).
+PAL allows Microsoft to identify and recognize partners that have Power Platform and Customer Insights customers. Microsoft attributes usage to a partner's organization based on the account's permissions (user role) and scope (tenant, resource, and so on). The attribution is used for Advanced Specializations, such as the [Microsoft Low Code Advanced Specializations](https://partner.microsoft.com/membership/advanced-specialization#tab-content-2), and [Partner Incentives](https://partner.microsoft.com/asset/collection/microsoft-commerce-incentive-resources#/).
-The following sections explain in more detail how to
-- Get access from your customer-- Link your access account to your partner ID-- Attribute your access account to the product resource
+The following sections explain how to:
-The final step typically happens automatically, as the partner user is the one creating, editing, and updating the resource. It's a critical step to ensure partners receive proper credit for their work on Microsoft Power Apps, Power Automate, Power BI and Dynamics Customer Insights where relevant.
+1. Get access accounts from your customer
+2. Link your access account to your partner ID
+3. Attribute your access account to the product resource
-## Get access from your customer
+We recommend taking these actions in the sequence above.
+
+The attribution step is critical and typically happens automatically, as the partner user is the one creating, editing, and updating the resource (for example, the Power Apps application or the Power Automate flow). To ensure success, we strongly recommend that you use Solutions, where available, to import your deliverables into the customer's production environment via a Managed Solution. When you use Solutions, the account used to import the Solution becomes the owner of each deliverable inside the Solution. Linking that account to your partner ID ensures all deliverables inside the Solution are associated with your partner ID, automatically handling step 3 above.
+
+> [!NOTE]
+> Solutions are not available for Power BI and Customer Insights. See detailed sections below.
++
+## Get access accounts from your customer
Before you link your partner ID, your customer must give you access to their Power Platform or Customer Insights resources. They use one of the following options: -- **Guest user** - Your customer can add you as a guest user and provide access to the product you're working on. For more information, see [Add guest users from another directory](../../active-directory/external-identities/what-is-b2b.md).-- **Directory account** - Your customer can create a user account for you in their own directory and provide access to the product you're working on.-- **Service principal** - Your customer can add an app or script from your organization in their directory and provide access to the product you're working on. The identity of the app or script is known as a service principal.-- **Delegated Administrator** - For Power Platform, your customer can delegate a resource group so that your users can work on it from within your tenant. For more information, see [For partners: the Delegated Administrator](/power-platform/admin/for-partners-delegated-administrator).
+* **Directory account** - Your customer can create a dedicated user account, or a user account to act as a service account, in their own directory, and provide access to the product(s) you're working on in production.
+* **Service principal** - Your customer can add an app or script from your organization in their directory and provide access to the product you're working on in production.
## Link your access account to your partner ID
-When you have access to your customer's resources, use the Azure portal, PowerShell, or the Azure CLI to link your Microsoft Partner Network ID (MPN ID) to your user ID or service principal. Link the partner ID to each customer tenant.
+Linking your access account to your partner ID is also called *PAL association*. When you have a production environment access account, you can use PAL to link the account to your Microsoft Partner Network ID (Location MPN ID).
+
+For directory accounts (user or service), use the graphical web-based Azure portal, PowerShell, or the Azure CLI to link to your Microsoft Partner Network ID (Location Account MPN ID).
-### Use the Azure portal to link to a new partner ID
+For a service principal, use PowerShell or the Azure CLI to link your Microsoft Partner Network ID (Location Account MPN ID). Link the partner ID to each customer resource.
+
+To use the Azure portal to link to a new partner ID:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Go to [Link to a partner ID](https://portal.azure.com/#blade/Microsoft_Azure_Billing/managementpartnerblade) in the Azure portal.
-1. Enter the [Microsoft Partner Network](https://partner.microsoft.com/) ID for your organization. Be sure to use the **Associated MPN ID** shown on your partner center profile. It's typically known as your [Partner Location Account MPN ID](/partner-center/account-structure).
+2. Go to [Link to a partner ID](https://portal.azure.com/#blade/Microsoft_Azure_Billing/managementpartnerblade) in the Azure portal.
+3. Enter the [Microsoft Partner Network](https://partner.microsoft.com/) ID for your organization. Be sure to use the **Associated MPN ID** shown on your partner center profile. It's typically known as your [Partner Location Account MPN ID](/partner-center/account-structure).
:::image type="content" source="./media/link-partner-id-power-apps-accounts/link-partner-id.png" alt-text="Screenshot showing the Link to a partner ID window." lightbox="./media/link-partner-id-power-apps-accounts/link-partner-id.png" :::
-1. To link your partner ID to another customer, switch the directory. Under **Switch directory**, select the appropriate directory.
- :::image type="content" source="./media/link-partner-id-power-apps-accounts/switch-directory.png" alt-text="Screenshot showing the Directory + subscription window where can you switch your directory." lightbox="./media/link-partner-id-power-apps-accounts/switch-directory.png" :::
+
+> [!NOTE]
+> To link your partner ID to another customer, switch the directory. Under **Switch directory**, select the appropriate directory.
+
+For more information about using PowerShell or the Azure CLI, see [Use PowerShell, CLI, and other tools](#use-powershell-azure-cli-and-other-tools).
+
+## Attribute your access account to product resource
+
+To count the usage of a specific resource, the partner user or guest account needs to be attributed to the *resource* for Power Platform or Dynamics Customer Insights. The access account is the one that you received from your customer. It's the same account that was linked through the Partner Admin Link (PAL).
+
+To ensure success, we strongly recommend that you use Solutions, where available, to import your deliverables into the customer's production environment via a Managed Solution. When you use Solutions, the account used to import the Solution becomes the owner of each deliverable inside the Solution. Linking the account to your partner ID ensures all deliverables inside the Solution are associated with your partner ID, automatically handling this step.
+
+| Product | Primary Metric | Resource | Attributed User Logic |
+|||||
+| Power Apps | Monthly Active Users (MAU) | Application |The user must be an owner/co-owner of the application. For more information, see [Share a canvas app with your organization](/powerapps/maker/canvas-apps/share-app). In cases of multiple partners being mapped to a single application, the user's activity is reviewed to select the *latest* partner. |
+| Power Automate | Monthly Active Users (MAU) | Flow | The user must be the creator of the flow. There can only be one creator so there's no logic for multiple partners. |
+| Power BI | Monthly Active Users (MAU) | Dataset | The user must be the publisher of the dataset. For more information, see [Publish datasets and reports from Power BI Desktop](/power-bi/create-reports/desktop-upload-desktop-files). In cases of multiple partners being mapped to a single dataset, the user's activity is reviewed to select the *latest* partner. |
+| Customer Insights | Unified Profiles | Instance | Any active user of an Instance is treated as the attributed user. In cases of multiple partners being mapped to a single Instance, the user's activity is reviewed to select the *latest* partner. |
+
+Other points about products:
+
+* **Power Apps - Canvas Applications**
+ * Set the PAL associated User or Service Account as the owner or co-owner of the application.
+ * You can only change the owner, not co-owner, using the PowerShell `Set-AdminPowerAppOwner` cmdlet.
+ * The importing entity becomes the new owner when it's inside of a solution and it's imported into another environment.
+* **Power Apps - Model Driven Applications**
+ * Make sure the app creator performs the PAL association.
+ * There's *no* co-owner option, and you can't change the owner using the GUI or PowerShell directly.
+ * The importing entity becomes the new owner when it's inside of a solution and it's imported into another environment.
+* **Power Automate**
+ * Make sure the flow creator performs the PAL association.
+ * You can easily change the owner using the web GUI or with the PowerShell `Set-AdminFlowOwnerRole` cmdlet.
+ * The importing entity becomes the new owner when it's inside of a solution and it's imported into another environment.
+* **Power BI**
+ * The act of publishing to the Power BI service sets the owner.
+ * Make sure the user publishing the report performs the PAL association.
+ * Use PowerShell to publish as any user or Service Account.
+
+## Use PowerShell, Azure CLI, and other tools
+
+The following sections cover PowerShell, Azure CLI, and other tools to manage ownership and link partner IDs.
+
+### Tooling to update or change attributed users
+
+The following table shows the tooling compatibility for changing the owner or co-owner (**user accounts or dedicated service accounts**, as described above) after the application has been created.
+
+| Product | GUI | PowerShell | PP CLI | DevOps + Build Tools |
+| | | | | |
+| Power App Canvas | ✔ | ✔ | ✔ | ✔ |
+| Power App Model Driven | ✘ | ✘ | ✔ | ✔ |
+| Power Automate | ✔ | ✔ | ✔ | ✔ |
+| Power BI (Publishing) | ✘ | ✔ | ✘ | ✘ |
+| Power Virtual Agent | ✘ | ✘ | ✔ | ✔ |
+
+The following table shows the tooling compatibility to change a previously assigned user account to an **Application Registration known as a Service Principal**.
+
+| Product | GUI | PowerShell | PP CLI | DevOps + Build Tools |
+| | | | | |
+| Power App Canvas | ✘ | ✘ | ✔ | ✔ |
+| Power App Model Driven | ✘ | ✘ | ✔ | ✔ |
+| Power Automate | ✔ | ✔ | ✔ | ✔ |
+| Power BI (Publishing) | ✘ | ✔ | ✘ | ✘ |
+| Power Virtual Agent | ✘ | ✘ | ✔ | ✔ |
### Use PowerShell to link to a new partner ID
Delete the linked partner ID
az managementpartner delete --partner-id 12345 ```
-## Attribute your access account to the product resource
-
-The partner user/guest account that you received from your customer and was linked through the Partner Admin Link (PAL) needs to be attributed to the *resource* for Power Platform or Dynamics Customer Insights to count the usage of that specific resource. The user/guest account doesn't need to be associated with a specific Azure subscription for Power Apps, Power Automate, Power BI or D365 Customer Insights. In many cases, it happens automatically, as the partner user is the one creating, editing, and updating the resource. Besides the logic below, the specific programs the PAL link is used for (such as the [Microsoft Low Code Advanced Specializations](https://partner.microsoft.com/membership/advanced-specialization#tab-content-2) or Partner Incentives) may have other requirements such as the resource needing to be in production and associated with paid usage.
-
-| Product | Primary Metric | Resource | Attributed User Logic |
-|-||-||
-| Power Apps | Monthly Active Users (MAU) | Application |The user must be an owner/co-owner of the application. For more information, see [Share a canvas app with your organization](/powerapps/maker/canvas-apps/share-app). In cases of multiple partners being mapped to a single application, the user's activity is reviewed to select the 'latest' partner. |
-| Power Automate | Monthly Active Users (MAU) | Flow | The user must be the creator of the flow. There can only be one creator so there's no logic for multiple partners. |
-| Power BI | Monthly Active Users (MAU) | Dataset | The user must be the publisher of the dataset. For more information, see [Publish datasets and reports from Power BI Desktop](/power-bi/create-reports/desktop-upload-desktop-files). In cases of multiple partners being mapped to a single dataset, the user's activity is reviewed to select the 'latest' partner. |
-| Customer Insights | Unified Profiles | Instance | Any active user of an Instance is treated as the attributed user. In cases of multiple partners being mapped to a single Instance, the user's activity is reviewed to select the 'latest' partner |
-- ### Next steps -- Read the [Cost Management + Billing FAQ](../cost-management-billing-faq.yml) for questions and answers about linking a partner ID to Power Apps accounts.-- Join the discussion in the [Microsoft Partner Community](https://aka.ms/PALdiscussion) to receive updates or send feedback.-- Read the [Low Code Application Development advanced specialization FAQ](https://assetsprod.microsoft.com/mpn/faq-low-code-app-development-advanced-specialization.pdf) for PAL-based Power Apps association for Low code application development advanced specialization.
+- Learn more about the [Low Code Application Development advanced specialization](https://partner.microsoft.com/membership/advanced-specialization/low-code-application-development)
+- Read the [Low Code Application Development advanced specialization learning path](https://partner.microsoft.com/training/assets/collection/low-code-application-development-advanced-specialization#/)
cost-management-billing Analyze Unexpected Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/analyze-unexpected-charges.md
You can create an anomaly alert to automatically get notified when an anomaly is
An anomaly alert email includes a summary of changes in resource group count and cost. It also includes the top resource group changes for the day compared to the previous 60 days. And, it has a direct link to the Azure portal so that you can review the cost and investigate further.
-1. Start on a subscription scope.
+1. From Azure Home, select **Cost Management** under Tools.
+1. Verify you've selected the correct subscription in the scope at the top of the page.
1. In the left menu, select **Cost alerts**. 1. On the Cost alerts page, select **+ Add** > **Add anomaly alert**. 1. On the Subscribe to emails page, enter required information and then select **Save**.
data-factory Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-overview.md
Previously updated : 05/27/2022 Last updated : 06/15/2022
data-factory Managed Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/managed-virtual-network-private-endpoint.md
Previously updated : 04/01/2022 Last updated : 06/16/2022 # Azure Data Factory managed virtual network
This section discusses limitations and known issues.
### Supported data sources and services
-The following data sources and services have native private endpoint support. They can be connected through private link from a Data Factory managed virtual network:
--- Azure Blob Storage (not including storage account V1)-- Azure Cognitive Search-- Azure Cosmos DB MongoDB API-- Azure Cosmos DB SQL API-- Azure Data Lake Storage Gen2-- Azure Database for MariaDB-- Azure Database for MySQL-- Azure Database for PostgreSQL-- Azure Files (not including storage account V1)
+The following services have native private endpoint support. They can be connected through private link from a Data Factory managed virtual network:
+ - Azure Functions (Premium plan) - Azure Key Vault - Azure Machine Learning - Azure Private Link - Microsoft Purview-- Azure SQL Database-- Azure SQL Managed Instance (public preview)-- Azure Synapse Analytics-- Azure Table Storage (not including storage account V1)
-You can access all data sources that are supported by Data Factory through a public network.
+For supported data sources, see the [connector overview](connector-overview.md). You can access all data sources that are supported by Data Factory through a public network.
> [!NOTE] > Because SQL Managed Instance native private endpoint is in private preview, you can access it from a managed virtual network by using Private Link and Azure Load Balancer. For more information, see [Access SQL Managed Instance from a Data Factory managed virtual network using a private endpoint](tutorial-managed-virtual-network-sql-managed-instance.md).
dedicated-hsm Deployment Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/deployment-architecture.md
-
+ Title: Deployment architecture - Azure Dedicated HSM | Microsoft Docs description: Basic design considerations when using Azure Dedicated HSM as part of an application architecture
na Previously updated : 03/25/2021 Last updated : 06/03/2022
The HSMs are distributed across Microsoft's data centers and can be easily pro
* East US 2 * West US * West US 2
+* Canada East
+* Canada Central
* South Central US * Southeast Asia * East Asia
The HSMs are distributed across Microsoft's data centers and can be easily pro
* West Europe * UK South * UK West
-* Canada Central
* Australia East * Australia Southeast * Switzerland North
dedicated-hsm Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/troubleshoot.md
-
+ Title: Troubleshoot Dedicated HSM - Azure Dedicated HSM | Microsoft Docs description: Overview of Azure Dedicated HSM provides key storage capabilities within Azure that meets FIPS 140-2 Level 3 certification
na Previously updated : 03/25/2021 Last updated : 05/12/2022 #Customer intent: As an IT Pro, Decision maker I am looking for key storage capability within Azure Cloud that meets FIPS 140-2 Level 3 certification and that gives me exclusive access to the hardware.
The Azure Dedicated HSM service has two distinct facets. Firstly, the registrati
## HSM Registration
-Dedicated HSM is not freely available for use as it is delivering hardware resources in the cloud and hence is a precious resource that needs protecting. We therefore use a allowlisting process via email using `HSMrequest@microsoft.com`.
+Dedicated HSM is not freely available for use as it is delivering hardware resources in the cloud and hence is a precious resource that needs protecting. We therefore use an allowlisting process via email using HSMrequest@microsoft.com.
### Getting access to Dedicated HSM
Only when fully finished with an HSM can it be deprovisioned and then Microsoft
### How to delete an HSM resource
-The Azure resource for an HSM cannot be deleted unless the HSM is in a "zeroized" state. Hence, all key material must have been deleted prior to trying to delete it as a resource. The quickest way to zeroize is to get the HSM admin password wrong 3 times (note: this refers to the HSM admin and not appliance level admin). The Luna shell does have a `hsm -factoryreset` command that zeroizes but it can only be executed via console on the serial port and customers do not have access to this.
+**DO NOT DELETE the resource group of your Dedicated HSM directly. Doing so does not delete the HSM resource; you will continue to be billed, because it places the HSM into an orphaned state. If you did not follow the correct procedure and end up in this situation, contact Microsoft Support.**
+
+**Step 1** Zeroize the HSM. The Azure resource for an HSM cannot be deleted unless the HSM is in a "zeroized" state. Hence, all key material must have been deleted prior to trying to delete it as a resource. The quickest way to zeroize is to get the HSM admin password wrong three times (note: this refers to the HSM admin, not the appliance-level admin). Use the command `hsm login` and enter the wrong password three times. The Luna shell does have an `hsm -factoryreset` command that zeroizes the HSM, but it can only be executed via console on the serial port, and customers do not have access to this.
+
+**Step 2** Once the HSM is zeroized, you can use either of the following commands to delete the Dedicated HSM resource:
+> **Azure CLI**: `az dedicated-hsm delete --resource-group <RG name> --name <HSM name>` <br />
+> **Azure PowerShell**: `Remove-AzDedicatedHsm -Name <HSM name> -ResourceGroupName <RG name>`
+
+**Step 3** Once step 2 is successful, you can delete the resource group to delete the other resources associated with the dedicated HSM by using either Azure CLI or Azure PowerShell.
> **Azure CLI**: `az group delete --name <RG name>` <br />
> **Azure PowerShell**: `Remove-AzResourceGroup -Name <RG name>`
## Next steps
defender-for-cloud Defender For Cloud Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-introduction.md
Title: What is Microsoft Defender for Cloud? description: Use Microsoft Defender for Cloud to protect your Azure, hybrid, and multicloud resources and workloads. -- Previously updated : 05/19/2022 Last updated : 06/15/2022 # What is Microsoft Defender for Cloud?
Microsoft Defender for Cloud is a Cloud Security Posture Management (CSPM) and C
For a step-by-step walkthrough of Defender for Cloud, check out this [interactive tutorial](https://mslearn.cloudguides.com/en-us/guides/Protect%20your%20multi-cloud%20environment%20with%20Microsoft%20Defender%20for%20Cloud).
+You can learn more about Defender for Cloud from a cybersecurity expert by watching [Lessons Learned from the Field](episode-six.md).
+ ## Protect your resources and track your security progress Microsoft Defender for Cloud's features covers the two broad pillars of cloud security: Cloud Workload Protection Platform (CWPP) and Cloud Security Posture Management (CSPM).
Use the advanced protection tiles in the [workload protections dashboard](worklo
## Learn More
-If you would like to learn more about Defender for Cloud from a cybersecurity expert, check out [Lessons Learned from the Field](episode-six.md).
- You can also check out the following blogs: - [A new name for multicloud security: Microsoft Defender for Cloud](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/a-new-name-for-multi-cloud-security-microsoft-defender-for-cloud/ba-p/2943020)
defender-for-cloud Defender For Containers Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-architecture.md
These components are required in order to receive the full protection offered by
- **[Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md)** - An agent based solution that connects your EKS clusters to Azure. Azure then is capable of providing services such as Defender, and Policy as [Arc extensions](../azure-arc/kubernetes/extensions.md). -- **The Defender extension** – The [DeamonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) that collects signals from hosts using [eBPF technology](https://ebpf.io/), and provides runtime protection. The extension is registered with a Log Analytics workspace, and used as a data pipeline. However, the audit log data isn't stored in the Log Analytics workspace.
+- **The Defender extension** – The [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) that collects signals from hosts using [eBPF technology](https://ebpf.io/), and provides runtime protection. The extension is registered with a Log Analytics workspace, and used as a data pipeline. However, the audit log data isn't stored in the Log Analytics workspace.
- **The Azure Policy extension** - The workload's configuration information is collected by the Azure Policy add-on. The Azure Policy add-on extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) admission controller webhook for [Open Policy Agent](https://www.openpolicyagent.org/). The extension registers as a web hook to Kubernetes admission control and makes it possible to apply at-scale enforcements, and safeguards on your clusters in a centralized, consistent manner. For more information, see [Understand Azure Policy for Kubernetes clusters](../governance/policy/concepts/policy-for-kubernetes.md).
These components are required in order to receive the full protection offered by
- **[Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md)** - An agent based solution that connects your EKS clusters to Azure. Azure then is capable of providing services such as Defender, and Policy as [Arc extensions](../azure-arc/kubernetes/extensions.md). -- **The Defender extension** – The [DeamonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) that collects signals from hosts using [eBPF technology](https://ebpf.io/), and provides runtime protection. The extension is registered with a Log Analytics workspace, and used as a data pipeline. However, the audit log data isn't stored in the Log Analytics workspace.
+- **The Defender extension** – The [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) that collects signals from hosts using [eBPF technology](https://ebpf.io/), and provides runtime protection. The extension is registered with a Log Analytics workspace, and used as a data pipeline. However, the audit log data isn't stored in the Log Analytics workspace.
- **The Azure Policy extension** - The workload's configuration information is collected by the Azure Policy add-on. The Azure Policy add-on extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) admission controller webhook for [Open Policy Agent](https://www.openpolicyagent.org/). The extension registers as a web hook to Kubernetes admission control and makes it possible to apply at-scale enforcements, and safeguards on your clusters in a centralized, consistent manner. For more information, see [Understand Azure Policy for Kubernetes clusters](../governance/policy/concepts/policy-for-kubernetes.md).
defender-for-cloud Defender For Containers Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-enable.md
Title: How to enable Microsoft Defender for Containers in Microsoft Defender for
description: Enable the container protections of Microsoft Defender for Containers zone_pivot_groups: k8s-host Previously updated : 05/26/2022 Last updated : 06/15/2022 # Enable Microsoft Defender for Containers
Defender for Containers protects your clusters whether they're running in:
Learn about this plan in [Overview of Microsoft Defender for Containers](defender-for-containers-introduction.md).
+You can learn more from the product manager by watching [Microsoft Defender for Containers in a multi-cloud environment](episode-nine.md).
+
+You can also watch [Protect Containers in GCP with Defender for Containers](episode-ten.md) to learn how to protect your containers.
+ ::: zone pivot="defender-for-container-arc,defender-for-container-eks,defender-for-container-gke" > [!NOTE] > Defender for Containers' support for Arc-enabled Kubernetes clusters, AWS EKS, and GCP GKE is a preview feature.
A full list of supported alerts is available in the [reference table of all Defe
## Learn More
-Learn more from the product manager about [Microsoft Defender for Containers in a multicloud environment](episode-nine.md).
-You can also learn how to [Protect Containers in GCP with Defender for Containers](episode-ten.md).
-
-You can also check out the following blogs:
+You can check out the following blogs:
- [Protect your Google Cloud workloads with Microsoft Defender for Cloud](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/protect-your-google-cloud-workloads-with-microsoft-defender-for/ba-p/3073360) - [Introducing Microsoft Defender for Containers](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/introducing-microsoft-defender-for-containers/ba-p/2952317)
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
Title: Container security with Microsoft Defender for Cloud description: Learn about Microsoft Defender for Containers-- Previously updated : 05/25/2022 Last updated : 06/15/2022 # Overview of Microsoft Defender for Containers
Microsoft Defender for Containers is the cloud-native solution for securing your
[How does Defender for Containers work in each Kubernetes platform?](defender-for-containers-architecture.md)
+You can learn more from the product manager about Microsoft Defender for Containers by watching [Microsoft Defender for Containers](episode-three.md).
+ ## Microsoft Defender for Containers plan availability | Aspect | Details |
No, AKS is a managed service, and manipulation of the IaaS resources isn't suppo
## Learn More
-If you would like to learn more from the product manager about Microsoft Defender for Containers, check out [Microsoft Defender for Containers](episode-three.md).
-
-You can also check out the following blogs:
+You can check out the following blogs:
- [How to demonstrate the new containers features in Microsoft Defender for Cloud](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/how-to-demonstrate-the-new-containers-features-in-microsoft/ba-p/3281172) - [Introducing Microsoft Defender for Containers](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/introducing-microsoft-defender-for-containers/ba-p/2952317)
defender-for-cloud Defender For Servers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-servers-introduction.md
Title: Microsoft Defender for Servers - the benefits and features description: Learn about the benefits and features of Microsoft Defender for Servers. Previously updated : 05/11/2022 Last updated : 06/15/2022 # Introduction to Microsoft Defender for Servers
To protect machines in hybrid and multicloud environments, Defender for Cloud us
> [!TIP] > For details of which Defender for Servers features are relevant for machines running on other cloud environments, see [Supported features for virtual machines and servers](supported-machines-endpoint-solutions-clouds-servers.md?tabs=features-windows#supported-features-for-virtual-machines-and-servers).
+You can learn more from the product manager about Defender for Servers by watching [Microsoft Defender for Servers](episode-five.md). You can also watch [Enhanced workload protection features in Defender for Servers](episode-twelve.md).
+ ## What are the Microsoft Defender for server plans? Microsoft Defender for Servers provides threat detection and advanced defenses to your Windows and Linux machines whether they're running in Azure, AWS, GCP, or on-premises. Microsoft Defender for Servers is available in two plans:
You can simulate alerts by downloading one of the following playbooks:
## Learn more
-If you would like to learn more from the product manager about Defender for Servers, check out [Microsoft Defender for Servers](episode-five.md). You can also learn about the [Enhanced workload protection features in Defender for Servers](episode-twelve.md).
-
-You can also check out the following blogs:
+You can check out the following blogs:
- [Security posture management and server protection for AWS and GCP are now generally available](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/security-posture-management-and-server-protection-for-aws-and/ba-p/3271388)
defender-for-cloud Defender For Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-introduction.md
Title: Microsoft Defender for Storage - the benefits and features description: Learn about the benefits and features of Microsoft Defender for Storage. Previously updated : 01/16/2022 Last updated : 06/16/2022 -- + # Introduction to Microsoft Defender for Storage **Microsoft Defender for Storage** is an Azure-native layer of security intelligence that detects unusual and potentially harmful attempts to access or exploit your storage accounts. It uses advanced threat detection capabilities and [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684) data to provide contextual security alerts. Those alerts also include steps to mitigate the detected threats and prevent future attacks.
Defender for Storage continually analyzes the telemetry stream generated by the
Analyzed telemetry of Azure Blob Storage includes operation types such as **Get Blob**, **Put Blob**, **Get Container ACL**, **List Blobs**, and **Get Blob Properties**. Examples of analyzed Azure Files operation types include **Get File**, **Create File**, **List Files**, **Get File Properties**, and **Put Range**.
-Defender for Storage doesn't access the Storage account data and has no impact on its performance.
+Defender for Storage doesn't access the Storage account data and has no impact on its performance.
+
+You can learn more from the product manager by watching [Defender for Storage in the field](episode-thirteen.md).
## Availability
defender-for-cloud Deploy Vulnerability Assessment Tvm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-tvm.md
Title: Use Microsoft Defender for Endpoint's threat and vulnerability management capabilities with Microsoft Defender for Cloud description: Enable, deploy, and use Microsoft Defender for Endpoint's threat and vulnerability management capabilities with Microsoft Defender for Cloud to discover weaknesses in your Azure and hybrid machines Previously updated : 05/11/2022 Last updated : 06/15/2022 # Investigate weaknesses with Microsoft Defender for Endpoint's threat and vulnerability management
For a quick overview of threat and vulnerability management, watch this video:
> [!TIP] > As well as alerting you to vulnerabilities, threat and vulnerability management provides additional functionality for Defender for Cloud's asset inventory tool. Learn more in [Software inventory](asset-inventory.md#access-a-software-inventory).
+You can also learn more from the product manager about security posture by watching [Microsoft Defender for Servers](episode-five.md).
## Availability
The findings for **all** vulnerability assessment tools are in the Defender for
## Learn more
-If you would like to learn more from the product manager about security posture, check out [Microsoft Defender for Servers](episode-five.md).
-
-You can also check out the following blogs:
+You can check out the following blogs:
- [Security posture management and server protection for AWS and GCP are now generally available](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/security-posture-management-and-server-protection-for-aws-and/ba-p/3271388) - [Microsoft Defender for Cloud Server Monitoring Dashboard](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-for-cloud-server-monitoring-dashboard/ba-p/2869658)
defender-for-cloud Episode Thirteen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-thirteen.md
+
+ Title: Defender for Storage
+description: Learn about the capabilities available in Defender for Storage.
+ Last updated : 06/16/2022++
+# Defender for Storage
+
+**Episode description**: In this episode of Defender for Cloud in the Field, Eitan Shteinberg joins Yuri Diogenes to talk about the threat landscape for Azure Storage and how Defender for Storage can help detect and mitigate these threats.
+
+ Eitan talks about different use case scenarios and best practices for deploying Defender for Storage, and demonstrates how to investigate an alert generated by Defender for Storage.
+
+<br>
+<br>
+<iframe src="https://aka.ms/docs/player?id=79f69cee-ae56-4ce3-9443-0f45e5c3ccf4" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+
+- [01:00](/shows/mdc-in-the-field/defender-for-storage#time=01m00s) - Current threats for Cloud Storage workloads
+
+- [07:00](/shows/mdc-in-the-field/defender-for-storage#time=07m00s) - Defender for Storage threat detections
+
+- [10:10](/shows/mdc-in-the-field/defender-for-storage#time=10m10s) - How Defender for Storage works after you enable it
+
+- [20:35](/shows/mdc-in-the-field/defender-for-storage#time=20m35s) - How to investigate a Defender for Storage Alert
+
+- [29:00](/shows/mdc-in-the-field/defender-for-storage#time=29m00s) - Best practices to enable Defender for Storage
+
+- [32:15](/shows/mdc-in-the-field/defender-for-storage#time=32m15s) - What's coming next
+
+## Recommended resources
+
+[Introduction to Microsoft Defender for Storage](defender-for-storage-introduction.md).
+
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqa0ZoTml2Qm9kZ2pjRzNMUXFqVUwyNl80YVNtd3xBQ3Jtc0trVm9QM2Z0NlpOeC1KSUE2UEd1cVJ5aHQ0MTN6WjJEYmNlOG9rWC1KZ1ZqaTNmcHdOOHMtWXRLSGhUTVBhQlhhYzlUc2xmTHZtaUpkd1c4LUQzLWt1YmRTbkVQVE5EcTJIM0Foc042SGdQZU5acVRJbw&q=https%3A%2F%2Faka.ms%2FSubscribeMicrosoftSecurity)
+
+- Follow us on social media:
+ [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ [Twitter](https://twitter.com/msftsecurity)
+
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+
+- For more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
defender-for-cloud Episode Twelve https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twelve.md
Introduce yourself to [Microsoft Defender for Servers](defender-for-servers-intr
## Next steps > [!div class="nextstepaction"]
-> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
+> [Defender for Storage](episode-thirteen.md)
defender-for-cloud Information Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/information-protection.md
Title: Prioritize security actions by data sensitivity - Microsoft Defender for Cloud description: Use Microsoft Purview's data sensitivity classifications in Microsoft Defender for Cloud Previously updated : 04/27/2022 Last updated : 06/15/2022 # Prioritize security actions by data sensitivity
Microsoft Defender for Cloud customers using Microsoft Purview can benefit from
This page explains the integration of Microsoft Purview's data sensitivity classification labels within Defender for Cloud.
+You can learn more from the product manager about Microsoft Defender for Cloud's [integration with Azure Purview](episode-two.md).
+ ## Availability |Aspect|Details| |-|:-|
A graph shows the number of recommendations and alerts by classified resource ty
## Learn more
-If you would like to learn more from the product manager about Microsoft Defender for Cloud's [integration with Azure Purview](episode-two.md).
-
-You can also check out the following blog:
+You can check out the following blog:
- [Secure sensitive data in your cloud resources](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/secure-sensitive-data-in-your-cloud-resources/ba-p/2918646).
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
Title: Connect your AWS account to Microsoft Defender for Cloud description: Defend your AWS resources with Microsoft Defender for Cloud Previously updated : 06/02/2022 Last updated : 06/15/2022 zone_pivot_groups: connect-aws-accounts
This screenshot shows AWS accounts displayed in Defender for Cloud's [overview d
:::image type="content" source="./media/quickstart-onboard-aws/aws-account-in-overview.png" alt-text="Four AWS projects listed on Defender for Cloud's overview dashboard" lightbox="./media/quickstart-onboard-aws/aws-account-in-overview.png":::
+You can learn more from the product manager about Microsoft Defender for Cloud's new AWS connector by watching [New AWS connector](episode-one.md).
+ ::: zone pivot="env-settings"
For other operating systems, the SSM Agent should be installed manually using th
## Learn more
-If you would like to learn more from the product manager about Microsoft Defender for Cloud's new AWS connector check out [Microsoft Defender for Cloud in the Field](episode-one.md).
-
-You can also check out the following blogs:
+You can check out the following blogs:
- [Ignite 2021: Microsoft Defender for Cloud news](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/ignite-2021-microsoft-defender-for-cloud-news/ba-p/2882807). - [Custom assessments and standards in Microsoft Defender for Cloud for AWS workloads (Preview)](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/custom-assessments-and-standards-in-microsoft-defender-for-cloud/ba-p/3066575).
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 05/30/2022 Last updated : 06/16/2022 # What's new in Microsoft Defender for Cloud?
Updates in June include:
- [Drive implementation of security recommendations to enhance your security posture](#drive-implementation-of-security-recommendations-to-enhance-your-security-posture) - [Filter security alerts by IP address](#filter-security-alerts-by-ip-address) - [General availability (GA) of Defender for SQL on machines for AWS and GCP environments](#general-availability-ga-of-defender-for-sql-on-machines-for-aws-and-gcp-environments)
+- [Alerts by resource group](#alerts-by-resource-group)
### Drive implementation of security recommendations to enhance your security posture
Using the multicloud onboarding experience, you can enable and enforce databases
Learn how to protect and connect your [AWS environment](quickstart-onboard-aws.md) and your [GCP organization](quickstart-onboard-gcp.md) with Microsoft Defender for Cloud.
+### Alerts by resource group
+
+The ability to filter, sort and group by resource group has been added to the Security alerts page.
+
+A resource group column has been added to the alerts grid.
++
+A new filter has been added which allows you to view all of the alerts for specific resource groups.
++
+You can now also group your alerts by resource group to view all of your alerts for each of your resource groups.
++ ## May 2022 Updates in May include:
defender-for-cloud Review Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/review-security-recommendations.md
Title: Improving your security posture with recommendations in Microsoft Defender for Cloud description: This document walks you through how to identify security recommendations that will help you improve your security posture. Previously updated : 05/23/2022 Last updated : 06/15/2022 # Find recommendations that can improve your security posture
To get to the list of recommendations:
You can search for specific recommendations by name. Use the search box and filters above the list of recommendations to find specific recommendations, and look at the [details of the recommendation](security-policy-concept.md#security-recommendation-details) to decide whether to [remediate it](implement-security-recommendations.md), [exempt resources](exempt-resource.md), or [disable the recommendation](tutorial-security-policy.md#disable-security-policies-and-disable-recommendations).
+You can learn more from the product manager about security posture by watching [Security posture management improvements](episode-four.md).
+ ## Finding recommendations with high impact on your secure score<a name="monitor-recommendations"></a> Your [secure score is calculated](secure-score-security-controls.md?branch=main#how-your-secure-score-is-calculated) based on the security recommendations that you have implemented. In order to increase your score and improve your security posture, you have to find recommendations with unhealthy resources and [remediate those recommendations](implement-security-recommendations.md).
When the report is ready, you'll be notified by a second pop-up.
## Learn more
-If you would like to learn more from the product manager about security posture, check out [Security posture management improvements](episode-four.md).
-
-You can also check out the following blogs:
+You can check out the following blogs:
- [Security posture management and server protection for AWS and GCP are now generally available](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/security-posture-management-and-server-protection-for-aws-and/ba-p/3271388) - [Custom assessments and standards in Microsoft Defender for Cloud for AWS workloads (Preview)](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/custom-assessments-and-standards-in-microsoft-defender-for-cloud/ba-p/3066575)
defender-for-iot Azure Iot Security Local Configuration Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/azure-iot-security-local-configuration-csharp.md
Title: Defender for IoT security agent local configuration (C#)
description: Learn more about the Defender for IoT security service, security agent local configuration file for C#. Previously updated : 03/28/2022 Last updated : 03/28/2022 # Understanding the local configuration file (C# agent) The Defender for IoT security agent uses configurations from a local configuration file.
-The security agent reads the configuration file once when the agent starts running. Configurations found in the local configuration file contain both authentication configuration and other agent related configurations.
+The security agent reads the configuration file once, when the agent starts running. Configurations found in the local configuration file contain both authentication configuration and other agent related configurations.
The C# security agent uses multiple configuration files:
For Windows:
| Configuration Name | Possible values | Details | |:--|:|:--|
-| agentId | GUID | Agent unique identifier |
-| readRemoteConfigurationTimeout | TimeSpan | Time period for fetching remote configuration from IoT Hub. If the agent can't fetch the configuration within the specified time, the operation will time out.|
-| schedulerInterval | TimeSpan | Internal scheduler interval. |
-| producerInterval | TimeSpan | Event producer worker interval. |
-| consumerInterval | TimeSpan | Event consumer worker interval. |
-| highPriorityQueueSizePercentage | 0 < number < 1 | The portion of total cache dedicated for high priority messages. |
-| logLevel | "Off", "Fatal", "Error", "Warning", "Information", "Debug" | Log messages equal and above this severity are logged to debug console (Syslog in Linux). |
-| fileLogLevel | "Off", "Fatal", "Error", "Warning", "Information", "Debug"| Log messages equal and above this severity are logged to file (Syslog in Linux). |
-| diagnosticVerbosityLevel | "None", "Some", "All", | Verbosity level of diagnostic events. None - diagnostic events are not sent. Some - Only diagnostic events with high importance are sent. All - all logs are also sent as diagnostic events. |
-| logFilePath | Path to file | If fileLogLevel > Off, logs are written to this file. |
-| defaultEventPriority | "High", "Low", "Off" | Default event priority. |
+| **agentId** | GUID | Agent unique identifier |
+| **readRemoteConfigurationTimeout** | TimeSpan | Time period for fetching remote configuration from IoT Hub. If the agent can't fetch the configuration within the specified time, the operation will time out.|
+| **schedulerInterval** | TimeSpan | Internal scheduler interval. |
+| **producerInterval** | TimeSpan | Event producer worker interval. |
+| **consumerInterval** | TimeSpan | Event consumer worker interval. |
+| **highPriorityQueueSizePercentage** | 0 < number < 1 | The portion of total cache dedicated for high priority messages. |
+| **logLevel** | "Off", "Fatal", "Error", "Warning", "Information", "Debug" | Log messages equal and above this severity are logged to debug console (Syslog in Linux). |
+| **fileLogLevel** | "Off", "Fatal", "Error", "Warning", "Information", "Debug"| Log messages equal and above this severity are logged to file (Syslog in Linux). |
+| **diagnosticVerbosityLevel** | "None", "Some", "All", | Verbosity level of diagnostic events. None - diagnostic events are not sent. Some - Only diagnostic events with high importance are sent. All - all logs are also sent as diagnostic events. |
+| **logFilePath** | Path to file | If fileLogLevel > Off, logs are written to this file. |
+| **defaultEventPriority** | "High", "Low", "Off" | Default event priority. |
### General.config example
For Windows:
| Configuration name | Possible values | Details | |:--|:|:--|
-| moduleName | string | Name of the Defender-IoT-micro-agent identity. This name must correspond to the module identity name in the device. |
-| deviceId | string | ID of the device (as registered in Azure IoT Hub). |
-| schedulerInterval | TimeSpan string | Internal scheduler interval. |
-| gatewayHostname | string | Host name of the Azure Iot Hub. Usually \<my-hub\>.azure-devices.net |
-| filePath | string - path to file | Path to the file that contains the authentication secret.|
-| type | "SymmetricKey", "SelfSignedCertificate" | The user secret for authentication. Choose *SymmetricKey* if the user secret is a Symmetric key, choose *self-signed certificate* if the secret is a self-signed certificate. |
-| identity | "DPS", "Module", "Device" | Authentication identity - DPS if authentication is made through DPS, Module if authentication is made using module credentials, or device if authentication is made using device credentials.
-| certificateLocationKind | "LocalFile", "Store" | LocalFile if the certificate is stored in a file, store if the certificate is located in a certificate store. |
-| idScope | string | ID scope of DPS |
-| registrationId | string | DPS device registration ID. |
-|
+| **moduleName** | string | Name of the Defender-IoT-micro-agent identity. This name must correspond to the module identity name in the device. |
+| **deviceId** | string | ID of the device (as registered in Azure IoT Hub). |
+| **schedulerInterval** | TimeSpan string | Internal scheduler interval. |
+| **gatewayHostname** | string | Host name of the Azure IoT Hub. Usually \<my-hub\>.azure-devices.net |
+| **filePath** | string - path to file | Path to the file that contains the authentication secret.|
+| **type** | "SymmetricKey", "SelfSignedCertificate" | The user secret for authentication. Choose *SymmetricKey* if the user secret is a Symmetric key, choose *self-signed certificate* if the secret is a self-signed certificate. |
+| **identity** | "DPS", "Module", "Device" | Authentication identity - DPS if authentication is made through DPS, Module if authentication is made using module credentials, or device if authentication is made using device credentials.
+| **certificateLocationKind** | "LocalFile", "Store" | LocalFile if the certificate is stored in a file, store if the certificate is located in a certificate store. |
+| **idScope** | string | ID scope of DPS |
+| **registrationId** | string | DPS device registration ID. |
+ ### Authentication.config example
For Windows:
| Configuration Name | Possible values | Details | |:--|:|:--|
-| transportType | "Ampq" "Mqtt" | IoT Hub transport type. |
-|
+| **transportType** | "Amqp" "Mqtt" | IoT Hub transport type. |
+ ### SecurityIotInterface.config example
defender-for-iot How To Azure Rtos Security Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/how-to-azure-rtos-security-module.md
Last updated 11/09/2021
This article describes how to configure the Defender-IoT-micro-agent for your Azure RTOS device, to meet your network, bandwidth, and memory requirements.
+## Configuration steps
+ You must select a target distribution file that has a `*.dist` extension, from the `netxduo/addons/azure_iot/azure_iot_security_module/configs` directory. In a CMake compilation environment, you must set the command-line parameter `IOT_SECURITY_MODULE_DIST_TARGET` to the chosen value. For example, `-DIOT_SECURITY_MODULE_DIST_TARGET=RTOS_BASE`. In IAR, or another non-CMake compilation environment, you must add the `netxduo/addons/azure_iot/azure_iot_security_module/inc/configs/<target distribution>/` path to your include paths. For example, `netxduo/addons/azure_iot/azure_iot_security_module/inc/configs/RTOS_BASE`.
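+As an illustrative sketch (the `build` directory name and the in-source configure step are assumptions about your project layout), the CMake configure and build commands might look like:
+
+```bash
+# Configure the build with the RTOS_BASE target distribution of the security module.
+cmake -Bbuild -DIOT_SECURITY_MODULE_DIST_TARGET=RTOS_BASE .
+
+# Build with the selected configuration.
+cmake --build build
+```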
+## Device behavior
+ Use the following file to configure your device behavior. **netxduo/addons/azure_iot/azure_iot_security_module/inc/configs/\<target distribution>/asc_config.h**
In a CMake compilation environment, you must change the default configuration by
The default behavior of each configuration is provided in the following tables:
-## General
+## General configuration
| Name | Type | Default | Details | | - | - | - | - |
The default behavior of each configuration is provided in the following tables:
| ASC_SECURITY_MODULE_SEND_MESSAGE_RETRY_TIME | Number | 3 | The amount of time, in seconds, the Defender-IoT-micro-agent waits before resending a security message after a failure. | | ASC_SECURITY_MODULE_PENDING_TIME | Number | 300 | The Defender-IoT-micro-agent pending time (in seconds). The state changes to suspend if the time is exceeded. |
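For example, a custom target distribution might override these defaults in its `asc_config.h`. The values below are illustrative assumptions, not recommended settings:

```c
/* Hypothetical excerpt of a custom asc_config.h target distribution. */

/* Wait 10 seconds (instead of the default 3) before resending a failed security message. */
#define ASC_SECURITY_MODULE_SEND_MESSAGE_RETRY_TIME 10

/* Allow 600 seconds (instead of the default 300) before the agent state changes to suspend. */
#define ASC_SECURITY_MODULE_PENDING_TIME 600
```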
-## Collection
+## Collection configuration
| Name | Type | Default | Details | | - | - | - | - |
defender-for-iot How To Investigate Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/how-to-investigate-device.md
In this guide, use the investigation suggestions provided to help determine the
> [!div class="checklist"] > * Find your device data
-> * Investigate using kql queries
+> * Investigate using KQL queries
> [!NOTE] > The Microsoft Defender for IoT legacy experience under IoT Hub has been replaced by our new Defender for IoT standalone experience, in the Defender for IoT area of the Azure portal. The legacy experience under IoT Hub will not be supported after **March 31, 2023**.
Following configuration, do the following to access data stored in your Log Anal
To view insights and raw data about your IoT devices, go to your Log Analytics workspace [to access your data](#how-can-i-access-my-data).
-See the sample kql queries below to get started with investigating alerts and activities on your device.
+See the sample KQL queries below to get started with investigating alerts and activities on your device.
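As a starting point, here is a minimal KQL sketch of the pattern the samples follow. `SecurityAlert` and its columns are standard Log Analytics names, but treat the exact table and column choices as assumptions to adjust for your workspace schema:

```kusto
// Hypothetical starter query: recent alerts that mention a given device.
let device = "YOUR_DEVICE_ID";
SecurityAlert
| where TimeGenerated > ago(1d)
| where ExtendedProperties has device
| project TimeGenerated, AlertName, AlertSeverity
```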
### Related alerts
-You can find out if other alerts were triggered around the same time through the following kql query:
+You can find out if other alerts were triggered around the same time through the following KQL query:
``` let device = "YOUR_DEVICE_ID";
You can find out if other alerts were triggered around the same time through the
### Users with access
-To find out which users have access to this device use the following kql query:
+To find out which users have access to this device, use the following KQL query:
``` let device = "YOUR_DEVICE_ID";
Use this data to discover:
### Open ports
-To find out which ports in the device are currently in use or were used, use the following kql query:
+To find out which ports on the device are currently in use or were used, use the following KQL query:
``` let device = "YOUR_DEVICE_ID";
Use this data to discover:
### User logins
-To find users that logged into the device use the following kql query:
+To find users that logged into the device, use the following KQL query:
``` let device = "YOUR_DEVICE_ID";
Use the query results to discover:
### Process list
-To find out if the process list is as expected, use the following kql query:
+To find out if the process list is as expected, use the following KQL query:
``` let device = "YOUR_DEVICE_ID";
defender-for-iot How To Region Move https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/how-to-region-move.md
There are various scenarios for moving an existing resource from one region to a
You can move a Microsoft Defender for IoT "iotsecuritysolutions" resource to a different Azure region. The "iotsecuritysolutions" resource is a hidden resource that is connected to a specific IoT hub resource that is used to enable security on the hub. Learn how to [configure, and create](/azure/templates/microsoft.security/iotsecuritysolutions?tabs=bicep) this resource.
-## Prerequisites
+## Resource prerequisites
- Make sure that the resource is in the Azure region that you want to move from.
You can move a Microsoft Defender for IoT "iotsecuritysolutions" resource to a d
- Make sure that your subscription has enough resources to support the addition of resources for this process. For more information, see [Azure subscription and service limits, quotas, and constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md#networking-limits)
-## Prepare
+## Alert preparation
In this section, you'll prepare for the move by finding the resource and confirming that it's in the region you want to move from.
Before transitioning the resource to the new region, we recommend using [log a
:::image type="content" source="media/region-move/location.png" alt-text="Screenshot showing you the region your hub is located in.":::
-## Move
+## Moving IoT Hub
You're now ready to move your resource to your new location. Follow [these instructions](../../iot-hub/iot-hub-how-to-clone.md) to move your IoT Hub. After transferring, and enabling the resource, you can link to the same log analytics workspace that was configured earlier.
-## Verify
+## Resource verification
In this section, you'll verify that the resource has been moved, that the connection to the IoT Hub has been enabled, and that everything is working correctly.
In this tutorial, you moved an Azure resource from one region to another and cle
- Learn more about [Moving your resources to a new resource group or subscription.](../../azure-resource-manager/management/move-resource-group-and-subscription.md). -- Learn how to [move VMs to another Azure region](../../site-recovery/azure-to-azure-tutorial-migrate.md).
+- Learn how to [move VMs to another Azure region](../../site-recovery/azure-to-azure-tutorial-migrate.md).
defender-for-iot References Defender For Iot Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/references-defender-for-iot-glossary.md
# Defender for IoT glossary for device builder
-This glossary provides a brief description of important terms and concepts for the Microsoft Defender for IoT platform. Select the **Learn more** links to go to related terms in the glossary. This will help you more quickly learn and use product tools.
+This glossary provides a brief description of important terms and concepts for the Microsoft Defender for IoT platform. Select the **Learn more** links to go to related terms in the glossary. This will help you to learn and use the product tools quickly and effectively.
<a name="glossary-a"></a>
-## A
-
-## B
-
-## C
-
-## D
-
+## D
| Term | Description | Learn more | |--|--|--|
-| **Device twins** | Device twins are JSON documents that store device state information including metadata, configurations, and conditions. | [Module Twin](#m) <br /> <br />[Defender-IoT-micro-agent twin](#s) |
+| **Device twins** | Device twins are JSON documents that store device state information including metadata, configurations, and conditions. | [Module Twin](#m) |
| **Defender-IoT-micro-agent twin** `(DB)` | The Defender-IoT-micro-agent twin holds all of the information that is relevant to device security, for each specific device in your solution. | [Device twin](#d) <br /> <br />[Module Twin](#m) | | **Device inventory** | Defender for IoT identifies, and classifies devices as a single unique network device in the inventory for: <br><br> - Standalone IT, OT, and IoT devices with 1 or multiple NICs. <br><br> - Devices composed of multiple backplane components. This includes all racks, slots, and modules. <br><br> - Devices that act as network infrastructure. For example, switches, and routers with multiple NICs. <br><br> - Public internet IP addresses, multicast groups, and broadcast groups aren't considered inventory devices. <br><br>Devices that have been inactive for more than 60 days are classified as inactive Inventory devices.|
-## E
-
-## F
-
-## G
-
-## H
- ## I | Term | Description | Learn more | |--|--|--| | **IoT Hub** | Managed service, hosted in the cloud, that acts as a central message hub for bi-directional communication between your IoT application and the devices it manages. | |
-## L
- ## M - | Term | Description | Learn more | |--|--|--| | **Micro Agent** | Provides depth security capabilities for IoT devices including security posture and threat detection. | | | **Module twin** | Module twins are JSON documents that store module state information including metadata, configurations, and conditions. | [Device twins](#d) <br /> <br />[Defender-IoT-micro-agent twin](#d) |
-## N
-
-## O
-
-## P
-
-## R
-
-## S
-
-## Z
defender-for-iot Troubleshoot Defender Micro Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/troubleshoot-defender-micro-agent.md
Use the following command to verify that the Defender for IoT micro agent servic
```bash ps -aux | grep " defender-iot-micro-agent" ```
+The following sample result shows that the 'defender_iot_micro_agent' service is running with root privileges, indicated by the word 'root' highlighted in the red box.
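The check can also be scripted. A sketch using one canned line of `ps -aux` output (the sample line is hypothetical; on a real device you would pipe live `ps` output instead):

```shell
# Hypothetical single line of `ps -aux` output for the micro agent.
sample='root      1234  0.0  0.1  12345  678 ?  Ssl  10:00  0:01 /usr/bin/defender-iot-micro-agent'
# The first ps -aux column is the owning user; 'root' means root privileges.
agent_user=$(printf '%s\n' "$sample" | awk '{print $1}')
echo "$agent_user"
```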
:::image type="content" source="media/troubleshooting/root-privileges.png" alt-text="Verify the Defender for IoT micro agent service is running with root privileges."::: ## Review the logs
defender-for-iot Tutorial Create Micro Agent Module Twin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/tutorial-create-micro-agent-module-twin.md
A `DefenderIotMicroAgent` module twin can be created by manually editing each mo
:::image type="content" source="media/quickstart-create-micro-agent-module-twin/device-details-module.png" alt-text="Select module identities from the tab.":::
-## Clean up resources
-
-There are no resources to clean up.
- ## Next steps > [!div class="nextstepaction"]
defender-for-iot Tutorial Investigate Security Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/tutorial-investigate-security-alerts.md
You can access your alerts and investigate them with the Log Analytics workspace
:::image type="content" source="media/how-to-configure-agent-based-solution/log-analytic.png" alt-text="Screenshot that shows where to select to investigate in the log analytics workspace.":::
-## Clean up resources
-
-There are no resources to clean up.
- ## Next steps > [!div class="nextstepaction"]
-> Learn how to [integrate Microsoft Sentinel and Microsoft Defender for IoT](../../sentinel/iot-solution.md?bc=%2fazure%2fdefender-for-iot%2fbreadcrumb%2ftoc.json&tabs=use-out-of-the-box-analytics-rules-recommended&toc=%2fazure%2fdefender-for-iot%2forganizations%2ftoc.json)
+> Learn how to [integrate Microsoft Sentinel and Microsoft Defender for IoT](../../sentinel/iot-solution.md?bc=%2fazure%2fdefender-for-iot%2fbreadcrumb%2ftoc.json&tabs=use-out-of-the-box-analytics-rules-recommended&toc=%2fazure%2fdefender-for-iot%2forganizations%2ftoc.json)
defender-for-iot Dell Poweredge R340 Xl Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/dell-poweredge-r340-xl-legacy.md
Title: Dell PowerEdge R340 XL for OT monitoring (legacy) - Microsoft Defender for IoT
-description: Learn about the Dell PowerEdge R340 XL appliance in its legacy configuration when used for OT monitoring with Microsoft Defender for IoT in enterprise deployments.
+description: Learn about the Dell PowerEdge R340 XL appliance's legacy configuration when used for OT monitoring with Microsoft Defender for IoT in enterprise deployments.
Last updated 04/24/2022
This article describes the Dell PowerEdge R340 XL appliance, supported for OT sensors and on-premises management consoles.
-Legacy appliances are certified but aren't currently offered as pre-configured appliances.
+Legacy appliances are certified but aren't currently offered as preconfigured appliances.
|Appliance characteristic | Description|
To install the Dell PowerEdge R340XL appliance, you need:
### Configure the Dell BIOS
-The Dell appliance is managed by an integrated iDRAC with Lifecycle Controller (LC). The LC is embedded in every Dell PowerEdge server and provides functionality that helps you deploy, update, monitor, and maintain your Dell PowerEdge appliances.
+An integrated iDRAC with Lifecycle Controller (LC) manages the Dell appliance. The LC is embedded in every Dell PowerEdge server and provides functionality that helps you deploy, update, monitor, and maintain your Dell PowerEdge appliances.
To establish the communication between the Dell appliance and the management computer, you need to define the iDRAC IP address and the management computer's IP address on the same subnet.
When the connection is established, the BIOS is configurable.
This procedure describes how to update the Dell PowerEdge R340 XL configuration for your OT deployment.
-Configure the appliance BIOS only if you didn't purchase your appliance from Arrow, or if you have an appliance, but don't have access to the XML configuration file.
+Configure the appliance BIOS only if you didn't purchase your appliance from Arrow or if you have an appliance, but don't have access to the XML configuration file.
1. Access the appliance's BIOS directly by using a keyboard and screen, or use iDRAC.
- - If the appliance isn't a Defender for IoT appliance, open a browser and go to the IP address that was configured before. Sign in with the Dell default administrator privileges. Use **root** for the username and **calvin** for the password.
+ - If the appliance isn't a Defender for IoT appliance, open a browser and go to the IP address configured beforehand. Sign in with the Dell default administrator privileges. Use **root** for the username and **calvin** for the password.
- If the appliance is a Defender for IoT appliance, sign in by using **XXX** for the username and **XXX** for the password.
Configure the appliance BIOS only if you didn't purchase your appliance from Arr
This procedure describes how to install Defender for IoT software on the Dell PowerEdge R340 XL.
-The installation process takes about 20 minutes. After the installation, the system is restarted several times.
+The installation process takes about 20 minutes. After the installation, the system restarts several times.
**To install the software**:
defender-for-iot Hpe Edgeline El300 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-edgeline-el300.md
This article describes the HPE Edgeline EL300 appliance for OT sensors or on-premises management consoles.
-Legacy appliances are certified but aren't currently offered as pre-configured appliances.
+Legacy appliances are certified but aren't currently offered as preconfigured appliances.
| Appliance characteristic |Details |
defender-for-iot Hpe Proliant Dl360 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl360.md
The following image shows a view of the HPE ProLiant Dl360 back panel:
|Component |Specifications| |--|--|
-|Chassis |1U rack server |
-|Dimensions |Four 3.5" chassis: 4.29 x 43.46 x 70.7 cm / 1.69 x 17.11 x 27.83 in |
-|Weight | Max 16.72 kg / 35.86 lb |
-|Chassis |1U rack server|
-|Dimensions| 42.9 x 43.46 x 70.7 cm / 1.69" x 17.11" x 27.83" in|
-|Weight| Max 16.27 kg / 35.86 lb |
-|Processor | 2x Intel Xeon Silver 4215 R 3.2 GHz 11M cache 8c/16T 130 W|
-|Chipset | Intel C621|
-|Memory | 32 GB = Two 16-GB 2666MT/s DDR4 ECC UDIMM|
-|Storage| Six 1.2-TB SAS 12G Enterprise 10K SFF (2.5 in) in hot-plug hard drive - RAID 5|
-|Network controller| On-board: Two 1 Gb <br> On-board: iLO Port Card 1 Gb <br>External: One HPE Ethernet 1-Gb 4-port 366FLR adapter|
-|Management |HPE iLO Advanced |
-|Device access | Two rear USB 3.0 |
-|One front | USB 2.0 |
-|One internal |USB 3.0 |
-|Power |Two HPE 500-W flex slot platinum hot plug low halogen power supply kit
-|Rack support | HPE 1U Gen10 SFF easy install rail kit |
+|**Chassis** |1U rack server |
+|**Dimensions** |Four 3.5" chassis: 4.29 x 43.46 x 70.7 cm / 1.69 x 17.11 x 27.83 in |
+|**Weight** | Max 16.27 kg / 35.86 lb |
+|**Processor** | 2x Intel Xeon Silver 4215 R 3.2 GHz 11M cache 8c/16T 130 W|
+|**Chipset** | Intel C621|
+|**Memory** | 32 GB = Two 16-GB 2666MT/s DDR4 ECC UDIMM|
+|**Storage**| Six 1.2-TB SAS 12G Enterprise 10K SFF (2.5 in) in hot-plug hard drive - RAID 5|
+|**Network controller**| On-board: Two 1 Gb <br> On-board: iLO Port Card 1 Gb <br>External: One HPE Ethernet 1-Gb 4-port 366FLR adapter|
+|**Management** |HPE iLO Advanced |
+|**Device access** | Two rear USB 3.0 |
+|**One front** | USB 2.0 |
+|**One internal** |USB 3.0 |
+|**Power** |Two HPE 500-W flex slot platinum hot plug low halogen power supply kit
+|**Rack support** | HPE 1U Gen10 SFF easy install rail kit |
## HPE DL360 BOM |PN |Description |Quantity| |--|--|--|
-|P19766-B21 | HPE DL360 Gen10 8SFF NC CTO Server |1|
-|P19766-B21 | Europe - Multilingual Localization |1|
-|P24479-L21 | Intel Xeon-S 4215 R FIO Kit for DL360 G10 |1|
-|P24479-B21 | Intel Xeon-S 4215 R Kit for DL360 Gen10 |1|
-|P00922-B21 | HPE 16-GB 2Rx8 PC4-2933Y-R Smart Kit |2|
-|872479-B21 | HPE 1.2-TB SAS 10K SFF SC DS HDD |6|
-|811546-B21 | HPE 1-GbE 4-p BASE-T I350 Adapter |1|
-|P02377-B21 | HPE Smart Hybrid Capacitor w_ 145 mm Cable |1|
-|804331-B21 | HPE Smart Array P408i-a SR Gen10 Controller |1|
-|665240-B21 | HPE 1-GbE 4-p FLR-T I350 Adapter |1|
-|871244-B21 | HPE DL360 Gen10 High Performance Fan Kit |1|
-|865408-B21 | HPE 500-W FS Plat Hot Plug LH Power Supply Kit |2|
-|512485-B21 | HPE iLO Adv 1-Server License 1 Year Support |1|
-|874543-B21 | HPE 1U Gen10 SFF Easy Install Rail Kit |1|
+|**P19766-B21** | HPE DL360 Gen10 8SFF NC CTO Server |1|
+|**P19766-B21** | Europe - Multilingual Localization |1|
+|**P24479-L21** | Intel Xeon-S 4215 R FIO Kit for DL360 G10 |1|
+|**P24479-B21** | Intel Xeon-S 4215 R Kit for DL360 Gen10 |1|
+|**P00922-B21** | HPE 16-GB 2Rx8 PC4-2933Y-R Smart Kit |2|
+|**872479-B21** | HPE 1.2-TB SAS 10K SFF SC DS HDD |6|
+|**811546-B21** | HPE 1-GbE 4-p BASE-T I350 Adapter |1|
+|**P02377-B21** | HPE Smart Hybrid Capacitor w_ 145 mm Cable |1|
+|**804331-B21** | HPE Smart Array P408i-a SR Gen10 Controller |1|
+|**665240-B21** | HPE 1-GbE 4-p FLR-T I350 Adapter |1|
+|**871244-B21** | HPE DL360 Gen10 High Performance Fan Kit |1|
+|**865408-B21** | HPE 500-W FS Plat Hot Plug LH Power Supply Kit |2|
+|**512485-B21** | HPE iLO Adv 1-Server License 1 Year Support |1|
+|**874543-B21** | HPE 1U Gen10 SFF Easy Install Rail Kit |1|
## Port expansion
Optional modules for port expansion include:
|Location |Type|Specifications| |--|--|--|
-| PCI Slot 1 (Low profile)| Quad Port Ethernet NIC| 811546-B21 - HPE 1 GbE 4p BASE-T I350 Adapter SI |
-| PCI Slot 1 (Low profile) | DP F/O NIC|727054-B21 - HPE 10 GbE 2p FLR-SFP+ X710 Adapter|
-| PCI Slot 2 (High profile)| Quad Port Ethernet NIC|811546-B21 - HPE 1 GbE 4p BASE-T I350 Adapter SI|
-| PCI Slot 2 (High profile)|DP F/O NIC| 727054-B21 - HPE 10 GbE 2p FLR-SFP+ X710 Adapter|
-| PCI Slot 2 (High profile)|Quad Port F/O NIC| 869585-B21 - HPE 10 GbE 4p SFP+ X710 Adapter SI|
-| SFPs for Fiber Optic NICs|MultiMode, Short Range|455883-B21 - HPE BLc 10G SFP+ SR Transceiver|
-| SFPs for Fiber Optic NICs|SingleMode, Long Range | 455886-B21 - HPE BLc 10G SFP+ LR Transceiver|
+| **PCI Slot 1 (Low profile)**| Quad Port Ethernet NIC| 811546-B21 - HPE 1 GbE 4p BASE-T I350 Adapter SI |
+| **PCI Slot 1 (Low profile)** | DP F/O NIC|727054-B21 - HPE 10 GbE 2p FLR-SFP+ X710 Adapter|
+|**PCI Slot 2 (High profile)**| Quad Port Ethernet NIC|811546-B21 - HPE 1 GbE 4p BASE-T I350 Adapter SI|
+| **PCI Slot 2 (High profile)**|DP F/O NIC| 727054-B21 - HPE 10 GbE 2p FLR-SFP+ X710 Adapter|
+| **PCI Slot 2 (High profile)**|Quad Port F/O NIC| 869585-B21 - HPE 10 GbE 4p SFP+ X710 Adapter SI|
+| **SFPs for Fiber Optic NICs**|MultiMode, Short Range|455883-B21 - HPE BLc 10G SFP+ SR Transceiver|
+|**SFPs for Fiber Optic NICs**|SingleMode, Long Range | 455886-B21 - HPE BLc 10G SFP+ LR Transceiver|
## HPE ProLiant DL360 installation
-This section describes how to install OT sensor software on the HPE ProLiant DL360 appliance, and includes adjusting the appliance's BIOS configuration.
+This section describes how to install OT sensor software on the HPE ProLiant DL360 appliance and includes adjusting the appliance's BIOS configuration.
During this procedure, you'll configure the iLO port. We recommend that you also change the default password provided for the administrative user.
defender-for-iot Neousys Nuvo 5006Lp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/neousys-nuvo-5006lp.md
In this image, numbers indicate the following components:
|Component|Technical Specifications| |:-|:-|
-|Construction|Aluminum, fanless and dust-proof design|
-|Dimensions|240 mm (W) x 225 mm (D) x 77 mm (H)|
-|Weight|3.1 kg (including CPU, memory, and HDD)|
-|CPU|Intel Core i5-6500TE (6M Cache, up to 3.30 GHz) S1151|
-|Chipset|Intel® Q170 Platform Controller Hub|
-|Memory|8 GB DDR4 2133 MHz Wide Temperature SODIMM|
-|Storage|128 GB 3ME3 Wide Temperature mSATA SSD|
-|Network controller|Six-Gigabit Ethernet ports by Intel® I219|
-|Device access|Four USBs: Two in front, two in the rear, and 1 internal|
-|Power Adapter|120/240VAC-20VDC/6A|
-|Mounting|Mounting kit, Din Rail|
-|Operating Temperature|-25┬░C - 70┬░C|
-|Storage Temperature|-40┬░C ~ 85┬░C|
-|Humidity|10%~90%, non-condensing|
-|Vibration|Operating, 5 Grms, 5-500 Hz, three Axes <br>(w/ SSD, according to IEC60068-2-64)|
-|Shock|Operating, 50 Grms, Half-sine 11 ms Duration <br>(w/ SSD, according to IEC60068-2-27)|
-|EMC|CE/FCC Class A, according to EN 55022, EN 55024 & EN 55032|
+|**Construction**|Aluminum, fanless and dust-proof design|
+|**Dimensions**|240 mm (W) x 225 mm (D) x 77 mm (H)|
+|**Weight**|3.1 kg (including CPU, memory, and HDD)|
+|**CPU**|Intel Core i5-6500TE (6M Cache, up to 3.30 GHz) S1151|
+|**Chipset**|Intel® Q170 Platform Controller Hub|
+|**Memory**|8 GB DDR4 2133 MHz Wide Temperature SODIMM|
+|**Storage**|128 GB 3ME3 Wide Temperature mSATA SSD|
+|**Network controller**|Six Gigabit Ethernet ports by Intel® I219|
+|**Device access**|Four USBs: Two in front, two in the rear, and 1 internal|
+|**Power Adapter**|120/240VAC-20VDC/6A|
+|**Mounting**|Mounting kit, Din Rail|
+|**Operating Temperature**|-25°C to 70°C|
+|**Storage Temperature**|-40°C to 85°C|
+|**Humidity**|10%~90%, non-condensing|
+|**Vibration**|Operating, 5 Grms, 5-500 Hz, three Axes <br>(w/ SSD, according to IEC60068-2-64)|
+|**Shock**|Operating, 50 Grms, Half-sine 11 ms Duration <br>(w/ SSD, according to IEC60068-2-27)|
+|**EMC**|CE/FCC Class A, according to EN 55022, EN 55024 & EN 55032|
## Nuvo 5006LP sensor installation
-This section describes how to install OT sensor software on the Nuvo 5006LP appliance. Before you install the OT sensor software, you must adjust the appliance's BIOS configuration.
+This section describes how to install OT sensor software on the Nuvo 5006LP appliance. Before installing the OT sensor software, you must adjust the appliance's BIOS configuration.
> [!NOTE] > Installation procedures are only relevant if you need to re-install software on a preconfigured device, or if you buy your own hardware and configure the appliance yourself.
This section describes how to install OT sensor software on the Nuvo 5006LP appl
### Prerequisites
-Before you start installing OT sensor software, or updating the BIOS configuration, make sure that the operating system is installed on the appliance.
+Before installing OT sensor software, or updating the BIOS configuration, make sure that the operating system is installed on the appliance.
### Configure the Nuvo 5006LP BIOS
This procedure describes how to install OT sensor software on the Nuvo 5006LP. T
1. Accept the settings and continue by entering `Y`.
-After approximately 10 minutes, sign-in credentials are automatically generated. Save the username and passwords, you'll need these credentials to access the platform the first time you use it.
+After approximately 10 minutes, sign-in credentials are automatically generated. Save the username and password; you'll need these credentials to access the platform the first time you use it.
## Next steps
defender-for-iot Virtual Sensor Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/virtual-sensor-hyper-v.md
This procedure describes how to create a virtual machine by using Hyper-V.
While a virtual switch doesn't have mirroring capabilities, you can use *Promiscuous mode* in a virtual switch environment as a workaround for configuring a SPAN port.
-*Promiscuous mode* is a mode of operation and a security, monitoring, and administration technique that is defined at the virtual switch or portgroup level. When promiscuous mode is used, any of the virtual machineΓÇÖs network interfaces that are in the same portgroup can view all network traffic that goes through that virtual switch. By default, promiscuous mode is turned off.
+*Promiscuous mode* is a mode of operation and a security, monitoring, and administration technique that is defined at the virtual switch or portgroup level. When promiscuous mode is used, any of the virtual machine's network interfaces in the same portgroup can view all network traffic that goes through that virtual switch. By default, promiscuous mode is turned off.
For more information, see [Purdue reference model and Defender for IoT](../plan-network-monitoring.md#purdue-reference-model-and-defender-for-iot).
You are able to attach a SPAN Virtual Interface to the Virtual Switch through Wi
| Parameter | Description | |--|--|
- | VK-C1000V-LongRunning-650 | CPPM VA name |
- |vSwitch_Span |Newly added SPAN virtual switch name |
- |Monitor |Newly added adapter name |
+ |**VK-C1000V-LongRunning-650** | CPPM VA name |
+ |**vSwitch_Span** |Newly added SPAN virtual switch name |
+ |**Monitor** |Newly added adapter name |
1. Select **OK**.
Add-VMSwitchExtensionPortFeature -ExternalPort -SwitchName vSwitch_Span -VMSwitc
| Parameter | Description | |--|--|
-| vSwitch_Span | Newly added SPAN virtual switch name. |
-| MonitorMode=2 | Source |
-| MonitorMode=1 | Destination |
-| MonitorMode=0 | None |
+|**vSwitch_Span** | Newly added SPAN virtual switch name. |
+|**MonitorMode=2** | Source |
+|**MonitorMode=1** | Destination |
+|**MonitorMode=0** | None |
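Putting the table together with the truncated command above, a hedged PowerShell sketch of setting the SPAN switch's external port to source mode (`MonitorMode=2`). The cmdlets and feature name follow the standard Hyper-V port-mirroring pattern, but verify them against your Hyper-V version:

```powershell
# Get the port-security feature template (standard Hyper-V extension feature).
$portFeature = Get-VMSystemSwitchExtensionPortFeature -FeatureName "Ethernet Switch Port Security Settings"

# 2 = source, 1 = destination, 0 = none (see the table above).
$portFeature.SettingData.MonitorMode = 2

# Apply it to the external port of the SPAN virtual switch.
Add-VMSwitchExtensionPortFeature -ExternalPort -SwitchName vSwitch_Span -VMSwitchExtensionFeature $portFeature
```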
Use the following PowerShell command to verify the monitoring mode status:
Get-VMSwitchExtensionPortFeature -FeatureName "Ethernet Switch Port Security Set
``` | Parameter | Description | |--|--|
-| vSwitch_Span | Newly added SPAN virtual switch name |
+|**vSwitch_Span** | Newly added SPAN virtual switch name |
## Next steps
defender-for-iot How To Accelerate Alert Incident Response https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-accelerate-alert-incident-response.md
# Accelerate alert workflows
-This article describes how to accelerate alert workflows by using alert comments, alert groups, and custom alert rules for standard protocols and proprietary protocols in Microsoft Defender for IoT. These tools help you
+This article describes how to accelerate alert workflows using alert comments, alert groups, and custom alert rules for standard and proprietary protocols in Microsoft Defender for IoT. These tools help you:
- Analyze and manage the large volume of alert events detected in your network.
This article describes how to accelerate alert workflows by using alert comments
## Accelerate incident workflows by using alert comments
-Work with alert comments to improve communication between individuals and teams during the investigation of an alert event.
+Work with alert comments to improve communication between individuals and teams while investigating an alert event.
Use alert comments to improve:
Use alert comments to improve:
- **Workflow guidance**: Provide recommendations, insights, or warnings about the event.
-The list of available options appears in each alert. Users can select one or several messages.
+The list of available options appears in each alert, and users can select one or several messages.
**To add alert comments:**
The list of available options appears in each alert. Users can select one or sev
## Accelerate incident workflows by using alert groups
-Alert groups let SOC teams view and filter alerts in their SIEM solutions and then manage these alerts based on enterprise security policies and business priorities. For example, alerts about new detections are organized in a discovery group. This group includes alerts that deal with the detection of new devices, new VLANs, new user accounts, new MAC addresses, and more.
+Alert groups let SOC teams view and filter alerts in their SIEM solutions and then manage these alerts based on enterprise security policies and business priorities. For example, alerts about new detections are organized in a discovery group. This group includes alerts that deal with detecting new devices, new VLANs, new user accounts, new MAC addresses, and more.
Alert groups are applied when you create forwarding rules for the following partner solutions:
Alert groups are predefined. For details about alerts associated with alert grou
## Customize alert rules
-Add custom alert rule to pinpoint specific activity as needed for your organization such as for specific protocols, source or destination addresses, or a combination of parameters.
+Add a custom alert rule to pinpoint specific activity needed by your organization, such as particular protocols, source or destination addresses, or a combination of parameters.
-For example, you might want to define an alert for an environment running MODBUS to detect any write commands to a memory register, on a specific IP address and ethernet destination. Another example would be an alert for any access to a specific IP address.
+For example, you might want to define an alert for an environment running MODBUS to detect any write commands to a memory register on a specific IP address and Ethernet destination. Another example would be an alert for any access to a particular IP address.
Use custom alert rule actions to instruct Defender for IoT to take specific action when the alert is triggered, such as allowing users to access PCAP files from the alert, assigning alert severity, or generating an event that shows in the event timeline. Alert messages indicate that the alert was generated from a custom alert rule.
defender-for-iot How To Investigate All Enterprise Sensor Detections In A Device Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md
Title: Learn about devices discovered by all sensors description: Use the device inventory in the on-premises management console to get a comprehensive view of device information from connected sensors. Use import, export, and filtering tools to manage this information. Previously updated : 11/09/2021 Last updated : 06/12/2022
You can use this information to learn, for example:
- Opened tickets for devices -- The last date when firmware was upgraded
+- The last date when the firmware was upgraded
- Devices allowed access to the internet
defender-for-iot How To Investigate Sensor Detections In A Device Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-investigate-sensor-detections-in-a-device-inventory.md
Title: Gain insight into devices discovered by a specific sensor description: The device inventory displays an extensive range of device attributes that a sensor detects. Previously updated : 03/09/2022 Last updated : 06/09/2022 # Investigate sensor detections in an inventory
-The device inventory displays an extensive range of device attributes that your sensor detects. Use the inventory to gain insight and full visibility into the devices on your network.
+The device inventory displays an extensive range of device attributes that your sensor detects. Use the inventory to gain insight and full visibility of the devices on your network.
:::image type="content" source="media/how-to-inventory-sensor/inventory-sensor.png" alt-text="Screenshot that shows the Device inventory main screen.":::
For more information, see [Devices monitored by Defender for IoT](architecture.m
## View device attributes in the inventory
-This section describes device details available from the inventory and describes how to work with inventory filters and view contextual information about each device.
+This section describes device details available from the inventory, how to work with inventory filters, and how to view contextual information about each device.
**To view the device inventory:**
The following columns are available for each device.
| Name | Description |
|--|--|
| **Description** | A description of the device |
-| **Discovered** | When this device was first seen in the network. |
+| **Discovered** | When this device was first seen on the network. |
| **Firmware version** | The device's firmware, if detected. |
| **FQDN** | The device's FQDN value |
| **FQDN lookup time** | The device's FQDN lookup time |
| **Groups** | The groups that this device participates in. |
| **IP Address** | The IP address of the device. |
-| **Is Authorized** | The authorization status defined by the user:<br />- **True**: The device has been authorized.<br />- **False**: The device hasn't been |
+| **Is Authorized** | The authorization status defined by the user:<br />- **True**: The device has been authorized.<br />- **False**: The device hasn't been authorized. |
| **Is Known as Scanner** | Defined as a network scanning device by the user. |
| **Is Programming device** | Defined as an authorized programming device by the user. <br />- **True**: The device performs programming activities for PLCs, RTUs, and controllers, which are relevant to engineering stations. <br />- **False**: The device isn't a programming device. |
| **Last Activity** | The last activity that the device performed. |
The following columns are available for each device.
**To view additional details:**

1. Select an alert from the inventory and then select **View full details** in the dialog box that opens.
-1. Navigate to additional information such as firmware details, and view contextual information such alerts related to the device, or a timeline of events associated with the device.
+1. Navigate to additional information such as firmware details, view contextual information such as alerts related to the device, or a timeline of events associated with the device.
## Filter the inventory
Certain device properties can be updated manually. Information manually entered
## Learn Windows registry details
-In addition to learning OT devices, you can discover Microsoft Windows workstations, and servers. These devices are also displayed in Device Inventory. After you learn devices, you can enrich the Device Inventory with detailed Windows information, such as:
+In addition to learning OT devices, you can discover Microsoft Windows workstations and servers. These devices are also displayed in the Device Inventory. After you learn devices, you can enrich the Device Inventory with detailed Windows information, such as:
- Windows version installed
To receive the script, [contact customer support](mailto:support.microsoft.com).
### Deploy the script
-You can deploy the script once or schedule ongoing queries by using standard automated deployment methods and tools.
+You can deploy the script once or schedule ongoing queries using standard automated deployment methods and tools.
### About the script
You can deploy the script once or schedule ongoing queries by using standard aut
Information learned on each endpoint should be imported to the sensor.
-Files generated from the queries can be placed in one folder that you can access from sensors. Use standard, automated methods and tools to move the files from each Windows endpoint to the location where you'll be importing them to the sensor.
+Files generated from the queries can be placed in one folder that you can access from the sensors. Use standard, automated methods and tools to move the files from each Windows endpoint to the location where you'll be importing them to the sensor.
Don't update file names.
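As a sketch of that workflow, the following Python snippet (hypothetical helper name and paths, not part of Defender for IoT) gathers the generated query files from per-endpoint folders into a single import folder while leaving file names unchanged:

```python
import shutil
from pathlib import Path

def collect_query_files(endpoint_dirs, import_dir):
    """Copy generated query files into one import folder.

    File names are preserved as-is, since the sensor import expects the
    original names generated by the query script.
    """
    target = Path(import_dir)
    target.mkdir(parents=True, exist_ok=True)
    copied = []
    for endpoint_dir in endpoint_dirs:
        for src in Path(endpoint_dir).iterdir():
            if src.is_file():
                shutil.copy2(src, target / src.name)  # keep the original name
                copied.append(src.name)
    return sorted(copied)
```

In practice, the file transfer itself is usually handled by whatever automated deployment tooling already runs the script; this sketch only shows the "one folder, unchanged names" requirement.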
You can filter the inventory to display devices that are inactive:
### Delete inactive devices
-Devices you delete from the Inventory are removed from the map and won't be calculated when generating Defender for IoT reports, for example Data Mining, Risk Assessment, and Attack Vector reports.
+Devices you delete from the Inventory are removed from the map and won't be calculated when generating Defender for IoT reports, for example, Data Mining, Risk Assessment, and Attack Vector reports.
-You'll be prompted to record a reason for deleting devices. This information, as well as the time/date and number of devices deleted, appears in the Event timeline.
+You'll be prompted to record a reason for deleting devices. This information, as well as the date/time and number of devices deleted, appears in the Event timeline.
**To delete inactive devices:**
defender-for-iot How To Manage Device Inventory For Organizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-device-inventory-for-organizations.md
The following table describes the device properties shown in the device inventor
| **Purdue level** | The Purdue level in which the device exists. |
| **Scanner** | Whether the device performs scanning-like activities in the network. |
| **Sensor** | The sensor the device is connected to. |
-| **Site** | The site that contains this device. |
+| **Site** | The site that contains this device. <br><br>All Enterprise IoT sensors are automatically added to the **Enterprise network** site.|
| **Slots** | The number of slots the device has. |
| **Subtype** | The subtype of the device, such as speaker and smart tv. <br>**Default**: `Managed Device` |
| **Tags** | Tagging data for each device. |
defender-for-iot How To Manage Sensors On The Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-sensors-on-the-cloud.md
This procedure describes how to use the Azure portal to contact vendors for pre-
## Onboard sensors
-Onboard a sensor by registering it with Microsoft Defender for IoT and downloading a sensor activation file.
+Onboard a sensor by registering it with Microsoft Defender for IoT. For OT sensors, you'll also need to download a sensor activation file.
-**Prerequisites**: Make sure that you've set up your sensor and configured your SPAN port or TAP. For more information, see [Defender for IoT installation](how-to-install-software.md).
+Select one of the following tabs, depending on the type of network you're working with.
+# [OT sensors](#tab/ot)
-**To onboard your sensor to Defender for IoT**:
+**Prerequisites**: Make sure that you've set up your sensor and configured your SPAN port or TAP.
-1. In the Azure portal, navigate to **Defender for IoT** > **Getting started** and select **Set up OT/ICS Security**. Alternately, from the Defender for IoT **Sites and sensors** page, select **Onboard OT sensor**.
+For more information, see [Activate and set up your sensor](how-to-activate-and-set-up-your-sensor.md) and [Defender for IoT installation](how-to-install-software.md), or our [Tutorial: Get started with Microsoft Defender for IoT for OT security](tutorial-onboarding.md).
+
+**To onboard your OT sensor to Defender for IoT**:
+
+1. In the Azure portal, navigate to **Defender for IoT** > **Getting started** and select **Set up OT/ICS Security**.
+
+ :::image type="content" source="media/tutorial-onboarding/onboard-a-sensor.png" alt-text="Screenshot of the Set up O T/I C S Security button on the Get started page.":::
+
+ Alternately, from the Defender for IoT **Sites and sensors** page, select **Onboard OT sensor**.
1. By default, on the **Set up OT/ICS Security** page, **Step 1: Did you set up a sensor?** and **Step 2: Configure SPAN port or TAP** of the wizard are collapsed. If you haven't completed these steps, do so before continuing.

1. In **Step 3: Register this sensor with Microsoft Defender for IoT**, enter or select the following values for your sensor:
- 1. In the **Sensor name** field, enter a meaningful name for your sensor. We recommend including your sensor's IP address as part of the name, or using another easily identifiable name, that can help you keep track between the registration name in the Azure portal and the IP address of the sensor shown in the sensor console.
+ 1. In the **Sensor name** field, enter a meaningful name for your sensor. We recommend including your sensor's IP address as part of the name, or using another easily identifiable name that can help you keep track between the registration name in the Azure portal and the IP address of the sensor shown in the sensor console.
1. In the **Subscription** field, select your Azure subscription.
Onboard a sensor by registering it with Microsoft Defender for IoT and downloadi
1. Select **Register**.
-A success message appears and your activation file is automatically downloaded, and your sensor is now shown under the configured site on the Defender for IoT **Sites and sensors** page.
+A success message appears and your activation file is automatically downloaded. Your sensor is now shown under the configured site on the Defender for IoT **Sites and sensors** page.
-However, until you activate your sensor, the sensor's status will show as **Pending Activation**.
+Until you activate your sensor, the sensor's status will show as **Pending Activation**.
Make the downloaded activation file accessible to the sensor console admin so that they can activate the sensor. For more information, see [Upload new activation files](how-to-manage-individual-sensors.md#upload-new-activation-files).
+# [Enterprise IoT sensors](#tab/eiot)
+
+**To set up an Enterprise IoT sensor**:
+
+1. Navigate to the [Azure portal](https://portal.azure.com#home).
+
+1. Select **Set up Enterprise IoT Security**.
+
+ :::image type="content" source="media/tutorial-get-started-eiot/onboard-sensor.png" alt-text="Screenshot of the Set up Enterprise I O T Security button on the Get started page.":::
+
+1. In the **Sensor name** field, enter a meaningful name for your sensor.
+
+1. From the **Subscription** drop-down menu, select the subscription where you want to add your sensor.
+
+1. Select **Register**. A **Sensor registration successful** screen shows your next steps and the command you'll need to start the sensor installation.
+
+ For example:
+
+ :::image type="content" source="media/tutorial-get-started-eiot/successful-registration.png" alt-text="Screenshot of the successful registration of an Enterprise I O T sensor.":::
+
+1. Copy the command to a safe location, and continue with installing the sensor. For more information, see [Install the sensor](tutorial-getting-started-eiot-sensor.md#install-the-sensor).
+
+> [!NOTE]
+> As opposed to OT sensors, where you define your sensor's site, all Enterprise IoT sensors are automatically added to the **Enterprise network** site.
++

## Manage on-boarded sensors

Sensors that you've on-boarded to Defender for IoT are listed on the Defender for IoT **Sites and sensors** page. This page supports the following management tasks:
defender-for-iot How To Set Up High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-set-up-high-availability.md
Title: Set up high availability description: Increase the resiliency of your Defender for IoT deployment by installing an on-premises management console high availability appliance. High availability deployments ensure your managed sensors continuously report to an active on-premises management console. Previously updated : 11/09/2021 Last updated : 06/12/2022 # About high availability
When a primary and secondary on-premises management console is paired:
## About failover and failback
-If a sensor can't connect to the primary on-premises management console, it automatically connects to the secondary. Your system will be supported by both the primary and secondary simultaneously, if less than half of the sensors are communicating with the secondary. The secondary takes over when more than half of the sensors are communicating with it. Fail over from the primary to the secondary takes approximately three minutes. When the failover occurs, the primary on-premises management console freezes. When this happens, you can sign in to the secondary using the same sign-in credentials.
+If a sensor can't connect to the primary on-premises management console, it automatically connects to the secondary. Your system will be supported by both the primary and secondary simultaneously, if less than half of the sensors are communicating with the secondary. The secondary takes over when more than half of the sensors are communicating with it. Failover from the primary to the secondary takes approximately three minutes. When the failover occurs, the primary on-premises management console freezes. When this happens, you can sign in to the secondary using the same sign-in credentials.
-During failover, sensors continue attempting to communicate with the primary appliance. When more than half the managed sensors succeed to communicate with the primary, the primary is restored. The following message appears on the secondary console when the primary is restored:
+During failover, sensors continue attempting to communicate with the primary appliance. When more than half the managed sensors succeed in communicating with the primary, the primary is restored. The following message appears on the secondary console when the primary is restored:
:::image type="content" source="media/how-to-set-up-high-availability/secondary-console-message.png" alt-text="Screenshot of a message that appears at the secondary console when the primary is restored.":::
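The majority rule described above can be sketched as a small decision function (a hypothetical illustration, not product code):

```python
def active_console(total_sensors: int, reporting_to_secondary: int) -> str:
    """Decide which on-premises management console is active.

    The secondary takes over only when more than half of the managed
    sensors are communicating with it; otherwise the primary stays active.
    """
    if total_sensors > 0 and reporting_to_secondary > total_sensors / 2:
        return "secondary"
    return "primary"
```

For example, with 10 managed sensors, 5 reporting to the secondary still leaves the primary active; a sixth sensor tips the majority and triggers failover.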
Verify that you've met the following high availability requirements:
### Network access requirements
-Verify if your organizational security policy allows you to have access to the following services on the primary and secondary on-premises management console. These services also allow the connection between the sensors and secondary on-premises management console:
+Verify that your organizational security policy allows access to the following services on the primary and secondary on-premises management console. These services also allow the connection between the sensors and secondary on-premises management console:
|Port|Service|Description|
|-|-|--|
The core application logs can be exported to the Defender for IoT support team t
## Update the on-premises management console with high availability
-To update an on-premises management console that has high availability configured, you will need to:
+To update an on-premises management console that has high availability configured, you'll need to:
1. Disconnect the high availability from both the primary and secondary appliances. 1. Update the appliances to the new version.
defender-for-iot How To View Information Per Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-view-information-per-zone.md
description: Use the on-premises management console to get a comprehensive view
Previously updated : 11/09/2021 Last updated : 06/12/2022
The following additional zone information is available:
- **Connectivity status**: If a sensor is disconnected, connect from the sensor. See [Connect sensors to the on-premises management console](how-to-activate-and-set-up-your-on-premises-management-console.md#connect-sensors-to-the-on-premises-management-console).

-- **Update progress**: If the connected sensor is being upgraded, upgrade statuses will appear. During upgrade, the on-premises management console doesn't receive device information from the sensor.
+- **Update progress**: If the connected sensor is being upgraded, upgrade statuses will appear. During the upgrade, the on-premises management console doesn't receive device information from the sensor.
## Next steps
defender-for-iot How To Work With Device Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-device-notifications.md
The following table describes the notification event types you might receive, al
| Type | Description | Responses |
|--|--|--|
| New IP detected | A new IP address is associated with the device. Five scenarios might be detected: <br /><br /> An additional IP address was associated with a device. This device is also associated with an existing MAC address.<br /><br /> A new IP address was detected for a device that's using an existing MAC address. Currently the device does not communicate by using an IP address.<br /> <br /> A new IP address was detected for a device that's using a NetBIOS name. <br /><br /> An IP address was detected as the management interface for a device associated with a MAC address. <br /><br /> A new IP address was detected for a device that's using a virtual IP address. | **Set Additional IP to Device** (merge devices) <br /> <br />**Replace Existing IP** <br /> <br /> **Dismiss**<br /> Remove the notification. |
-| Inactive devices | Traffic was not detected on a device for more than 60 days. | **Delete** <br /> If this device is not part of your network, remove it. <br /><br />**Dismiss** <br /> Remove the notification if the device is part of your network. If the device is inactive (for example, because it's mistakenly disconnected from the network), dismiss the notification and reconnect the device. |
-| New OT devices | A subnet includes an OT device that's not defined in an ICS subnet. <br /><br /> Each subnet that contains at least one OT device can be defined as an ICS subnet. This helps differentiate between OT and IT devices on the map. | **Set as ICS Subnet** <br /> <br /> **Dismiss** <br />Remove the notification if the device is not part of the subnet. |
+| Inactive devices | Traffic wasn't detected on a device for more than 60 days. | **Delete** <br /> If this device isn't part of your network, remove it. <br /><br />**Dismiss** <br /> Remove the notification if the device is part of your network. If the device is inactive (for example, because it's mistakenly disconnected from the network), dismiss the notification and reconnect the device. |
+| New OT devices | A subnet includes an OT device that's not defined in an ICS subnet. <br /><br /> Each subnet that contains at least one OT device can be defined as an ICS subnet. This helps differentiate between OT and IT devices on the map. | **Set as ICS Subnet** <br /> <br /> **Dismiss** <br />Remove the notification if the device isn't part of the subnet. |
| No subnets configured | No subnets are currently configured in your network. <br /><br /> Configure subnets for better representation in the map and the ability to differentiate between OT and IT devices. | **Open Subnets Configuration** and configure subnets. <br /><br />**Dismiss** <br /> Remove the notification. |
| Operating system changes | One or more new operating systems have been associated with the device. | Select the name of the new OS that you want to associate with the device.<br /><br /> **Dismiss** <br /> Remove the notification. |
| New subnets | New subnets were discovered. | **Learn**<br />Automatically add the subnet.<br />**Open Subnet Configuration**<br />Add all missing subnet information.<br />**Dismiss**<br />Remove the notification. |
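The 60-day inactivity rule behind the "Inactive devices" notification can be illustrated with a short sketch (hypothetical data shape; the real inventory export differs):

```python
from datetime import datetime, timedelta

def inactive_devices(devices, now, days=60):
    """Return names of devices with no detected traffic for more than `days` days."""
    cutoff = now - timedelta(days=days)
    return [d["name"] for d in devices if d["last_seen"] < cutoff]
```

A device dismissed from the notification but still silent would simply reappear in this filter on the next evaluation.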
defender-for-iot How To Work With The Sensor Device Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-the-sensor-device-map.md
The following basic search tools are available:
:::image type="icon" source="media/how-to-work-with-maps/search-bar-icon-v2.png" border="false":::
-When you search by IP or MAC address, the map displays the device that you searched for with devices connected to it.
+When you search by IP or MAC address, the map displays the device that you searched for with the devices connected to it.
:::image type="content" source="media/how-to-work-with-maps/search-ip-entered.png" alt-text="Screenshot of an I P address entered in the Device map search and displayed in the map.":::
The following predefined groups are available:
| **non-standard ports (default)** | Devices that use non-standard ports or ports that haven't been assigned an alias. |
| **OT protocols (default)** | Devices that handle known OT traffic. |
| **Authorization (default)** | Devices that were discovered in the network during the learning process or were officially authorized on the network. |
-| **Device inventory filters** | Devices grouped according to the filters save in the Device Inventory table. |
-| **Polling intervals** | Devices grouped by polling intervals. The polling intervals are generated automatically according to cyclic channels, or periods. For example, 15.0 seconds, 3.0 seconds, 1.5 seconds, or any interval. Reviewing this information helps you learn if systems are polling too quickly or slowly. |
+| **Device inventory filters** | Devices grouped according to the filters saved in the Device Inventory table. |
+| **Polling intervals** | Devices grouped by polling intervals. The polling intervals are generated automatically according to cyclic channels or periods. For example, 15.0 seconds, 3.0 seconds, 1.5 seconds, or any other interval. Reviewing this information helps you learn if systems are polling too quickly or slowly. |
| **Programming** | Engineering stations and programming machines. |
| **Subnets** | Devices that belong to a specific subnet. |
| **VLAN** | Devices associated with a specific VLAN ID. |
| **Cross subnet connections** | Devices that communicate from one subnet to another subnet. |
| **Attack vector simulations** | Vulnerable devices detected in attack vector reports. To view these devices on the map, select the **Display on Device Map** checkbox when generating the Attack Vector. :::image type="content" source="media/how-to-work-with-maps/add-attack-v3.png" alt-text="Screenshot of the Add Attack Vector Simulations":::|
-| **Last seen** | Devices grouped by the time frame they were last seen, for example: One hour, six hours, one day, seven days. |
+| **Last seen** | Devices grouped by the time frame they were last seen, for example: One hour, six hours, one day, or seven days. |
| **Not In Active Directory** | All non-PLC devices that aren't communicating with the Active Directory. |

For information about creating custom groups, see [Define custom groups](#define-custom-groups).
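As an illustration of the polling-interval grouping described above (hypothetical data; the sensor derives intervals automatically from observed cyclic traffic):

```python
from collections import defaultdict

def group_by_polling_interval(observations):
    """Group device names by their detected polling interval, in seconds."""
    groups = defaultdict(list)
    for device, interval in observations:
        groups[interval].append(device)
    return dict(groups)
```

Scanning such groups makes it easy to spot systems polling unusually quickly or slowly.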
For information about creating custom groups, see [Define custom groups](#define
| :::image type="icon" source="media/how-to-work-with-maps/fit-to-screen-icon.png" border="false"::: | Fit to screen. |
| :::image type="icon" source="media/how-to-work-with-maps/fit-to-selection-icon.png" border="false"::: | Fits a group of selected devices to the center of the screen. |
| :::image type="icon" source="media/how-to-work-with-maps/collapse-view-icon.png" border="false"::: | IT/OT presentation. Collapse view to enable a focused view on OT devices, and group IT devices. |
-|:::image type="icon" source="media/how-to-work-with-maps/layouts-icon-v2.png" border="false"::: | Layout options, including: <br />**Pin layout**. Drag devices in the map to a new location and use the Pin option to save those locations when you leave the map to use another option. <br />**Layout by connection**. View connections between devices. <br />**Layout by Purdue**. View the devices in the map according to Enterprise, supervisory and process control layers. <br /> |
+|:::image type="icon" source="media/how-to-work-with-maps/layouts-icon-v2.png" border="false"::: | Layout options, including: <br />**Pin layout**. Drag devices on the map to a new location. Use the Pin option to save those locations when you leave the map to use another option. <br />**Layout by connection**. View connections between devices. <br />**Layout by Purdue**. View the devices in the map according to Enterprise, supervisory and process control layers. <br /> |
| :::image type="icon" source="media/how-to-work-with-maps/zoom-in-icon-v2.png" border="false"::: :::image type="icon" source="media/how-to-work-with-maps/zoom-out-icon-v2.png" border="false"::: | Zoom in or out of the map. |

### Map zoom views
-Working with map views help expedite forensics when analyzing large networks.
+Working with map views helps expedite forensics when analyzing large networks.
Three device detail views can be displayed:
Overall connections are displayed.
### View IT subnets
-By default, IT devices are automatically aggregated by subnet, so that the map view is focused on OT and ICS networks. The presentation of the IT network elements is collapsed to a minimum, which reduces the total number of the devices presented on the map and provides a clear picture of the OT and ICS network elements.
+By default, IT devices are automatically aggregated by subnet, so that the map view is focused on OT and ICS networks. The presentation of the IT network elements is collapsed to a minimum, which reduces the total number of devices presented on the map and provides a clear picture of the OT and ICS network elements.
-Each subnet is presented as a single entity on the Device map. Options are available to expand subnets to see details; and collapse subnets or hide them.
+Each subnet is presented as a single entity on the Device map. Options are available to expand subnets to see details, collapse subnets, or hide them.
**To expand an IT subnet:**

1. Right-click the icon on the map that represents the IT network and select **Expand Network**.
This section describes device details.
| Item | Description |
|--|--|
-| Name | The device name. <br /> By default, the sensor discovers the device name as it defined in the network. For example, a name defined in the DNS server. <br /> If no such names were defined, the device IP address appears in this field. <br /> You can change a device name manually. Give your devices meaningful names that reflect their functionality. |
+| Name | The device name. <br /> By default, the sensor discovers the device name as it's defined in the network. For example, a name defined in the DNS server. <br /> If no such names were defined, the device IP address appears in this field. <br /> You can change a device name manually. Give your devices meaningful names that reflect their functionality. |
| Authorized status | Indicates if the device is authorized or not. During the Learning period, all the devices discovered in the network are identified as Authorized. When a device is discovered after the Learning period, it appears as Unauthorized by default. You can change this definition manually. For information on this status and manually authorizing and unauthorizing, see [Authorize and unauthorize devices](#authorize-and-unauthorize-devices). |
| Last seen | The last time the device was detected. |
| Alert | The number of open alerts associated with the device. |
-| Type | The device type detected by the sensor. |
+| Type | The device type as detected by the sensor. |
| Vendor | The device vendor. This is determined by the leading characters of the device MAC address. This field is read-only. |
| Operating System | The device OS detected by the sensor. |
| Location | The Purdue layer identified by the sensor for this device, including: <br /> - Automatic <br /> - Process Control <br /> - Supervisory <br /> - Enterprise |
| Description | A free text field. <br /> Add more information about the device. |
-| Attributes | Additional information was discovered on the device. For example, view the PLC Run and Key state, the secure status of the PLC, or information on when the state changed. <br /> The information is read only and cannot be updated from the Attributes section. |
+| Attributes | Additional information was discovered on the device. For example, view the PLC Run and Key state, the secure status of the PLC, or information on when the state changed. <br /> The information is read only and can't be updated from the Attributes section. |
| Scanner or Programming device | **Scanner**: Enable this option if you know that this device is known as a scanner and there's no need to alert you about it. <br /> **Programming Device**: Enable this option if you know that this device is known as a programming device and is used to make programming changes. Identifying it as a programming device will prevent alerts for programming changes originating from this asset. |
| Network Interfaces | The device interfaces. A RO field. |
| Protocols | The protocols used by the device. A RO field. |
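The table above notes that the vendor is derived from the leading characters of the MAC address (its OUI). A minimal sketch of that lookup, using a tiny illustrative table rather than the full IEEE registry:

```python
# Illustrative OUI table; a real lookup would use the full IEEE registry.
OUI_VENDORS = {
    "00:0C:29": "VMware",
    "00:50:56": "VMware",
}

def vendor_from_mac(mac: str) -> str:
    """Return the vendor for a MAC address, based on its first three octets."""
    oui = mac.upper().replace("-", ":")[:8]
    return OUI_VENDORS.get(oui, "Unknown")
```

Normalizing the separator and case first makes the prefix match robust to the common `aa-bb-cc` and `aa:bb:cc` notations.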
The device must be inactive for at least 10 minutes to delete it.
### Merge devices
-Under certain circumstances, you may need to merge devices. This may be required if the sensor discovered separate network entities that are associated with one unique device. For example,
+Under certain circumstances, you may need to merge devices. This may be required if the sensor discovered separate network entities that are associated with one unique device. For example,
- A PLC with four network cards.
Unauthorized devices are included in Risk Assessment reports and Attack Vectors
### Mark devices as important
-You can mark significant network devices as important, for example business critical servers. These devices are marked with a star on the map. The star varies according to the map's zoom level.
+You can mark significant network devices as important, for example, business critical servers. These devices are marked with a star on the map. The star varies according to the map's zoom level.
:::image type="icon" source="media/how-to-work-with-maps/star-one.png" border="false"::: :::image type="icon" source="media/how-to-work-with-maps/star-two.png" border="false"::: :::image type="icon" source="media/how-to-work-with-maps/star-3.png" border="false":::
defender-for-iot Tutorial Getting Started Eiot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-getting-started-eiot-sensor.md
A sensor is needed to discover and continuously monitor Enterprise IoT devices.
:::image type="content" source="media/tutorial-get-started-eiot/onboard-sensor.png" alt-text="On the Getting Started page select Onboard sensor.":::
-1. Enter a name for the sensor.
+1. In the **Sensor name** field, enter a meaningful name for your sensor.
- :::image type="content" source="media/tutorial-get-started-eiot/onboard-sensor-screen.png" alt-text="Enter the following information into the onboarding screen.":::
+1. From the **Subscription** drop-down menu, select the subscription where you want to add your sensor.
-1. Select a subscription from the drop-down menu.
+1. Select **Register**. A **Sensor registration successful** screen shows your next steps and the command you'll need to start the sensor installation.
-1. Enter a meaningful site name that will assist you in locating where the sensor is located.
+ For example:
-1. Enter a display name.
-
-1. Enter a zone name. If no name is entered, the name `default` will be applied.
+ :::image type="content" source="media/tutorial-get-started-eiot/successful-registration.png" alt-text="Screenshot of the successful registration of an Enterprise IoT sensor.":::
-1. Select **Set up**.
+1. Copy the command to a safe location, and continue [below](#install-the-sensor).
-1. Save the command provided to you.
-
- :::image type="content" source="media/tutorial-get-started-eiot/successful-registration.png" alt-text="Screenshot of the successful registration of an Enterprise IoT sensor.":::
## Install the sensor
Once you've validated your setup, the device inventory will start to populate wi
1. From the left side toolbar, select **Device inventory**.
-The device inventory is where you'll be able to view all of your device systems, and network information. Learn more about the device inventory see [Manage your IoT devices with the device inventory for organizations](how-to-manage-device-inventory-for-organizations.md#manage-your-iot-devices-with-the-device-inventory-for-organizations).
+The device inventory is where you'll be able to view all of your device systems, and network information.
+
+You can also view your sensors from the **Sites and sensors** page. Enterprise IoT sensors are all automatically added to the same site, named **Enterprise network**.
+
+For more information, see:
+
+- [Manage your IoT devices with the device inventory for organizations](how-to-manage-device-inventory-for-organizations.md)
+- [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md)
## Remove the sensor (optional)
defender-for-iot Tutorial Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-servicenow.md
Access to ServiceNow and Defender for IoT
- Defender for IoT patch 2.8.11.1 or above. > [!Note]
-> If you are already working with a Defender for IoT and ServiceNow integration, and upgrade using the on-premises management console, pervious data received from Defender for IoT sensors should be cleared from ServiceNow.
+> If you're already working with a Defender for IoT and ServiceNow integration and you upgrade using the on-premises management console, the previous data received from Defender for IoT sensors should be cleared from ServiceNow.
### Architecture
digital-twins How To Authenticate Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-authenticate-client.md
When writing the Azure function, consider adding these variables and code to you
Later, after publishing the function, you'll make sure the function's identity has permission to access the Azure Digital Twins APIs. For instructions on how to do so, skip ahead to [Assign an access role](#assign-an-access-role).
-* **A local variable _DigitalTwinsClient_.** Add the variable inside your function to hold your Azure Digital Twins client instance. *Don't* make this variable static inside your class.
+* **A local variable _DigitalTwinsClient_.** Add the variable inside your function to hold your Azure Digital Twins client instance. _Don't_ make this variable static inside your class.
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/adtIngestFunctionSample.cs" id="DigitalTwinsClient"::: * **A null check for _adtInstanceUrl_.** Add the null check and then wrap your function logic in a try/catch block to catch any exceptions.
digital-twins How To Ingest Iot Hub Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-ingest-iot-hub-data.md
description: Learn how to ingest device telemetry messages from Azure IoT Hub to digital twins in an instance of Azure Digital Twins. Previously updated : 02/22/2022 Last updated : 06/16/2022
This how-to outlines how to send messages from IoT Hub to Azure Digital Twins, u
Whenever a temperature telemetry event is sent by the thermostat device, a function processes the telemetry and the `Temperature` property of the digital twin should update. This scenario is outlined in a diagram below: ## Add a model and twin
You'll then need to create one twin using this model. Use the following command
az dt twin create --dt-name <instance-hostname-or-name> --dtmi "dtmi:contosocom:DigitalTwins:Thermostat;1" --twin-id thermostat67 --properties '{"Temperature": 0.0}' ```
->[!NOTE]
->If you're using anything other than Cloud Shell in the Bash environment, you may need to escape certain characters in the inline JSON so that it's parsed correctly.
->
->For more information, see [Use special characters in different shells](concepts-cli.md#use-special-characters-in-different-shells).
- When the twin is created successfully, the CLI output from the command should look something like this: ```json {
When the twin is created successfully, the CLI output from the command should lo
} ```
-## Create a function
+## Create the Azure function
In this section, you'll create an Azure function to access Azure Digital Twins and update twins based on IoT telemetry events that it receives. Follow the steps below to create and publish the function.
-1. First, create a new function app project in Visual Studio. For instructions on how to do so, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#create-an-azure-functions-project).
+1. First, create a new Azure Functions project in Visual Studio. For instructions on how to do so, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#create-an-azure-functions-project).
2. Add the following packages to your project: * [Azure.DigitalTwins.Core](https://www.nuget.org/packages/Azure.DigitalTwins.Core/)
In this section, you'll create an Azure function to access Azure Digital Twins a
4. Publish the project with the *IoTHubtoTwins.cs* function to a function app in Azure. For instructions on how to do so, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#publish-to-azure).
+Once the process of publishing the function completes, you can use this CLI command to verify that publishing was successful. There are placeholders for your resource group and the name of your function app. The command will print information about the *IoTHubToTwins* function.
-To access Azure Digital Twins, your function app needs a system-managed identity with permissions to access your Azure Digital Twins instance. You'll set that up next.
+```azurecli-interactive
+az functionapp function show --resource-group <your-resource-group> --name <your-function-app> --function-name IoTHubToTwins
+```
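If you want to check a single field rather than scan the full JSON, one option is to post-process the command output in the shell. A minimal sketch, using hypothetical captured output in place of a live `az` call:

```shell
# Hypothetical JSON captured from `az functionapp function show` (sketch only;
# in real use, pipe the live command output instead of this sample string).
output='{"name": "IoTHubToTwins", "isDisabled": false}'

# Pull out one field with a small Python one-liner (jq would also work).
name=$(printf '%s' "$output" | python3 -c 'import json,sys; print(json.load(sys.stdin)["name"])')
echo "$name"
```

Alternatively, `az` commands accept a `--query` JMESPath expression to return just the fields you care about.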
### Configure the function app
-Next, assign an access role for the function and configure the application settings so that it can access your Azure Digital Twins instance.
+To access Azure Digital Twins, your function app needs a system-managed identity with permissions to access your Azure Digital Twins instance. You'll set that up in this section, by assigning an access role for the function and configuring the application settings so that it can access your Azure Digital Twins instance.
-## Connect your function to IoT Hub
+## Connect the function to IoT Hub
In this section, you'll set up your function as an event destination for the IoT hub device data. Setting up your function in this way will ensure that the data from the thermostat device in IoT Hub will be sent to the Azure function for processing.
-In the [Azure portal](https://portal.azure.com/), navigate to your IoT Hub instance that you created in the [Prerequisites](#prerequisites) section. Under **Events**, create a subscription for your function.
+Use the following CLI command to create an event subscription that your IoT hub will use to send event data to the *IoTHubtoTwins* function. There's a placeholder for you to enter a name for the event subscription, and there are also placeholders for you to enter your subscription ID, resource group, IoT hub name, and the name of your function app.
+
+```azurecli-interactive
+az eventgrid event-subscription create --name <name-for-hub-event-subscription> --event-delivery-schema eventgridschema --source-resource-id /subscriptions/<your-subscription-ID>/resourceGroups/<your-resource-group>/providers/Microsoft.Devices/IotHubs/<your-IoT-hub> --included-event-types Microsoft.Devices.DeviceTelemetry --endpoint-type azurefunction --endpoint /subscriptions/<your-subscription-ID>/resourceGroups/<your-resource-group>/providers/Microsoft.Web/sites/<your-function-app>/functions/IoTHubtoTwins
+```
+The output will show information about the event subscription that has been created. You can confirm that the operation completed successfully by verifying the `provisioningState` value in the result:
-In the **Create Event Subscription** page, fill the fields as follows:
- 1. For **Name**, choose whatever name you want for the event subscription.
- 2. For **Event Schema**, choose **Event Grid Schema**.
- 3. For **System Topic Name**, choose whatever name you want.
- 1. For **Filter to Event Types**, choose the **Device Telemetry** checkbox and uncheck other event types.
- 1. For **Endpoint Type**, Select **Azure Function**.
- 1. For **Endpoint**, use the **Select an endpoint** link to choose what Azure Function to use for the endpoint.
-
+```azurecli
+"provisioningState": "Succeeded",
+```
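The check can also be scripted so that a deployment script fails fast when provisioning didn't succeed. A sketch, using hypothetical captured output in place of a live call:

```shell
# Hypothetical output captured from the event subscription create command.
result='{"provisioningState": "Succeeded", "topic": "example-topic"}'

# Extract provisioningState and fail if it isn't "Succeeded".
state=$(printf '%s' "$result" | python3 -c 'import json,sys; print(json.load(sys.stdin)["provisioningState"])')
if [ "$state" = "Succeeded" ]; then
    echo "Event subscription provisioned"
else
    echo "Provisioning failed: $state" >&2
    exit 1
fi
```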
-In the **Select Azure Function** page that opens up, verify or fill in the below details.
- 1. **Subscription**: Your Azure subscription.
- 2. **Resource group**: Your resource group.
- 3. **Function app**: Your function app name.
- 4. **Slot**: **Production**.
- 5. **Function**: Select the function from earlier, *IoTHubtoTwins*, from the dropdown.
+## Test with simulated IoT data
-Save your details with the **Confirm Selection** button.
-
+You can test your new ingress function by using the device simulator from [Connect an end-to-end solution](tutorial-end-to-end.md). The *DeviceSimulator* project contains a simulated thermostat device that sends sample temperature data. To set up the device simulator, follow these steps:
-Select the **Create** button to create the event subscription.
+1. Navigate to the [Azure Digital Twins end-to-end sample project repository](/samples/azure-samples/digital-twins-samples/digital-twins-samples). Get the sample project on your machine by selecting the **Browse code** button underneath the title. This will take you to the GitHub repo for the samples, which you can download as a .zip by selecting the **Code** button followed by **Download ZIP**.
-## Send simulated IoT data
+ This will download a .zip folder to your machine as *digital-twins-samples-master.zip*. Unzip the folder and extract the files. You'll be using the *DeviceSimulator* project folder.
+1. [Register the simulated device with IoT Hub](tutorial-end-to-end.md#register-the-simulated-device-with-iot-hub)
+2. [Configure and run the simulation](tutorial-end-to-end.md#configure-and-run-the-simulation)
-To test your new ingress function, use the device simulator from [Connect an end-to-end solution](./tutorial-end-to-end.md). That tutorial is driven by this [Azure Digital Twins end-to-end sample project written in C#](/samples/azure-samples/digital-twins-samples/digital-twins-samples). You'll be using the *DeviceSimulator* project in that repository.
+After completing these steps, you should have a project console window running and sending simulated telemetry data to your IoT hub.
-In the end-to-end tutorial, complete the following steps:
-1. [Register the simulated device with IoT Hub](./tutorial-end-to-end.md#register-the-simulated-device-with-iot-hub)
-2. [Configure and run the simulation](./tutorial-end-to-end.md#configure-and-run-the-simulation)
-## Validate your results
+### Validate results
-While running the device simulator above, the temperature value of your digital twin will be changing. In the Azure CLI, run the following command to see the temperature value. There's one placeholder for the instance's host name (you can also use the instance's friendly name with a slight decrease in performance).
+While running the device simulator above, the temperature value of your thermostat digital twin will be changing. In the Azure CLI, run the following command to see the temperature value. There's one placeholder for the instance's host name (you can also use the instance's friendly name with a slight decrease in performance).
```azurecli-interactive
-az dt twin query --query-command "select * from digitaltwins" --dt-name <instance-hostname-or-name>
+az dt twin query --query-command "SELECT * FROM digitaltwins WHERE \$dtId = 'thermostat67'" --dt-name <instance-hostname-or-name>
```
-Your output should contain a temperature value like this:
+>[!NOTE]
+>If you're using anything other than Cloud Shell in the Bash environment, you may need to escape the `$` character in the query differently so that it's parsed correctly. For more information, see [Use special characters in different shells](concepts-cli.md#use-special-characters-in-different-shells).
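For example, in Bash a backslash keeps the `$` in `$dtId` from being expanded as a shell variable inside double quotes. A minimal sketch:

```shell
# Bash: the backslash prevents shell expansion of $dtId inside double quotes.
query="SELECT * FROM digitaltwins WHERE \$dtId = 'thermostat67'"
echo "$query"
# prints: SELECT * FROM digitaltwins WHERE $dtId = 'thermostat67'
```

The escaping rules differ in PowerShell and cmd; see the linked article for the per-shell variants.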
+
+Your output should show the details of the thermostat67 twin, including a temperature value, like this:
```json {
Your output should contain a temperature value like this:
} ```
-To see the value change, repeatedly run the query command above.
+To see the `Temperature` value change, repeatedly run the query command above.
## Next steps
digital-twins How To Ingest Opcua Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-ingest-opcua-data.md
In this section, you'll publish an Azure function that you downloaded in [Prereq
1. Navigate to the downloaded [OPC UA to Azure Digital Twins](https://github.com/Azure-Samples/opcua-to-azure-digital-twins) project on your local machine, and into the *Azure Functions/OPCUAFunctions* folder. Open the *OPCUAFunctions.sln* solution in Visual Studio. 2. Publish the project to a function app in Azure. For instructions on how to do so, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#publish-to-azure).
-#### Configure the function app
+### Configure the function app
Next, assign an access role for the function and configure the application settings so that it can access your Azure Digital Twins instance.
-#### Add application settings
+Then, configure an application setting for the URL of the shared access signature for the *opcua-mapping.json* file.
-You'll also need to add some application settings to fully set up your environment and the Azure function. Go to the [Azure portal](https://portal.azure.com) and navigate to your newly created Azure function by searching for its name in the portal search bar.
-
-Select Configuration from the function's left navigation menu. Use the **+ New application setting** button to start creating new settings.
--
-There are three application settings you need to create:
-
-| Setting | Description | Required |
-| --- | --- | --- |
-| ADT_SERVICE_URL | URL for your Azure Digital Twins instance. Example: `https://example.api.eus.digitaltwins.azure.net` | ✔ |
-| JSON_MAPPINGFILE_URL | URL of the shared access signature for the opcua-mapping.json | ✔ |
-| LOG_LEVEL | Log level verbosity. Default is 100. Verbose is 300 | |
+```azurecli-interactive
+az functionapp config appsettings set --resource-group <your-resource-group> --name <your-function-app-name> --settings "JSON_MAPPINGFILE_URL=<file-URL>"
+```
+Optionally, you can configure a third application setting for the log level verbosity. The default is 100, or you can set it to 300 for a more verbose logging experience.
-> [!TIP]
-> Set the `LOG_LEVEL` application setting on the function to 300 for a more verbose logging experience.
+```azurecli-interactive
+az functionapp config appsettings set --resource-group <your-resource-group> --name <your-function-app-name> --settings "LOG_LEVEL=<verbosity-level>"
+```
### Create event subscription
digital-twins How To Send Twin To Twin Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-send-twin-to-twin-events.md
Next, create an Azure function that will listen on the endpoint and receive twin
5. Publish the function app to Azure. For instructions on how to publish a function app, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#publish-to-azure).
+Once the process of publishing the function completes, you can use this CLI command to verify that publishing was successful. There are placeholders for your resource group, the name of your function app, and the name of your specific function. The command will print information about your function.
+
+```azurecli-interactive
+az functionapp function show --resource-group <your-resource-group> --name <your-function-app> --function-name <your-function>
+```
### Configure the function app
digital-twins Tutorial End To End https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/tutorial-end-to-end.md
description: Follow this tutorial to learn how to build out an end-to-end Azure Digital Twins solution that's driven by device data. Previously updated : 02/25/2022 Last updated : 06/16/2022
In the **Solution Explorer** pane, expand **SampleFunctionsApp > Dependencies**.
Doing so will open the NuGet Package Manager. Select the **Updates** tab and if there are any packages to be updated, check the box to **Select all packages**. Then select **Update**. ### Publish the app
To publish the function app to Azure, you'll first need to create a storage acco
} ```
-You've now published the functions to a function app in Azure.
+The functions should now be published to a function app in Azure. You can use the following CLI commands to verify that both functions were published successfully. Each command has placeholders for your resource group and the name of your function app. The commands will print information about the *ProcessDTRoutedData* and *ProcessHubToDTEvents* functions that have been published.
+
+```azurecli-interactive
+az functionapp function show --resource-group <your-resource-group> --name <your-function-app> --function-name ProcessDTRoutedData
+az functionapp function show --resource-group <your-resource-group> --name <your-function-app> --function-name ProcessHubToDTEvents
+```
Next, your function app will need to have the right permission to access your Azure Digital Twins instance. You'll configure this access in the next section.
Save the **name** that you gave to your IoT hub. You'll use it later.
Next, connect your IoT hub to the *ProcessHubToDTEvents* Azure function in the function app you published earlier, so that data can flow from the device in IoT Hub through the function, which updates Azure Digital Twins.
-To do so, you'll create an *Event Subscription* on your IoT Hub, with the Azure function as an endpoint. This "subscribes" the function to events happening in IoT Hub.
-
-In the [Azure portal](https://portal.azure.com/), navigate to your newly created IoT hub by searching for its name in the top search bar. Select **Events** from the hub menu, and select **+ Event Subscription**.
+To do so, you'll create an *event subscription* on your IoT Hub, with the Azure function as an endpoint. This "subscribes" the function to events happening in IoT Hub.
+Use the following CLI command to create the event subscription. There's a placeholder for you to enter a name for the event subscription, and there are also placeholders for you to enter your subscription ID, resource group, IoT hub name, and the name of your function app.
-Selecting this option will bring up the **Create Event Subscription** page.
-
+```azurecli-interactive
+az eventgrid event-subscription create --name <name-for-hub-event-subscription> --event-delivery-schema eventgridschema --source-resource-id /subscriptions/<your-subscription-ID>/resourceGroups/<your-resource-group>/providers/Microsoft.Devices/IotHubs/<your-IoT-hub> --included-event-types Microsoft.Devices.DeviceTelemetry --endpoint-type azurefunction --endpoint /subscriptions/<your-subscription-ID>/resourceGroups/<your-resource-group>/providers/Microsoft.Web/sites/<your-function-app>/functions/ProcessHubToDTEvents
+```
-Fill in the fields as follows (fields filled by default aren't mentioned):
-* **EVENT SUBSCRIPTION DETAILS** > **Name**: Give a name to your event subscription.
-* **TOPIC DETAILS** > **System Topic Name**: Give a name to use for the system topic.
-* **EVENT TYPES** > **Filter to Event Types**: Select **Device Telemetry** from the menu options.
-* **ENDPOINT DETAILS** > **Endpoint Type**: Select **Azure Function** from the menu options.
-* **ENDPOINT DETAILS** > **Endpoint**: Select the **Select an endpoint** link, which will open a **Select Azure Function** window:
- :::image type="content" source="media/tutorial-end-to-end/event-subscription-3.png" alt-text="Screenshot of the Azure portal event subscription showing the window to select an Azure function." border="false":::
- - Fill in your **Subscription**, **Resource group**, **Function app**, and **Function** (**ProcessHubToDTEvents**). Some of these values may auto-populate after selecting the subscription.
- - Select **Confirm Selection**.
+The output will show information about the event subscription that has been created. You can confirm that the operation completed successfully by verifying the `provisioningState` value in the result:
-Back on the **Create Event Subscription** page, select **Create**.
+```azurecli
+"provisioningState": "Succeeded",
+```
### Register the simulated device with IoT Hub
You should see the live updated temperatures from your Azure Digital Twins insta
:::image type="content" source="media/tutorial-end-to-end/console-digital-twins-telemetry.png" alt-text="Screenshot of the console output showing log of temperature messages from digital twin thermostat67.":::
-Once you've verified the live temperatures logging is working successfully, you can stop running both projects. Keep the Visual Studio windows open, as you'll continue using them in the rest of the tutorial.
+Once you've verified the live temperature logging is working successfully, you can stop running both projects. Keep the Visual Studio windows open, as you'll continue using them in the rest of the tutorial.
## Propagate Azure Digital Twins events through the graph
To do so, you'll use the *ProcessDTRoutedData* Azure function to update a Room t
:::image type="content" source="media/tutorial-end-to-end/building-scenario-c.png" alt-text="Diagram of an excerpt from the full building scenario diagram highlighting the section that shows the elements after Azure Digital Twins."::: Here are the actions you'll complete to set up this data flow:
-1. [Create an event grid topic](#create-the-event-grid-topic) to enable movement of data between Azure services
-1. [Create an endpoint](#create-the-endpoint) in Azure Digital Twins that connects the instance to the event grid topic
+1. [Create an Event Grid topic](#create-the-event-grid-topic) to enable movement of data between Azure services
+1. [Create an endpoint](#create-the-endpoint) in Azure Digital Twins that connects the instance to the Event Grid topic
1. [Set up a route](#create-the-route) within Azure Digital Twins that sends twin property change events to the endpoint
-1. [Set up an Azure function](#connect-the-azure-function) that listens on the event grid topic at the endpoint, receives the twin property change events that are sent there, and updates other twins in the graph accordingly
+1. [Set up an Azure function](#connect-the-azure-function) that listens on the Event Grid topic at the endpoint, receives the twin property change events that are sent there, and updates other twins in the graph accordingly
[!INCLUDE [digital-twins-twin-to-twin-resources.md](../../includes/digital-twins-twin-to-twin-resources.md)] ### Connect the Azure function
-Next, subscribe the *ProcessDTRoutedData* Azure function to the event grid topic you created earlier, so that telemetry data can flow from the thermostat67 twin through the event grid topic to the function, which goes back into Azure Digital Twins and updates the room21 twin accordingly.
-
-To do so, you'll create an Event Grid subscription that sends data from the event grid topic that you created earlier to your *ProcessDTRoutedData* Azure function.
-
-In the [Azure portal](https://portal.azure.com/), navigate to your event grid topic by searching for its name in the top search bar. Select **+ Event Subscription**.
+Next, subscribe the *ProcessDTRoutedData* Azure function to the Event Grid topic you created earlier, so that telemetry data can flow from the thermostat67 twin through the Event Grid topic to the function, which goes back into Azure Digital Twins and updates the room21 twin accordingly.
+To do so, you'll create an Event Grid subscription that sends data from the Event Grid topic that you created earlier to your *ProcessDTRoutedData* Azure function.
-The steps to create this event subscription are similar to when you subscribed the first Azure function to IoT Hub earlier in this tutorial. This time, you don't need to specify **Device Telemetry** as the event type to listen for, and you'll connect to a different Azure function.
+Use the following CLI command to create the event subscription. There's a placeholder for you to enter a name for this event subscription, and there are also placeholders for you to enter your subscription ID, resource group, the name of your Event Grid topic, and the name of your function app.
-On the **Create Event Subscription** page, fill in the fields as follows (fields filled by default aren't mentioned):
-* **EVENT SUBSCRIPTION DETAILS** > **Name**: Give a name to your event subscription.
-* **ENDPOINT DETAILS** > **Endpoint Type**: Select **Azure Function** from the menu options.
-* **ENDPOINT DETAILS** > **Endpoint**: Select the **Select an endpoint** link, which will open a **Select Azure Function** window:
- - Fill in your **Subscription**, **Resource group**, **Function app**, and **Function** (**ProcessDTRoutedData**). Some of these values may auto-populate after selecting the subscription.
- - Select **Confirm Selection**.
-
-Back on the **Create Event Subscription** page, select **Create**.
+```azurecli-interactive
+az eventgrid event-subscription create --name <name-for-topic-event-subscription> --event-delivery-schema eventgridschema --source-resource-id /subscriptions/<your-subscription-ID>/resourceGroups/<your-resource-group>/providers/Microsoft.EventGrid/topics/<your-event-grid-topic> --endpoint-type azurefunction --endpoint /subscriptions/<your-subscription-ID>/resourceGroups/<your-resource-group>/providers/Microsoft.Web/sites/<your-function-app>/functions/ProcessDTRoutedData
+```
## Run the simulation and see the results
Once you've verified the live temperatures logging from your instance is working
## Review
-Here's a review of the scenario that you built out in this tutorial.
+Here's a review of the scenario that you built in this tutorial.
1. An Azure Digital Twins instance digitally represents a floor, a room, and a thermostat (represented by **section A** in the diagram below) 2. Simulated device telemetry is sent to IoT Hub, where the *ProcessHubToDTEvents* Azure function is listening for telemetry events. The *ProcessHubToDTEvents* Azure function uses the information in these events to set the `Temperature` property on thermostat67 (**arrow B** in the diagram).
-3. Property change events in Azure Digital Twins are routed to an event grid topic, where the *ProcessDTRoutedData* Azure function is listening for events. The *ProcessDTRoutedData* Azure function uses the information in these events to set the `Temperature` property on room21 (**arrow C** in the diagram).
+3. Property change events in Azure Digital Twins are routed to an Event Grid topic, where the *ProcessDTRoutedData* Azure function is listening for events. The *ProcessDTRoutedData* Azure function uses the information in these events to set the `Temperature` property on room21 (**arrow C** in the diagram).
:::image type="content" source="media/tutorial-end-to-end/building-scenario.png" alt-text="Diagram of the full building scenario, which shows the data flowing from a device into and out of Azure Digital Twins through various Azure services.":::
In this tutorial, you created an end-to-end scenario that shows Azure Digital Tw
Next, start looking at the concept documentation to learn more about elements you worked with in the tutorial: > [!div class="nextstepaction"]
-> [Custom models](concepts-models.md)
+> [Custom models](concepts-models.md)
event-grid Consume Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/consume-private-endpoints.md
Then, you can use a private link configured in Azure Functions or your webhook d
:::image type="content" source="./media/consume-private-endpoints/deliver-private-link-service.svg" alt-text="Deliver via private link service":::
-Under this configuration, the traffic goes over the public IP/internet from Event Grid to Event Hubs, Service Bus, or Azure Storage, but the channel can be encrypted and a managed identity of Event Grid is used. If you configure your Azure Functions or webhook deployed to your virtual network to use an Event Hubs, Service Bus, or Azure Storage via private link, that section of the traffic will evidently stay within Azure.
+Under this configuration, the secured traffic from Event Grid to Event Hubs, Service Bus, or Azure Storage [stays on the Microsoft backbone](../networking/microsoft-global-network.md#get-the-premium-cloud-network), and a managed identity of Event Grid is used. Configuring your Azure function or webhook from within your virtual network to use Event Hubs, Service Bus, or Azure Storage via private link ensures that the traffic between those services and your function or webhook stays within your virtual network perimeter.
## Deliver events to Event Hubs using managed identity To deliver events to event hubs in your Event Hubs namespace using managed identity, follow these steps:
To deliver events to Storage queues using managed identity, follow these steps:
## Next steps
-For more information about delivering events using a managed identity, see [Event delivery using a managed identity](managed-service-identity.md).
+For more information about delivering events using a managed identity, see [Event delivery using a managed identity](managed-service-identity.md).
event-grid Transition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/transition.md
Title: Transition from Event Grid on Azure IoT Edge to Azure IoT Edge
-description: This article explains transition from Event Grid on Azure IoT Edge to Azure IoT Edge Hub module in Azure IoT Edge runtime.
+ Title: Transition from Event Grid on IoT Edge to Azure IoT Edge native capabilities
+description: This article explains the transition from Event Grid on Azure IoT Edge to Azure IoT Edge hub module in Azure IoT Edge runtime.
Last updated 04/13/2022
-# Transition from Event Grid on Azure IoT Edge to Azure IoT Edge native capabilities
+# Transition from Event Grid on IoT Edge to Azure IoT Edge native capabilities
-On March 31, 2023, Event Grid on Azure IoT Edge will be retired, so make sure to transition to IoT Edge native capabilities prior to that date.
+On March 31, 2023, Azure Event Grid on Azure IoT Edge will be retired. Be sure to transition to IoT Edge native capabilities before that date.
-## Why are we retiring?
+## Why are we retiring it?
-There's one major reason for deciding to retire Event Grid on IoT Edge, which is currently in Preview, in March 2023: Event Grid has been evolving in the cloud native space to provide more robust capabilities not only in Azure but also in on-prem scenarios with [Kubernetes with Azure Arc](../kubernetes/overview.md).
+There's one major reason to retire Event Grid on IoT Edge, which is currently in preview, in March 2023: Event Grid has been evolving in the cloud native space to provide more robust capabilities not only in Azure but also in on-premises scenarios with [Kubernetes with Azure Arc](../kubernetes/overview.md).
-| Event Grid on Azure IoT Edge | Azure IoT Edge Hub |
+| Event Grid on IoT Edge | IoT Edge hub |
| - | -- |
-| - Publishing and subscribing to events locally/cloud<br/>- Forwarding events to Event Grid<br/>- Forwarding events to IoT Hub<br/>- React to Blob Storage events locally | - Connectivity to Azure IoT Hub<br/>- Route messages between modules or devices locally<br/>- Offline support<br/>- Message filtering |
+| - Publish and subscribe to events locally/in the cloud<br/>- Forward events to Event Grid<br/>- Forward events to Azure IoT Hub<br/>- React to Azure Blob Storage events locally | - Connect to IoT Hub<br/>- Route messages between modules or devices locally<br/>- Get offline support<br/>- Filter messages |
-## How to transition to Azure IoT Edge features
+## How to transition to IoT Edge features
-To transition to use the Azure IoT Edge features, follow these steps.
+To use the IoT Edge features, follow these steps:
-1. Learn about the feature differences between [Event Grid on Azure IoT Edge](overview.md#when-to-use-event-grid-on-iot-edge) and [Azure IoT Edge](../../iot-edge/how-to-publish-subscribe.md).
-2. Identify your scenario based on the feature table in the next section.
+1. Learn about the feature differences between [Event Grid on IoT Edge](overview.md#when-to-use-event-grid-on-iot-edge) and [IoT Edge](../../iot-edge/how-to-publish-subscribe.md).
+2. Identify your scenario based on the feature table in the next section.
3. Follow the documentation to change your architecture and make code changes based on the scenario you want to transition.
4. Validate your updated architecture by sending and receiving messages/events.
-## Event Grid on Azure IoT Edge vs. Azure IoT Edge
+## Event Grid on IoT Edge vs. IoT Edge
The following table highlights the key differences during this transition.
-| Event Grid on Azure IoT Edge | Azure IoT Edge |
+| Event Grid on IoT Edge | IoT Edge |
| | -- |
-| Publish, subscribe and forward events locally or cloud | Use the message routing feature in IoT Edge Hub to facilitate local and cloud communication. It enables device-to-module, module-to-module, and device-to-device communications by brokering messages to keep devices and modules independent from each other. To learn more, see [using routing for IoT Edge hub](../../iot-edge/iot-edge-runtime.md#using-routing). </br> </br> If you're subscribing to IoT Hub, it's possible to create an event to publish to Event Grid if you need. For details, see [Azure IoT Hub and Event Grid](../../iot-hub/iot-hub-event-grid.md). |
-| Forward events to IoT Hub | Use IoT Edge Hub to optimize connections to send messages to the cloud with offline support. For details, see [IoT Edge Hub cloud communication](../../iot-edge/iot-edge-runtime.md#using-routing). |
-| React to Blob Storage events on IoT Edge (Preview) | You can use Azure Function Apps to react to blob storage events on cloud when a blob is created or updated. For more information, see [Azure Blob storage trigger for Azure Functions](../../azure-functions/functions-bindings-storage-blob-trigger.md) and [Tutorial: Deploy Azure Functions as modules - Azure IoT Edge](../../iot-edge/tutorial-deploy-function.md). Blob triggers in IoT Edge blob storage module aren't supported. |
+| Publish, subscribe, and forward events locally or to the cloud | Use the message routing feature in the IoT Edge hub to facilitate local and cloud communication. It enables device-to-module, module-to-module, and device-to-device communications by brokering messages to keep devices and modules independent from each other. To learn more, see [Using routing for an IoT Edge hub](../../iot-edge/iot-edge-runtime.md#using-routing). </br> </br> If you're subscribing to an IoT Edge hub, it's possible to create an event to publish to Event Grid, if needed. For details, see [Azure IoT Hub and Event Grid on IoT Edge](../../iot-hub/iot-hub-event-grid.md). |
+| Forward events to IoT Hub | Use the IoT Edge hub to optimize connections when sending messages to the cloud with offline support. For details, see [IoT Edge hub cloud communication](../../iot-edge/iot-edge-runtime.md#using-routing). |
+| React to Blob Storage events on IoT Edge (preview) | You can use Azure function apps to react to Blob Storage events on the cloud when a blob is created or updated. For more information, see [Azure Blob Storage trigger for Azure Functions](../../azure-functions/functions-bindings-storage-blob-trigger.md) and [Tutorial: Deploy Azure Functions as modules - Azure IoT Edge](../../iot-edge/tutorial-deploy-function.md). Blob triggers in an IoT Edge Blob Storage module aren't supported. |
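The IoT Edge hub message routing that replaces local publish/subscribe is declared in the deployment manifest's `$edgeHub` desired properties. A minimal sketch is shown below; the module and endpoint names (`tempSensor`, `filtermodule`, `temperatureOutput`, `input1`) are hypothetical placeholders, not values from this article:

```json
{
  "routes": {
    "sensorToFilter": "FROM /messages/modules/tempSensor/outputs/temperatureOutput INTO BrokeredEndpoint(\"/modules/filtermodule/inputs/input1\")",
    "filterToCloud": "FROM /messages/modules/filtermodule/outputs/output1 INTO $upstream"
  }
}
```

Routes use the `FROM <source> [WHERE <condition>] INTO <sink>` syntax; `$upstream` sends messages to IoT Hub through the IoT Edge hub.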
event-grid Event Schema Api Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-api-management.md
description: This article describes how to use Azure API Management as an Event
Previously updated : 07/12/2021 Last updated : 06/15/2022
-# Azure API Management as an Event Grid source (Preview)
+# Azure API Management as an Event Grid source
This article provides the properties and schema for [Azure API Management](../api-management/index.yml) events. For an introduction to event schemas, see [Azure Event Grid event schema](./event-schema.md). It also gives you links to articles to use API Management as an event source.
event-hubs Monitor Event Hubs Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/monitor-event-hubs-reference.md
Azure Event Hubs supports the following dimensions for metrics in Azure Monitor.
## Runtime audit logs
-Runtime audit logs capture aggregated diagnostic information for all data plane access operations (such as send or receive events) in the Event Hubs dedicated cluster.
+Runtime audit logs capture aggregated diagnostic information for all data plane access operations (such as send or receive events) in Event Hubs.
> [!NOTE]
-> Runtime audit logs are currently available only in **premium** and **dedicated** tiers.
+> Runtime audit logs are available only in **premium** and **dedicated** tiers.
Runtime audit logs include the elements listed in the following table:
Here's an example of a runtime audit log entry:
Application metrics logs capture the aggregated information on certain metrics related to data plane operations. The captured information includes the following runtime metrics.

> [!NOTE]
-> Application metrics logs are currently available only in **premium** and **dedicated** tiers.
+> Application metrics logs are available only in **premium** and **dedicated** tiers.
Name | Description
- | -
event-hubs Monitor Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/monitor-event-hubs.md
Title: Monitoring Azure Event Hubs
description: Learn how to use Azure Monitor to view, analyze, and create alerts on metrics from Azure Event Hubs. Previously updated : 02/10/2022 Last updated : 06/16/2022 # Monitor Azure Event Hubs
If you use **Log Analytics** to store the diagnostic logging information, the in
The metrics and logs you can collect are discussed in the following sections.
-## Analyzing metrics
+## Analyze metrics
You can analyze metrics for Azure Event Hubs, along with metrics from other Azure services, by selecting **Metrics** from the **Azure Monitor** section on the home page for your Event Hubs namespace. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool. For a list of the platform metrics collected, see [Monitoring Azure Event Hubs data reference metrics](monitor-event-hubs-reference.md#metrics). ![Metrics Explorer with Event Hubs namespace selected](./media/monitor-event-hubs/metrics.png)
For reference, you can see a list of [all resource metrics supported in Azure Mo
> [!TIP] > Azure Monitor metrics data is available for 90 days. However, when creating charts only 30 days can be visualized. For example, if you want to visualize a 90 day period, you must break it into three charts of 30 days within the 90 day period.
-### Filtering and splitting
+### Filter and split
For metrics that support dimensions, you can apply filters using a dimension value. For example, add a filter with `EntityName` set to the name of an event hub. You can also split a metric by dimension to visualize how different segments of the metric compare with each other. For more information of filtering and splitting, see [Advanced features of Azure Monitor](../azure-monitor/essentials/metrics-charts.md). :::image type="content" source="./media/monitor-event-hubs/metrics-filter-split.png" alt-text="Image showing filtering and splitting metrics":::
-## Analyzing logs
+## Analyze logs
Using Azure Monitor Log Analytics requires you to create a diagnostic configuration and enable __Send information to Log Analytics__. For more information, see the [Collection and routing](#collection-and-routing) section. Data in Azure Monitor Logs is stored in tables, with each table having its own set of unique properties. Azure Event Hubs stores data in the following tables: **AzureDiagnostics** and **AzureMetrics**. > [!IMPORTANT]
Following are sample queries that you can use to help you monitor your Azure Eve
| where ResourceProvider == "MICROSOFT.EVENTHUB"
| where Category == "ArchiveLogs"
| summarize count() by "failures", "durationInSeconds"
+ ```
-
+
+## Use runtime logs
+
+Azure Event Hubs allows you to monitor and audit data plane interactions of your client applications using runtime audit logs and application metrics logs.
+
+Using *runtime audit logs*, you can capture aggregated diagnostic information for all data plane access operations, such as publishing or consuming events.
+*Application metrics logs* capture the aggregated data on certain runtime metrics (such as consumer lag and active connections) related to client applications that are connected to Event Hubs.
+
+> [!NOTE]
+> Runtime audit logs are available only in **premium** and **dedicated** tiers.
+
+### Enable runtime logs
+You can enable either runtime audit logs or application metrics logs by selecting *Diagnostic settings* from the *Monitoring* section on the Event Hubs namespace page in the Azure portal. Select *Add diagnostic setting* as shown below.
+
+![Screenshot showing the Diagnostic settings page.](./media/monitor-event-hubs/add-diagnostic-settings.png)
+
+Then you can enable log categories *RuntimeAuditLogs* or *ApplicationMetricsLogs* as needed.
+![Screenshot showing the selection of RuntimeAuditLogs and ApplicationMetricsLogs.](./media/monitor-event-hubs/configure-diagnostic-settings.png)
+
+Once runtime logs are enabled, Event Hubs will start collecting and storing them according to the diagnostic setting configuration.
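The same diagnostic setting can also be scripted. The following is a minimal Azure PowerShell sketch, assuming the `Set-AzDiagnosticSetting` cmdlet from the Az.Monitor module; the resource and workspace IDs are placeholders you must replace:

```azurepowershell
# Placeholder IDs; replace with your Event Hubs namespace and Log Analytics workspace
$resourceId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.EventHub/namespaces/<namespace>"
$workspaceId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>"

# Enable the RuntimeAuditLogs and ApplicationMetricsLogs categories and route them to Log Analytics
Set-AzDiagnosticSetting -ResourceId $resourceId -WorkspaceId $workspaceId -Enabled $true `
  -Category RuntimeAuditLogs, ApplicationMetricsLogs -Name "runtime-logs"
```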
+
+### Publish and consume sample data
+To collect sample runtime audit logs in your Event Hubs namespace, you can publish and consume sample data using client applications that are based on the [Event Hubs SDK](../event-hubs/event-hubs-dotnet-standard-getstarted-send.md) (AMQP) or any [Apache Kafka client application](../event-hubs/event-hubs-quickstart-kafka-enabled-event-hubs.md).
++
+### Analyze runtime audit logs
+You can analyze the collected runtime audit logs using the following sample query.
+
+```kusto
+AzureDiagnostics
+| where TimeGenerated > ago(1h)
+| where ResourceProvider == "MICROSOFT.EVENTHUB"
+| where Category == "RuntimeAuditLogs"
+```
+When you run the query, you should see the corresponding audit logs in the following format.
+
+By analyzing these logs, you can audit how each client application interacts with Event Hubs. Each field associated with runtime audit logs is defined in the [runtime audit logs reference](../event-hubs/monitor-event-hubs-reference.md#runtime-audit-logs).
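As a follow-up, the base query can be extended to aggregate the audit entries. This sketch uses only common `AzureDiagnostics` columns (`OperationName`, `Resource`); your workspace may surface additional log-specific fields:

```kusto
AzureDiagnostics
| where TimeGenerated > ago(1h)
| where ResourceProvider == "MICROSOFT.EVENTHUB"
| where Category == "RuntimeAuditLogs"
// count data plane operations per namespace resource
| summarize OperationCount = count() by OperationName, Resource
| order by OperationCount desc
```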
++
+### Analyze application metrics
+You can analyze the collected application metrics logs using the following sample query.
+
+```kusto
+AzureDiagnostics
+| where TimeGenerated > ago(1h)
+| where Category == "ApplicationMetricsLogs"
+```
+
+Application metrics logs include the following runtime metrics.
+
+You can use application metrics to monitor runtime metrics such as consumer lag or active connections from a given client application. Each field associated with application metrics logs is defined in the [application metrics logs reference](../event-hubs/monitor-event-hubs-reference.md#runtime-audit-logs).
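To spot trends over time, the application metrics query can be bucketed into intervals. This is a hedged sketch using only common `AzureDiagnostics` columns:

```kusto
AzureDiagnostics
| where TimeGenerated > ago(1h)
| where Category == "ApplicationMetricsLogs"
// bucket log volume into 5-minute intervals per operation
| summarize count() by bin(TimeGenerated, 5m), OperationName
```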
++ ## Alerts You can access alerts for Azure Event Hubs by selecting **Alerts** from the **Azure Monitor** section on the home page for your Event Hubs namespace. See [Create, view, and manage metric alerts using Azure Monitor](../azure-monitor/alerts/alerts-metric.md) for details on creating alerts.
expressroute Expressroute About Virtual Network Gateways https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-about-virtual-network-gateways.md
To connect your Azure virtual network and your on-premises network via ExpressRo
## Gateway types
-When you create a virtual network gateway, you need to specify several settings. One of the required settings, '-GatewayType', specifies whether the gateway is used for ExpressRoute, or VPN traffic. The two gateway types are:
+When you create a virtual network gateway, you need to specify several settings. One of the required settings, `-GatewayType`, specifies whether the gateway is used for ExpressRoute, or VPN traffic. The two gateway types are:
* **Vpn** - To send encrypted traffic across the public Internet, you use the gateway type 'Vpn'. This is also referred to as a VPN gateway. Site-to-Site, Point-to-Site, and VNet-to-VNet connections all use a VPN gateway.
* **ExpressRoute** - To send network traffic on a private connection, you use the gateway type 'ExpressRoute'. This is also referred to as an ExpressRoute gateway and is the type of gateway used when configuring ExpressRoute.
-Each virtual network can have only one virtual network gateway per gateway type. For example, you can have one virtual network gateway that uses -GatewayType Vpn, and one that uses -GatewayType ExpressRoute.
+Each virtual network can have only one virtual network gateway per gateway type. For example, you can have one virtual network gateway that uses `-GatewayType` Vpn, and one that uses `-GatewayType` ExpressRoute.
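As an illustrative sketch, the gateway type is set when the gateway is created. The names below and the `$ipConfig` object are placeholders, not values from this article; `$ipConfig` is assumed to have been built with `New-AzVirtualNetworkGatewayIpConfig`:

```azurepowershell
# Create an ExpressRoute virtual network gateway (placeholder names)
New-AzVirtualNetworkGateway -Name "ergw" -ResourceGroupName "rg1" -Location "westus" `
  -IpConfigurations $ipConfig -GatewayType ExpressRoute -GatewaySku Standard
```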
## <a name="gwsku"></a>Gateway SKUs

[!INCLUDE [expressroute-gwsku-include](../../includes/expressroute-gwsku-include.md)]
-If you want to upgrade your gateway to a more powerful gateway SKU, you can use the 'Resize-AzVirtualNetworkGateway' PowerShell cmdlet or perform the upgrade directly in the ExpressRoute virtual network gateway configuration blade in the Azure portal. The following upgrades are supported:
+If you want to upgrade your gateway to a more powerful gateway SKU, you can use the `Resize-AzVirtualNetworkGateway` PowerShell cmdlet or perform the upgrade directly in the ExpressRoute virtual network gateway configuration blade in the Azure portal. The following upgrades are supported:
- Standard to High Performance
- Standard to Ultra Performance
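The upgrade described above can be sketched in PowerShell as follows; the gateway name and resource group are hypothetical placeholders:

```azurepowershell
# Fetch the existing ExpressRoute gateway, then resize it to a larger SKU
$gw = Get-AzVirtualNetworkGateway -Name "ergw" -ResourceGroupName "rg1"
Resize-AzVirtualNetworkGateway -VirtualNetworkGateway $gw -GatewaySku HighPerformance
```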
expressroute Expressroute Bfd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-bfd.md
In this scenario, BFD can help. BFD provides low-overhead link failure detection
## Enabling BFD
-BFD is configured by default under all the newly created ExpressRoute private peering interfaces on the MSEEs. As such, to enable BFD, you only need to configure BFD on both your primary and secondary devices. Configuring BFD is two-step process. You configure the BFD on the interface and then link it to the BGP session.
+BFD is configured by default under all the newly created ExpressRoute private and Microsoft peering interfaces on the MSEEs. As such, to enable BFD, you only need to configure BFD on both your primary and secondary devices. Configuring BFD is a two-step process: you configure BFD on the interface and then link it to the BGP session.
An example CE/PE (using Cisco IOS XE) configuration is shown below.
router bgp 65020
``` >[!NOTE]
->To enable BFD under an already existing private peering; you need to reset the peering. See [Reset ExpressRoute peerings][ResetPeering]
+>To enable BFD under an already existing private or Microsoft peering, you'll need to reset the peering. This will need to be done on circuits configured with private peering before August 2018 and Microsoft peering before January 2020. See [Reset ExpressRoute peerings][ResetPeering]
> ## BFD Timer Negotiation
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **[Vodafone](https://www.vodafone.com/business/global-enterprise/global-connectivity/vodafone-ip-vpn-cloud-connect)** |Supported |Supported | Amsterdam2, London, Milan, Singapore | | **[Vodafone Idea](https://www.vodafone.in/business/enterprise-solutions/connectivity/vpn-extended-connect)** | Supported | Supported | Chennai, Mumbai2 | | **XL Axiata** | Supported | Supported | Jakarta |
-| **[Zayo](https://www.zayo.com/solutions/industries/cloud-connectivity/microsoft-expressroute)** |Supported |Supported | Amsterdam, Chicago, Dallas, Denver, Dublin, Hong Kong, London, London2, Los Angeles, Montreal, New York, Paris, Phoenix, San Antonio, Seattle, Silicon Valley, Toronto, Vancouver, Washington DC, Washington DC2, Zurich|
+| **[Zayo](https://www.zayo.com/services/packet/cloudlink/)** |Supported |Supported | Amsterdam, Chicago, Dallas, Denver, Dublin, Hong Kong, London, London2, Los Angeles, Montreal, New York, Paris, Phoenix, San Antonio, Seattle, Silicon Valley, Toronto, Vancouver, Washington DC, Washington DC2, Zurich|
**+** denotes coming soon
frontdoor Classic Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/classic-overview.md
+
+ Title: Azure Front Door (classic) | Microsoft Docs
+description: This article provides an overview of Azure Front Door (classic).
+
+documentationcenter: ''
+
+editor: ''
+
+ms.devlang: na
+
+ na
+ Last updated : 06/15/2022++
+# customer intent: As an IT admin, I want to learn about Front Door and what I can use it for.
+++
+# What is Azure Front Door (classic)?
+
+Azure Front Door (classic) is a global, scalable entry point that uses the Microsoft global edge network to create fast, secure, and widely scalable web applications. With Front Door (classic), you can transform your global consumer and enterprise applications into robust, high-performing, personalized modern applications with content that reaches a global audience through Azure.
++
+Front Door (classic) works at Layer 7 (HTTP/HTTPS layer) using the anycast protocol with split TCP and Microsoft's global network to improve global connectivity. Based on your routing method, you can ensure that Front Door (classic) routes your client requests to the fastest and most available application backend. An application backend is any Internet-facing service hosted inside or outside of Azure. Front Door (classic) provides a range of [traffic-routing methods](front-door-routing-methods.md) and [backend health monitoring options](front-door-health-probes.md) to suit different application needs and automatic failover scenarios. Similar to [Traffic Manager](../traffic-manager/traffic-manager-overview.md), Front Door (classic) is resilient to failures, including failures to an entire Azure region.
+
+>[!NOTE]
+> Azure provides a suite of fully managed load-balancing solutions for your scenarios.
+> * If you are looking to do DNS based global routing and do **not** have requirements for Transport Layer Security (TLS) protocol termination ("SSL offload"), per-HTTP/HTTPS request or application-layer processing, review [Traffic Manager](../traffic-manager/traffic-manager-overview.md).
+> * If you want to load balance between your servers in a region at the application layer, review [Application Gateway](../application-gateway/overview.md).
+> * To do network layer load balancing, review [Load Balancer](../load-balancer/load-balancer-overview.md).
+>
+> Your end-to-end scenarios may benefit from combining these solutions as needed.
+> For an Azure load-balancing options comparison, see [Overview of load-balancing options in Azure](/azure/architecture/guide/technology-choices/load-balancing-overview).
+
+## Why use Azure Front Door (classic)?
+
+With Front Door (classic) you can build, operate, and scale out your dynamic web application and static content. Front Door (classic) enables you to define, manage, and monitor the global routing for your web traffic by optimizing for top-tier end-user performance and reliability through quick global failover.
+Key features included with Front Door (classic):
+
+* Accelerated application performance by using **[split TCP](front-door-traffic-acceleration.md?pivots=front-door-classic#connect-to-the-front-door-edge-location-split-tcp)**-based **[anycast protocol](front-door-traffic-acceleration.md?pivots=front-door-classic#select-the-front-door-edge-location-for-the-request-anycast)**.
+
+* Intelligent **[health probe](front-door-health-probes.md)** monitoring for backend resources.
+
+* **[URL-path based](front-door-route-matching.md?pivots=front-door-classic)** routing for requests.
+
+* Hosting of multiple websites for efficient application infrastructure.
+
+* Cookie-based **[session affinity](front-door-routing-methods.md#affinity)**.
+
+* **[SSL offloading](front-door-custom-domain-https.md)** and certificate management.
+
+* Define your own **[custom domain](front-door-custom-domain.md)**.
+
+* Application security with integrated **[Web Application Firewall (WAF)](../web-application-firewall/overview.md)**.
+
+* Redirect HTTP traffic to HTTPS with **[URL redirect](front-door-url-rewrite.md?pivots=front-door-classic)**.
+
+* Custom forwarding path with **[URL rewrite](front-door-url-rewrite.md?pivots=front-door-classic)**.
+
+* Native support of end-to-end IPv6 connectivity and **[HTTP/2 protocol](front-door-http2.md)**.
+
+## Pricing
+
+For pricing information, see [Front Door Pricing](https://azure.microsoft.com/pricing/details/frontdoor/). See [SLA for Azure Front Door](https://azure.microsoft.com/support/legal/sla/frontdoor/v1_0/).
+
+## What's new?
+
+Subscribe to the RSS feed and view the latest Azure Front Door feature updates on the [Azure Updates](https://azure.microsoft.com/updates/?category=networking&query=Azure%20Front%20Door) page.
+
+## Next steps
+- Learn how to [create a Front Door (classic)](quickstart-create-front-door.md).
+- Learn [how Front Door (classic) works](front-door-routing-architecture.md?pivots=front-door-classic).
frontdoor Front Door Traffic Acceleration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-traffic-acceleration.md
Front Door optimizes the traffic path from the end user to the backend server. T
::: zone-end
-## <a name = "anycast"></a>Select the Front Door edge location for the request (Anycast)
+## Select the Front Door edge location for the request (Anycast)
Globally, [Front Door has over 150 edge locations](edge-locations-by-region.md), or points of presence (PoPs), located in many countries and regions. Every Front Door PoP can serve traffic for any request.
-Traffic routed to the Azure Front Door edge locations uses [Anycast](https://en.wikipedia.org/wiki/Anycast) for both DNS (Domain Name System) and HTTP (Hypertext Transfer Protocol) traffic. Anycast allows for user requests to reach the closest edge location in the fewest network hops. This architecture offers better round-trip times for end users by maximizing the benefits of [Split TCP](#splittcp).
+Traffic routed to the Azure Front Door edge locations uses [Anycast](https://en.wikipedia.org/wiki/Anycast) for both DNS (Domain Name System) and HTTP (Hypertext Transfer Protocol) traffic. Anycast allows for user requests to reach the closest edge location in the fewest network hops. This architecture offers better round-trip times for end users by maximizing the benefits of [Split TCP](#connect-to-the-front-door-edge-location-split-tcp).
Front Door organizes its edge locations into primary and fallback *rings*. The outer ring has edge locations that are closer to users, offering lower latencies. The inner ring has edge locations that can handle the failover for the outer ring edge location in case any issues happen.
The outer ring is the preferred target for all traffic, and the inner ring is de
Front Door's architecture ensures that requests from your end users always reach the closest Front Door edge locations. If the preferred Front Door edge location is unhealthy, all traffic automatically moves to the next closest edge location.
-## <a name = "splittcp"></a>Connect to the Front Door edge location (Split TCP)
+## Connect to the Front Door edge location (Split TCP)
[Split TCP](https://en.wikipedia.org/wiki/Performance-enhancing_proxy) is a technique to reduce latencies and TCP problems by breaking a connection that would incur a high round-trip time into smaller pieces.
governance First Query Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/first-query-powershell.md
Title: 'Quickstart: Your first PowerShell query' description: In this quickstart, you follow the steps to enable the Resource Graph module for Azure PowerShell and run your first query. Previously updated : 07/09/2021 Last updated : 06/15/2022 ++ # Quickstart: Run your first Resource Graph query using Azure PowerShell
or `-Subscription` parameters.
results: ```azurepowershell-interactive
+ # Store the query in a variable
+ $query = 'Resources | project name, type | order by name asc | limit 5'
+ # Run Azure Resource Graph query with `order by` first, then with `limit`
- Search-AzGraph -Query 'Resources | project name, type | order by name asc | limit 5'
+ Search-AzGraph -Query $query
```
-When the final query is run several times, assuming that nothing in your environment is changing,
+When the final query is run several times, assuming that nothing in your environment changes,
the results returned are consistent and ordered by the **Name** property, but still limited to the top five results.
healthcare-apis Healthcare Apis Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/healthcare-apis-faqs.md
Previously updated : 06/10/2022 Last updated : 06/15/2022
No, unfortunately we don't offer migration capabilities at this time.
### What is the pricing of Azure Health Data Services?
-At this time, Azure Health Data Services is available for you to use at no charge.
+For pricing information, see [Azure Health Data Services pricing](https://azure.microsoft.com/pricing/details/health-data-services/).
### In which regions is Azure Health Data Services available?
iot-central Tutorial Connect Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-connect-device.md
Title: Tutorial - Connect a generic client app to Azure IoT Central | Microsoft
description: This tutorial shows you how to connect a device running either a C, C#, Java, JavaScript, or Python client app to your Azure IoT Central application. You modify the automatically generated device template by adding views that let an operator interact with a connected device. Previously updated : 01/04/2022 Last updated : 06/10/2022
iot-central Tutorial Create Telemetry Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-create-telemetry-rules.md
Title: Tutorial - Create and manage rules in your Azure IoT Central application
description: This tutorial shows you how Azure IoT Central rules enable you to monitor your devices in near real time and to automatically invoke actions, such as sending an email, when the rule triggers. Previously updated : 12/21/2021 Last updated : 06/09/2022
iot-central Tutorial Define Gateway Device Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-define-gateway-device-type.md
Title: Tutorial - Define a new gateway device type in Azure IoT Central | Microsoft Docs description: This tutorial shows you, as a builder, how to define a new IoT gateway device type in your Azure IoT Central application.-- Previously updated : 12/21/2021++ Last updated : 06/09/2022
iot-central Tutorial Use Device Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-use-device-groups.md
Title: Tutorial - Use device groups in your Azure IoT Central application | Micr
description: Tutorial - Learn how to use device groups to analyze telemetry from devices in your Azure IoT Central application. Previously updated : 12/21/2021 Last updated : 06/16/2022
iot-central Tutorial Smart Meter App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/energy/tutorial-smart-meter-app.md
Title: Tutorial - Azure IoT smart meter monitoring | Microsoft Docs description: This tutorial shows you how to deploy and use the smart meter monitoring application template for IoT Central.-- Previously updated : 12/23/2021++ Last updated : 06/14/2022
iot-central Tutorial Solar Panel App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/energy/tutorial-solar-panel-app.md
Title: Tutorial - Azure IoT solar panel monitoring | Microsoft Docs description: This tutorial shows you how to deploy and use the solar panel monitoring application template for IoT Central.-- Previously updated : 12/23/2021++ Last updated : 06/14/2022
iot-central Tutorial Connected Waste Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/government/tutorial-connected-waste-management.md
Title: Tutorial - Azure IoT connected waste management | Microsoft Docs description: This tutorial shows you how to deploy and use the connected waste management application template for IoT Central.-- Previously updated : 12/22/2021++ Last updated : 06/16/2022
iot-central Tutorial Water Consumption Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/government/tutorial-water-consumption-monitoring.md
Title: Tutorial - Azure IoT water consumption monitoring | Microsoft Docs description: This tutorial shows you how to deploy and use the water consumption monitoring application template for IoT Central.-- Previously updated : 12/23/2021++ Last updated : 06/16/2022
iot-central Tutorial Water Quality Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/government/tutorial-water-quality-monitoring.md
Title: Tutorial - Azure IoT water quality monitoring | Microsoft Docs description: This tutorial shows you how to deploy and use the water quality monitoring application template for IoT Central.-- Previously updated : 12/23/2021++ Last updated : 06/15/2022
iot-central Tutorial In Store Analytics Create App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-in-store-analytics-create-app.md
-- Previously updated : 12/20/2021++ Last updated : 06/14/2022 # Tutorial: Deploy and walk through the in-store analytics application template
iot-central Tutorial In Store Analytics Customize Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-in-store-analytics-customize-dashboard.md
-- Previously updated : 08/24/2021++ Last updated : 06/14/2022 # Tutorial: Customize the dashboard and manage devices in Azure IoT Central
iot-central Tutorial Iot Central Connected Logistics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-iot-central-connected-logistics.md
Title: Tutorial of IoT Connected logistics | Microsoft Docs description: A tutorial of Connected logistics application template for IoT Central--++ Previously updated : 01/06/2022 Last updated : 06/13/2022
iot-central Tutorial Iot Central Digital Distribution Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-iot-central-digital-distribution-center.md
Title: Tutorial - Azure IoT Digital Distribution Center | Microsoft Docs description: This tutorial shows you how to deploy and use the digital distribution center application template for IoT Central--++ Previously updated : 01/06/2022 Last updated : 06/14/2022 # Tutorial: Deploy and walk through the digital distribution center application template
iot-central Tutorial Iot Central Smart Inventory Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-iot-central-smart-inventory-management.md
Title: Tutorial - Azure IoT Smart inventory management | Microsoft Docs description: This tutorial shows you how to deploy and use smart inventory management application template for IoT Central--++ Previously updated : 12/20/2021 Last updated : 06/13/2022 # Tutorial: Deploy and walk through the smart inventory management application template
iot-dps Quick Create Simulated Device X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-x509.md
In this section, you'll use your Windows command prompt.
``` If you want to pass the certificate and password as a parameter, you can use the following format.
+
+ >[!NOTE]
+ >You can pass additional parameters when running the application to change the `TransportType` (`-t`) and the `GlobalDeviceEndpoint` (`-g`).
+
```cmd dotnet run -- -s 0ne00000A0A -c certificate.pfx -p 1234
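Following the note above, a sketch of an invocation that also overrides the transport type and global device endpoint (the parameter values shown are illustrative; the endpoint shown is the public DPS global endpoint, and your ID scope and certificate will differ):

```cmd
dotnet run -- -s 0ne00000A0A -c certificate.pfx -p 1234 -t Mqtt -g global.azure-devices-provisioning.net
```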
iot-edge How To Add Custom Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-add-custom-metrics.md
InsightsMetrics
Once you have confirmed ingestion, you can either create a new workbook or augment an existing workbook. Use [workbooks docs](../azure-monitor/visualize/workbooks-overview.md) and queries from the curated [IoT Edge workbooks](how-to-explore-curated-visualizations.md) as a guide.
-When happy with the results, you can [share the workbook](../azure-monitor/visualize/workbooks-access-control.md) with your team or [deploy them programmatically](../azure-monitor/visualize/workbooks-automate.md) as part of your organization's resource deployments.
+When happy with the results, you can [share the workbook](../azure-monitor/visualize/workbooks-overview.md#access-control) with your team or [deploy them programmatically](../azure-monitor/visualize/workbooks-automate.md) as part of your organization's resource deployments.
## Next steps
iot-edge How To Explore Curated Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-explore-curated-visualizations.md
Click on a severity row to see alerts details. The **Alert rule** link takes you
To begin customizing a workbook, first enter editing mode. Select the **Edit** button in the menu bar of the workbook. Curated workbooks make extensive use of workbook groups. You may need to select **Edit** on several nested groups before being able to view a visualization query.
-Save your changes as a new workbook. You can [share](../azure-monitor/visualize/workbooks-access-control.md) the saved workbook with your team or [deploy them programmatically](../azure-monitor/visualize/workbooks-automate.md) as part of your organization's resource deployments.
+Save your changes as a new workbook. You can [share](../azure-monitor/visualize/workbooks-overview.md#access-control) the saved workbook with your team or [deploy them programmatically](../azure-monitor/visualize/workbooks-automate.md) as part of your organization's resource deployments.
## Next steps
iot-hub-device-update Device Update Apt Manifest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-apt-manifest.md
Title: Understand Device Update for Azure IoT Hub APT manifest | Microsoft Docs
+ Title: Understand Device Update for Azure IoT Hub apt manifest | Microsoft Docs
description: Understand how Device Update for IoT Hub uses apt manifest for a package-based update.
-# Device Update APT Manifest
+# Device Update apt manifest
-The APT Manifest is a JSON file that describes an update details required by APT Update Handler. This file can be imported into Device Update for IoT Hub just like any other update.
+The apt manifest is a JSON file that describes the update details required by the apt update handler. This file can be imported into Device Update for IoT Hub just like any other update.
-[Learn More](import-update.md) about importing updates into Device Update.
+For more information, see [Import an update to Device Update for IoT Hub](import-update.md).
## Overview
-When an APT manifest is delivered to an Device Update Agent as an update, the agent will process the manifest and carry out the necessary operations. These operations include downloading and installing the packages specified in the APT Manifest file and their dependencies from a designated repository.
+When an apt manifest is delivered to a Device Update agent as an update, the agent processes the manifest and carries out the necessary operations. These operations include downloading and installing the packages specified in the apt manifest file and their dependencies from a designated repository.
-Device Update supports APT UpdateType and APT Update Handler. This support allows the Device Update Agent to evaluate the installed Debian packages and update the necessary packages.
+Device Update supports apt updateType and apt [update handler](device-update-agent-overview.md#update-handlers). This support allows the Device Update agent to evaluate the installed Debian packages and update the necessary packages.
## Schema
-An APT Manifest file is a JSON file with a versioned schema.
+An apt manifest file is a JSON file with a versioned schema.
```json {
An APT Manifest file is a JSON file with a versioned schema.
} ```
-Example:
+For example:
```json {
Example:
} ```
-### name
-
-The name for this APT Manifest. This can be whatever name or ID is meaningful for your
-scenarios. For example, `contoso-iot-edge`.
-
-### version
-
-A version number for this APT Manifest. For example, `1.0.0.0`.
--
-### packages
-
-A list of objects containing package-specific properties.
-
-#### name
-
-The name or ID of the package. For example, `iotedge`.
+Each apt manifest includes the following properties:
-#### version
+* **Name**: The name for this apt manifest. This can be whatever name or ID is meaningful for your scenarios. For example, `contoso-iot-edge`.
+* **Version**: A version number for this apt manifest. For example, `1.0.0.0`.
+* **Packages**: A list of objects containing package-specific properties.
+ * **Name**: The name or ID of the package. For example, `iotedge`.
+ * **Version**: The desired version criteria for the package. For example, `1.0.8-2`. The version value shouldn't contain an equal sign. If the version is omitted, the latest available version of the specified package will be installed.
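Put together, a minimal manifest using these fields might look like the following sketch (the values are taken from the examples above and are illustrative):

```json
{
    "name": "contoso-iot-edge",
    "version": "1.0.0.0",
    "packages": [
        {
            "name": "iotedge",
            "version": "1.0.8-2"
        }
    ]
}
```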
-The desired version criteria for the package. For example, `1.0.8-2`.
+Currently, only exact version numbers are supported. The version number is the desired Debian package version in the format **[epoch:]upstream_version[-debian_revision]**, where **epoch** is an unsigned int and **upstream_version** can include alphanumerics and the characters ".", "+", "-", and "~". It should start with a digit.
-Currently only exact version number is supported. The version number is the desired Debian package
-version in format [epoch:]upstream_version[-debian_revision].
-
-**epoch** is an unsigned int.
-
-**upstream_version** can include alphanumerics and characters such as ".","+","-" and "~". It should start with a digit.
> [!NOTE] > '1.0.8' is equal to '1.0.8-0'
-For example, **`"name":"iotedge" and "version":"1.0.8-2"`** is equivalent to installing a package using command `apt-get install iotedge=1.0.8-2`
-
-> [!NOTE]
-> Version value doesn't contain an equal sign
-
-If version is omitted, the latest available version of specified package will be installed.
+For example, `"name":"iotedge"` and `"version":"1.0.8-2"` is equivalent to installing the package using the command `apt-get install iotedge=1.0.8-2`.
-[Learn More](https://www.debian.org/doc/debian-policy/ch-controlfields.html#s-f-version) about how Debian packages are versioned.
+For more information about how Debian packages are versioned, see [the Debian policy manual](https://www.debian.org/doc/debian-policy/ch-controlfields.html#s-f-version).
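As a sketch of the version format described above (a hypothetical helper, not apt's own comparison logic), the three parts can be split like this:

```python
def parse_debian_version(version):
    """Split a Debian version string into (epoch, upstream_version, debian_revision).

    Format: [epoch:]upstream_version[-debian_revision]. The last hyphen
    separates the Debian revision; a missing epoch defaults to 0 and a
    missing revision to "0" (so "1.0.8" equals "1.0.8-0").
    Illustrative sketch only -- apt's real version comparison has more rules.
    """
    epoch = 0
    if ":" in version:
        epoch_text, version = version.split(":", 1)
        epoch = int(epoch_text)
    if "-" in version:
        upstream, revision = version.rsplit("-", 1)
    else:
        upstream, revision = version, "0"
    return epoch, upstream, revision
```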
> [!NOTE]
-> APT package manager ignores versioning requirements given by a package when the dependent packages to install are being automatically resolved. Unless explicit versions of dependent packages are given they will use the latest, even though the package itself may specify a strict requirement (=) on a given
-version. This automatic resolution can lead to errors regarding an unmet dependency. [Learn More](https://unix.stackexchange.com/questions/350192/apt-get-not-properly-resolving-a-dependency-on-a-fixed-version-in-a-debian-ubunt)
+> The apt package manager ignores versioning requirements given by a package when the dependent packages to install are being automatically resolved. Unless explicit versions of dependent packages are given, the latest versions are used, even though the package itself may specify a strict requirement (=) on a given version. This automatic resolution can lead to errors about an unmet dependency. [Learn more](https://unix.stackexchange.com/questions/350192/apt-get-not-properly-resolving-a-dependency-on-a-fixed-version-in-a-debian-ubunt)
-If you're updating a specific version of the Azure IoT Edge security daemon, then you should include the desired version of the `aziot-edge` package and its dependent `aziot-identity-service` package in your APT manifest.
-[Learn More](../iot-edge/how-to-update-iot-edge.md#update-the-security-subsystem)
+If you're updating a specific version of the Azure IoT Edge security daemon, then you should include the desired version of the `aziot-edge` package and its dependent `aziot-identity-service` package in your apt manifest.
+For more information, see [How to update IoT Edge](../iot-edge/how-to-update-iot-edge.md#update-the-security-subsystem).
-> [!NOTE]
-> An apt manifest can be used to update Device Update agent and its dependencies. List the device update agent name and desired version in the apt manifest, like you would for any other package. This apt manifest can then be imported and deployed through the Device Update for IoT Hub pipeline.
+An apt manifest can be used to update the Device Update agent and its dependencies. List the Device Update agent name and desired version in the apt manifest, as you would for any other package. This apt manifest can then be imported and deployed through the Device Update for IoT Hub pipeline.
## Removing packages
-You can also use an apt manifest to remove installed packages from your device. A single apt manifest can be used to remove, add and update multiple packages.
-To remove a package, add a minus sign "-" after the package name. You shouldn't include a version number for the packages you are removing.
-Removing packages through an apt manifest doesn't remove its dependencies and configurations.
+You can also use an apt manifest to remove installed packages from your device. A single apt manifest can be used to remove, add, and update multiple packages.
+
+To remove a package, add a minus sign "-" after the package name. You shouldn't include a version number for the packages you're removing. Removing a package through an apt manifest doesn't remove its dependencies and configurations.
-Example:
+For example:
```json {
Example:
] } ```
-This apt manifest will remove the package "foo" from the device(s) it is deployed to.
-## Recommended value for installed Criteria
+This apt manifest will remove the package "foo" from the device(s) it's deployed to.
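The removal entry described above might look like the following sketch (the manifest name and version are illustrative; note the trailing minus sign and the absence of a version on the package being removed):

```json
{
    "name": "contoso-package-removal",
    "version": "1.0.0.0",
    "packages": [
        {
            "name": "foo-"
        }
    ]
}
```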
+
+## Recommended value for installed criteria
-The Installed Criteria for an APT Manifest is `<name>-<version>` where `<name>` is the name of the APT Manifest and `<version>` is the version of the APT Manifest. For example, `contoso-iot-edge-1.0.0.0`.
+The installed criteria for an apt manifest is `<name>-<version>`, where `<name>` is the name of the apt manifest and `<version>` is the version of the apt manifest. For example, `contoso-iot-edge-1.0.0.0`.
-## Guidelines on creating an APT Manifest
+## Guidelines on creating an apt manifest
-While creating the APT Manifest, there are some guidelines to keep in mind:
+While creating the apt manifest, there are some guidelines to keep in mind:
-- Always ensure that the APT Manifest is a well-formed json file-- Each APT Manifest should have a unique version. Try to come up with a standardized methodology to increment the version of the APT Manifest, so that it makes sense for your scenarios and can be easily followed-- When it comes to the desired state of each individual package, specify the exact name and version of the package that you would like to install on your device. Always validate the values against the package repository that you intend to use as the source for the package-- Ensure that the packages in the APT Manifest are listed in the order they should be installed/removed-- Always validate the installation of packages on a test device to ensure the outcome is desired-- When installing a specific version of a package (For example, `iotedge 1.0.9-1`), it's best practice to also have in the APT Manifest the explicit versions of the dependent packages to be installed (For example, `libiothsm 1.0.9-1`)-- While it's not mandated, always ensure your APT Manifest is cumulative to avoid getting your device into an unknown state. A cumulative update will ensure that your devices have the desired version of every package you care about even if the device has skipped an APT Update deployment because of failure in installation, or being taken offline
+* Always ensure that the apt manifest is a well-formed json file.
+* Each apt manifest should have a unique version. Try to come up with a standardized methodology to increment the version of the apt manifest, so that it makes sense for your scenarios and can be easily followed.
+* When it comes to the desired state of each individual package, specify the exact name and version of the package that you would like to install on your device. Always validate the values against the package repository that you intend to use as the source for the package.
+* Ensure that the packages in the apt manifest are listed in the order they should be installed/removed.
+* Always validate the installation of packages on a test device to ensure the outcome is desired.
+* When installing a specific version of a package (for example, `iotedge 1.0.9-1`), it's best practice to also include in the apt manifest the explicit versions of the dependent packages to be installed (for example, `libiothsm 1.0.9-1`).
+* While it's not mandated, always ensure your apt manifest is cumulative to avoid getting your device into an unknown state. A cumulative update ensures that your devices have the desired version of every package you care about, even if the device has skipped an apt update deployment because of an installation failure or being taken offline.
For example:
-**Base APT manifest**
+**Base apt manifest**
```JSON {
For example:
} ```
-**BAD UPDATE**
+**Bad update**
This update includes the bar package, but not the foo package.
This update includes the bar package, but not the foo package.
} ```
-**GOOD UPDATE**
+**Good update**
This update includes foo package, and also includes bar package.
This update includes foo package, and also includes bar package.
## Next steps
-> [!div class="nextstepaction"]
-> [Import new update](import-update.md)
+[Import an update to Device Update](import-update.md)
iot-hub-device-update Device Update Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-compliance.md
-# Device Update Compliance
+# Device Update compliance
-In Device Update for IoT Hub, compliance measures how many devices have installed the highest version compatible update. A device
-is compliant if it has installed the highest version available update that is compatible for it.
+In Device Update for IoT Hub, compliance measures how many devices are running the latest available version. A device is compliant if it has installed the highest available version update that is compatible with it.
For example, consider an instance of Device Update with the following updates:
-|Update Name|Update Version|Compatible Device Model|
-|--|--|--|
-|Update1 |1.0 |Model1|
-|Update2 |1.0 |Model2|
-|Update3 |2.0 |Model1|
+| Update name | Update version | Compatible device model |
+|-|-|-|
+| Update1 | 1.0 | Model1 |
+| Update2 | 1.0 | Model2 |
+| Update3 | 2.0 | Model1 |
Let's say the following deployments have been created:
-|Deployment Name |Update Name |Targeted Group|
-|--|--|-|
-|Deployment1 |Update1 |Group1|
-|Deployment2 |Update2 |Group2|
-|Deployment3 |Update3 |Group3|
+| Deployment name | Update name | Targeted group |
+|--|-|-|
+| Deployment1 | Update1 | Group1 |
+| Deployment2 | Update2 | Group2 |
+| Deployment3 | Update3 | Group3 |
Now, consider the following devices, with their group memberships and installed versions:
-|DeviceId |Device Model |Installed Update Version|Group |Compliance|
-|--|--|--|--||
-|Device1 |Model1 |1.0 |Group1 |New updates available</span>|
-|Device2 |Model1 |2.0 |Group3 |On latest update|
-|Device3 |Model2 |1.0 |Group2 |On latest update|
-|Device4 |Model1 |1.0 |Group3 |Update in progress|
+| DeviceId | Device model | Installed update version | Group | Compliance |
+|-|--|--|-||
+| Device1 | Model1 | 1.0 | Group1 | New updates available |
+| Device2 | Model1 | 2.0 | Group3 | On latest update |
+| Device3 | Model2 | 1.0 | Group2 | On latest update |
+| Device4 | Model1 | 1.0 | Group3 | Update in progress |
-Device1 and Device4 aren't compliant because they have version 1.0 installed even
-though thereΓÇÖs a higher version update, Update3, compatible for their model in the Device Update instance. Device2 and
-Device3 are both compliant because they have the highest version updates compatible for their models installed.
+Device1 and Device4 aren't compliant because they have version 1.0 installed even though there's a higher version update, Update3, compatible with their model in the Device Update instance. Device2 and Device3 are both compliant because they have installed the highest version updates compatible with their models.
-Compliance doesn't consider whether an update is deployed to a deviceΓÇÖs group or not; it looks at any updates
-published to Device Update. So in the example above, even though Device1 has installed the update deployed to it, it's considered non-compliant. Device1 will continue being considered non-compliant till it successfully installs Update3. The compliance status can help you identify whether new deployments are needed.
+Compliance doesn't consider whether an update is deployed to a device's group or not; it looks at any updates published to Device Update. So in the example above, even though Device1 has installed the update deployed to it, it's considered non-compliant. Device1 will continue to be considered non-compliant until it successfully installs Update3. The compliance status can help you identify whether new deployments are needed.
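The compliance rule described above can be sketched as follows (a simplified illustration that ignores the "Update in progress" state; not the service implementation):

```python
def compliance_state(installed_version, device_model, published_updates):
    """Return a device's compliance state: a device is "On latest update" when
    it runs the highest-version update compatible with its model among all
    updates published to Device Update, regardless of what was deployed to
    its group. Simplified sketch: ignores active deployments.
    """
    compatible = [u["version"] for u in published_updates
                  if u["model"] == device_model]
    latest = max(compatible)
    return "On latest update" if installed_version == latest else "New updates available"

# The updates from the example tables above.
updates = [
    {"name": "Update1", "version": "1.0", "model": "Model1"},
    {"name": "Update2", "version": "1.0", "model": "Model2"},
    {"name": "Update3", "version": "2.0", "model": "Model1"},
]
```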
As shown above, there are three compliance states in Device Update for IoT Hub:
-* **On latest update** ΓÇô the device has installed the highest version compatible update published to Device Update.
-* **Update in progress** ΓÇô an active deployment is in the process of delivering the highest version compatible update to the device.
-* **New updates available** ΓÇô a device hasn't yet installed the highest version compatible update and isn't in an active deployment for that update.
+* **On latest update**: the device has installed the highest compatible version update published to Device Update.
+* **Update in progress**: an active deployment is in the process of delivering the highest compatible version update to the device.
+* **New updates available**: the device hasn't yet installed the highest compatible version update and isn't in an active deployment for that update.
iot-hub-device-update Device Update Control Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-control-access.md
-# Azure Role-based access control (RBAC) and Device Update
+# Azure role-based access control (RBAC) and Device Update
Device Update uses Azure RBAC to provide authentication and authorization for users and service APIs.
In order for other users and applications to have access to Device Update, users
| Role Name | Description | | : | :- |
-| Device Update Administrator | Has access to all device update resources |
+| Device Update Administrator | Has access to all Device Update resources |
| Device Update Reader| Can view all updates and deployments | | Device Update Content Administrator | Can view, import, and delete updates | | Device Update Content Reader | Can view updates |
A combination of roles can be used to provide the right level of access. For exa
Device Update uses Azure Active Directory (AD) for authentication to its REST APIs. To get started, you need to create and configure a client application.
-### Create client Azure AD App
+### Create client Azure AD app
-To integrate an application or service with Azure AD, [first register](../active-directory/develop/quickstart-register-app.md) a client application with Azure AD. Client application setup will vary depending on the authorization flow you'll need (users, applications or managed identities). For example, to call Device Update from:
+To integrate an application or service with Azure AD, first [register a client application with Azure AD](../active-directory/develop/quickstart-register-app.md). Client application setup will vary depending on the authorization flow you'll need (users, applications, or managed identities). For example, to call Device Update from:
-* Mobile or desktop application, add `Mobile and desktop applications` platform with https://login.microsoftonline.com/common/oauth2/nativeclient for the Redirect URI.
-* Website with implicit sign-on, add `Web` platform and select `Access tokens (used for implicit flows)`.
+* Mobile or desktop application, add **Mobile and desktop applications** platform with `https://login.microsoftonline.com/common/oauth2/nativeclient` for the Redirect URI.
+* Website with implicit sign-on, add **Web** platform and select **Access tokens (used for implicit flows)**.
### Configure permissions Next, add permissions for calling Device Update to your app:
-1. Go to `API permissions` page of your app and click `Add a permission`.
-2. Go to `APIs my organization uses` and search for `Azure Device Update`.
-3. Select `user_impersonation` permission and click `Add permissions`.
-### Requesting authorization token
+1. Go to the **API permissions** page of your app and select **Add a permission**.
+2. Go to **APIs my organization uses** and search for **Azure Device Update**.
+3. Select **user_impersonation** permission and select **Add permissions**.
-Device Update REST API requires OAuth 2.0 authorization token in the request header. Following are some examples of various ways to request an authorization token.
+### Request authorization token
+
+The Device Update REST API requires an OAuth 2.0 authorization token in the request header. The following sections show some examples of ways to request an authorization token.
#### Using Azure CLI
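With the Azure CLI, an access token can be requested for the Device Update resource endpoint like this (a sketch; `https://api.adu.microsoft.com` is the public Device Update API resource, and you must be signed in first):

```shell
az login
az account get-access-token --resource 'https://api.adu.microsoft.com'
```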
$Scope = 'https://api.adu.microsoft.com/.default'
Get-MsalToken -ClientId $clientId -TenantId $tenantId -Authority $authority -Scopes $Scope -ClientCertificate $cert ```
-## Next Steps
-* Create device update resources and configure access control roles](./create-device-update-account.md)
+## Next steps
+
+[Create Device Update resources and configure access control roles](create-device-update-account.md)
iot-hub-device-update Device Update Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-deployments.md
A device group can only have one active deployment associated with it at any giv
## Dynamic deployments
-Deployments in Device Update for IoT Hub are dynamic in nature. Dynamic deployments empower users to move towards a set-and-forget management model by automatically deploying
-updates to newly provisioned, applicable devices. Any devices that are provisioned or change their group membership after a deployment is initiated, will automatically receive
-the update deployment as long as the deployment remains active without any other action on part of the user.
+Deployments in Device Update for IoT Hub are dynamic in nature. Dynamic deployments empower users to move towards a set-and-forget management model by automatically deploying updates to applicable, newly provisioned devices. Any devices that are provisioned or change their group membership after a deployment is initiated will automatically receive the update deployment as long as the deployment remains active.
-## Deployment life cycle
-
-Due to their dynamic nature, deployments remain active and in-progress until they are explicitly canceled. A deployment is considered Inactive and Superseded if a new deployment
-is created targeting the same device group. A deployment can be retried for devices that might fail. Once a deployment is canceled, it cannot be reactivated again.
+## Deployment lifecycle
+Due to their dynamic nature, deployments remain active and in-progress until they are explicitly canceled. A deployment is considered inactive and superseded if a new deployment is created targeting the same device group. A deployment can be retried for devices that might fail. Once a deployment is canceled, it cannot be reactivated.
## Next steps
iot-hub-device-update Device Update Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-groups.md
# Device groups A device group is a collection of devices. Device groups provide a way to scale deployments to many devices. Each device belongs to exactly one device group at a time.
-You may choose to create multiple device groups to organize your devices. For example, Contoso might use the "Flighting" device group for the devices in its test laboratory and
-the "Evaluation" device group for the devices that its field team uses in the operations center. Further, Contoso might choose to group their Production devices based on
-their geographic regions, so that they can update devices on a schedule that aligns with their regional timezones.
+You may choose to create multiple device groups to organize your devices. For example, Contoso might use the "Flighting" device group for the devices in its test laboratory and the "Evaluation" device group for the devices that its field team uses in the operations center. Further, Contoso might choose to group their production devices based on their geographic regions, so that they can update devices on a schedule that aligns with their regional timezones.
-## Using device or module twin tag for device group creation
+## Create device groups using device or module twin tags
Tags enable users to group devices. Devices need to have an ADUGroup key and a value in their device or module twin to allow them to be grouped. ### Device or module twin tag format
-```markdown
+```json
"tags": { "ADUGroup": "<CustomTagValue>" }
Tags enable users to group devices. Devices need to have a ADUGroup key and a va
## Default device group
-Any device that has the Device Update agent installed and provisioned, but does not have a ADUGroup tag added to its device or module twin will be added to a default group. Default groups or system-assigned groups help reduce the overhead of tagging and grouping devices, so customers can easily deploy updates to them. Default groups cannot be deleted or re-created by customers. Customers cannot change the definition or add/remove devices from a default group manually. Devices with the same device class are grouped together in a default group. Default group names are reserved within an IOT solution. Default groups will be named in the format ΓÇ£Default-(deviceClassID)ΓÇ¥. All deployment features that are available for user-defined groups are also available for default, system-assigned groups.
+Any device that has the Device Update agent installed and provisioned, but doesn't have the ADUGroup tag added to its device or module twin, will be added to a default group. Default groups, also called system-assigned groups, help reduce the overhead of tagging and grouping devices, so customers can easily deploy updates to them. Default groups can't be deleted or re-created by customers. Customers can't change the definition or add/remove devices from a default group manually. Devices with the same device class are grouped together in a default group. Default group names are reserved within an IoT solution. Default groups will be named in the format `Default-<deviceClassID>`. All deployment features that are available for user-defined groups are also available for default, system-assigned groups.
For example, consider the devices with their device twin tags below:
-```markdown
+```json
"deviceId": "Device1", "tags": { "ADUGroup": "Group1" } ```
-```markdown
+```json
"deviceId": "Device2", "tags": { "ADUGroup": "Group1" } ```
-```markdown
+```json
"deviceId": "Device3", "tags": { "ADUGroup": "Group2" } ```
-```markdown
+```json
"deviceId": "Device4", ``` Below are the devices and the possible groups that can be created for them.
-|Device |Group |
-|--|--|
-|Device1 |Group1|
-|Device2 |Group1|
-|Device3 |Group2|
-|Device4 |DefaultGroup1-(deviceClassId)|
-
+| Device | Group |
+||-|
+| Device1 | Group1 |
+| Device2 | Group1 |
+| Device3 | Group2 |
+| Device4 | DefaultGroup1-(deviceClassId) |
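The tag-to-group assignment above can be sketched as follows (a hypothetical helper, not the Device Update service implementation; the fallback name follows the `Default-<deviceClassID>` convention described earlier):

```python
def assign_group(twin_tags, device_class_id):
    """Assign a device to the group named by its ADUGroup twin tag, or to the
    system-assigned default group for its device class when the tag is absent."""
    group = (twin_tags or {}).get("ADUGroup")
    return group if group else f"Default-{device_class_id}"
```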
## Invalid group
-A corresponding invalid group is created for every user-defined group. A device is added to the invalid group if it doesn't meet the compatibility requirements of the user-defined group. This can be resolved by either re-tagging and regrouping the device under a new group, or modifying it's compatibility properties through the agent configuration file.
+A corresponding invalid group is created for every user-defined group. A device is added to the invalid group if it doesn't meet the compatibility requirements of the user-defined group. This grouping can be resolved by either re-tagging and regrouping the device under a new group, or modifying its compatibility properties through the agent configuration file.
-An invalid group only exists for diagnostic purposes. Updates cannot be deployed to invalid groups
+An invalid group only exists for diagnostic purposes. Updates cannot be deployed to invalid groups.
## Next steps
-[Create device group](./create-update-group.md)
+[Create a device group](./create-update-group.md)
iot-hub-device-update Device Update Plug And Play https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-plug-and-play.md
# Device Update for IoT Hub and IoT Plug and Play
-Device Update for IoT Hub uses [IoT Plug and Play](../iot-develop/index.yml) to discover and manage devices that are over-the-air update capable. The Device Update service sends and receives properties and messages to and from devices using IoT Plug and Play interfaces. Device Update for IoT Hub requires IoT devices to implement the following interfaces and model id.
+Device Update for IoT Hub uses [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) to discover and manage devices that are over-the-air update capable. The Device Update service sends and receives properties and messages to and from devices using IoT Plug and Play interfaces. Device Update for IoT Hub requires IoT devices to implement the following interfaces and model ID.
-Concepts:
-* Understand the [IoT Plug and Play device client](../iot-develop/concepts-developer-guide-device.md?pivots=programming-language-csharp).
+For more information:
+
+* Understand the [IoT Plug and Play device client](../iot-develop/concepts-developer-guide-device.md).
* See how the [Device Update agent is implemented](https://github.com/Azure/iot-hub-device-update/blob/main/docs/agent-reference/how-to-build-agent-code.md).
-## Device Update Core Interface
+## Device Update core interface
-The 'DeviceUpdateCore' interface is used to send update actions and metadata to devices and receive update status from devices. The 'DeviceUpdateCore' interface is split into two Object properties.
+The **DeviceUpdateCore** interface is used to send update actions and metadata to devices and receive update status from devices. The DeviceUpdateCore interface is split into two object properties.
-The expected component name in your model is **"deviceUpdate"** when this interface is implemented. [Learn more about Azure IoT Plug and Play Components](../iot-develop/concepts-modeling-guide.md)
+The expected component name in your model is **"deviceUpdate"** when this interface is implemented. [Learn more about Azure IoT Plug and Play components.](../iot-develop/concepts-modeling-guide.md)
-### Agent Metadata
+### Agent metadata
-The Device Update agent uses Agent Metadata fields to send
-information to Device Update services.
+The Device Update agent uses agent metadata fields to send information to Device Update services.
|Name|Schema|Direction|Description|Example|
|--|--|--|--|--|
-|deviceProperties|Map|device to cloud|The set of properties that contain the manufacturer, model, and other device information.|See other examples for details|
-|compatPropertyNames|String (Comma separated)|device to cloud|The device reported properties that are used to check for compatibility of the device to target the update deployment. Limited to five device properties|"compatPropertyNames": "manufacturer,model"|
-|lastInstallResult|Map|device to cloud|The result reported by the agent. It contains result code, extended result code, and result details for main update and other step updates||
+|deviceProperties|Map|device to cloud|The set of properties that contain the manufacturer, model, and other device information.| See [Device properties](#device-properties) section for details. |
+|compatPropertyNames|String (comma separated)|device to cloud|The device reported properties that are used to check for compatibility of the device to target the update deployment. Limited to five device properties. |"compatPropertyNames": "manufacturer,model"|
+|lastInstallResult|Map|device to cloud|The result reported by the agent. It contains result code, extended result code, and result details for main update and other step updates.||
|resultCode|integer|device to cloud|A code that contains information about the result of the last update action. Can be populated for either success or failure.|700|
|extendedResultCode|integer|device to cloud|A code that contains additional information about the result. Can be populated for either success or failure.|0x80004005|
|resultDetails|string|device to cloud|Customer-defined free form string to provide additional result details. Returned to the twin without parsing.||
-|stepResults|map|device to cloud|The result reported by the agent containing result code, extended result code, and result details for step updates | "step_1": { "resultCode": 0,"extendedResultCode": 0, "resultDetails": ""}|
-|state|integer|device to cloud|It is an integer that indicates the current state of the Device Update agent. See State section for details |0|
-|workflow|complex|device to cloud|It is a set of values that indicates which deployment the agent is currently working on, ID of current deployment, and acknowledgment of any retry request sent from service to agent.|"workflow": {"action": 3,"ID": "11b6a7c3-6956-4b33-b5a9-87fdd79d2f01","retryTimestamp": "2022-01-26T11:33:29.9680598Z"}|
-|installedUpdateId|string|device to cloud|An ID of the update that is currently installed (through Device Update). This value will be a string capturing the Update ID JSON or null for a device that has never taken an update through Device Update.|installedUpdateID{\"provider\":\"contoso\",\"name\":\"image-update\",\"version\":\"1.0.0\"}"|
-#### State
-
-It is the status reported by the Device Update (DU) agent after receiving an action from the Device Update service. `State` is reported in response to an `Action` (see `Actions` section) sent to the Device Update agent from the Device Update service. See the [overview workflow](understand-device-update.md#device-update-agent) for requests that flow between the Device Update service and the Device Update agent.
-
-|Name|Value|Description|
-||--|--|
-|Idle|0|The device is ready to receive an action from the Device Update service. After a successful update, state is returned to the `Idle` state.|
-|DeploymentInprogress|6| A deployment in progress|
-|Failed|255|A failure occurred during updating.|
-|DownloadSucceeded|2|A successful download. This status is only reported by devices with agent version 0.7.0 or older.|
-|InstallSucceeded|4|A successful install. This status is only reported by devices with agent version 0.7.0 or older.|
+|stepResults|map|device to cloud|The result reported by the agent containing result code, extended result code, and result details for step updates. | "step_1": { "resultCode": 0,"extendedResultCode": 0, "resultDetails": ""}|
+|state|integer|device to cloud| An integer that indicates the current state of the Device Update agent. | See [State](#state) section for details. |
+|workflow|complex|device to cloud| A set of values that indicate which deployment the agent is currently working on, ID of current deployment, and acknowledgment of any retry request sent from service to agent.|"workflow": {"action": 3,"ID": "11b6a7c3-6956-4b33-b5a9-87fdd79d2f01","retryTimestamp": "2022-01-26T11:33:29.9680598Z"}|
+|installedUpdateId|string|device to cloud|An ID of the update that is currently installed (through Device Update). This value is a string capturing the Update ID JSON or null for a device that has never taken an update through Device Update.|installedUpdateID{\"provider\":\"contoso\",\"name\":\"image-update\",\"version\":\"1.0.0\"}"|
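To make the agent metadata concrete, here is a minimal sketch of a reported-properties payload built from the fields in the table above. The nesting of the fields under an `agent` object inside the `deviceUpdate` component, and every value shown, are illustrative assumptions rather than the agent's exact output:

```python
# Sketch: assemble the agent-reported metadata described in the table above.
# All values are hypothetical examples; the real Device Update agent
# populates and reports these fields itself.

def build_agent_reported_properties() -> dict:
    return {
        "deviceUpdate": {
            "__t": "c",  # IoT Plug and Play component marker
            "agent": {
                "deviceProperties": {
                    "manufacturer": "contoso",
                    "model": "video",
                },
                "compatPropertyNames": "manufacturer,model",
                "state": 0,  # Idle
                "workflow": {
                    "action": 3,
                    "id": "11b6a7c3-6956-4b33-b5a9-87fdd79d2f01",
                },
                "installedUpdateId": '{"provider":"contoso","name":"image-update","version":"1.0.0"}',
            },
        }
    }

props = build_agent_reported_properties()
```

A device would report a payload shaped like this as a twin reported-properties patch; the service reads it to decide deployment compatibility and progress.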
-#### Device Properties
+#### Device properties
-It is the set of properties that contain the manufacturer and model.
+The **deviceProperties** field contains the manufacturer and model information for a device.
|Name|Schema|Direction|Description|
|--|--|--|--|
-|manufacturer|string|device to cloud|The device manufacturer of the device, reported through `deviceProperties`. This property is read from one of two places - the 'DeviceUpdateCore' interface will first attempt to read the 'aduc_manufacturer' value from the [Configuration file](device-update-configuration-file.md) file. If the value is not populated in the configuration file, it will default to reporting the compile-time definition for ADUC_DEVICEPROPERTIES_MANUFACTURER. This property will only be reported at boot time. Default value 'Contoso'|
-|model|string|device to cloud|The device model of the device, reported through `deviceProperties`. This property is read from one of two - the DeviceUpdateCore interface will first attempt to read the 'aduc_model' value from the [Configuration file](device-update-configuration-file.md) file. If the value is not populated in the configuration file, it will default to reporting the compile-time definition for ADUC_DEVICEPROPERTIES_MODEL. This property will only be reported at boot time. Default value 'Video'|
-|interfaceId|string|device to cloud|This property is used by the service to identify the interface version being used by the Device Update agent. It is required by Device Update service to manage and communicate with the agent. This property is set at 'dtmi:azure:iot:deviceUpdateModel;1' for device using DU agent version 0.8.0.|
-|aduVer|string|device to cloud|Version of the Device Update agent running on the device. This value is read from the build only if during compile time ENABLE_ADU_TELEMETRY_REPORTING is set to 1 (true). Customers can choose to opt-out of version reporting by setting the value to 0 (false). [How to customize Device Update agent properties](https://github.com/Azure/iot-hub-device-update/blob/main/docs/agent-reference/how-to-build-agent-code.md).|
-|doVer|string|device to cloud|Version of the Delivery Optimization agent running on the device. The value is read from the build only if during compile time ENABLE_ADU_TELEMETRY_REPORTING is set to 1 (true). Customers can choose to opt-out of the version reporting by setting the value to 0 (false).[How to customize Delivery Optimization agent properties](https://github.com/microsoft/do-client/blob/main/README.md#building-do-client-components).|
-|Custom compatibility Properties|User Defined|device to cloud|Implementer can define other device properties to be used for the compatibility check while targeting the update deployment|
+|manufacturer|string|device to cloud|The device manufacturer of the device, reported through `deviceProperties`. This property is read from one of two places - first, the DeviceUpdateCore interface attempts to read the 'aduc_manufacturer' value from the [Configuration file](device-update-configuration-file.md). If the value isn't populated in the configuration file, it defaults to reporting the compile-time definition for ADUC_DEVICEPROPERTIES_MANUFACTURER. This property is reported only at boot time. <br><br> Default value: 'Contoso'.|
+|model|string|device to cloud|The device model of the device, reported through `deviceProperties`. This property is read from one of two places - first, the DeviceUpdateCore interface attempts to read the 'aduc_model' value from the [Configuration file](device-update-configuration-file.md). If the value isn't populated in the configuration file, it defaults to reporting the compile-time definition for ADUC_DEVICEPROPERTIES_MODEL. This property is reported only at boot time. <br><br> Default value: 'Video'|
+|interfaceId|string|device to cloud|This property is used by the service to identify the interface version being used by the Device Update agent. The interface ID is required by Device Update service to manage and communicate with the agent. <br><br> Default value: 'dtmi:azure:iot:deviceUpdateModel;1' for devices using DU agent version 0.8.0.|
+|aduVer|string|device to cloud|Version of the Device Update agent running on the device. This value is read from the build only if ENABLE_ADU_TELEMETRY_REPORTING is set to 1 (true) during compile time. Customers can choose to opt out of version reporting by setting the value to 0 (false). [How to customize Device Update agent properties](https://github.com/Azure/iot-hub-device-update/blob/main/docs/agent-reference/how-to-build-agent-code.md).|
+|doVer|string|device to cloud|Version of the Delivery Optimization agent running on the device. The value is read from the build only if ENABLE_ADU_TELEMETRY_REPORTING is set to 1 (true) during compile time. Customers can choose to opt out of the version reporting by setting the value to 0 (false). [How to customize Delivery Optimization agent properties](https://github.com/microsoft/do-client/blob/main/README.md#building-do-client-components).|
+|Custom compatibility Properties|User Defined|device to cloud|Implementer can define other device properties to be used for the compatibility check while targeting the update deployment.|
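The two-step lookup described for `manufacturer` and `model` (configuration file first, then compile-time default) can be sketched as follows. For illustration only, this assumes the configuration file is JSON with `aduc_manufacturer`/`aduc_model` keys; the real agent's configuration parsing differs:

```python
import json

# Stand-ins for the compile-time definitions ADUC_DEVICEPROPERTIES_MANUFACTURER
# and ADUC_DEVICEPROPERTIES_MODEL; 'Contoso' and 'Video' are the documented defaults.
DEFAULT_MANUFACTURER = "Contoso"
DEFAULT_MODEL = "Video"

def read_device_properties(config_text: str) -> dict:
    """Prefer values from the configuration file; fall back to defaults."""
    try:
        config = json.loads(config_text)
    except json.JSONDecodeError:
        config = {}
    return {
        "manufacturer": config.get("aduc_manufacturer", DEFAULT_MANUFACTURER),
        "model": config.get("aduc_model", DEFAULT_MODEL),
    }

# A value present in the config file wins; missing values use the defaults.
print(read_device_properties('{"aduc_manufacturer": "fabrikam"}'))
# → {'manufacturer': 'fabrikam', 'model': 'Video'}
```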
+IoT Hub device twin example:
-IoT Hub Device Twin sample
```json
"deviceUpdate": {
    "__t": "c",
```

>[!NOTE]
->The device or module must add the `{"__t": "c"}` marker to indicate that the element refers to a component, learn more [here](../iot-develop/concepts-convention.md#sample-multiple-components-writable-property).
+>The device or module must add the `{"__t": "c"}` marker to indicate that the element refers to a component. For more information, see [IoT Plug and Play conventions](../iot-develop/concepts-convention.md#sample-multiple-components-writable-property).
+
+#### State
+
+The **State** field is the status reported by the Device Update (DU) agent after receiving an action from the Device Update service. State is reported in response to an **Action** (see [Action section](#action) for details) sent to the Device Update agent from the Device Update service. For more information about requests that flow between the Device Update service and the Device Update agent, see the [overview workflow](understand-device-update.md#device-update-agent).
+
+|Name|Value|Description|
+||--|--|
+|Idle|0|The device is ready to receive an action from the Device Update service. After a successful update, state is returned to the `Idle` state.|
+|DeploymentInprogress|6| A deployment is in progress.|
+|Failed|255|A failure occurred during updating.|
+|DownloadSucceeded|2|A successful download. This status is only reported by devices with agent version 0.7.0 or older.|
+|InstallSucceeded|4|A successful install. This status is only reported by devices with agent version 0.7.0 or older.|
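A minimal sketch of mapping the reported integer to the state names in the table above (a hypothetical helper, not part of the agent):

```python
# Agent states from the table above, keyed by the integer the agent reports.
AGENT_STATES = {
    0: "Idle",
    2: "DownloadSucceeded",   # agent version 0.7.0 or older
    4: "InstallSucceeded",    # agent version 0.7.0 or older
    6: "DeploymentInprogress",
    255: "Failed",
}

def describe_state(state: int) -> str:
    """Return the readable name for a reported state value."""
    return AGENT_STATES.get(state, f"Unknown ({state})")

print(describe_state(6))  # → DeploymentInprogress
```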
-### Service Metadata
+### Service metadata
-Service Metadata contains fields that the Device Update services uses to communicate actions and data to the Device Update agent.
+Service metadata contains fields that the Device Update services uses to communicate actions and data to the Device Update agent.
|Name|Schema|Direction|Description|
|--|--|--|--|
-|action|integer|cloud to device|It is an integer that corresponds to an action the agent should perform. Values listed in the Action section.|
-|updateManifest|string|cloud to device|Used to describe the content of an update. Generated from the [Import Manifest](create-update.md)|
+|action|integer|cloud to device| An integer that corresponds to an action the agent should perform. See [Action](#action) section for details. |
+|updateManifest|string|cloud to device|Used to describe the content of an update. Generated from the [Import manifest](create-update.md).|
|updateManifestSignature|JSON Object|cloud to device|A JSON Web Signature (JWS) with JSON Web Keys used for source verification.|
-|fileUrls|Map|cloud to device|Map of `FileID` to `DownloadUrl`. Tells the agent, which files to download and the hash to use to verify that the files were downloaded correctly.|
+|fileUrls|Map|cloud to device|Map of `FileID` to `DownloadUrl`. Tells the agent which files to download and the hash to use to verify that the files were downloaded correctly.|
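Because `fileUrls` also carries the hash used to validate downloads, the verification step can be sketched like this. The base64-encoded SHA-256 convention shown here is an assumption for illustration:

```python
import base64
import hashlib

def verify_download(content: bytes, expected_b64_sha256: str) -> bool:
    """Check downloaded bytes against the hash carried with the update metadata."""
    digest = hashlib.sha256(content).digest()
    return base64.b64encode(digest).decode() == expected_b64_sha256

# Simulate a download and its expected hash.
payload = b"firmware image bytes"
expected = base64.b64encode(hashlib.sha256(payload).digest()).decode()
print(verify_download(payload, expected))  # → True
```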
#### Action
-`Actions` in this section represents the actions taken by the Device Update agent as instructed by the Device Update service. The Device Update agent will report a `State` (see `State` section) processing the `Action` received. See the [overview workflow](understand-device-update.md#device-update-agent) for requests that flow between the Device Update service and the Device Update agent.
+The **action** field represents the actions taken by the Device Update agent as instructed by the Device Update service. The Device Update agent will report a [state](#state) for processing the action received. For more information about requests that flow between the Device Update service and the Device Update agent, see the [overview workflow](understand-device-update.md#device-update-agent).
|Name|Value|Description|
|--|--|--|
-|ApplyDeployment|3|Apply the update. It signals to the device to apply the deployed update|
-|Cancel|255|Stop processing the current action and go back to `Idle`. It is also be used to tell the agent in the `Failed` state to go back to `Idle`.|
-|Download|0|Download published content or update and any other content needed. This action is only sent to devices with agent version 0.7.0 or older.|
-|Install|1|Install the content or update. Typically this action means to call the installer for the content or update. This action is only sent to devices with agent version 0.7.0 or older.|
-|Apply|2|Finalize the update. It signals the system to reboot if necessary. This action is only sent to devices with agent version 0.7.0 or older.|
+|applyDeployment|3|Apply the update. Signals the device to apply the deployed update.|
+|cancel|255|Stop processing the current action and go back to `Idle`, or tell an agent in the `Failed` state to go back to `Idle`.|
+|download|0|Download published content or update and any other content needed. This action is only sent to devices with agent version 0.7.0 or older.|
+|install|1|Install the content or update. Typically this action means to call the installer for the content or update. This action is only sent to devices with agent version 0.7.0 or older.|
+|apply|2|Finalize the update. It signals the system to reboot if necessary. This action is only sent to devices with agent version 0.7.0 or older.|
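The interaction between actions and states can be sketched as a toy transition function (a hypothetical helper; the real agent's workflow is richer). Per the tables above, cancel returns the agent to `Idle`, including from the `Failed` state, and applyDeployment starts a deployment:

```python
# State values from the State table; action values from the Action table.
IDLE, DEPLOYMENT_IN_PROGRESS, FAILED = 0, 6, 255

def next_state(current_state: int, action: int) -> int:
    """Toy transition: cancel (255) always returns the agent to Idle;
    applyDeployment (3) starts a deployment from Idle."""
    if action == 255:  # cancel
        return IDLE
    if action == 3 and current_state == IDLE:  # applyDeployment
        return DEPLOYMENT_IN_PROGRESS
    return current_state

print(next_state(FAILED, 255))  # → 0 (back to Idle)
```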
-## Device Information Interface
+## Device information interface
-The Device Information interface is a concept used within [IoT Plug and Play architecture](../iot-develop/overview-iot-plug-and-play.md). It contains device to cloud properties that provide information about the hardware and operating system of the device. Device Update for IoT Hub uses the DeviceInformation.manufacturer and DeviceInformation.model properties for telemetry and diagnostics. To learn more about Device Information interface, see this [example](https://devicemodels.azure.com/dtmi/azure/devicemanagement/deviceinformation-1.json).
+The device information interface is a concept used within [IoT Plug and Play architecture](../iot-develop/overview-iot-plug-and-play.md). It contains device-to-cloud properties that provide information about the hardware and operating system of the device. Device Update for IoT Hub uses the `DeviceInformation.manufacturer` and `DeviceInformation.model` properties for telemetry and diagnostics. To learn more, see this [example of the device information interface](https://devicemodels.azure.com/dtmi/azure/devicemanagement/deviceinformation-1.json).
The expected component name in your model is **deviceInformation** when this interface is implemented. [Learn about Azure IoT Plug and Play components](../iot-develop/concepts-modeling-guide.md).
iot-hub-device-update Device Update Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-resources.md
Title: Understand Device Update for Azure IoT Hub resources | Microsoft Docs
description: Understand Device Update for Azure IoT Hub resources Previously updated : 2/11/2021 Last updated : 06/14/2022
# Device update resources
-To use Device Update for IoT Hub, you need to create a device update account and instance resource.
+To use Device Update for IoT Hub, you need to create a Device Update account and instance.
-## Device update account
+## Device Update account
A Device Update account is a resource that is created within your Azure subscription. At the Device Update account level,
-you can select the region where your Device Update account will be created. You can also set permissions to authorize users that
-will have access to Device Update.
-
+you can select the region where your Device Update account will be created. You can also set permissions to authorize users that have access to Device Update.
## Device update instance

After an account has been created, you need to create a Device Update instance. An instance is a logical container that contains
-updates and deployments associated with a specific IoT hub. Device Update uses IoT hub as a device directory, and a communication channel with devices.
+updates and deployments associated with a specific IoT hub. Device Update uses IoT Hub as a device directory and a communication channel with devices.
During public preview, two Device Update accounts can be created per subscription. Additionally, two Device Update instances can be created per account.
-## Configuring Device update linked IoT Hub
+## Configure the linked IoT hub
-In order for Device Update to receive change notifications from IoT Hub, Device Update integrates with the "Built-In" Event Hub. Clicking the "Configure IoT Hub" button within your instance configures the required message routes, consumer groups and access policy required to communicate with IoT devices.
+In order for Device Update to receive change notifications from IoT Hub, Device Update integrates with the built-in Event Hubs. Clicking the "Configure IoT Hub" button within your instance configures the required message routes, consumer groups, and access policy required to communicate with IoT devices.
### Message routing
-The following Message Routes are configured for Device Update:
+The following Message Routes are automatically configured in your linked IoT hub to enable Device Update:
| Route Name | Data Source | Routing Query | Endpoint | Description |
| :- | :- | :- | :- | :- |
-| DeviceUpdate.DigitalTwinChanges | DigitalTwinChangeEvents | true | events | Listens for Digital Twin Changes Events |
-| DeviceUpdate.DeviceLifecycle | DeviceLifecycleEvents | opType = 'deleteDeviceIdentity' OR opType = 'deleteModuleIdentity' | events | Listens for Devices that have been deleted |
-| DeviceUpdate.DeviceTwinEvents| TwinChangeEvents | (opType = 'updateTwin' OR opType = 'replaceTwin') AND IS_DEFINED($body.tags.ADUGroup) | events | Listens for new Device Update Groups |
+| DeviceUpdate.DeviceTwinChanges| TwinChangeEvents | (opType = 'updateTwin' OR opType = 'replaceTwin') AND IS_DEFINED($body.tags.ADUGroup) | events | Listens for new Device Update groups |
+| DeviceUpdate.DigitalTwinChanges | DigitalTwinChangeEvents | true | events | Listens for Digital Twin change events |
+| DeviceUpdate.DeviceLifecycle | DeviceLifecycleEvents | opType = 'deleteDeviceIdentity' OR opType = 'deleteModuleIdentity' | events | Listens for devices that have been deleted |
+| DeviceUpdate.DeviceConnectionState | DeviceConnectionStateEvents | true | events | Listens for changes to device connection states |
> [!NOTE]
-> Route names don't really matter when configuring these routes. We are including DeviceUpdate as a prefix to make the names consistent and easily identifiable that they are being used for Device Update. The rest of the route properties should be configured as they are in the table below for the Device Update to work properly.
+> You can change the names of these routes if it makes sense for your solution. The rest of the route properties should stay configured as they are in the table for Device Update to work properly.
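The twin-change routing query above can be approximated in code to see which events would match. This is a hypothetical helper, not IoT Hub's own query evaluator:

```python
def matches_device_update_route(op_type: str, body: dict) -> bool:
    """Approximate the routing query:
    (opType = 'updateTwin' OR opType = 'replaceTwin')
    AND IS_DEFINED($body.tags.ADUGroup)."""
    if op_type not in ("updateTwin", "replaceTwin"):
        return False
    return "ADUGroup" in body.get("tags", {})

# A twin update that sets an ADUGroup tag matches; one without the tag does not.
print(matches_device_update_route("updateTwin", {"tags": {"ADUGroup": "vacuums"}}))  # → True
print(matches_device_update_route("updateTwin", {"tags": {}}))                       # → False
```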
-### Consumer Group
+### Consumer group
-Configuring the IoT Hub also creates an event hub consumer group that is required by the Device Update Management services.
+Configuring the IoT hub also creates an event hub consumer group called **adum** that is required by the Device Update management services.
:::image type="content" source="media/device-update-resources/consumer-group.png" alt-text="Screenshot of consumer groups." lightbox="media/device-update-resources/consumer-group.png":::
-### Access Policy
+### Access policy
+
+A shared access policy named **deviceupdateservice** is used by the Device Update Management services to query for update-capable devices. The **deviceupdateservice** policy is created and given the following permissions as part of configuring the IoT Hub:
-A shared access policy named "deviceupdateservice" is required by the Device Update Management services to query for update-capable devices. The "deviceupdateservice" policy is created and given the following permissions as part of configuring the IoT Hub:
-- Registry Read
-- Service Connect
-- Device Connect
+- Registry read
+- Service connect
+- Device connect
:::image type="content" source="media/device-update-resources/access-policy.png" alt-text="Screenshot of access policy." lightbox="media/device-update-resources/access-policy.png":::
load-balancer Backend Pool Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/backend-pool-management.md
Title: Backend Pool Management
description: Get started learning how to configure and manage the backend pool of an Azure Load Balancer Last updated : 2/17/2022 # Backend pool management
load-balancer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/overview.md
Title: What is Basic Azure Load Balancer? description: Overview of Basic Azure Load Balancer. Last updated : 04/14/2022
load-balancer Quickstart Basic Internal Load Balancer Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-internal-load-balancer-cli.md
Title: 'Quickstart: Create an internal basic load balancer - Azure CLI' description: This quickstart shows how to create an internal basic load balancer by using the Azure CLI. Last updated : 03/24/2022 #Customer intent: I want to create a load balancer so that I can load balance internal traffic to VMs.
load-balancer Quickstart Basic Internal Load Balancer Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-internal-load-balancer-portal.md
Title: "Quickstart: Create a basic internal load balancer - Azure portal"
description: This quickstart shows how to create a basic internal load balancer by using the Azure portal. Last updated : 03/21/2022 #Customer intent: I want to create an internal load balancer so that I can load balance internal traffic to VMs.
load-balancer Quickstart Basic Internal Load Balancer Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-internal-load-balancer-powershell.md
Title: 'Quickstart: Create an internal basic load balancer - Azure PowerShell' description: This quickstart shows how to create an internal basic load balancer using Azure PowerShell. Last updated : 03/24/2022 #Customer intent: I want to create a load balancer so that I can load balance internal traffic to VMs.
load-balancer Quickstart Basic Public Load Balancer Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-public-load-balancer-cli.md
Title: 'Quickstart: Create a basic public load balancer - Azure CLI' description: Learn how to create a public basic SKU Azure Load Balancer in this quickstart using the Azure CLI. Last updated : 03/16/2022
load-balancer Quickstart Basic Public Load Balancer Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-public-load-balancer-portal.md
Title: 'Quickstart: Create a basic public load balancer - Azure portal' description: Learn how to create a public basic SKU Azure Load Balancer in this quickstart. Last updated : 03/15/2022
load-balancer Quickstart Basic Public Load Balancer Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-public-load-balancer-powershell.md
Title: 'Quickstart: Create a basic internal load balancer - Azure PowerShell' description: This quickstart shows how to create a basic internal load balancer using Azure PowerShell. Last updated : 03/22/2022
load-balancer Virtual Network Ipv4 Ipv6 Dual Stack Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/virtual-network-ipv4-ipv6-dual-stack-cli.md
Title: Deploy IPv6 dual stack application - Basic Load Balancer - CLI description: Learn how to deploy a dual stack (IPv4 + IPv6) application with Basic Load Balancer using Azure CLI. Last updated : 03/31/2022 # Deploy an IPv6 dual stack application using Basic Load Balancer - CLI
load-balancer Virtual Network Ipv4 Ipv6 Dual Stack Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/virtual-network-ipv4-ipv6-dual-stack-powershell.md
Title: Deploy IPv6 dual stack application - Basic Load Balancer - PowerShell description: This article shows how to deploy an IPv6 dual stack application in an Azure virtual network using Azure PowerShell. Last updated : 03/31/2022
load-balancer Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/cli-samples.md
description: Azure CLI Samples documentationcenter: load-balancer Last updated : 06/14/2018 # Azure CLI Samples for Load Balancer
load-balancer Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/components.md
Title: Azure Load Balancer components
description: Overview of Azure Load Balancer components documentationcenter: na Last updated : 12/27/2021 # Azure Load Balancer components
load-balancer Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/concepts.md
Title: Azure Load Balancer concepts
description: Overview of Azure Load Balancer concepts documentationcenter: na Last updated : 11/29/2021
load-balancer Configure Vm Scale Set Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/configure-vm-scale-set-cli.md
Title: Configure virtual machine scale set with an existing Azure Load Balancer - Azure CLI description: Learn how to configure a virtual machine scale set with an existing Azure Load Balancer by using the Azure CLI. Last updated : 03/25/2020
load-balancer Configure Vm Scale Set Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/configure-vm-scale-set-portal.md
Title: Configure virtual machine scale set with an existing Azure Load Balancer - Azure portal description: Learn how to configure a virtual machine scale set with an existing Azure Load Balancer by using the Azure portal. Last updated : 03/25/2020
load-balancer Configure Vm Scale Set Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/configure-vm-scale-set-powershell.md
Title: Configure virtual machine scale set with an existing Azure Load Balancer - Azure PowerShell description: Learn how to configure a virtual machine scale set with an existing Azure Load Balancer. Last updated : 03/26/2020
load-balancer Cross Region Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/cross-region-overview.md
description: Overview of cross region load balancer tier for Azure Load Balancer. documentationcenter: na Last updated : 09/22/2020
load-balancer Distribution Mode Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/distribution-mode-concepts.md
Title: Azure Load Balancer distribution modes description: Get started learning about the different distribution modes of Azure Load Balancer. Last updated : 05/24/2022
load-balancer Egress Only https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/egress-only.md
Title: Outbound-only load balancer configuration description: In this article, learn how to create an internal load balancer with outbound NAT. Last updated : 08/21/2021 # Outbound-only load balancer configuration
load-balancer Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/gateway-overview.md
Title: Gateway load balancer (Preview)
description: Overview of gateway load balancer SKU for Azure Load Balancer. Last updated : 12/28/2021
load-balancer Gateway Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/gateway-partners.md
Title: Azure Gateway Load Balancer partners description: Learn about partners offering their network appliances for use with this service. Last updated : 05/11/2022 # Gateway Load Balancer partners
load-balancer Howto Load Balancer Imds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/howto-load-balancer-imds.md
Title: Retrieve load balancer metadata using Azure Instance Metadata Service (IMDS)
description: Get started learning how to retrieve load balancer metadata using Azure Instance Metadata Service. Last updated : 02/12/2021 # Retrieve load balancer metadata using Azure Instance Metadata Service (IMDS)
load-balancer Inbound Nat Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/inbound-nat-rules.md
Title: Inbound NAT rules description: Overview of what an inbound NAT rule is, why to use inbound NAT rules, and how to use them. Last updated : 2/17/2022 #Customer intent: As an administrator, I want to create an inbound NAT rule so that I can forward a port to a virtual machine in the backend pool of an Azure Load Balancer.
load-balancer Instance Metadata Service Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/instance-metadata-service-load-balancer.md
Title: Retrieve load balancer information by using Azure Instance Metadata Service
description: Get started learning about using Azure Instance Metadata Service to retrieve load balancer information. Last updated : 02/12/2021 # Retrieve load balancer information by using Azure Instance Metadata Service
load-balancer Load Balancer Custom Probe Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-custom-probe-overview.md
Title: Azure Load Balancer health probes description: Learn about the different types of health probes and configuration for Azure Load Balancer-+ Last updated 02/10/2022-+ # Azure Load Balancer health probes
load-balancer Load Balancer Distribution Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-distribution-mode.md
description: In this article, get started configuring the distribution mode for Azure Load Balancer to support source IP affinity. documentationcenter: na-+ na Last updated 02/04/2021-+ # Configure the distribution mode for Azure Load Balancer
load-balancer Load Balancer Floating Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-floating-ip.md
Title: Azure Load Balancer Floating IP configuration
description: Overview of Azure Load Balancer Floating IP documentationcenter: na-+ na Last updated 12/2/2021-+
load-balancer Load Balancer Ha Ports Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ha-ports-overview.md
Title: High availability ports overview in Azure description: Learn about high availability ports load balancing on an internal load balancer. -+ na Last updated 04/14/2022-+ # High availability ports overview
load-balancer Load Balancer Ipv6 For Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ipv6-for-linux.md
description: In this article, learn how to configure DHCPv6 for Linux VMs. documentationcenter: na-+ keywords: ipv6, azure load balancer, dual stack, public ip, native ipv6, mobile, iot
na Last updated 03/22/2019-+ # Configure DHCPv6 for Linux VMs
load-balancer Load Balancer Ipv6 Internet Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ipv6-internet-cli.md
description: With this learning path, get started creating a public load balancer with IPv6 using Azure CLI. documentationcenter: na-+ keywords: ipv6, azure load balancer, dual stack, public ip, native ipv6, mobile, iot
na Last updated 06/25/2018-+ # Create a public load balancer with IPv6 using Azure CLI
load-balancer Load Balancer Ipv6 Internet Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ipv6-internet-ps.md
description: Learn how to create an Internet facing load balancer with IPv6 using PowerShell for Resource Manager documentationcenter: na-+ keywords: ipv6, azure load balancer, dual stack, public ip, native ipv6, mobile, iot
na Last updated 09/25/2017-+ # Get started creating an Internet facing load balancer with IPv6 using PowerShell for Resource Manager
load-balancer Load Balancer Ipv6 Internet Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ipv6-internet-template.md
description: Learn how to deploy IPv6 support for Azure Load Balancer and load-balanced VMs using an Azure template. documentationcenter: na-+ keywords: ipv6, azure load balancer, dual stack, public ip, native ipv6, mobile, iot
na Last updated 09/25/2017-+ # Deploy an Internet-facing load-balancer solution with IPv6 using a template
load-balancer Load Balancer Ipv6 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ipv6-overview.md
Title: Overview of IPv6 - Azure Load Balancer
description: With this learning path, get started with IPv6 support for Azure Load Balancer and load-balanced VMs. documentationcenter: na-+ keywords: ipv6, azure load balancer, dual stack, public ip, native ipv6, mobile, iot
na Last updated 08/24/2018-+ # Overview of IPv6 for Azure Load Balancer
load-balancer Load Balancer Multiple Ip Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-multiple-ip-cli.md
description: Learn how to assign multiple IP addresses to a virtual machine using Azure CLI. documentationcenter: na-+ na Last updated 06/25/2018-+ # Load balancing on multiple IP configurations using Azure CLI
load-balancer Load Balancer Multiple Ip Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-multiple-ip-powershell.md
description: In this article, learn about load balancing across primary and secondary IP configurations using Azure CLI. documentationcenter: na-+ na Last updated 09/25/2017-+ # Load balancing on multiple IP configurations using PowerShell
load-balancer Load Balancer Multiple Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-multiple-ip.md
Title: 'Tutorial: Load balance multiple IP configurations - Azure portal' description: In this article, learn about load balancing across primary and secondary NIC configurations using the Azure portal.--++ Last updated 08/08/2021
load-balancer Load Balancer Multivip Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-multivip-overview.md
Title: Multiple frontends - Azure Load Balancer
description: With this learning path, get started with an overview of multiple frontends on Azure Load Balancer documentationcenter: na-+ na Last updated 01/26/2022-+ # Multiple frontends for Azure Load Balancer
load-balancer Load Balancer Outbound Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-outbound-connections.md
Title: Source Network Address Translation (SNAT) for outbound connections
description: Learn how Azure Load Balancer is used for outbound internet connectivity (SNAT). -+ Last updated 03/01/2022-+ # Use Source Network Address Translation (SNAT) for outbound connections
load-balancer Load Balancer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-overview.md
description: Overview of Azure Load Balancer features, architecture, and implementation. Learn how the Load Balancer works and how to use it in the cloud. documentationcenter: na-+ # Customer intent: As an IT administrator, I want to learn more about the Azure Load Balancer service and what I can use it for.
na Last updated 1/25/2021-+
load-balancer Load Balancer Query Metrics Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-query-metrics-rest-api.md
Title: Retrieve metrics with the REST API
description: In this article, get started using the Azure REST APIs to collect health and usage metrics for Azure Load Balancer. -+ Last updated 11/19/2019-+ # Get Load Balancer usage metrics using the REST API
load-balancer Load Balancer Standard Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-standard-availability-zones.md
description: With this learning path, get started with Azure Standard Load Balancer and Availability Zones. documentationcenter: na-+ na Last updated 05/07/2020-+ # Load Balancer and Availability Zones
load-balancer Load Balancer Standard Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-standard-diagnostics.md
Title: Diagnostics with metrics, alerts, and resource health description: Use the available metrics, alerts, and resource health information to diagnose your load balancer.-+ Last updated 01/26/2022-+ # Standard load balancer diagnostics with metrics, alerts, and resource health
load-balancer Load Balancer Tcp Idle Timeout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-tcp-idle-timeout.md
description: In this article, learn how to configure Azure Load Balancer TCP idle timeout and reset. documentationcenter: na-+ na Last updated 10/26/2020-+ # Configure TCP reset and idle timeout for Azure Load Balancer
load-balancer Load Balancer Tcp Reset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-tcp-reset.md
description: With this article, learn about Azure Load Balancer with bidirectional TCP RST packets on idle timeout. documentationcenter: na-+ na Last updated 10/07/2020-+ # Load Balancer TCP Reset and Idle Timeout
load-balancer Load Balancer Troubleshoot Backend Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-troubleshoot-backend-traffic.md
Title: Troubleshoot Azure Load Balancer
description: Learn how to troubleshoot known issues with Azure Load Balancer. documentationcenter: na-+
na Last updated 03/02/2022-+ # Troubleshoot Azure Load Balancer backend traffic responses
load-balancer Load Balancer Troubleshoot Health Probe Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-troubleshoot-health-probe-status.md
Title: Troubleshoot Azure Load Balancer health probe status
description: Learn how to troubleshoot known issues with Azure Load Balancer health probe status. documentationcenter: na-+
na Last updated 12/02/2020-+ # Troubleshoot Azure Load Balancer health probe status
load-balancer Load Balancer Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-troubleshoot.md
Title: Troubleshoot common issues Azure Load Balancer
description: Learn how to troubleshoot common issues with Azure Load Balancer. documentationcenter: na-+
na Last updated 01/28/2020-+ # Troubleshoot Azure Load Balancer
load-balancer Manage Inbound Nat Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/manage-inbound-nat-rules.md
Title: Manage inbound NAT rules for Azure Load Balancer description: In this article, you'll learn how to add and remove an inbound NAT rule in the Azure portal.--++ Last updated 03/15/2022
load-balancer Manage Probes How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/manage-probes-how-to.md
Title: Manage health probes for Azure Load Balancer - Azure portal description: In this article, learn how to manage health probes for Azure Load Balancer using the Azure portal--++ Last updated 03/02/2022
load-balancer Manage Rules How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/manage-rules-how-to.md
Title: Manage rules for Azure Load Balancer - Azure portal description: In this article, learn how to manage rules for Azure Load Balancer using the Azure portal--++ Last updated 08/23/2021
load-balancer Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/manage.md
Title: Azure Load Balancer portal settings description: Get started learning about Azure Load Balancer portal settings-+ Last updated 08/16/2021-+ # Azure Load Balancer portal settings
load-balancer Monitor Load Balancer Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/monitor-load-balancer-reference.md
Title: Monitoring Load Balancer data reference description: Important reference material needed when you monitor Load Balancer -+ -+ Last updated 06/29/2021
load-balancer Monitor Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/monitor-load-balancer.md
Title: Monitoring Azure Load Balancer description: Start here to learn how to monitor load balancer.--++
load-balancer Move Across Regions External Load Balancer Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/move-across-regions-external-load-balancer-portal.md
Title: Move an Azure external load balancer to another Azure region by using the Azure portal description: Use an Azure Resource Manager template to move an external load balancer from one Azure region to another by using the Azure portal.-+ Last updated 09/17/2019-+ # Move an external load balancer to another region by using the Azure portal
load-balancer Move Across Regions External Load Balancer Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/move-across-regions-external-load-balancer-powershell.md
Title: Move Azure external Load Balancer to another Azure region using Azure PowerShell description: Use an Azure Resource Manager template to move an Azure external Load Balancer from one Azure region to another using Azure PowerShell.-+ Last updated 09/17/2019-+
load-balancer Move Across Regions Internal Load Balancer Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/move-across-regions-internal-load-balancer-portal.md
Title: Move Azure internal Load Balancer to another Azure region using the Azure portal description: Use an Azure Resource Manager template to move an Azure internal Load Balancer from one Azure region to another using the Azure portal-+ Last updated 09/18/2019-+ # Move Azure internal Load Balancer to another region using the Azure portal
load-balancer Move Across Regions Internal Load Balancer Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/move-across-regions-internal-load-balancer-powershell.md
Title: Move Azure internal Load Balancer to another Azure region using Azure PowerShell description: Use an Azure Resource Manager template to move an Azure internal Load Balancer from one Azure region to another using Azure PowerShell-+ Last updated 09/17/2019-+
load-balancer Outbound Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/outbound-rules.md
Title: Outbound rules Azure Load Balancer description: This article explains how to configure outbound rules to control egress of internet traffic with Azure Load Balancer. -+ Last updated 1/6/2022-+ # <a name="outboundrules"></a>Outbound rules Azure Load Balancer
load-balancer Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/powershell-samples.md
Title: Azure PowerShell Samples - Azure Load Balancer
description: With these samples, load balance traffic to multiple websites on VMs and traffic to VMs for HA with Azure Load Balancer. documentationcenter: load-balancer-+ Last updated 12/10/2018-+ # Azure PowerShell Samples for Load Balancer
load-balancer Python Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/python-samples.md
description: With these samples, load balance traffic to multiple websites. Deploy load balancers in an HA configuration. documentationcenter: load-balancer-+ Last updated 08/20/2021-+ # Python Samples for Azure Load Balancer
load-balancer Quickstart Load Balancer Standard Internal Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-cli.md
Title: 'Quickstart: Create an internal load balancer - Azure CLI' description: This quickstart shows how to create an internal load balancer by using the Azure CLI.-+ Last updated 03/23/2022-+ #Customer intent: I want to create a load balancer so that I can load balance internal traffic to VMs.
load-balancer Quickstart Load Balancer Standard Internal Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-portal.md
Title: "Quickstart: Create an internal load balancer - Azure portal"
description: This quickstart shows how to create an internal load balancer by using the Azure portal. -+ Last updated 03/21/2022-+ #Customer intent: I want to create an internal load balancer so that I can load balance internal traffic to VMs.
load-balancer Quickstart Load Balancer Standard Internal Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-powershell.md
Title: 'Quickstart: Create an internal load balancer - Azure PowerShell' description: This quickstart shows how to create an internal load balancer using Azure PowerShell-+ Last updated 03/24/2022-+ #Customer intent: I want to create a load balancer so that I can load balance internal traffic to VMs.
load-balancer Quickstart Load Balancer Standard Internal Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-template.md
Title: 'Quickstart: Create an internal load balancer by using a template' description: This quickstart shows how to create an internal Azure load balancer by using an Azure Resource Manager template (ARM template). -+ -+ Last updated 09/14/2020
load-balancer Quickstart Load Balancer Standard Public Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-cli.md
Title: "Quickstart: Create a public load balancer - Azure CLI" description: This quickstart shows how to create a public load balancer using the Azure CLI-+ Last updated 03/16/2022-+ #Customer intent: I want to create a load balancer so that I can load balance internet traffic to VMs.
load-balancer Quickstart Load Balancer Standard Public Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-portal.md
Title: "Quickstart: Create a public load balancer - Azure portal" description: This quickstart shows how to create a load balancer by using the Azure portal.-+ Last updated 03/16/2022-+ #Customer intent: I want to create a load balancer so that I can load balance internet traffic to VMs.
load-balancer Quickstart Load Balancer Standard Public Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-powershell.md
Title: 'Quickstart: Create a public load balancer - Azure PowerShell' description: This quickstart shows how to create a load balancer using Azure PowerShell--++ Last updated 03/17/2022
load-balancer Quickstart Load Balancer Standard Public Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-template.md
description: This quickstart shows how to create a load balancer by using an Azure Resource Manager template. documentationcenter: na-+ na Last updated 12/09/2020-+ #Customer intent: I want to create a load balancer by using an Azure Resource Manager template so that I can load balance internet traffic to VMs.
load-balancer Load Balancer Linux Cli Load Balance Multiple Websites Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/scripts/load-balancer-linux-cli-load-balance-multiple-websites-vm.md
Title: Load balance multiple websites - Azure CLI - Azure Load Balancer description: This Azure CLI script example shows how to load balance multiple websites to the same virtual machine documentationcenter: load-balancer-+ ms.devlang: azurecli Last updated 03/04/2022-+
load-balancer Load Balancer Linux Cli Sample Nlb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/scripts/load-balancer-linux-cli-sample-nlb.md
Title: Load balance traffic to VMs for HA - Azure CLI - Azure Load Balancer
description: This Azure CLI script example shows how to load balance traffic to VMs for high availability documentationcenter: load-balancer-+ ms.devlang: azurecli Last updated 03/04/2022-+
load-balancer Load Balancer Linux Cli Sample Zonal Frontend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/scripts/load-balancer-linux-cli-sample-zonal-frontend.md
Title: Load balance VMs within a zone - Azure CLI
description: This Azure CLI script example shows how to load balance traffic to VMs within a specific availability zone documentationcenter: load-balancer-+ # Customer intent: As an IT administrator, I want to create a load balancer that load balances incoming internet traffic to virtual machines within a specific zone in a region. ms.assetid:
Last updated 03/04/2022-+
load-balancer Load Balancer Linux Cli Sample Zone Redundant Frontend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/scripts/load-balancer-linux-cli-sample-zone-redundant-frontend.md
Title: Load balance VMs across availability zones - Azure CLI - Azure Load Balancer description: This Azure CLI script example shows how to load balance traffic to VMs across availability zones documentationcenter: load-balancer-+ # Customer intent: As an IT administrator, I want to create a load balancer that load balances incoming internet traffic to virtual machines across availability zones in a region. ms.devlang: azurecli Last updated 06/14/2018-+
load-balancer Load Balancer Windows Powershell Load Balance Multiple Websites Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/scripts/load-balancer-windows-powershell-load-balance-multiple-websites-vm.md
Title: Load balance multiple websites - Azure PowerShell - Azure Load Balancer description: This Azure PowerShell script example shows how to load balance multiple websites to the same virtual machine documentationcenter: load-balancer-+ ms.devlang: powershell Last updated 04/20/2018-+
load-balancer Load Balancer Windows Powershell Sample Nlb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/scripts/load-balancer-windows-powershell-sample-nlb.md
description: This Azure PowerShell Script Example shows how to load balance traffic to VMs for high availability documentationcenter: load-balancer-+ ms.devlang: powershell Last updated 04/20/2018-+
load-balancer Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/skus.md
Title: Azure Load Balancer SKUs
description: Overview of Azure Load Balancer SKUs documentationcenter: na-+ na Last updated 12/22/2021-+ # Azure Load Balancer SKUs
load-balancer Troubleshoot Load Balancer Imds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/troubleshoot-load-balancer-imds.md
Title: Common error codes for Azure Instance Metadata Service (IMDS)
description: Overview of common error codes and corresponding mitigation methods for Azure Instance Metadata Service (IMDS) -+ Last updated 02/12/2021-+ # Error codes: Common error codes when using IMDS to retrieve load balancer information
load-balancer Troubleshoot Outbound Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/troubleshoot-outbound-connection.md
Title: Troubleshoot SNAT exhaustion and connection timeouts
description: Resolutions for common problems with outbound connectivity with Azure Load Balancer. -+ Last updated 04/21/2022-+ # Troubleshoot SNAT exhaustion and connection timeouts
load-balancer Tutorial Add Lb Existing Scale Set Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-add-lb-existing-scale-set-portal.md
Title: 'Tutorial: Add Azure Load Balancer to an existing virtual machine scale set - Azure portal' description: In this tutorial, learn how to add a load balancer to an existing virtual machine scale set using the Azure portal. --++ Last updated 4/21/2021
load-balancer Tutorial Cross Region Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-cross-region-cli.md
Title: 'Tutorial: Create a cross-region load balancer using Azure CLI' description: Get started with this tutorial deploying a cross-region Azure Load Balancer using Azure CLI.--++ Last updated 03/04/2021
load-balancer Tutorial Cross Region Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-cross-region-portal.md
Title: 'Tutorial: Create a cross-region load balancer using the Azure portal' description: Get started with this tutorial deploying a cross-region Azure Load Balancer with the Azure portal.--++ Last updated 08/02/2021
load-balancer Tutorial Cross Region Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-cross-region-powershell.md
Title: 'Tutorial: Create a cross-region load balancer using Azure PowerShell' description: Get started with this tutorial deploying a cross-region Azure Load Balancer using Azure PowerShell.--++ Last updated 02/10/2021
load-balancer Tutorial Gateway Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-gateway-cli.md
Title: 'Tutorial: Create a gateway load balancer - Azure CLI' description: Use this tutorial to learn how to create a gateway load balancer using the Azure CLI.--++ Last updated 11/02/2021
load-balancer Tutorial Gateway Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-gateway-portal.md
Title: 'Tutorial: Create a gateway load balancer - Azure portal' description: Use this tutorial to learn how to create a gateway load balancer using the Azure portal.--++ Last updated 12/03/2021
load-balancer Tutorial Gateway Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-gateway-powershell.md
Title: 'Tutorial: Create a gateway load balancer - Azure PowerShell' description: Use this tutorial to learn how to create a gateway load balancer using Azure PowerShell.--++ Last updated 11/17/2021
load-balancer Tutorial Load Balancer Ip Backend Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-load-balancer-ip-backend-portal.md
Title: 'Tutorial: Create a public load balancer with an IP-based backend - Azure portal' description: In this tutorial, learn how to create a public load balancer with an IP based backend pool.--++ Last updated 08/06/2021
load-balancer Tutorial Load Balancer Port Forwarding Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-load-balancer-port-forwarding-portal.md
Title: "Tutorial: Create a single virtual machine inbound NAT rule - Azure portal" description: This tutorial shows how to configure port forwarding using Azure Load Balancer to create a connection to a single virtual machine in an Azure virtual network.--++ Last updated 03/08/2022
load-balancer Tutorial Load Balancer Standard Public Zonal Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-load-balancer-standard-public-zonal-portal.md
Title: "Tutorial: Load balance VMs within an availability zone - Azure portal"
description: This tutorial demonstrates how to create a Standard Load Balancer with a zonal frontend to load balance VMs within an availability zone by using the Azure portal -+ # Customer intent: As an IT administrator, I want to create a load balancer that load balances incoming internet traffic to virtual machines within a specific zone in a region. Last updated 08/15/2021-+
load-balancer Tutorial Multi Availability Sets Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-multi-availability-sets-portal.md
Title: 'Tutorial: Create a load balancer with more than one availability set in the backend pool - Azure portal' description: In this tutorial, deploy an Azure Load Balancer with more than one availability set in the backend pool.--++ Last updated 05/09/2022
load-balancer Tutorial Nat Rule Multi Instance Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-nat-rule-multi-instance-portal.md
Title: "Tutorial: Create a multiple virtual machines inbound NAT rule - Azure portal" description: This tutorial shows how to configure port forwarding using Azure Load Balancer to create a connection to multiple virtual machines in an Azure virtual network.--++ Last updated 03/10/2022
load-balancer Upgrade Basic Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basic-standard.md
Title: Upgrade a basic to standard public load balancer
description: This article shows you how to upgrade a public load balancer from basic to standard SKU. -+ Last updated 03/17/2022-+ # Upgrade from a basic public to standard public load balancer
load-balancer Upgrade Internalbasic To Publicstandard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-internalbasic-to-publicstandard.md
Title: Upgrade an internal basic load balancer - Outbound connections required description: Learn how to upgrade a basic internal load balancer to a standard public load balancer.-+ Last updated 03/17/2022-+ # Upgrade an internal basic load balancer - Outbound connections required
logic-apps Logic Apps Create Azure Resource Manager Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-create-azure-resource-manager-templates.md
Title: Create logic app templates for deployment
-description: Create Azure Resource Manager templates for automating deployment in Azure Logic Apps.
+ Title: Create Consumption logic app templates for deployment
+description: Create Azure Resource Manager templates to automate deployment for Consumption logic apps in Azure Logic Apps.
ms.suite: integration
Last updated 07/20/2021
-# Create Azure Resource Manager templates to automate deployment for Azure Logic Apps
+# Create Azure Resource Manager templates to automate Consumption logic app deployment for Azure Logic Apps
-To help you automate creating and deploying your logic app, this article describes the ways that you can create an [Azure Resource Manager template](../azure-resource-manager/management/overview.md) for your logic app. For an overview about the structure and syntax for a template that includes your workflow definition and other resources necessary for deployment, see [Overview: Automate deployment for logic apps with Azure Resource Manager templates](logic-apps-azure-resource-manager-templates-overview.md).
-Azure Logic Apps provides a [prebuilt logic app Azure Resource Manager template](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.logic/logic-app-create/azuredeploy.json) that you can reuse, not only for creating logic apps, but also to define the resources and parameters to use for deployment. You can use this template for your own business scenarios or customize the template to meet your requirements.
+To help you automatically create and deploy a Consumption logic app, this article describes the ways that you can create an [Azure Resource Manager template](../azure-resource-manager/management/overview.md). Azure Logic Apps also provides a [prebuilt logic app Azure Resource Manager template](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.logic/logic-app-create/azuredeploy.json) that you can reuse, not only to create Consumption logic apps, but also to define the resources and parameters for deployment. You can use this template for your own business scenarios or customize the template to meet your requirements. For an overview about the structure and syntax for a template that contains a workflow definition and other resources necessary for deployment, see [Overview: Automate deployment for logic apps with Azure Resource Manager templates](logic-apps-azure-resource-manager-templates-overview.md).
> [!IMPORTANT]
-> Make sure that connections in your template use the same Azure resource group and location as your logic app.
+>
+> This article applies only to Consumption logic apps, not Standard logic apps. Make sure that
+> connections in your template use the same Azure resource group and location as your logic app.
-For more about Azure Resource Manager templates, see these topics:
+For more information about Azure Resource Manager templates, see the following topics:
* [Azure Resource Manager template structure and syntax](../azure-resource-manager/templates/syntax.md) * [Author Azure Resource Manager templates](../azure-resource-manager/templates/syntax.md)
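The updated article above describes reusing a prebuilt Resource Manager template to create Consumption logic apps and to define the resources and parameters for deployment. As an illustrative sketch only (not the prebuilt quickstart template itself), a minimal template for a Consumption logic app resource might look like the following; the parameter name and the empty workflow definition are placeholders:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "logicAppName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Logic/workflows",
      "apiVersion": "2019-05-01",
      "name": "[parameters('logicAppName')]",
      "location": "[resourceGroup().location]",
      "properties": {
        "definition": {
          "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
          "contentVersion": "1.0.0.0",
          "triggers": {},
          "actions": {},
          "outputs": {}
        },
        "parameters": {}
      }
    }
  ]
}
```

Per the important note in the article, any API connection resources added to such a template should use the same resource group and location as the logic app itself.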
Get-ParameterTemplate -TemplateFile $filename -KeyVault Static | Out-File $fileN
## Next steps

> [!div class="nextstepaction"]
-> [Deploy logic app templates](../logic-apps/logic-apps-deploy-azure-resource-manager-templates.md)
+> [Deploy logic app templates](../logic-apps/logic-apps-deploy-azure-resource-manager-templates.md)
machine-learning How To Create Attach Compute Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-compute-cluster.md
Last updated 05/02/2022
# Create an Azure Machine Learning compute cluster

+
> [!div class="op_single_selector" title1="Select the Azure Machine Learning CLI version you are using:"]
-> * [v1](v1/how-to-create-attach-compute-cluster.md)
-> * [v2 (preview)](how-to-create-attach-compute-cluster.md)
+> * [CLI v1](v1/how-to-create-attach-compute-cluster.md)
+> * [CLI v2 (current version)](how-to-create-attach-compute-cluster.md)
Learn how to create and manage a [compute cluster](concept-compute-target.md#azure-machine-learning-compute-managed) in your Azure Machine Learning workspace.
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-manage-compute-instance.md
Last updated 05/04/2022
# Create and manage an Azure Machine Learning compute instance

+
+> [!div class="op_single_selector" title1="Select the Azure Machine Learning CLI version you are using:"]
+> * [CLI v1](v1/how-to-create-manage-compute-instance.md)
+> * [CLI v2 (current version)](how-to-create-manage-compute-instance.md)
+
Learn how to create and manage a [compute instance](concept-compute-instance.md) in your Azure Machine Learning workspace. Use a compute instance as your fully configured and managed development environment in the cloud. For development and testing, you can also use the instance as a [training compute target](concept-compute-target.md#train) or for an [inference target](concept-compute-target.md#deploy). A compute instance can run multiple jobs in parallel and has a job queue. As a development environment, a compute instance can't be shared with other users in your workspace.
machine-learning How To Manage Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace-cli.md
In this article, you learn how to create and manage Azure Machine Learning workspaces using the Azure CLI. The Azure CLI provides commands for managing Azure resources and is designed to get you working quickly with Azure, with an emphasis on automation. The machine learning extension to the CLI provides commands for working with Azure Machine Learning resources.

> [!NOTE]
-> Examples in this article refer to both CLI v1 and CLI v2 versions. If no version is specified for a command, it will work with either the v1 or CLI v2. The machine learning CLI v2 is currently in public preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads.
+> Examples in this article refer to both CLI v1 and CLI v2. If no version is specified for a command, it works with either CLI v1 or CLI v2.
## Prerequisites
az ml workspace create -w <workspace-name>
--container-registry "/subscriptions/<service-GUID>/resourceGroups/<resource-group-name>/providers/Microsoft.ContainerRegistry/registries/<acr-name>"
```
-# [Bring existing resources (CLI v2 - preview)](#tab/bringexistingresources2)
+# [Bring existing resources (CLI v2)](#tab/bringexistingresources2)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
az ml workspace create -w <workspace-name>
For more details on how to use these commands, see the [CLI reference pages](/cli/azure/ml(v1)/workspace).
-# [CLI v2 - preview](#tab/vnetpleconfigurationsv2cli)
+# [CLI v2](#tab/vnetpleconfigurationsv2cli)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
az ml workspace create -w <workspace-name>
--hbi-workspace
```
-# [CLI v2 - preview](#tab/vnetpleconfigurationsv2cli)
+# [CLI v2](#tab/vnetpleconfigurationsv2cli)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
To get information about a workspace, use the following command:
az ml workspace show -w <workspace-name> -g <resource-group-name>
```
-# [CLI v2 - preview](#tab/workspaceupdatev2)
+# [CLI v2](#tab/workspaceupdatev2)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
To update a workspace, use the following command:
az ml workspace update -w <workspace-name> -g <resource-group-name>
```
-# [CLI v2 - preview](#tab/workspaceupdatev2)
+# [CLI v2](#tab/workspaceupdatev2)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
If you change access keys for one of the resources used by your workspace, it ta
az ml workspace sync-keys -w <workspace-name> -g <resource-group-name>
```
-# [CLI v2 - preview](#tab/workspacesynckeysv2)
+# [CLI v2](#tab/workspacesynckeysv2)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
To delete a workspace after it is no longer needed, use the following command:
az ml workspace delete -w <workspace-name> -g <resource-group-name>
```
-# [CLI v2 - preview](#tab/workspacedeletev2)
+# [CLI v2](#tab/workspacedeletev2)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
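Taken together, the CLI v2 tab snippets above cover the full workspace lifecycle. A condensed sketch using placeholder names (`my-workspace` and `my-resource-group` are examples, not values from this article); every command accepts `-w` for the workspace name and `-g` for the resource group:

```azurecli
# Create a workspace (CLI v2)
az ml workspace create -w my-workspace -g my-resource-group

# Inspect its current configuration
az ml workspace show -w my-workspace -g my-resource-group

# Update settings, then resync keys after rotating a dependent resource's credentials
az ml workspace update -w my-workspace -g my-resource-group
az ml workspace sync-keys -w my-workspace -g my-resource-group

# Remove the workspace when it's no longer needed
az ml workspace delete -w my-workspace -g my-resource-group
```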
machine-learning How To Create Attach Compute Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-attach-compute-cluster.md
Last updated 05/02/2022
[!INCLUDE [cli v1](../../../includes/machine-learning-cli-v1.md)]

> [!div class="op_single_selector" title1="Select the Azure Machine Learning CLI version you are using:"]
-> * [v1](how-to-create-attach-compute-cluster.md)
-> * [v2 (preview)](../how-to-create-attach-compute-cluster.md)
+> * [CLI v1](how-to-create-attach-compute-cluster.md)
+> * [CLI v2 (current version)](../how-to-create-attach-compute-cluster.md)
Learn how to create and manage a [compute cluster](../concept-compute-target.md#azure-machine-learning-compute-managed) in your Azure Machine Learning workspace.
In this article, learn how to:
This article covers only the CLI v1 way to accomplish these tasks. To see how to use the SDK, CLI v2, or studio, see [Create an Azure Machine Learning compute cluster (CLI v2)](../how-to-create-attach-compute-cluster.md)
+> [!NOTE]
+> This article covers only how to do these tasks using CLI v1. For more recent ways to manage a compute cluster, see [Create an Azure Machine Learning compute cluster](../how-to-create-attach-compute-cluster.md).
+
## Prerequisites

* An Azure Machine Learning workspace. For more information, see [Create an Azure Machine Learning workspace](../how-to-manage-workspace.md).
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-manage-compute-instance.md
Last updated 05/02/2022
[!INCLUDE [cli v1](../../../includes/machine-learning-cli-v1.md)]
-Learn how to create and manage a [compute instance](../concept-compute-instance.md) in your Azure Machine Learning workspace with CLI v1.
+> [!div class="op_single_selector" title1="Select the Azure Machine Learning CLI version you are using:"]
+> * [CLI v1](how-to-create-manage-compute-instance.md)
+> * [CLI v2 (current version)](../how-to-create-manage-compute-instance.md)
+
+Learn how to create and manage a [compute instance](../concept-compute-instance.md) in your Azure Machine Learning workspace with CLI v1.
Use a compute instance as your fully configured and managed development environment in the cloud. For development and testing, you can also use the instance as a [training compute target](../concept-compute-target.md#train) or for an [inference target](../concept-compute-target.md#deploy). A compute instance can run multiple jobs in parallel and has a job queue. As a development environment, a compute instance can't be shared with other users in your workspace.
managed-grafana Grafana App Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/grafana-app-ui.md
A Grafana dashboard is a collection of [panels](#panels) arranged in rows and co
## Next steps

> [!div class="nextstepaction"]
-> [How to share an Azure Managed Grafana Preview workspace](./how-to-share-grafana-workspace.md)
+> [How to share an Azure Managed Grafana Preview instance](./how-to-share-grafana-workspace.md)
managed-grafana How To Api Calls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-api-calls.md
In this article, you'll learn how to call Grafana APIs within Azure Managed Graf
## Prerequisites

- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).
-- An Azure Managed Grafana workspace. If you don't have one yet, [create a workspace](./quickstart-managed-grafana-portal.md).
+- An Azure Managed Grafana instance. If you don't have one yet, [create an Azure Managed Grafana instance](./quickstart-managed-grafana-portal.md).
## Sign in to Azure

Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.com/) with your Azure account.
-## Assign roles to the service principal of your application and of your Azure Managed Grafana Preview workspace
+## Assign roles to the service principal of your application and of your Azure Managed Grafana Preview instance
-1. Start by [Creating an Azure AD application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md). This guide takes you through creating an application and assigning a role to its service principal. For simplicity, use an application located in the same Azure Active Directory (Azure AD) tenant as your Grafana workspace.
-1. Assign the role of your choice to the service principal for your Grafana resource. Refer to [How to share a Managed Grafana workspace](how-to-share-grafana-workspace.md) to learn how to grant access to a Grafana instance. Instead of selecting a user, select **Service principal**.
+1. Start by [Creating an Azure AD application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md). This guide takes you through creating an application and assigning a role to its service principal. For simplicity, use an application located in the same Azure Active Directory (Azure AD) tenant as your Grafana instance.
+1. Assign the role of your choice to the service principal for your Grafana resource. Refer to [How to share a Managed Grafana instance](how-to-share-grafana-workspace.md) to learn how to grant access to a Grafana instance. Instead of selecting a user, select **Service principal**.
## Get an access token
curl -X GET \
https://<grafana-url>/api/user ```
-Replace `<access-token>` with the access token retrieved in the previous step and replace `<grafana-url>` with the URL of your Grafana instance. For example `https://grafanaworkspace-abcd.cuse.grafana.azure.com`. This URL is displayed in the Azure platform, in the **Overview** page of your Managed Grafana workspace.
+Replace `<access-token>` with the access token retrieved in the previous step and replace `<grafana-url>` with the URL of your Grafana instance. For example `https://grafanaworkspace-abcd.cuse.grafana.azure.com`. This URL is displayed in the Azure platform, in the **Overview** page of your Managed Grafana instance.
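As a sketch of the two steps above, you can fetch a token and call the API from a shell. This is a hedged example, not text from the article: the `--resource` value for the token request is left as a placeholder because this article does not state it, and `<grafana-url>` is the endpoint from your instance's **Overview** page.

```azurecli
# Get an access token (the token resource URL is a placeholder to fill in)
TOKEN=$(az account get-access-token --resource <token-resource-url> --query accessToken --output tsv)

# Call the Grafana API with the bearer token
curl -X GET -H "Authorization: Bearer $TOKEN" https://<grafana-url>/api/user
```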
:::image type="content" source="media/managed-grafana-how-to-api-endpoint.png" alt-text="Screenshot of the Azure platform. Endpoint displayed in the Overview page.":::

## Next steps

> [!div class="nextstepaction"]
-> [Grafana UI](./grafana-app-ui.md)
+> [Grafana UI](./grafana-app-ui.md)
managed-grafana How To Data Source Plugins Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-data-source-plugins-managed-identity.md
Last updated 3/31/2022
## Prerequisites

- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).
-- An Azure Managed Grafana workspace. If you don't have one yet, [create a workspace](./how-to-permissions.md).
+- An Azure Managed Grafana instance. If you don't have one yet, [create an Azure Managed Grafana instance](./how-to-permissions.md).
- A resource including monitoring data with Managed Grafana monitoring permissions. Read [how to configure permissions](how-to-permissions.md) for more information.

## Sign in to Azure
Other data sources include:
- [TestData DB](https://grafana.com/docs/grafana/latest/datasources/testdata/)
- [Zipkin](https://grafana.com/docs/grafana/latest/datasources/zipkin/)
-You can find all available Grafana data sources by going to your workspace and selecting this page from the left menu: **Configuration** > **Data sources** > **Add a data source** . Search for the data source you need from the available list. For more information about data sources, go to [Data sources](https://grafana.com/docs/grafana/latest/datasources/) on the Grafana Labs website.
+You can find all available Grafana data sources by going to your resource and selecting this page from the left menu: **Configuration** > **Data sources** > **Add a data source**. Search for the data source you need from the available list. For more information about data sources, go to [Data sources](https://grafana.com/docs/grafana/latest/datasources/) on the Grafana Labs website.
:::image type="content" source="media/managed-grafana-how-to-source-plugins.png" alt-text="Screenshot of the Add data source page.":::

## Default configuration for Azure Monitor
-The Azure Monitor data source is automatically added to all new Managed Grafana resources. To review or modify its configuration, follow these steps in your workspace endpoint:
+The Azure Monitor data source is automatically added to all new Managed Grafana resources. To review or modify its configuration, follow these steps in your Managed Grafana endpoint:
1. From the left menu, select **Configuration** > **Data sources**.

:::image type="content" source="media/managed-grafana-how-to-source-configuration.png" alt-text="Screenshot of the Add data sources page.":::
-1. Azure Monitor should be listed as a built-in data source for your workspace. Select **Azure Monitor**.
+1. Azure Monitor should be listed as a built-in data source for your Managed Grafana instance. Select **Azure Monitor**.
1. In **Settings**, authenticate through **Managed Identity** and select your subscription from the dropdown list, or enter your **App Registration** details.

:::image type="content" source="media/managed-grafana-how-to-source-configuration-Azure-Monitor-settings.png" alt-text="Screenshot of the Azure Monitor page in data sources.":::
-Authentication and authorization are subsequently made through the provided managed identity. With Managed Identity, you can assign permissions for your Managed Grafana workspace to access Azure Monitor data without having to manually manage service principals in Azure Active Directory (Azure AD).
+Authentication and authorization are subsequently made through the provided managed identity. With Managed Identity, you can assign permissions for your Managed Grafana instance to access Azure Monitor data without having to manually manage service principals in Azure Active Directory (Azure AD).
## Next steps

> [!div class="nextstepaction"]
> [Modify access permissions to Azure Monitor](./how-to-permissions.md)
-> [Share an Azure Managed Grafana workspace](./how-to-share-grafana-workspace.md)
+> [Share an Azure Managed Grafana instance](./how-to-share-grafana-workspace.md)
managed-grafana How To Monitor Managed Grafana Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-monitor-managed-grafana-workspace.md
Title: 'How to monitor your workspace with logs in Azure Managed Grafana Preview'
-description: Learn how to monitor your workspace in Azure Managed Grafana Preview with logs
+ Title: 'How to monitor your Azure Managed Grafana Preview instance with logs'
+description: Learn how to monitor your Azure Managed Grafana Preview instance with logs.
Last updated 3/31/2022
-# How to monitor your workspace with logs in Azure Managed Grafana Preview
+# How to monitor your Azure Managed Grafana Preview instance with logs
-In this article, you'll learn how to monitor an Azure Managed Grafana Preview workspace by configuring diagnostic settings and accessing event logs.
+In this article, you'll learn how to monitor an Azure Managed Grafana Preview instance by configuring diagnostic settings and accessing event logs.
## Prerequisites

- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).
-- An Azure Managed Grafana workspace with access to at least one data source. If you don't have a workspace yet, [create an Azure Managed Grafana workspace](./how-to-permissions.md) and [add a data source](how-to-data-source-plugins-managed-identity.md).
+- An Azure Managed Grafana instance with access to at least one data source. If you don't have a Managed Grafana instance yet, [create an Azure Managed Grafana instance](./how-to-permissions.md) and [add a data source](how-to-data-source-plugins-managed-identity.md).
## Sign in to Azure
Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.
## Add diagnostic settings
-To monitor an Azure Managed Grafana workspace, the first step to take is to configure diagnostic settings. In this process, you'll configure the streaming export of your workspace's logs to a destination of your choice.
+To monitor an Azure Managed Grafana instance, the first step to take is to configure diagnostic settings. In this process, you'll configure the streaming export of your instance's logs to a destination of your choice.
You can create up to five different diagnostic settings to send different logs to independent destinations.
-1. Open a Managed Grafana workspace, and go to **Diagnostic settings**, under **Monitoring**
+1. Open a Managed Grafana resource, and go to **Diagnostic settings**, under **Monitoring**
:::image type="content" source="media/managed-grafana-monitoring-diagnostic-overview.png" alt-text="Screenshot of the Azure platform. Diagnostic settings.":::
You can create up to five different diagnostic settings to send different logs t
| Destination | Description | Settings |
|-|-|-|
| Log Analytics workspace | Send data to a Log Analytics workspace | Select the **subscription** containing an existing Log Analytics workspace, then select the **Log Analytics workspace** |
- | Storage account | Archive data to a storage account | Select the **subscription** containing an existing storage account, then select the **storage account**. Only storage accounts in the same region as the Grafana workspace are displayed in the dropdown menu. |
- | Event hub | Stream to an event hub | Select a **subscription** and an existing Azure Event Hub **namespace**. Optionally also choose an existing **event hub**. Lastly, choose an **event hub policy** from the list. Only event hubs in the same region as the Grafana workspace are displayed in the dropdown menu. |
+ | Storage account | Archive data to a storage account | Select the **subscription** containing an existing storage account, then select the **storage account**. Only storage accounts in the same region as the Grafana instance are displayed in the dropdown menu. |
+ | Event hub | Stream to an event hub | Select a **subscription** and an existing Azure Event Hub **namespace**. Optionally also choose an existing **event hub**. Lastly, choose an **event hub policy** from the list. Only event hubs in the same region as the Grafana instance are displayed in the dropdown menu. |
| Partner solution | Send to a partner solution | Select a **subscription** and a **destination**. For more information about available destinations, go to [partner destinations](../azure-monitor/partners.md). |

:::image type="content" source="media/managed-grafana-monitoring-settings.png" alt-text="Screenshot of the Azure platform. Diagnostic settings configuration.":::
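The destinations in the table above can also be configured from the command line. A hedged sketch using `az monitor diagnostic-settings create` to stream logs to a Log Analytics workspace; the resource IDs are placeholders, and the log category name is an assumption to verify against the categories your instance actually exposes:

```azurecli
az monitor diagnostic-settings create \
  --name grafana-logs \
  --resource <managed-grafana-resource-id> \
  --workspace <log-analytics-workspace-id> \
  --logs '[{"category": "GrafanaLoginEvents", "enabled": true}]'  # category name is an assumption
```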
You can create up to five different diagnostic settings to send different logs t
Now that you've configured your diagnostic settings, Azure will stream all new events to your selected destinations and generate logs. You can now create queries and access logs to monitor your application.
-1. In your Managed Grafana workspace, select **Logs** from the left menu. The Azure platform displays a **Queries** page, with suggestions of queries to choose from.
+1. In your Managed Grafana instance, select **Logs** from the left menu. The Azure platform displays a **Queries** page, with suggestions of queries to choose from.
:::image type="content" source="media/managed-grafana-monitoring-logs-menu.png" alt-text="Screenshot of the Azure platform. Open Logs.":::
Now that you've configured your diagnostic settings, Azure will stream all new e
> [!div class="nextstepaction"]
> [Grafana UI](./grafana-app-ui.md)
-> [How to share an Azure Managed Grafana workspace](./how-to-share-grafana-workspace.md)
+> [How to share an Azure Managed Grafana instance](./how-to-share-grafana-workspace.md)
managed-grafana How To Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-permissions.md
Title: How to modify access permissions to Azure Monitor
-description: Learn how to manually set up permissions that allow your Azure Managed Grafana Preview workspace to access a data source
+description: Learn how to manually set up permissions that allow your Azure Managed Grafana Preview instance to access a data source
Previously updated : 3/31/2022 Last updated : 6/10/2022

# How to modify access permissions to Azure Monitor
-By default, when a Grafana workspace is created, Azure Managed Grafana grants it the Monitoring Reader role for all Azure Monitor data and Log Analytics resources within a subscription.
+By default, when a Grafana instance is created, Azure Managed Grafana grants it the Monitoring Reader role for all Azure Monitor data and Log Analytics resources within a subscription.
-This means that the new Grafana workspace can access and search all monitoring data in the subscription, including viewing the Azure Monitor metrics and logs from all resources, and any logs stored in Log Analytics workspaces in the subscription.
+This means that the new Grafana instance can access and search all monitoring data in the subscription, including viewing the Azure Monitor metrics and logs from all resources, and any logs stored in Log Analytics workspaces in the subscription.
In this article, you'll learn how to manually edit permissions for a specific resource.

## Prerequisites

- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).
-- An Azure Managed Grafana workspace. If you don't have one yet, [create a workspace](./quickstart-managed-grafana-portal.md).
+- An Azure Managed Grafana instance. If you don't have one yet, [create an Azure Managed Grafana instance](/azure/managed-grafana/quickstart-managed-grafana-portal).
- An Azure resource with monitoring data and write permissions, such as [User Access Administrator](../../articles/role-based-access-control/built-in-roles.md#user-access-administrator) or [Owner](../../articles/role-based-access-control/built-in-roles.md#owner)

## Sign in to Azure

Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.com/) with your Azure account.
-## Edit Azure Monitor permissions for an Azure Managed Grafana workspace
+## Edit Azure Monitor permissions
To change permissions for a specific resource, follow these steps:
To change permissions for a specific resource, follow these steps:
:::image type="content" source="media/permissions/permissions-members.png" alt-text="Screenshot of the Azure platform selecting members.":::
-1. Select the **Subscription** containing your Managed Grafana workspace
+1. Select the **Subscription** containing your Managed Grafana instance
1. Select a **Managed identity** from the options in the dropdown list
-1. Select your Managed Grafana workspace from the list.
+1. Select the Managed Grafana instance from the list.
1. Click **Select** to confirm
- :::image type="content" source="media/permissions/permissions-managed-identities.png" alt-text="Screenshot of the Azure platform selecting the workspace.":::
+ :::image type="content" source="media/permissions/permissions-managed-identities.png" alt-text="Screenshot of the Azure platform selecting the instance.":::
1. Click **Next**, then **Review + assign** to confirm the application of the new permission
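The portal steps above can also be approximated from the command line. A sketch, assuming you already have the Grafana instance's managed-identity principal ID (the role name comes from this article; the scope and IDs below are placeholder examples):

```azurecli
# Assign Monitoring Reader on a specific scope to the Grafana managed identity
az role assignment create \
  --assignee <managed-identity-principal-id> \
  --role "Monitoring Reader" \
  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group-name>
```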
For more information about how to use Managed Grafana with Azure Monitor, go to
## Next steps

> [!div class="nextstepaction"]
-> [How to configure data sources for Azure Managed Grafana](./how-to-data-source-plugins-managed-identity.md)
+> [How to configure data sources for Azure Managed Grafana](./how-to-data-source-plugins-managed-identity.md)
managed-grafana How To Share Grafana Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-share-grafana-workspace.md
Title: How to share an Azure Managed Grafana Preview workspace
+ Title: How to share an Azure Managed Grafana Preview instance
description: 'Azure Managed Grafana: learn how you can share access permissions and dashboards with your team and customers.'
Last updated 3/31/2022
-# How to share an Azure Managed Grafana Preview workspace
+# How to share an Azure Managed Grafana Preview instance
-A DevOps team may build dashboards to monitor and diagnose an application or infrastructure that it manages. Likewise, a support team may use a Grafana monitoring solution for troubleshooting customer issues. In these scenarios, multiple users will be accessing one Grafana workspace. Azure Managed Grafana enables such sharing by allowing you to set the custom permissions on a workspace that you own. This article explains what permissions are supported and how to grant permissions to share dashboards with your internal teams or external customers.
+A DevOps team may build dashboards to monitor and diagnose an application or infrastructure that it manages. Likewise, a support team may use a Grafana monitoring solution for troubleshooting customer issues. In these scenarios, multiple users will be accessing one Grafana instance. Azure Managed Grafana enables such sharing by allowing you to set the custom permissions on an instance that you own. This article explains what permissions are supported and how to grant permissions to share dashboards with your internal teams or external customers.
## Prerequisites

- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).
-- An Azure Managed Grafana workspace. If you don't have one yet, [create a workspace](./how-to-permissions.md).
+- An Azure Managed Grafana instance. If you don't have one yet, [create a Managed Grafana instance](./how-to-permissions.md).
## Supported Grafana roles

Azure Managed Grafana supports the Admin, Viewer and Editor roles:

-- The Admin role provides full control of the workspace including viewing, editing, and configuring data sources.
-- The Editor role provides read-write access to the dashboards in the workspace
-- The Viewer role provides read-only access to dashboards in the workspace.
+- The Admin role provides full control of the instance including viewing, editing, and configuring data sources.
+- The Editor role provides read-write access to the dashboards in the instance.
+- The Viewer role provides read-only access to dashboards in the instance.
-The Admin role is automatically assigned to the creator of a Grafana workspace. More details on Admin, Editor, and Viewer roles can be found at [Grafana organization roles](https://grafana.com/docs/grafana/latest/permissions/organization_roles/#compare-roles).
+The Admin role is automatically assigned to the creator of a Grafana instance. More details on Admin, Editor, and Viewer roles can be found at [Grafana organization roles](https://grafana.com/docs/grafana/latest/permissions/organization_roles/#compare-roles).
Grafana user roles and assignments are fully integrated with Azure Active Directory (Azure AD). You can add any Azure AD user or security group to a Grafana role and grant them access permissions associated with that role. You can manage these permissions from the Azure portal or the command line. This section explains how to assign users to the Viewer or Editor role in the Azure portal.
Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.
## Assign an Admin, Viewer or Editor role to a user
-1. Open your Managed Grafana workspace.
+1. Open your Managed Grafana instance.
1. Select **Access control (IAM)** in the navigation menu.
1. Click **Add**, then **Add role assignment**
Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.
> [!div class="nextstepaction"]
> [How to configure data sources for Azure Managed Grafana](./how-to-data-source-plugins-managed-identity.md)
> [How to modify access permissions to Azure Monitor](./how-to-permissions.md)
-> [How to call Grafana APIs in your automation with Azure Managed Grafana](./how-to-api-calls.md)
+> [How to call Grafana APIs in your automation with Azure Managed Grafana](./how-to-api-calls.md)
managed-grafana Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/overview.md
You can create dashboards instantaneously by importing existing charts directly
## Next steps

> [!div class="nextstepaction"]
-> [Create a workspace in Azure Managed Grafana Preview using the Azure portal](./quickstart-managed-grafana-portal.md).
+> [Create an Azure Managed Grafana Preview instance using the Azure portal](./quickstart-managed-grafana-portal.md)
managed-grafana Quickstart Managed Grafana Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/quickstart-managed-grafana-cli.md
Title: 'Quickstart: create a workspace in Azure Managed Grafana Preview using the Azure CLI'
-description: Learn how to create a Managed Grafana workspace using the Azure CLI
+ Title: 'Quickstart: create an Azure Managed Grafana Preview instance using the Azure CLI'
+description: Learn how to create a Managed Grafana instance using the Azure CLI
Previously updated : 05/11/2022 Last updated : 06/10/2022
ms.devlang: azurecli
-# Quickstart: Create a workspace in Azure Managed Grafana Preview using the Azure CLI
+# Quickstart: Create an Azure Managed Grafana Preview instance using the Azure CLI
-This quickstart describes how to use the Azure Command-Line Interface (CLI) to create a new workspace in Azure Managed Grafana Preview.
+Get started by creating an Azure Managed Grafana Preview workspace using the Azure CLI. Creating a workspace will generate a Managed Grafana instance.
> [!NOTE]
> The CLI experience for Azure Managed Grafana Preview is part of the amg extension for the Azure CLI (version 2.30.0 or higher). The extension will automatically install the first time you run an `az grafana` command.
Run the code below to create an Azure Managed Grafana workspace.
| Parameter | Description | Example |
|--|--|-|
-| --name | Choose a unique name for your new Managed Grafana workspace. | *grafana-test* |
-| --location | Choose an Azure Region where Managed Grafana is available. | *eastus* |
+| --name | Choose a unique name for your new Managed Grafana instance. | *grafana-test* |
+| --location | Choose an Azure Region where Managed Grafana is available. | *eastus* |
```azurecli
az grafana create --name <managed-grafana-resource-name> --resource-group <resource-group-name>
```
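If you'd rather not scroll back through the create output later, you can read the instance details back with `az grafana show`, which is part of the same amg extension. The `properties.endpoint` query path below is an assumption to verify against the command's actual JSON output:

```azurecli
# Print just the instance endpoint URL (query path is an assumption)
az grafana show --name <managed-grafana-resource-name> --resource-group <resource-group-name> --query properties.endpoint --output tsv
```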
-Once the deployment is complete, you'll see a note in the output of the command line stating that instance was successfully created, alongside with additional information about the deployment.
+Once the deployment is complete, you'll see a note in the output of the command line stating that the instance was successfully created, along with additional information about the deployment.
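As a sketch (assuming the amg extension's `az grafana show` command and its `properties.endpoint` output field), you can also retrieve the endpoint URL directly from the CLI:

```azurecli
az grafana show --name <managed-grafana-resource-name> \
    --resource-group <resource-group-name> \
    --query "properties.endpoint" --output tsv
```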
-## Open your new Managed Grafana dashboard
+## Access your new Managed Grafana instance
-Now let's check if you can access your new Managed Grafana dashboard.
+Now let's check if you can access your new Managed Grafana instance.
1. Take note of the **endpoint** URL ending by `eus.grafana.azure.com`, listed in the CLI output.
-1. Open a browser and enter the endpoint URL. You should now see your Azure Managed Grafana Dashboard. From there, you can finish setting up your Grafana installation.
+1. Open a browser and enter the endpoint URL. You should now see your Azure Managed Grafana instance. From there, you can finish setting up your Grafana installation.
> [!NOTE]
-> If creating a Grafana workspace fails the first time, please try again. The failure might be due to a limitation in our backend, and we are actively working to fix.
+> If creating a Grafana instance fails the first time, please try again. The failure might be due to a limitation in our backend, which we are actively working to fix.
## Clean up resources
-If you're not going to continue to use this workspace, delete the Azure resources you created.
+If you're not going to continue to use this instance, delete the Azure resources you created.
`az group delete -n <resource-group-name> --yes`
If you're not going to continue to use this workspace, delete the Azure resource
> [!div class="nextstepaction"] > [How to configure data sources for Azure Managed Grafana](./how-to-data-source-plugins-managed-identity.md)-
managed-grafana Quickstart Managed Grafana Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/quickstart-managed-grafana-portal.md
Title: 'Quickstart: create a workspace in Azure Managed Grafana Preview using the Azure portal'
-description: Learn how to create a Managed Grafana workspace using the Azure portal
+ Title: 'Quickstart: create an Azure Managed Grafana Preview instance using the Azure portal'
+description: Learn how to create a Managed Grafana workspace to generate a new Managed Grafana instance in the Azure portal
Previously updated : 04/18/2022 Last updated : 06/10/2022+
-# Quickstart: Create a workspace in Azure Managed Grafana Preview using the Azure portal
+# Quickstart: Create an Azure Managed Grafana Preview instance using the Azure portal
-Get started by using the Azure portal to create a new workspace in Azure Managed Grafana Preview.
+Get started by creating an Azure Managed Grafana Preview workspace using the Azure portal. Creating a workspace will generate a Managed Grafana instance.
## Prerequisite
An Azure account with an active subscription. [Create an account for free](https
1. Select **Create**.
-1. In the Create Grafana Workspace pane, enter the following settings.
+1. In the **Create Grafana Workspace** pane, enter the following settings.
:::image type="content" source="media/managed-grafana-quickstart-portal-form.png" alt-text="Screenshot of the Azure portal. Create workspace form."::: | Setting | Sample value | Description | ||||
- | Subscription ID | mysubscription | Select the Azure subscription you want to use. |
- | Resource group name | myresourcegroup | Select or create a resource group for your Azure Managed Grafana resources. |
- | Location | East US | Use Location to specify the geographic location in which to host your resource. Choose the location closest to you. |
- | Name | mygrafanaworkspace | Enter a unique resource name. It will be used as the domain name in your workspace URL. |
+ | Subscription ID | *mysubscription* | Select the Azure subscription you want to use. |
+ | Resource group name | *myresourcegroup* | Select or create a resource group for your Azure Managed Grafana resources. |
+ | Location | *East US* | Use Location to specify the geographic location in which to host your resource. Choose the location closest to you. |
+ | Name | *mygrafanaworkspace* | Enter a unique resource name. It will be used as the domain name in your Managed Grafana instance URL. |
-1. Select **Next : Permission >** to access rights for your Grafana dashboard and data sources:
+1. Select **Next : Permission >** to configure access rights for your Grafana instance and data sources:
1. Make sure the **System assigned identity** is set to **On**. The box **Add role assignment to this identity with 'Monitoring Reader' role on target subscription** should also be checked for this Managed Identity to get access to your current subscription. 1. Make sure that you're listed as a Grafana administrator. You can also add more users as administrators at this point or later.
An Azure account with an active subscription. [Create an account for free](https
If you uncheck this option (or if the option grays out for you), someone with the Owner role on the subscription can do the role assignment to give you the Grafana Admin permission. > [!NOTE]
- > If creating a Grafana workspace fails the first time, please try again. The failure might be due to a limitation in our backend, and we are actively working to fix.
+ > If creating a Managed Grafana instance fails the first time, please try again. The failure might be due to a limitation in our backend, which we are actively working to fix.
1. Optionally select **Next : Tags** and add tags to categorize resources. 1. Select **Next : Review + create >** and then **Create**. Your Azure Managed Grafana resource is deploying.
-## Connect to your Managed Grafana workspace
+## Access your Managed Grafana instance
1. Once the deployment is complete, select **Go to resource** to open your resource. :::image type="content" source="media/managed-grafana-quickstart-portal-deployment-complete.png" alt-text="Screenshot of the Azure portal. Message: Your deployment is complete.":::
-1. In the **Overview** tab's Essentials section, note the **Endpoint** URL. Open it to access the newly created Managed Grafana workspace. Single sign-on via Azure Active Directory should have been configured for you automatically. If prompted, enter your Azure account.
+1. In the **Overview** tab's Essentials section, select the **Endpoint** URL. Single sign-on via Azure Active Directory should have been configured for you automatically. If prompted, enter your Azure account.
:::image type="content" source="media/managed-grafana-quickstart-workspace-overview.png" alt-text="Screenshot of the Azure portal. Endpoint URL display.":::
- :::image type="content" source="media/managed-grafana-quickstart-portal-grafana-workspace.png" alt-text="Screenshot of a Managed Grafana dashboard.":::
+ :::image type="content" source="media/managed-grafana-quickstart-portal-grafana-workspace.png" alt-text="Screenshot of a Managed Grafana instance.":::
You can now start interacting with the Grafana application to configure data sources, create dashboards, reporting and alerts.
managed-grafana Troubleshoot Managed Grafana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/troubleshoot-managed-grafana.md
+
+ Title: 'Troubleshoot Azure Managed Grafana'
+description: Troubleshoot Azure Managed Grafana issues related to fetching data, managing Managed Grafana dashboards, speed and more.
++++ Last updated : 05/30/2022++
+# Troubleshoot issues for Azure Managed Grafana
+
+This article guides you through troubleshooting errors with Azure Managed Grafana and suggests solutions to resolve them.
+
+## Access right alerts are displayed when creating the workspace
+
+When creating a Managed Grafana instance from the Azure portal, the user gets an alert in the **Basics** tab: **You might not have enough access right at below subscription or resource group to enable all features, please see next 'Permission' tab for details.**
++
+In the **Permissions** tab, another alert is displayed: **You must be a subscription 'Owner' or 'User Access Administrator' to use this feature.**
+Role assignment controls are disabled.
+
+These alerts are triggered because the user isn't a subscription Administrator or Owner. The following consequences occur when the user creates the workspace:
+
+- The user won't get the "Grafana Admin" role for the new Grafana instance
+- The system-assigned managed identity of this Grafana instance won't get the Monitoring Reader role.
+
+### Solution 1: proceed and get admin help
+
+Proceed with the creation of the Managed Grafana workspace. Note that you won't be able to use the Managed Grafana instance until your subscription admin assigns you the Grafana Admin, Grafana Editor, or Grafana Viewer role.
+
+### Solution 2: select another subscription
+
+The user can select another subscription in the **Basics** tab, one for which they're an admin or an owner. The banner will then disappear.
+
+## Azure Managed Grafana instance creation fails
+
+An error is displayed when the user creates a Managed Grafana instance from the Azure portal.
+
+### Solution 1: edit the instance name
+
+If you get an error while filling out the form to create the Managed Grafana instance, you may have given an invalid name to your Grafana instance.
++
+Enter a name that:
+
+- Is unique in the entire Azure region. It can't already be used by another user.
+- Is 30 characters long or fewer
+- Begins with a letter. The rest can only be alphanumeric characters or hyphens, and the name must end with an alphanumeric character.
+
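As an illustrative sketch (not part of the official guidance), the naming rules above can be pre-checked locally with a small shell function; uniqueness within the Azure region still has to be verified by Azure itself:

```shell
# Hypothetical helper: check a candidate name against the documented rules.
is_valid_grafana_name() {
  name="$1"
  # 30 characters or fewer
  [ "${#name}" -le 30 ] || return 1
  # Begins with a letter; alphanumeric or hyphens after that; ends alphanumeric
  printf '%s\n' "$name" | grep -Eq '^[A-Za-z]([A-Za-z0-9-]*[A-Za-z0-9])?$'
}

is_valid_grafana_name "grafana-test" && echo "valid"
```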
+### Solution 2: review deployment error
+
+1. Review the Managed Grafana deployment details and read the status message.
+
+ :::image type="content" source="media/troubleshoot/troubleshoot-deployment-error.png" alt-text="Screenshot of the Azure platform: Instance deployment error." lightbox="media/troubleshoot/troubleshoot-deployment-error.png":::
+
+1. Take the following action, depending on the error message:
+
+- The status message states that the region isn't supported and provides a list of supported Azure regions. Try deploying a new Managed Grafana instance again. When you try to create a Managed Grafana instance for the first time, the Azure portal may suggest Azure regions that aren't available. These regions won't be displayed on your second try.
+- The status message states that the role assignment update isn't permitted. The user isn't a subscription owner. If the resource deployment succeeded and the role assignment failed, ask someone with Owner or Administrator access control over your subscription to:
+
+ - Assign the Monitoring reader role at the subscription level to the managed identity of the Managed Grafana instance
+ - Assign you a Grafana Admin role for this new Managed Grafana instance
+- If the status message mentions a conflict, then someone may have created another instance with the same name at the same time, or the name check failed earlier, leading to a conflict later on. Delete this instance and create another one with a different name.
+
+## User can't access their Managed Grafana instance
+
+The user has successfully created an Azure Managed Grafana instance but can't access it when going to the endpoint URL.
+
+### Solution 1: use an Azure AD account
+
+Managed Grafana doesn't support Microsoft accounts. Sign in with an Azure AD account.
+
+### Solution 2: check the provisioning state
+
+If you get a page with an error message such as "can't reach this page", stating that the page took too long to respond, follow the process below:
+
+ :::image type="content" source="media/troubleshoot/troubleshoot-generic-browser-error.png" alt-text="Screenshot of a browser: can't reach page.":::
+
+1. In the Azure platform, open your instance and go to the **Overview** page. Make sure that the **Provisioning State** is **Succeeded** and that all other fields in the **Essentials** section are populated. If everything seems good, continue to follow the process below. Otherwise, delete and recreate an instance.
+
+ :::image type="content" source="media/troubleshoot/troubleshoot-healthy-instance.png" alt-text="Screenshot of the Azure platform. Overview - Essentials.":::
+
+1. If you saw several browser redirects and then landed on a generic browser error page as shown above, there's a failure in the backend.
+
+1. If you have a firewall blocking outbound traffic, allow access to your instance's URL ending in `grafana.azure.com` and to Azure AD.
+
+### Solution 3: fix access role issues
+
+If you get an error page stating "No Roles Assigned":
+
+ :::image type="content" source="media/troubleshoot/troubleshoot-no-roles-assigned.png" alt-text="Screenshot of the browser. No roles assigned.":::
+
+This issue can happen if:
+
+1. You didn't have permission to add a Grafana Admin role for yourself. Refer to [Access right alerts are displayed when creating the workspace](#access-right-alerts-are-displayed-when-creating-the-workspace) for more information.
+
+1. You used the CLI, an ARM template, or another means other than the Azure portal to create the workspace. Only the Azure portal will automatically add you as a Grafana Admin. In all other cases, you must manually give yourself the Grafana Admin role.
+ 1. In your Grafana workspace, select **Access control (IAM) > Add role assignment** to add this role assignment. You must have the Administrator or Owner access role for the subscription or Managed Grafana resource to make this role assignment. Ask your administrator to assist you if you don't have sufficient access.
+ 1. Your account is a foreign account: the Grafana instance isn't registered in your home tenant.
+ 1. If you recently addressed this problem and have been assigned a sufficient Grafana role, you may need to wait for some time before the cookie expires and gets refreshed. This process normally takes 5 minutes. If in doubt, delete all cookies or start a private browser session to force a fresh cookie with the new role information.
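When the portal flow isn't available, the missing role can also be granted from the CLI. This is a sketch under assumptions: it assumes the built-in role is named `Grafana Admin`, and `<user-object-id>` and `<grafana-resource-id>` are placeholders for your Azure AD user's object ID and the full resource ID of the Managed Grafana instance:

```azurecli
az role assignment create \
    --assignee <user-object-id> \
    --role "Grafana Admin" \
    --scope <grafana-resource-id>
```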
+
+## Azure Managed Grafana dashboard panel doesn't display any data
+
+One or several Managed Grafana dashboard panels show no data.
+
+### Solution: review your dashboard settings
+
+Context: Grafana dashboards are set up to fetch new data periodically. If the dashboard is refreshed too often for the underlying query to load, the panel will be stuck without ever being able to load and display data.
+
+1. Check how frequently the dashboard is configured to refresh data:
+ 1. In your dashboard, go to **Dashboard settings**.
+ 1. In the general settings, lower the **Auto refresh** rate of the dashboard to be no faster than the time the query takes to load.
+1. When a query takes too long to retrieve data, Grafana automatically times out dependency calls that take longer than a certain threshold, for example, 30 seconds. Check that there are no unusual slowdowns on the query's end.
+
+## Azure Monitor can't fetch data
+
+Every Grafana instance comes pre-configured with an Azure Monitor data source. When trying to use a pre-provisioned dashboard, the user finds that the Azure Monitor data source can't fetch data.
+
+### Solution: review your Azure Monitor Data settings
+
+1. Find a pre-provisioned dashboard by opening your Managed Grafana endpoint and selecting **Dashboards** > **Browse**. Then select a dashboard, for example **Azure Monitor** > **Azure App monitoring - Application Insights**.
+1. Make sure the dropdowns near the top are populated with a subscription, resource group and resource name. In the screenshot example below, the **Resource** dropdown is set to null. In this case, select a resource name. You may need to select another resource group that contains a type of resource the dashboard was designed for. In this example, you need to pick a resource group that has an Application Insights resource.
+
+ :::image type="content" source="media/troubleshoot/troubleshoot-dashboard-resource.png" alt-text="Screenshot of the Managed Grafana workspace: Checking dashboard information.":::
+
+1. Open the Azure Monitor data source setup page:
+
+ 1. In your Managed Grafana endpoint, select **Configurations** in the left menu and select **Data Sources**.
+ 1. Select **Azure Monitor**
+
+1. If the data source uses Managed Identity, then:
+
+ 1. Select the **Load Subscriptions** button to make a quick test. If **Default Subscription** is populated with your subscription, Managed Grafana can access Azure Monitor within this subscription. If not, then there are permission issues.
+
+ :::image type="content" source="media/troubleshoot/troubleshoot-load-subscriptions.png" alt-text="Screenshot of the Managed Grafana workspace: Load subscriptions.":::
+
+ 1. Check if the system assigned managed identity option is turned on in the Azure portal. If not, turn it on manually:
+ 1. Open your Managed Grafana instance in the Azure portal.
+ 1. In the left menu, under **Settings**, select **Identity**.
+ 1. Set **Status** to **On** and select **Save**.
+
+ :::image type="content" source="media/troubleshoot/troubleshoot-managed-identity.png" alt-text="Screenshot of the Azure platform: Turn on system-assigned managed identity." lightbox="media/troubleshoot/troubleshoot-managed-identity-expanded.png":::
+
+ 1. Check if the managed identity has the Monitoring Reader role assigned to the Managed Grafana instance. If not, add it manually from the Azure portal:
+ 1. Open your Managed Grafana instance in the Azure portal.
+ 1. In the left-menu, under **Settings**, select **Identity**.
+ 1. Select **Azure role assignments**.
+ 1. There should be a **Monitoring Reader** role displayed, assigned to your Managed Grafana instance. If not, select Add role assignment and add the **Monitoring Reader** role.
+
+ :::image type="content" source="media/troubleshoot/troubleshoot-add-role-assignment.png" alt-text="Screenshot of the Azure platform: Adding role assignment.":::
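The same role assignment can also be scripted. A hedged sketch, assuming `az grafana show` exposes the system-assigned identity's principal ID at `identity.principalId`:

```azurecli
principalId=$(az grafana show --name <managed-grafana-resource-name> \
    --resource-group <resource-group-name> \
    --query "identity.principalId" --output tsv)

az role assignment create \
    --assignee "$principalId" \
    --role "Monitoring Reader" \
    --scope /subscriptions/<subscription-id>
```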
+
+1. If the data source uses an **App Registration** authentication:
+ 1. In your Grafana endpoint, go to **Configurations > Data Sources > Azure Monitor** and check if the information for **Directory (tenant) ID** and **Application (client) ID** is correct.
+ 1. Check if the service principal has the Monitoring Reader role assigned to the Managed Grafana instance. If not, add it manually from the Azure portal.
+ 1. If needed, reapply the Client Secret
+
+ :::image type="content" source="media/troubleshoot/troubleshoot-azure-monitor-app-registration.png" alt-text="Screenshot of the Managed Grafana workspace: Check app registration authentication details.":::
+
+## Azure Data Explorer can't fetch data
+
+The Azure Data Explorer data source can't fetch data.
+
+### Solution: review your Azure Data Explorer settings
+
+1. Find a pre-provisioned dashboard by opening your Managed Grafana endpoint and selecting **Dashboards** > **Browse**. Then select a dashboard, for example **Azure Monitor** > **Azure Data Explorer Cluster Resource Insights**.
+1. Make sure the dropdowns near the top are populated with a data source, subscription, resource group, namespace, resource, and workspace name. In the screenshot example below, we chose a resource group that doesn't contain any Data Explorer cluster. In this case, select another resource group that contains a Data Explorer cluster.
+
+ :::image type="content" source="media/troubleshoot/troubleshoot-dashboard-data-explorer.png" alt-text="Screenshot of the Managed Grafana workspace: Checking dashboard information for Azure Data Explorer.":::
+
+1. Check the Azure Data Explorer data source and see how authentication is set up. You can currently only set up authentication for Azure Data Explorer through Azure Active Directory (Azure AD).
+1. In your Grafana endpoint, go to **Configurations > Data Sources > Azure Data Explorer**
+1. Check if the information listed for **Azure cloud**, **Cluster URL**, **Directory (tenant) ID**, **Application (client) ID**, and **Client secret** is correct. If needed, create a new key to add as a client secret.
+1. At the top of the page, you can find instructions guiding you through the process to grant necessary permissions to this Azure AD app to read the Azure Data Explorer database.
+1. Make sure that your Azure Data Explorer instance doesn't have a firewall that blocks access to Managed Grafana. The Azure Data Explorer database needs to be exposed to the public internet.
+
+## Dashboard import fails
+
+The user gets an error when importing a dashboard from the gallery or a JSON file. An error message appears: **The dashboard has been changed by someone else**.
+
+### Solution: edit dashboard name or UID
+
+Most of the time this error occurs because the user is trying to import a dashboard that has the same name or unique identifier (UID) as another dashboard.
+
+To check if your Managed Grafana instance already has a dashboard with the same name:
+
+1. In your Grafana endpoint, select **Dashboards** from the navigation menu on the left and then **Browse**.
+1. Review dashboard names.
+
+ :::image type="content" source="media/troubleshoot/troubleshoot-dashboards-list.png" alt-text="Screenshot of the browser. Dashboard: browse.":::
+
+1. Rename the old or the new dashboard.
+1. You can also edit the UID of a JSON dashboard before importing it by editing the field named **uid** in the JSON file.
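As an unofficial sketch, the `uid` field can be rewritten before import with a few lines of Python run from the shell; the file name `dashboard.json` and its content are hypothetical:

```shell
# Create a minimal example of an exported dashboard (hypothetical content)
cat > dashboard.json <<'EOF'
{"uid": "abc123", "title": "My dashboard"}
EOF

# Give the copy a new uid so the import no longer collides
python3 - <<'EOF'
import json

with open("dashboard.json") as f:
    dashboard = json.load(f)

dashboard["uid"] = "abc123-copy"

with open("dashboard.json", "w") as f:
    json.dump(dashboard, f, indent=2)
EOF
```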
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Configure data sources](./how-to-data-source-plugins-managed-identity.md)
migrate How To Set Up Appliance Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-set-up-appliance-vmware.md
If validation fails, you can select a **Failed** status to see the validation er
:::image type="content" source="./media/tutorial-discover-vmware/add-server-credentials-multiple.png" alt-text="Screenshot that shows providing and validating multiple credentials.":::
+> [!NOTE]
+> Ensure that the following special characters are not passed in any credentials as they are not supported for SSO passwords:
+> - Non-ASCII characters. [Learn more](https://en.wikipedia.org/wiki/ASCII).
+> - Ampersand (&)
+> - Semicolon (;)
+> - Double quotation mark (")
+> - Single quotation mark (')
+> - Circumflex (^)
+> - Backslash (\\)
+> - Percentage (%)
+> - Angle brackets (<,>)
+> - Pound (£)
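As an illustrative, unofficial sketch, a credential can be screened for these unsupported characters before it's entered in the appliance:

```shell
# Hypothetical pre-check: returns non-zero if the password contains a
# character that the note above lists as unsupported for SSO passwords.
is_valid_sso_password() {
  pw="$1"
  # Reject non-ASCII characters (this also covers the pound sign, £)
  if LC_ALL=C printf '%s' "$pw" | grep -q '[^ -~]'; then return 1; fi
  # Reject the listed ASCII characters: & ; " ' ^ \ % < >
  if printf '%s' "$pw" | grep -q '[&;"'\''^\\%<>]'; then return 1; fi
  return 0
}

is_valid_sso_password 'Str0ngPassw0rd!' && echo "ok"
```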
+ ### Start discovery To start vCenter Server discovery, in **Step 3: Provide server credentials to perform software inventory, agentless dependency analysis, discovery of SQL Server instances and databases and discovery of ASP.NET web apps in your VMware environment.**, select **Start discovery**. After the discovery is successfully initiated, you can check the discovery status by looking at the vCenter Server IP address or FQDN in the sources table.
migrate Migrate Support Matrix Vmware Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-vmware-migration.md
The table summarizes agentless migration requirements for VMware VMs.
**Simultaneous replication** | Up to 300 simultaneously replicating VMs per vCenter Server with 1 appliance. Up to 500 simultaneously replicating VMs per vCenter Server when an additional [scale-out appliance](./how-to-scale-out-for-migration.md) is deployed. **Automatic installation of Azure VM agent (Windows and Linux Agent)** | Supported for Windows Server 2008 R2 onwards. <br/> Supported for RHEL6, RHEL7, CentOS7, Ubuntu 14.04, Ubuntu 16.04, Ubuntu18.04, Ubuntu 19.04, Ubuntu 19.10, Ubuntu 20.04.
+> [!NOTE]
+> Ensure that the following special characters are not passed in any credentials as they are not supported for SSO passwords:
+> - Non-ASCII characters. [Learn more](https://en.wikipedia.org/wiki/ASCII).
+> - Ampersand (&)
+> - Semicolon (;)
+> - Double quotation mark (")
+> - Single quotation mark (')
+> - Circumflex (^)
+> - Backslash (\\)
+> - Percentage (%)
+> - Angle brackets (<,>)
+> - Pound (£)
+ > [!Note] > In addition to the Internet connectivity, for Linux VMs, ensure that the following packages are installed for successful installation of Microsoft Azure Linux agent (waagent):
->- Python 2.6+
->- OpenSSL 1.0+
->- OpenSSH 5.3+
->- Filesystem utilities: sfdisk, fdisk, mkfs, parted
->- Password tools: chpasswd, sudo
->- Text processing tools: sed, grep
->- Network tools: ip-route
+> - Python 2.6+
+> - OpenSSL 1.0+
+> - OpenSSH 5.3+
+> - Filesystem utilities: sfdisk, fdisk, mkfs, parted
+> - Password tools: chpasswd, sudo
+> - Text processing tools: sed, grep
+> - Network tools: ip-route
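As an unofficial convenience, the presence of these tools can be checked on a Linux VM with a small shell loop before installing the agent:

```shell
# Hypothetical helper: print any of the given commands that are missing
# from PATH on this VM (an empty result means everything was found).
check_tools() {
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
  done
}

# Utilities the waagent note above calls out
check_tools sfdisk fdisk mkfs parted chpasswd sudo sed grep ip openssl
```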
> [!TIP] > Using the Azure portal you'll be able to select up to 10 VMs at a time to configure replication. To replicate more VMs you can use the portal and add the VMs to be replicated in multiple batches of 10 VMs, or use the Azure Migrate PowerShell interface to configure replication. Ensure that you don't configure simultaneous replication on more than the maximum supported number of VMs for simultaneous replications.
mysql Concepts Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-workbooks.md
Azure Database for MySQL Flexible Server has three available templates:
* List top 5 longest queries * Summarize slow queries by minimum, maximum, average, and standard deviation query time
-You can also edit and customize these templates according to your requirements. For more information, see [Azure Monitor workbooks overview](../../azure-monitor/visualize/workbooks-overview.md#editing-mode).
+You can also edit and customize these templates according to your requirements. For more information, see [Azure Workbooks](../../azure-monitor/visualize/workbooks-overview.md).
## Access the workbook templates
You can also display the list of templates by going to the **Public Templates**
## Next steps-- Learn about [access control](../../azure-monitor/visualize/workbooks-access-control.md) in Azure Monitor workbooks.-- Learn more about [visualization options](../../azure-monitor/visualize/workbooks-overview.md#visualizations) in Azure Monitor workbooks.
+- Learn about [Azure workbooks access control](../../azure-monitor/visualize/workbooks-overview.md#access-control).
+- Learn more about [Azure workbooks visualization options](../../azure-monitor/visualize/workbooks-visualizations.md).
mysql Tutorial Configure Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-configure-audit.md
In the workbook, you can view the following visualizations:
:::image type="content" source="./media/tutorial-configure-audit/audit-summary.png" alt-text="Screenshot of workbook template 'Audit Connection Events'."::: >[!Note]
-> * You can also edit these templates and customize them according to your requirements. For more information, see the "Editing mode" section of the [Azure Monitor workbooks overview](../../azure-monitor/visualize/workbooks-overview.md#editing-mode).
+> * You can also edit these templates and customize them according to your requirements. For more information, see the "Editing mode" section of [Azure Workbooks](../../azure-monitor/visualize/workbooks-overview.md).
> * For a quick view, you can also pin the workbooks or Log Analytics query to your dashboard. For more information, see [Create a dashboard in the Azure portal](../../azure-portal/azure-portal-dashboards.md). The *Administrative Actions on the service* view gives you details on activity performed on the service. It helps to determine the *what, who, and when* for any write operations (PUT, POST, DELETE) that are performed on the resources in your subscription.
mysql Tutorial Deploy Springboot On Aks Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-deploy-springboot-on-aks-vnet.md
az acr login && mvn compile jib:build
We'll now create an AKS cluster in the virtual network *vnet-mysqlaksdemo*.
-In this tutorial, we'll use Azure CNI networking in AKS. If you'd like to configure kubenet networking instead, see [Use kubenet networking in AKS](../../aks/configure-kubenet.md#create-a-service-principal-and-assign-permissions).
+In this tutorial, we'll use Azure CNI networking in AKS. If you'd like to configure kubenet networking instead, see [Use kubenet networking in AKS](../../aks/configure-kubenet.md).
1. Create a subnet *subnet-aks* for the AKS cluster to use.
mysql Tutorial Query Performance Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-query-performance-insights.md
In the workbook, you can view the following visualizations:
>[!Note] > * To view resource utilization, you can use the Overview template.
-> * You can also edit these templates and customize them according to your requirements. For more information, see [Azure Monitor workbooks overview](../../azure-monitor/visualize/workbooks-overview.md#editing-mode).
+> * You can also edit these templates and customize them according to your requirements. For more information, see [Azure Workbooks](../../azure-monitor/visualize/workbooks-overview.md).
> * For a quick view, you can also pin the workbooks or Log Analytics query to your Dashboard. For more information, see [Create a dashboard in the Azure portal](../../azure-portal/azure-portal-dashboards.md). In Query Performance Insight, two metrics that can help you find potential bottlenecks are *duration* and *execution count*. Long-running queries have the greatest potential for locking resources longer, blocking other users, and limiting scalability.
network-watcher Network Watcher Nsg Flow Logging Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-overview.md
Also, when a NSG is deleted, by default the associated flow log resource is dele
- [Azure Container Instances (ACI)](https://azure.microsoft.com/services/container-instances/) - [Logic Apps](https://azure.microsoft.com/services/logic-apps/)
+> [!NOTE]
+> App services deployed in an App Service plan do not support NSG Flow Logs. For more information, see [How regional virtual network integration works](../app-service/overview-vnet-integration.md#how-regional-virtual-network-integration-works).
+ ## Best practices **Enable on critical subnets**: Flow Logs should be enabled on all critical subnets in your subscription as an auditability and security best practice.
openshift Support Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/support-lifecycle.md
See the following guide for the [past Red Hat OpenShift Container Platform (upst
|4.7|February 2021| July 15 2021|4.9 GA| |4.8|July 2021| Sept 15 2021|4.10 GA| |4.9|November 2021| February 1 2022|4.11 GA|
-|4.10|March 2022| May 20 2022|4.12 GA|
+|4.10|March 2022| June 21 2022|4.12 GA|
## FAQ
postgresql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-backup-restore.md
Previously updated : 11/30/2021 Last updated : 06/16/2021 # Backup and restore in Azure Database for PostgreSQL - Flexible Server
PITR is useful in scenarios like these:
- A user accidentally deletes data, a table, or a database. - An application accidentally overwrites good data with bad data because of an application defect.
+- You want to clone your server for test, development, or for data verification.
-With continuous backup of transaction logs, you'll be able to restore to the last transaction. You can choose between two restore options:
+With continuous backup of transaction logs, you'll be able to restore to the last transaction. You can choose between the following restore options:
- **Latest restore point (now)**: This is the default option. It allows you to restore the server to the latest point in time.
- **Custom restore point**: This option allows you to choose any point in time within the retention period defined for this flexible server. By default, the latest time in UTC is automatically selected. Automatic selection is useful if you want to restore to the last committed transaction for test purposes. You can optionally choose other days and times.
-The estimated time to recover depends on several factors, including the volume of transaction logs to process after the previous backup time, and the total number of databases recovering in the same region at the same time. The overall recovery time usually takes from few minutes up to a few hours.
+For latest and custom restore point options, the estimated time to recover depends on several factors, including the volume of transaction logs to process after the previous backup time, and the total number of databases recovering in the same region at the same time. The overall recovery time usually takes from a few minutes up to a few hours.
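The custom restore point option described above can also be driven from the Azure CLI; a minimal sketch, assuming placeholder server and resource-group names and an illustrative UTC timestamp:

```shell
# Point-in-time restore of a flexible server to a new server.
# The source server is left untouched; a new server is created.
az postgres flexible-server restore \
  --resource-group myResourceGroup \
  --name myRestoredServer \
  --source-server mySourceServer \
  --restore-time "2022-06-15T10:00:00Z"
```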
If you've configured your server within a virtual network, you can restore to the same virtual network or to a different virtual network. However, you can't restore to public access. Similarly, if you configured your server with public access, you can't restore to private virtual network access.
If you've configured your server within a virtual network, you can restore to th
> >If you accidentally deleted your server, please reach out to support. In some cases, your server might be restored with or without data loss. + ## Geo-redundant backup and restore (preview) To enable geo-redundant backup from the **Compute + storage** pane in the Azure portal, see the [quickstart guide](./quickstart-create-server-portal.md).
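Geo-redundant backup is selected when the server is created; besides the portal path above, a hedged CLI sketch (names are placeholders, and the feature is in preview in select regions only):

```shell
# Create a flexible server with geo-redundant backup enabled (preview).
az postgres flexible-server create \
  --resource-group myResourceGroup \
  --name myGeoServer \
  --geo-redundant-backup Enabled
```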
postgresql Concepts Compare Single Server Flexible Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-compare-single-server-flexible-server.md
Previously updated : 12/08/2021 Last updated : 06/14/2022 # Comparison chart - Azure Database for PostgreSQL Single Server and Flexible Server
-The following table provides a high-level features and capabilities comparisons between Single Server and Flexible Server. For most new deployments, we recommend using Flexible Server. However, you should consider your own requirements against the comparison table below.
+The following table provides a high-level comparison of features and capabilities between Single Server and Flexible Server. For most new deployments, we recommend using Flexible Server. However, you should consider your own requirements against the comparison table below.
| **Feature / Capability** | **Single Server** | **Flexible Server** | | - | - | - |
The following table provides a high-level features and capabilities comparisons
| **Compute & Storage** | | | | Compute tiers | Basic, General Purpose, Memory Optimized | Burstable, General Purpose, Memory Optimized | | Burstable SKUs | No | Yes |
-| Ability to scale across compute tiers | Cannot scale Basic tier | Yes. Can scale across tiers |
+| Ability to scale across compute tiers | Can't scale Basic tier | Yes. Can scale across tiers |
| Stop/Start | No | Yes (for all compute SKUs). Only compute is stopped/started |
-| Max. Storage size | 1 TB (Basic), 4 TB or 16 TB (GP,MO). Note: Not all regions support 16 TB. | 16 TB |
+| Max. Storage size | 1 TB (Basic), 4 TB or 16 TB (GP, MO). Note: Not all regions support 16 TB. | 16 TB |
| Min storage size | 5 GB (Basic), 100 GB (GP, MO) | 32 GB |
-| Storage auto-grow | Yes (1 GB increments) | No |
-| Max IOPS | Basic - Variable. GP/MO: up to 20K | Up to 20K |
+| Storage auto-grow | Yes (1-GB increments) | No |
+| Max IOPS | Basic - Variable. GP/MO: up to 20,000 | Up to 20,000 |
| **Networking/Security** | | | | Supported networking | Virtual network, private link, public access | Private access (VNET injection in a delegated subnet), public access) | | Public access control | Firewall | Firewall |
The following table provides a high-level features and capabilities comparisons
| Private DNS Zone support | No | Yes | | Ability to move between private and public access | No | No | | TLS support | TLS 1.2 | TLS 1.2, 1.3 enforced|
-| Can turn off SSL | Yes | No |
+| Can turn off SSL | Yes | Yes (set ``require_secure_transport`` to OFF) |
| SCRAM authentication | No | Yes | | **High Availability** | | | | Zone-redundant HA | No | Yes (a synchronous standby is established on another zone within a region) |
The following table provides a high-level features and capabilities comparisons
| Support for PgLogical extension | No | Yes | | Support logical replication with HA | N/A | Limited | | **Disaster Recovery** | | |
-| Cross region DR | Using read replicas, geo-redundant backup | N/A |
+| Cross region DR | Using read replicas, geo-redundant backup | Geo-redundant backup (Preview) in select regions|
| DR using replica | Using async physical replication | N/A | | Automatic failover | No | N/A | | Can use the same r/w endpoint | No | N/A |
The following table provides a high-level features and capabilities comparisons
| Cross-region support | Yes | N/A | | **Maintenance Window** | | | | System scheduled window | Yes | Yes |
-| Customer scheduled window| No | Yes (can choose any 1hr on any day) |
-| Notice period | 3 days | 5 days |
-| Maintenance period | Anytime within 15 hrs window | 1hr window |
+| Customer scheduled window | No | Yes (can choose any one-hour window on any day) |
+| Notice period | Three days | Five days |
+| Maintenance period | Anytime within a 15-hour window | One-hour window |
| **Metrics** | | | | Errors | Failed connections | Failed connections| | Latency | Max lag across replicas, Replica lag | N/A |
The following table provides a high-level features and capabilities comparisons
| PgCron, lo, pglogical | No | Yes | | pgAudit | Preview | Yes | | **Security** | | |
-| Azure Active Directory Support (AAD) | Yes | No |
-| Customer managed encryption key (BYOK) | Yes | No |
+| Azure Active Directory Support (AAD) | Yes | No |
+| Customer managed encryption key (BYOK) | Yes | No |
| SCRAM Authentication (SHA-256) | No | Yes | | Secure Sockets Layer support (SSL) | Yes | Yes | | **Other features** | | | | Alerts | Yes | Yes | | Microsoft Defender for Cloud | Yes | No |
-| Resource health | Yes | No |
+| Resource health | Yes | Yes |
| Service health | Yes | Yes | | Performance insights (iPerf) | Yes | Yes | | Major version upgrades support | No | No |
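The SSL row in the table above notes that Flexible Server turns off enforced SSL through the `require_secure_transport` server parameter; a minimal CLI sketch with placeholder names (disabling SSL is generally not recommended for production):

```shell
# Turn off enforced SSL on a flexible server by setting the
# require_secure_transport server parameter to OFF.
az postgres flexible-server parameter set \
  --resource-group myResourceGroup \
  --server-name myServer \
  --name require_secure_transport \
  --value off
```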
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
Previously updated : 05/11/2022 Last updated : 06/15/2022
Picture below shows transition for VM and storage failure.
:::image type="content" source="./media/overview/overview-azure-postgres-flex-virtualmachine.png" alt-text="Flexible server - VM and storage failures":::
-If zone redundant high availability is configured, the service provisions and maintains a warm standby server across availability zone within the same Azure region. The data changes on the source server is synchronously replicated to the standby server to ensure zero data loss. With zone redundant high availability, once the planned or unplanned failover event is triggered, the standby server comes online immediately and is available to process incoming transactions. This allows the service resiliency from availability zone failure within an Azure region that supports multiple availability zones as shown in the picture below.
+If zone redundant high availability is configured, the service provisions and maintains a warm standby server in another availability zone within the same Azure region. The data changes on the source server are synchronously replicated to the standby server to ensure zero data loss. With zone redundant high availability, once the planned or unplanned failover event is triggered, the standby server comes online immediately and is available to process incoming transactions. This provides resiliency against availability zone failures within an Azure region that supports multiple availability zones, as shown in the picture below.
:::image type="content" source="./media/business-continuity/concepts-zone-redundant-high-availability-architecture.png" alt-text="Zone redundant high availability":::
The flexible server service allows you to stop and start server on-demand to low
The flexible server service uses the FIPS 140-2 validated cryptographic module for storage encryption of data at-rest. Data, including backups, and temporary files created while running queries are encrypted. The service uses the AES 256-bit cipher included in Azure storage encryption, and the keys can be system managed (default). The service encrypts data in-motion with transport layer security (SSL/TLS) enforced by default. The service enforces and supports TLS versions 1.2 only.
-Flexible servers allows full private access to the servers using Azure virtual network (VNet integration). Servers in Azure virtual network can only be reached and connected through private IP addresses. With VNet integration, public access is denied and servers cannot be reached using public endpoints.
+Flexible servers allow full private access to the servers using Azure virtual network (VNet integration). Servers in Azure virtual network can only be reached and connected through private IP addresses. With VNet integration, public access is denied and servers cannot be reached using public endpoints.
## Monitoring and alerting
One advantage of running your workload in Azure is global reach. The flexible se
| Australia Southeast | :heavy_check_mark: | :x: | :x: | | Brazil South | :heavy_check_mark: (v3 only) | :x: | :x: | | Canada Central | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| Canada East | :heavy_check_mark: | :x: | :x: |
| Central India | :heavy_check_mark: | :heavy_check_mark: ** | :heavy_check_mark: | | Central US | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | East Asia | :heavy_check_mark: | :heavy_check_mark: ** | :x: |
One advantage of running your workload in Azure is global reach. The flexible se
| Germany West Central | :heavy_check_mark: | :heavy_check_mark: | :x: | | Japan East | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Japan West | :heavy_check_mark: | :x: | :heavy_check_mark: |
+| Jio India West | :heavy_check_mark: (v3 only)| :x: | :x: |
| Korea Central | :heavy_check_mark: | :heavy_check_mark: ** | :x: | | Korea South | :heavy_check_mark: | :x: | :x: | | North Central US | :heavy_check_mark: | :x: | :x: |
postgresql Quickstart Create Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/quickstart-create-server-cli.md
Create an [Azure resource group](../../azure-resource-manager/management/overvie
az group create --name myresourcegroup --location westus ```
-Create a flexible server with the `az postgres flexible-server create` command. A server can contain multiple databases. The following command creates a server using service defaults and values from your Azure CLI's [local context](/cli/local-context):
+Create a flexible server with the `az postgres flexible-server create` command. A server can contain multiple databases. The following command creates a server using service defaults and values from your Azure CLI's [local context](https://docs.azure.cn/cli/local-context):
```azurecli az postgres flexible-server create
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
Previously updated : 04/14/2022 Last updated : 06/15/2022 # Release notes - Azure Database for PostgreSQL - Flexible Server
Last updated 04/14/2022
This page provides latest news and updates regarding feature additions, engine versions support, extensions, and any other announcements relevant for Flexible Server - PostgreSQL.
-## Release" June 2022
+## Release: June 2022
* Support for [extensions](concepts-extensions.md) PLV8, pgrouting with new servers<sup>$</sup>
-* Version updates for [extension](concepts-extensions.md) PostGIS
+* Version updates for [extension](concepts-extensions.md) PostGIS.
+
+<sup>**$**</sup> New servers get these features automatically. In existing servers, these features are enabled during the server's next scheduled maintenance window.
+
+## Release: May 2022
+
+* Support for [new regions](overview.md#azure-regions) Jio India West, Canada East.
## Release: April 2022
purview Concept Guidelines Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-guidelines-pricing.md
Microsoft Purview, formerly known as Azure Purview, provides a single pane of gl
## Factors impacting Azure Pricing - There are **direct** and **indirect** costs that need to be considered when planning budgets and managing costs.
-## Direct costs
-- Direct costs impacting Microsoft Purview pricing are based on the following three dimensions: - [**Elastic data map**](#elastic-data-map) - [**Automated scanning & classification**](#automated-scanning-classification-and-ingestion) - [**Advanced resource sets**](#advanced-resource-sets)
-### Elastic data map
+## Elastic data map
- The **Data map** is the foundation of the Microsoft Purview governance portal architecture and so needs to be up to date with asset information in the data estate at any given point
Direct costs impacting Microsoft Purview pricing are based on the following thre
- However, the data map scales automatically between the minimal and maximal limits of that elasticity window, to cater to changes in the data map with respect to two key factors - **operation throughput** and **metadata storage**
-#### Operation throughput
+### Operation throughput
- An event driven factor based on the Create, Read, Update, Delete operations performed on the data map - Some examples of the data map operations would be:
Direct costs impacting Microsoft Purview pricing are based on the following thre
- The **burst duration** is the percentage of the month that such bursts (in elasticity) are expected because of growing metadata or higher number of operations on the data map
-#### Metadata storage
+### Metadata storage
- If the number of assets reduces in the data estate, and are then removed in the data map through subsequent incremental scans, the storage component automatically reduces and so the data map scales down
-### Automated scanning, classification, and ingestion
+## Automated scanning, classification, and ingestion
There are two major automated processes that can trigger ingestion of metadata into the Microsoft Purview Data Map: 1. Automatic scans using native [connectors](azure-purview-connector-overview.md). This process includes three main steps:
There are two major automated processes that can trigger ingestion of metadata i
- Ingestion of metadata and lineage into the Microsoft Purview Data Map if the account is connected to any Azure Data Factory or Azure Synapse pipelines.
-#### 1. Automatic scans using native connectors
+### 1. Automatic scans using native connectors
- A **full scan** processes all assets within a selected scope of a data source whereas an **incremental scan** detects and processes assets, which have been created, modified, or deleted since the previous successful scan
There are two major automated processes that can trigger ingestion of metadata i
- Align your scan schedules with Self-Hosted Integration Runtime (SHIR) VMs (Virtual Machines) size to avoid extra costs linked to virtual machines
-#### 2. Automated ingestion using Azure Data Factory and/or Azure Synapse pipelines
+### 2. Automated ingestion using Azure Data Factory and/or Azure Synapse pipelines
- metadata and lineage are ingested from Azure Data Factory or Azure Synapse pipelines every time the pipelines run in the source system.
-### Advanced resource sets
+## Advanced resource sets
- The Microsoft Purview Data Map uses **resource sets** to address the challenge of mapping large numbers of data assets to a single logical resource by providing the ability to scan all the files in the data lake and find patterns (GUID, localization patterns, etc.) to group them as a single asset in the data map
purview How To Data Owner Policies Azure Sql Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policies-azure-sql-db.md
SELECT * FROM sys.dm_server_external_policy_actions
-- Lists the roles that are part of a policy published to this server SELECT * FROM sys.dm_server_external_policy_roles
+-- Lists the links between the roles and actions; can be used to join the two
+SELECT * FROM sys.dm_server_external_policy_role_actions
+
+-- Lists all Azure AD principals that were given connect permissions
+SELECT * FROM sys.dm_server_external_policy_principals
+ -- Lists Azure AD principals assigned to a given role on a given resource scope SELECT * FROM sys.dm_server_external_policy_role_members+
+-- Lists Azure AD principals, joined with roles, joined with their data actions
+SELECT * FROM sys.dm_server_external_policy_principal_assigned_actions
``` ## Additional information
purview Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/overview.md
Title: Introduction to Microsoft Purview (formerly Azure Purview)
-description: This article provides an overview of Microsoft Purview (formerly Azure Purview), including its features and the problems it addresses. Microsoft Purview enables any user to register, discover, understand, and consume data sources.
--
+ Title: Introduction to Microsoft Purview governance solutions
+description: This article is an overview of the solutions that Microsoft Purview provides through the Microsoft Purview governance portal, and describes how they work together to help you manage your on-premises, multi-cloud, and software-as-a-service data.
++ Last updated 05/16/2022
-# What is Microsoft Purview (formerly Azure Purview)?
+# What's available in the Microsoft Purview governance portal?
-Microsoft Purview is a unified data governance service that helps you manage and govern your on-premises, multi-cloud, and software-as-a-service (SaaS) data. Microsoft Purview allows you to:
+Microsoft Purview's solutions in the governance portal provide a unified data governance service that helps you manage your on-premises, multi-cloud, and software-as-a-service (SaaS) data. The Microsoft Purview governance portal allows you to:
- Create a holistic, up-to-date map of your data landscape with automated data discovery, sensitive data classification, and end-to-end data lineage. - Enable data curators to manage and secure your data estate. - Empower data consumers to find valuable, trustworthy data.
route-server Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/overview.md
Azure Route Server simplifies dynamic routing between your network virtual appliance (NVA) and your virtual network. It allows you to exchange routing information directly through Border Gateway Protocol (BGP) routing protocol between any NVA that supports the BGP routing protocol and the Azure Software Defined Network (SDN) in the Azure Virtual Network (VNET) without the need to manually configure or maintain route tables. Azure Route Server is a fully managed service and is configured with high availability. > [!IMPORTANT]
-> Azure Route Servers created before November 1st, 2021, that don't have a public IP address associated, are deployed with the Public preview offering. The public preview offering is not backed by Generally Available SLA and support. To deploy Azure Route Server with the Generally Available offering, and to acheive Generally Available SLA and support, please delete and recreate Route Server.
+> Azure Route Servers created before November 1st, 2021, that don't have a public IP address associated, are deployed with the Public preview offering. The public preview offering is not backed by Generally Available SLA and support. To deploy Azure Route Server with the Generally Available offering, and to achieve Generally Available SLA and support, please delete and recreate your Route Server.
## How does it work?
Azure Route Server simplifies configuration, management, and deployment of your
* You no longer need to update [User-Defined Routes](../virtual-network/virtual-networks-udr-overview.md) manually whenever your NVA announces new routes or withdraw old ones.
-* You can peer multiple instances of your NVA with Azure Route Server. You can configure the BGP attributes in your NVA and, depending on your design (e.g., active-active for performance or active-passive for resiliency), let Azure Route Server know which NVA instance is active or which one is passive.
+* You can peer multiple instances of your NVA with Azure Route Server. You can configure the BGP attributes in your NVA and, depending on your design (for example, active-active for performance or active-passive for resiliency), let Azure Route Server know which NVA instance is active or which one is passive.
* The interface between NVA and Azure Route Server is based on a common standard protocol. As long as your NVA supports BGP, you can peer it with Azure Route Server. For more information, see [Route Server supported routing protocols](route-server-faq.md#protocol).
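Peering an NVA with Azure Route Server can be sketched with the CLI as follows; all names, the peer IP, and the ASN are placeholder assumptions, and the route server is assumed to live in a dedicated `RouteServerSubnet` with a public IP (as the GA offering requires):

```shell
# Create a Route Server in the dedicated RouteServerSubnet
# ($ROUTE_SERVER_SUBNET_ID holds the subnet's resource ID).
az network routeserver create \
  --resource-group myResourceGroup \
  --name myRouteServer \
  --hosted-subnet $ROUTE_SERVER_SUBNET_ID \
  --public-ip-address myRouteServerIP

# Establish a BGP peering with the NVA at 10.0.1.4, AS 65001.
az network routeserver peering create \
  --resource-group myResourceGroup \
  --routeserver myRouteServer \
  --name myNvaPeering \
  --peer-ip 10.0.1.4 \
  --peer-asn 65001
```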
route-server Quickstart Configure Route Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/quickstart-configure-route-server-cli.md
This article helps you configure Azure Route Server to peer with a Network Virtu
:::image type="content" source="media/quickstart-configure-route-server-portal/environment-diagram.png" alt-text="Diagram of Route Server deployment environment using the Azure CLI." border="false"::: > [!IMPORTANT]
-> If you have an Azure Route Server created before September 1st and it doesn't have a public IP address asssociated, you'll need to recreate the Route Server so it can obtain an IP address for management purpose.
+> Azure Route Servers created before November 1st, 2021, that don't have a public IP address associated, are deployed with the Public preview offering. The public preview offering is not backed by Generally Available SLA and support. To deploy Azure Route Server with the Generally Available offering, and to achieve Generally Available SLA and support, please delete and recreate your Route Server.
## Prerequisites
route-server Quickstart Configure Route Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/quickstart-configure-route-server-portal.md
This article helps you configure Azure Route Server to peer with a Network Virtu
:::image type="content" source="media/quickstart-configure-route-server-portal/environment-diagram.png" alt-text="Diagram of Route Server deployment environment using the Azure portal." border="false"::: > [!IMPORTANT]
-> If you have an Azure Route Server created before September 1st and it doesn't have a public IP address asssociated, you'll need to recreate the Route Server so it can obtain an IP address for management purpose.
+> Azure Route Servers created before November 1st, 2021, that don't have a public IP address associated, are deployed with the Public preview offering. The public preview offering is not backed by Generally Available SLA and support. To deploy Azure Route Server with the Generally Available offering, and to achieve Generally Available SLA and support, please delete and recreate your Route Server.
## Prerequisites
This article helps you configure Azure Route Server to peer with a Network Virtu
## Create a Route Server
-### Sign in to your Azure account and select your subscription.
+### Sign in to your Azure account and select your subscription
From a browser, navigate to the [Azure portal](https://portal.azure.com) and sign in with your Azure account.
route-server Quickstart Configure Route Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/quickstart-configure-route-server-powershell.md
This article helps you configure Azure Route Server to peer with a Network Virtu
:::image type="content" source="media/quickstart-configure-route-server-portal/environment-diagram.png" alt-text="Diagram of Route Server deployment environment using the Azure PowerShell." border="false"::: > [!IMPORTANT]
-> If you have an Azure Route Server created before September 1st and it doesn't have a public IP address asssociated, you'll need to recreate the Route Server so it can obtain an IP address for management purpose.
+> Azure Route Servers created before November 1st, 2021, that don't have a public IP address associated, are deployed with the Public preview offering. The public preview offering is not backed by Generally Available SLA and support. To deploy Azure Route Server with the Generally Available offering, and to achieve Generally Available SLA and support, please delete and recreate your Route Server.
## Prerequisites
search Cognitive Search Debug Session https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-debug-session.md
Title: Debug Sessions concepts (preview)
+ Title: Debug Sessions concepts
-description: Debug Sessions, accessed through the Azure portal, provides an IDE-like environment where you can identify and fix errors, validate changes, and push changes to skillsets in the AI enrichment pipeline. Debug Sessions is a preview feature.
+description: Debug Sessions, accessed through the Azure portal, provides an IDE-like environment where you can identify and fix errors, validate changes, and push changes to skillsets in an enrichment pipeline.
Previously updated : 12/30/2021 Last updated : 06/15/2022 # Debug Sessions in Azure Cognitive Search
-Debug Sessions is a visual editor that works with an existing skillset in the Azure portal, exposing the structure and content of a single enriched document, as it's produced by an indexer and skillset, for the duration of the session. Because you are working with a live document, the session is interactive - you can identify errors, modify and invoke skill execution, and validate the results in real time. If your changes resolve the problem, you can commit them to a published skillset to apply the fixes globally.
-
-> [!Important]
-> Debug Sessions is a preview feature provided under [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+Debug Sessions is a visual editor that works with an existing skillset in the Azure portal, exposing the structure and content of a single enriched document, as it's produced by an indexer and skillset for the duration of the session. Because you are working with a live document, the session is interactive - you can identify errors, modify and invoke skill execution, and validate the results in real time. If your changes resolve the problem, you can commit them to a published skillset to apply the fixes globally.
## How a debug session works
-When you start a session, the search service creates a copy of the skillset, indexer, and a data source containing a single document that will be used to test the skillset. All session state will be saved to a container in an Azure Storage account that you provide.
+When you start a session, the search service creates a copy of the skillset, indexer, and a data source containing a single document that will be used to test the skillset. All session state will be saved to a blob container in an Azure Storage account that you provide. You can reuse the same container for all subsequent debug sessions you create. A helpful container name might be "cognitive-search-debug-sessions".
A cached copy of the enriched document and skillset is loaded into the visual editor so that you can inspect the content and metadata of the enriched document, with the ability to check each document node and edit any aspect of the skillset definition. Any changes made within the session are cached. Those changes will not affect the published skillset unless you commit them. Committing changes will overwrite the production skillset.
The visual editor is organized into tabs and panes. This section introduces the
The **Skill Graph** provides a visual hierarchy of the skillset and its order of execution from top to bottom. Skills that are dependent upon the output of other skills are positioned lower in the graph. Skills at the same level in the hierarchy can execute in parallel. Color coded labels of skills in the graph indicate the types of skills that are being executed in the skillset (TEXT or VISION).
-Selecting a skill in the graph will display the details of that instance of the skill in the right pane, including it's definition, errors or warnings, and execution history. The **Skill Graph** is where you will select which skill to debug or enhance. The details pane to the right is where you edit and explore.
+Selecting a skill in the graph will display the details of that instance of the skill in the right pane, including its definition, errors or warnings, and execution history. The **Skill Graph** is where you will select which skill to debug or enhance. The details pane to the right is where you edit and explore.
:::image type="content" source="media/cognitive-search-debug/skills-graph.png" alt-text="Screenshot of Skills Graph tab." border="true":::
Selecting a skill in the graph will display the details of that instance of the
When you select an object in the **Skill Graph**, the adjacent pane provides interactive work areas in a tabbed layout. An illustration of the details pane can be found in the previous screenshot.
-Skill details includes the following areas:
+Skill details include the following areas:
+ **Skill Settings** shows a formatted version of the skill definition. + **Skill JSON Editor** shows the raw JSON document of the definition.
A skill can execute multiple times in a skillset for a single document. For exam
The execution history enables tracking a specific enrichment back to the skill that generated it. Clicking on a skill input navigates to the skill that generated that input, providing a stack-trace like feature. This allows identification of the root cause of a problem that may manifest in a downstream skill.
-When debugging an error with a custom skill, there is the option to generate a request for a skill invocation in the execution history.
+When you debug an error with a custom skill, you can generate a request for a skill invocation in the execution history.
## AI Enrichments tab > Enriched Data Structure
search Cognitive Search How To Debug Skillset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-how-to-debug-skillset.md
Previously updated : 06/02/2022 Last updated : 06/15/2022 # Debug an Azure Cognitive Search skillset in Azure portal
Start a portal-based debug session to identify and resolve errors, validate chan
A debug session is a cached indexer and skillset execution, scoped to a single document, that you can use to edit and test your changes interactively. If you're unfamiliar with how a debug session works, see [Debug sessions in Azure Cognitive Search](cognitive-search-debug-session.md). To practice a debug workflow with a sample document, see [Tutorial: Debug sessions](cognitive-search-tutorial-debug-sessions.md).
-> [!Important]
-> Debug sessions is a preview portal feature, provided under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- ## Prerequisites + An existing enrichment pipeline, including a data source, a skillset, an indexer, and an index.
-+ You must have at least **Contributor** role over the Search service, to be able to run Debug Sessions.
++ A **Contributor** role assignment in the Search service. + An Azure Storage account, used to save session state.
-+ You must have at least **Storage Blob Data Contributor** role assgined over the Storage account.
-
-+ If the Azure Storage account has configured a firewall, you must configure it to [provide access to the Search service](search-indexer-howto-access-ip-restricted.md).
++ A **Storage Blob Data Contributor** role assignment in Azure Storage. ++ If the Azure Storage account is behind a firewall, configure it to [allow Search service access](search-indexer-howto-access-ip-restricted.md). ## Limitations
A Debug Session works with all generally available [indexer data sources](search
+ The MongoDB API (preview) of Cosmos DB is currently not supported.
-+ For the SQL API of Cosmos DB, if a row fails during index and there is no corresponding metadata, the debug session might not pick the correct row.
++ For the SQL API of Cosmos DB, if a row fails during index and there's no corresponding metadata, the debug session might not pick the correct row. + For the SQL API of Cosmos DB, if a partitioned collection was previously non-partitioned, a Debug Session won't find the document. - ## Create a debug session 1. [Sign in to Azure portal](https://portal.azure.com) and find your search service.
Custom skills can be more challenging to debug because the code runs externally.
``` > [!NOTE]
- > By default, Azure Functions are exposed on 7071. Other tools and configurations might require that you provide a different port.
+ > By default, Azure Functions are exposed on port 7071. Other tools and configurations might require that you provide a different port.
1. When ngrok starts, copy and save the public forwarding URL for the next step. The forwarding URL is randomly generated.
Within the debug session, modify your Custom Web API Skill URI to call the ngrok
You can edit the skill definition in the portal.
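For reference, a Custom Web API skill definition rerouted to an ngrok tunnel might look like the following sketch. The host name, description, and input/output fields here are placeholders, not values from this article:

```json
{
  "@odata.type": "#Microsoft.Skills.Custom.WebApiSkill",
  "description": "Custom skill routed through an ngrok tunnel for local debugging",
  "uri": "https://<your-ngrok-id>.ngrok.io/api/my-custom-skill",
  "timeout": "PT30S",
  "context": "/document",
  "inputs": [
    { "name": "text", "source": "/document/content" }
  ],
  "outputs": [
    { "name": "result", "targetName": "customResult" }
  ]
}
```

When the debug session looks good, remember to change the URI back to the deployed endpoint before saving the fix to the production skillset.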
-### Test
+### Test your code
At this point, new requests from your debug session should now be sent to your local Azure Function. You can set breakpoints in Visual Studio Code to debug your code or run it step by step. -
-## Expected behaviors
-
-+ If debugging for a CosmosDB SQL data source, if the CosmosDB SQL collection was previously non-partitioned, and then it was changed to a partitioned collection on the CosmosDB end, Debug Sessions won't be able to pick up the correct document from CosmosDB.
-+ CosmosDB SQL errors omit some metadata about what row failed, so in some cases, Debug Sessions won't pick the correct row.
-- ## Next steps Now that you understand the layout and capabilities of the Debug Sessions visual editor, try the tutorial for a hands-on experience. > [!div class="nextstepaction"]
-> [Tutorial: Explore Debug sessions](./cognitive-search-tutorial-debug-sessions.md)
+> [Tutorial: Explore Debug sessions](./cognitive-search-tutorial-debug-sessions.md)
search Cognitive Search Tutorial Debug Sessions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-tutorial-debug-sessions.md
Title: 'Tutorial: Debug skillsets'
-description: Debug sessions (preview) is an Azure portal tool used to find, diagnose, and repair problems in a skillset.
+description: Debug sessions is an Azure portal tool used to find, diagnose, and repair problems in a skillset.
Previously updated : 12/31/2021 Last updated : 06/15/2022 # Tutorial: Debug a skillset using Debug Sessions
Skillsets coordinate a series of actions that analyze or transform content, wher
In this article, you'll use **Debug sessions** to find and fix missing inputs and outputs. The tutorial is all-inclusive. It provides sample data, a Postman collection that creates objects, and instructions for debugging problems in the skillset.
-> [!Important]
-> Debug sessions is a preview feature provided under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
- ## Prerequisites Before you begin, have the following prerequisites in place:
There are two ways to research this error. The first is to look at where the inp
1. Still in the **Enriched Data Structure**, open the Expression Evaluator **</>** for the "language" node and copy the expression `/document/language`.
-1. In the right pane, select **Skill Settings** for the #1 skill and open the Expression Evaluator **</>** for the input "languageCode."
+1. In the right pane, select **Skill Settings** for the #1 skill and open the Expression Evaluator **</>** for the input "languageCode".
1. Paste the new value, `/document/language` into the Expression box and click **Evaluate**. It should display the correct input "en". 1. Select **Save**.
-1. Select **Run**.
+1. Select **Run**.
-After the debug session execution completes, check the Errors/Warnings tab and it will show that all of the input warnings are gone. There now remains just the two warnings about output fields for organizations and locations.
+After the debug session execution completes, check the Errors/Warnings tab and it will show that all of the input warnings are gone. There now remain just the two warnings about output fields for organizations and locations.
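In the underlying skillset JSON, the repair made above amounts to pointing the skill's "languageCode" input at the enriched "language" node. The following is an illustrative sketch; the skill type and the surrounding fields are assumed rather than taken from the tutorial:

```json
{
  "@odata.type": "#Microsoft.Skills.Text.EntityRecognitionSkill",
  "context": "/document",
  "inputs": [
    { "name": "text", "source": "/document/content" },
    { "name": "languageCode", "source": "/document/language" }
  ],
  "outputs": [
    { "name": "organizations", "targetName": "organizations" },
    { "name": "locations", "targetName": "locations" }
  ]
}
```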
## Fix missing skill output values
search Search Api Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-api-preview.md
Previously updated : 05/27/2022 Last updated : 06/15/2022 + # Preview features in Azure Cognitive Search
-This article is a comprehensive list of all features that are in public preview. Preview functionality is provided without a service level agreement, and is not recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+This article is a comprehensive list of all features that are in public preview. Preview functionality is provided under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/), without a service level agreement, and isn't recommended for production workloads.
-Preview features that transition to general availability are removed from this list. If a feature isn't listed below, you can assume it is generally available. For announcements regarding general availability, see [Service Updates](https://azure.microsoft.com/updates/?product=search) or [What's New](whats-new.md).
+Preview features that transition to general availability are removed from this list. If a feature isn't listed below, you can assume it's generally available or retired. For announcements regarding general availability, see [Service Updates](https://azure.microsoft.com/updates/?product=search) or [What's New](whats-new.md).
|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Category | Description | Availability | |||-||
Preview features that transition to general availability are removed from this l
| [**featuresMode parameter**](/rest/api/searchservice/preview-api/search-documents#query-parameters) | Relevance (scoring) | Relevance score expansion to include details: per field similarity score, per field term frequency, and per field number of unique tokens matched. You can consume these data points in [custom scoring solutions](https://github.com/Azure-Samples/search-ranking-tutorial). | Add this query parameter using [Search Documents](/rest/api/searchservice/preview-api/search-documents), API versions 2021-04-30-Preview, 2020-06-30-Preview, or 2019-05-06-Preview. | | [**Azure Machine Learning (AML) skill**](cognitive-search-aml-skill.md) | AI enrichment (skills) | A new skill type to integrate an inferencing endpoint from Azure Machine Learning. Get started with [this tutorial](cognitive-search-tutorial-aml-custom-skill.md). | Use [Search Preview REST API](/rest/api/searchservice/), API versions 2021-04-30-Preview, 2020-06-30-Preview, or 2019-05-06-Preview. Also available in the portal, in skillset design, assuming Cognitive Search and Azure ML services are deployed in the same subscription. | | [**Incremental enrichment**](cognitive-search-incremental-indexing-conceptual.md) | AI enrichment (skills) | Adds caching to an enrichment pipeline, allowing you to reuse existing output if a targeted modification, such as an update to a skillset or another object, does not change the content. Caching applies only to enriched documents produced by a skillset.| Add this configuration setting using [Create or Update Indexer Preview REST API](/rest/api/searchservice/create-indexer), API versions 2021-04-30-Preview, 2020-06-30-Preview, or 2019-05-06-Preview. |
-| [**Debug Sessions**](cognitive-search-debug-session.md) | Portal, AI enrichment (skills) | An in-session skillset editor used to investigate and resolve issues with a skillset. Fixes applied during a debug session can be saved to a skillset in the service. | Portal only, using mid-page links on the Overview page to open a debug session. |
| [**moreLikeThis**](search-more-like-this.md) | Query | Finds documents that are relevant to a specific document. This feature has been in earlier previews. | Add this query parameter in [Search Documents Preview REST API](/rest/api/searchservice/search-documents) calls, with API versions 2021-04-30-Preview, 2020-06-30-Preview, 2019-05-06-Preview, 2016-09-01-Preview, or 2017-11-11-Preview. | ## How to call a preview REST API
search Search Sku Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-sku-manage-costs.md
There is no meter on the number of queries, query responses, or documents ingest
Data traffic might also incur networking costs. See the [Bandwidth pricing](https://azure.microsoft.com/pricing/details/bandwidth/).
-Several premium features ([Knowledge store](knowledge-store-concept-intro.md), [Debug Sessions](cognitive-search-debug-session.md), [Enrichment cache (preview)](cognitive-search-incremental-indexing-conceptual.md)) have a dependency on Azure Storage. The meters for Azure Storage apply in this case, and the associated storage costs of using these features will be included in the Azure Storage bill.
+Several premium features such as [knowledge store](knowledge-store-concept-intro.md), [Debug Sessions](cognitive-search-debug-session.md), and [enrichment cache](cognitive-search-incremental-indexing-conceptual.md) have a dependency on Azure Storage. The meters for Azure Storage apply in this case, and the associated storage costs of using these features will be included in the Azure Storage bill.
[Customer-managed keys](search-security-manage-encryption-keys.md) provide double encryption of sensitive content. This feature requires a billable [Azure Key Vault](https://azure.microsoft.com/pricing/details/key-vault/).
search Search Synonyms Tutorial Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-synonyms-tutorial-sdk.md
Previously updated : 12/18/2020 Last updated : 06/16/2022 #Customer intent: As a developer, I want to understand synonym implementation, benefits, and tradeoffs.
In `RunQueriesWithNonExistentTermsInIndex`, issue search queries with "five star
Phrase queries, such as "five star", must be enclosed in quotation marks, and might also need escape characters depending on your client.
-```csharp
+```csharp
Console.WriteLine("Search the entire index for the phrase \"five star\":\n"); results = searchClient.Search<Hotel>("\"five star\"", searchOptions); WriteDocuments(results);
After the "before" queries are run, the sample code enables synonyms. Enabling s
After the synonym map is uploaded and the index is updated to use the synonym map, the second `RunQueriesWithNonExistentTermsInIndex` call outputs the following:
-```dos
+```bash
Search the entire index for the phrase "five star": Name: Fancy Stay Category: Luxury Tags: [pool, view, wifi, concierge]
search Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/whats-new.md
Previously updated : 06/07/2022 Last updated : 06/15/2022 + # What's new in Azure Cognitive Search Learn what's new in the service. Bookmark this page to keep up to date with service updates.
Learn what's new in the service. Bookmark this page to keep up to date with serv
* [**Preview features**](search-api-preview.md) is a list of current features that haven't been approved for production workloads. * [**Previous versions**](/previous-versions/azure/search/) is an archive of earlier feature announcements.
+## June 2022
+
+|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Description | Availability |
+||--||
+| [Debug Sessions](cognitive-search-debug-session.md) | Debug sessions, a built-in editor that runs in Azure portal, is now generally available. | Generally available. |
+ ## May 2022 |Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Description | Availability |
security Secure Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/secure-deploy.md
description: This article discusses best practices to consider during the releas
Previously updated : 06/12/2019 Last updated : 06/15/2022
The focus of the release phase is readying a project for public release. This in
### Check your application's performance before you launch
-Check your application's performance before you launch it or deploy updates to production. Run cloud-based [load tests](/azure/load-testing/) by using Visual Studio to find performance problems in your application, improve deployment quality, make sure that your application is always up or available, and that your application can handle traffic for your launch.
+Check your application's performance before you launch it or deploy updates to production. Use Azure Load Testing to run cloud-based [load tests](/azure/load-testing/) to find performance problems in your application, improve deployment quality, make sure that your application is up and available, and verify that it can handle the traffic for your launch.
### Install a web application firewall
security Ransomware Features Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/ransomware-features-resources.md
Keeping your resources safe is a joint effort between your cloud provider, Azure
Microsoft Defender for Cloud is a unified infrastructure security management system that strengthens the security posture of your data centers and provides advanced threat protection across your hybrid workloads in the cloud whether they're in Azure or not - as well as on premises.
-Microsoft Defender for Cloud's threat protection enables you to detect and prevent threats at the Infrastructure as a Service (IaaS) layer, non-Azure servers as well as for Platforms as a Service (PaaS) in Azure.
+Defender for Cloud's threat protection enables you to detect and prevent threats at the Infrastructure as a Service (IaaS) layer, non-Azure servers as well as for Platforms as a Service (PaaS) in Azure.
-Security Center's threat protection includes fusion kill-chain analysis, which automatically correlates alerts in your environment based on cyber kill-chain analysis, to help you better understand the full story of an attack campaign, where it started and what kind of impact it had on your resources.
+Defender for Cloud's threat protection includes fusion kill-chain analysis, which automatically correlates alerts in your environment based on cyber kill-chain analysis, to help you better understand the full story of an attack campaign, where it started and what kind of impact it had on your resources.
Key Features: - Continuous security assessment: Identify Windows and Linux machines with missing security updates or insecure OS settings and vulnerable Azure configurations. Add optional watchlists or events you want to monitor.
sentinel Iot Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/iot-solution.md
The following table describes the out-of-the-box analytics rules provided in the
| **High bandwidth in the network** | An unusually high bandwidth may be an indication of a new service/process on the network, such as backup, or an indication of malicious activity on the network, such as a cyber threat attempting to manipulate the SCADA network. | | **Denial of Service** | This alert detects attacks that would prevent the use or proper operation of the DCS system. | | **Unauthorized remote access to the network** | Unauthorized remote access to the network can compromise the target device. <br><br> This means that if another device on the network is compromised, the target devices can be accessed remotely, increasing the attack surface. |
+| **No traffic on Sensor Detected** | A sensor that no longer detects network traffic indicates that the system may be insecure. |
# [Create and maintain analytics rules manually](#tab/create-and-maintain-analytics-rules-manually)
sentinel Migration Convert Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-convert-dashboards.md
This article describes how to review, plan, and convert your current workbooks t
- **Discover dashboards**. Gather information about your dashboards, including design, parameters, data sources, and other details. Identify the purpose or usage of each dashboard. - **Select**. Don't migrate all dashboards without consideration. Focus on dashboards that are critical and used regularly.-- **Consider permissions**. Consider who are the target users for workbooks. Microsoft Sentinel uses Azure Workbooks, and [access is controlled](../azure-monitor/visualize/workbooks-access-control.md) using Azure Role Based Access Control (RBAC). To create dashboards outside Azure, for example for business execs without Azure access, using a reporting tool such as Power BI.
+- **Consider permissions**. Consider who the target users for workbooks are. Microsoft Sentinel uses Azure Workbooks, and [access is controlled](../azure-monitor/visualize/workbooks-overview.md#access-control) using Azure role-based access control (RBAC). To create dashboards outside Azure, for example for business executives without Azure access, use a reporting tool such as Power BI.
## Prepare for the dashboard conversion
sentinel Deploy Data Connector Agent Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deploy-data-connector-agent-container.md
If you're not using SNC, then your SAP configuration and authentication secrets
1. Run the following command to **Create a VM** in Azure (substitute actual names for the `<placeholders>`): ```azurecli
- az vm create --resource-group <resource group name> --name <VM Name> --image Canonical:0001-com-ubuntu-server-focal:20_04-lts-gen2:latest --admin-username <azureuser> --public-ip-address "" --size Standard_D2as_v5 --generate-ssh-keys --assign-identity
+ az vm create --resource-group <resource group name> --name <VM Name> --image Canonical:0001-com-ubuntu-server-focal:20_04-lts-gen2:latest --admin-username <azureuser> --public-ip-address "" --size Standard_D2as_v5 --generate-ssh-keys --assign-identity --role <role name> --scope <subscription Id>
+ ``` For more information, see [Quickstart: Create a Linux virtual machine with the Azure CLI](../../virtual-machines/linux/quick-create-cli.md).
service-fabric Service Fabric Concepts Partitioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-concepts-partitioning.md
Title: Partitioning Service Fabric services description: Learn how to partition Service Fabric stateless and stateful services Previously updated : 06/30/2017 Last updated : 06/16/2022 # Partition Service Fabric reliable services
As we literally want to have one partition per letter, we can use 0 as the low k
You also need to update the LowKey and HighKey properties of the StatefulService element in the ApplicationManifest.xml as shown below. ```xml
- <Service Name="Processing">
- <StatefulService ServiceTypeName="ProcessingType" TargetReplicaSetSize="[Processing_TargetReplicaSetSize]" MinReplicaSetSize="[Processing_MinReplicaSetSize]">
+ <Service Name="Alphabet.Processing">
+ <StatefulService ServiceTypeName="Alphabet.ProcessingType" TargetReplicaSetSize="[Processing_TargetReplicaSetSize]" MinReplicaSetSize="[Processing_MinReplicaSetSize]">
<UniformInt64Partition PartitionCount="[Processing_PartitionCount]" LowKey="0" HighKey="25" /> </StatefulService>
- </Service>
+ </Service>
``` 6. For the service to be accessible, open up an endpoint on a port by adding the endpoint element of ServiceManifest.xml (located in the PackageRoot folder) for the Alphabet.Processing service as shown below:
site-recovery Azure To Azure How To Enable Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication.md
Enable replication. This procedure assumes that the primary Azure region is East
2. Note the following fields: - **Source**: The point of origin of the VMs, which in this case is **Azure**. - **Source location**: The Azure region from where you want to protect your VMs. For this illustration, the source location is 'East Asia'
+ >[!NOTE]
+ >For cross-regional disaster recovery, the source location should be different from the location of the Recovery Services vault and its resource group. However, it can be the same as either of them for zonal disaster recovery.
+ >
- **Deployment model**: Azure deployment model of the source machines. - **Source subscription**: The subscription to which your source VMs belong. This can be any subscription within the same Azure Active Directory tenant where your recovery services vault exists. - **Resource Group**: The resource group to which your source virtual machines belong. All the VMs under the selected resource group are listed for protection in the next step.
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-support-matrix.md
As average churn on the disks increases, the number of disks that a storage acco
V1 storage account | 600 disks | 300 disks V2 storage account | 1500 disks | 750 disks
-Please note that the above limits are applicable to hybrid DR scenarios only.
+Note that the above limits apply to VMware and Hyper-V scenarios only.
## Vault tasks
spring-cloud Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/faq.md
East US, East US 2, Central US, South Central US, North Central US, West US, Wes
### In which regions is Azure Spring Apps Enterprise tier available?
-East US, East US 2, South Central US, West US 2, West US 3, West Europe, North Europe, UK South, Southeast Asia, Australia East.
+East US, East US 2, Central US, South Central US, North Central US, West US, West US 2, West US 3, West Europe, North Europe, UK South, Southeast Asia, Australia East, Canada Central, UAE North, Central India, Korea Central, East Asia, South Africa North, Brazil South, and France Central.
### Is any customer data stored outside of the specified region?
spring-cloud How To Configure Ingress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-configure-ingress.md
+
+ Title: How to configure ingress for Azure Spring Apps
+description: Describes how to configure ingress for Azure Spring Apps.
++++ Last updated : 05/27/2022+++
+# Customize the ingress configuration in Azure Spring Apps
+
+**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+
+This article shows you how to set and update the ingress configuration in Azure Spring Apps by using the Azure portal and Azure CLI.
+
+The Azure Spring Apps service uses an underlying ingress controller to handle application traffic management. Currently, the following ingress setting is supported for customization.
+
+| Name | Ingress setting | Default value | Valid range | Description |
+|-|--||-|-|
+| ingress-read-timeout | proxy-read-timeout | 300 | \[1,1800\] | The timeout in seconds for reading a response from a proxied server. |
+
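The valid range shown in the table above can be expressed as a small validation helper. This is an illustrative sketch; the function name is hypothetical and not part of the Azure CLI or any SDK:

```python
def validate_ingress_read_timeout(seconds: int) -> int:
    """Check a proposed ingress-read-timeout value against the
    documented valid range [1, 1800] seconds (default 300)."""
    if not 1 <= seconds <= 1800:
        raise ValueError("ingress-read-timeout must be between 1 and 1800 seconds")
    return seconds

print(validate_ingress_read_timeout(300))  # 300
```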
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- [The Azure CLI](/cli/azure/install-azure-cli).
+- The Azure Spring Apps extension. Use the following command to remove previous versions and install the latest extension. If you previously installed the spring-cloud extension, uninstall it to avoid configuration and version mismatches.
+
+ ```azurecli
+ az extension remove --name spring
+ az extension add --name spring
+ az extension remove --name spring-cloud
+ ```
+
+## Set the ingress configuration when creating a service
+
+You can set the ingress configuration when creating a service by using the following CLI command.
+
+```azurecli
+az spring create \
+ --resource-group <resource-group-name> \
+ --name <service-name> \
+ --ingress-read-timeout 300
+```
+
+This command creates a service with the ingress read timeout set to 300 seconds.
+
+## Update the ingress configuration for an existing service
+
+### [Azure portal](#tab/azure-portal)
+
+To update the ingress configuration for an existing service, use the following steps:
+
+1. Sign in to the portal using an account associated with the Azure subscription that contains the Azure Spring Apps instance.
+2. Navigate to the **Networking** pane, then select the **Ingress configuration** tab.
+3. Update the ingress configuration, and then select **Save**.
+
+ :::image type="content" source="media/how-to-configure-ingress/config-ingress-read-timeout.png" lightbox="media/how-to-configure-ingress/config-ingress-read-timeout.png" alt-text="Screenshot of Azure portal example for config ingress read timeout.":::
+
+### [Azure CLI](#tab/azure-cli)
+
+To update the ingress configuration for an existing service, use the following command:
+
+```azurecli
+az spring update \
+ --resource-group <resource-group-name> \
+ --name <service-name> \
+ --ingress-read-timeout 600
+```
+
+This command will update the ingress read timeout to 600 seconds.
+
+## Next steps
+
+- [Learn more about ingress controller](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers)
+- [Learn more about NGINX ingress controller](https://kubernetes.github.io/ingress-nginx)
storage Access Tiers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/access-tiers-overview.md
description: Azure storage offers different access tiers so that you can store y
Previously updated : 05/18/2022 Last updated : 06/16/2022
The following operations are supported for blobs in the Archive tier:
- [Copy Blob](/rest/api/storageservices/copy-blob) - [Delete Blob](/rest/api/storageservices/delete-blob)
+- [Undelete Blob](/rest/api/storageservices/undelete-blob)
- [Find Blobs by Tags](/rest/api/storageservices/find-blobs-by-tags) - [Get Blob Metadata](/rest/api/storageservices/get-blob-metadata) - [Get Blob Properties](/rest/api/storageservices/get-blob-properties)
storage Storage Account Move https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-move.md
Previously updated : 05/11/2020 Last updated : 06/15/2022
Deploy the template to create a new storage account in the target region.
+> [!TIP]
+> If you receive an error which states that the XML specified is not syntactically valid, compare the JSON in your template with the schemas described in the [Azure Resource Manager documentation](/azure/templates/microsoft.storage/allversions).
+ ### Configure the new storage account Some features won't export to a template, so you'll have to add them to the new storage account.
storage File Sync Firewall And Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-firewall-and-proxy.md
You can then use the IP address ranges in `$ipAddressRanges` to update your fire
Once a server is registered with the Azure File Sync service, the Test-StorageSyncNetworkConnectivity cmdlet and ServerRegistration.exe can be used to test communications with all endpoints (URLs) specific to this server. This cmdlet can help troubleshoot when incomplete communication prevents the server from fully working with Azure File Sync and it can be used to fine-tune proxy and firewall configurations.
-To run the network connectivity test, install Azure File Sync agent version 9.1 or later and run the following PowerShell commands:
+To run the network connectivity test, run the following PowerShell commands:
```powershell Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"
storage Storage Files Scale Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-scale-targets.md
There are two main types of storage accounts for Azure Files:
| Maximum request rate (Max IOPS) | <ul><li>20,000, with large file share feature enabled<sup>2</sup></li><li>1,000 or 100 requests per 100 ms, default</li></ul> | <ul><li>Baseline IOPS: 3000 + 1 IOPS per GiB, up to 100,000</li><li>IOPS bursting: Max (10000, 3x IOPS per GiB), up to 100,000</li></ul> | | Throughput (ingress + egress) for a single file share (MiB/sec) | <ul><li>Up to 300 MiB/sec, with large file share feature enabled<sup>2</sup></li><li>Up to 60 MiB/sec, default</li></ul> | 100 + CEILING(0.04 * ProvisionedStorageGiB) + CEILING(0.06 * ProvisionedStorageGiB) | | Maximum number of share snapshots | 200 snapshots | 200 snapshots |
-| Maximum object name length (total pathname including all directories and filename) | 2,048 characters | 2,048 characters |
-| Maximum individual pathname component length (in the path \A\B\C\D, each letter represents a directory or file that is an individual component) | 255 characters | 255 characters |
+| Maximum object name length<sup>3</sup> (full pathname including all directories, file names, and backslash characters) | 2,048 characters | 2,048 characters |
+| Maximum length of individual pathname component<sup>3</sup> (in the path \A\B\C\D, each letter represents a directory or file that is an individual component) | 255 characters | 255 characters |
| Hard link limit (NFS only) | N/A | 178 | | Maximum number of SMB Multichannel channels | N/A | 4 | | Maximum number of stored access policies per file share | 5 | 5 |
There are two main types of storage accounts for Azure Files:
<sup>2</sup> Default on standard file shares is 5 TiB, see [Create an Azure file share](./storage-how-to-create-file-share.md) for the details on how to create file shares with 100 TiB size and increase existing standard file shares up to 100 TiB. To take advantage of the larger scale targets, you must change your quota so that it is larger than 5 TiB.
+<sup>3</sup> Azure Files enforces certain [naming rules](/rest/api/storageservices/naming-and-referencing-shares--directories--files--and-metadata#directory-and-file-names) for directory and file names.
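As a worked example, the premium file share formulas quoted in the table above can be evaluated for a given provisioned size. This is an illustrative sketch; the function names are hypothetical:

```python
import math

def premium_share_baseline_iops(provisioned_gib: int) -> int:
    # Baseline IOPS: 3000 + 1 IOPS per provisioned GiB, capped at 100,000.
    return min(3000 + provisioned_gib, 100_000)

def premium_share_throughput_mib_per_sec(provisioned_gib: int) -> int:
    # Throughput (ingress + egress) in MiB/sec:
    # 100 + CEILING(0.04 * ProvisionedStorageGiB) + CEILING(0.06 * ProvisionedStorageGiB)
    return (100
            + math.ceil(0.04 * provisioned_gib)
            + math.ceil(0.06 * provisioned_gib))

# For a 10 TiB (10,240 GiB) provisioned share:
print(premium_share_baseline_iops(10240))           # 13240
print(premium_share_throughput_mib_per_sec(10240))  # 1125
```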
+ ### File scale targets | Attribute | Files in standard file shares | Files in premium file shares | |-|-|-|
synapse-analytics Apache Spark Azure Portal Add Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-azure-portal-add-libraries.md
Title: Package management
-description: Learn how to add and manage libraries used by Apache Spark in Azure Synapse Analytics.
-
+ Title: Manage Apache Spark packages
+description: Learn how to add and manage libraries used by Apache Spark in Azure Synapse Analytics. Libraries provide reusable code for use in your programs or projects.
+ - Previously updated : 03/01/2020-+ Last updated : 06/08/2022+ + # Manage libraries for Apache Spark in Azure Synapse Analytics
-Libraries provide reusable code that you may want to include in your programs or projects.
-You may need to update your serverless Apache Spark pool environment for various reasons. For example, you may find that:
-- one of your core dependencies released a new version.-- you need an extra package for training your machine learning model or preparing your data.-- you have found a better package and no longer need the older package.-- your team has built a custom package that you need available in your Apache Spark pool.
+Libraries provide reusable code that you might want to include in your programs or projects.
-To make third party or locally built code available to your applications, you can install a library onto one of your serverless Apache Spark pools or notebook session.
+You might need to update your serverless Apache Spark pool environment for various reasons. For example, you might find that:
+
+- One of your core dependencies released a new version.
+- You need an extra package for training your machine learning model or preparing your data.
+- You have found a better package and no longer need the older package.
+- Your team has built a custom package that you need available in your Apache Spark pool.
+
+To make third-party or locally built code available to your applications, install a library onto one of your serverless Apache Spark pools or into a notebook session.
## Default Installation
-Apache Spark in Azure Synapse Analytics has a full Anacondas install plus extra libraries. The full libraries list can be found at [Apache Spark version support](apache-spark-version-support.md).
-When a Spark instance starts up, these libraries will automatically be included. Additional packages can be added at the Spark pool level or session level.
+Apache Spark in Azure Synapse Analytics has a full Anaconda install plus extra libraries. The full libraries list can be found at [Apache Spark version support](apache-spark-version-support.md).
+
+When a Spark instance starts, these libraries are included automatically. More packages can be added at the Spark pool level or session level.
## Workspace packages
-When developing custom applications or models, your team may develop various code artifacts like wheel or jar files to package your code.
-In Synapse, workspace packages can be custom or private wheel or jar files. You can upload these packages to your workspace and later assign them to a specific Spark pool. Once assigned, these workspace packages are automatically installed on all Spark pool sessions.
+When your team develops custom applications or models, you might develop various code artifacts like *.whl* or *.jar* files to package your code.
+
+In Synapse, workspace packages can be custom or private *.whl* or *.jar* files. You can upload these packages to your workspace and later assign them to a specific Spark pool. Once assigned, these workspace packages are installed automatically on all Spark pool sessions.
-To learn more about how to manage workspace libraries, visit the following how-to guides:
+To learn more about how to manage workspace libraries, see the following articles:
-- [Python workspace packages: ](./apache-spark-manage-python-packages.md#install-wheel-files) Upload Python wheel files as a workspace package and later add these packages to specific serverless Apache Spark pools.-- [Scal#workspace-packages) Upload Scala and Java jar files as a workspace package and later add these packages to specific serverless Apache Spark pools.
+- [Python workspace packages: ](./apache-spark-manage-python-packages.md#install-wheel-files) Upload Python *.whl* files as a workspace package and later add these packages to specific serverless Apache Spark pools.
+- [Scal#workspace-packages) Upload Scala and Java *.jar* files as a workspace package and later add these packages to specific serverless Apache Spark pools.
## Pool packages
-In some cases, you may want to standardize the set of packages that are used on a given Apache Spark pool. This standardization can be useful if the same packages are commonly installed by multiple people on your team.
-Using the Azure Synapse Analytics pool management capabilities, you can configure the default set of libraries that you would like installed on a given serverless Apache Spark pool. These libraries are installed on top of the [base runtime](./apache-spark-version-support.md).
+In some cases, you might want to standardize the packages that are used on an Apache Spark pool. This standardization can be useful if the same packages are commonly installed by multiple people on your team.
-Currently, pool management is only supported for Python. For Python, Synapse Spark pools use Conda to install and manage Python package dependencies. When specifying your pool-level libraries, you can now provide a requirements.txt or an environment.yml. This environment configuration file is used every time a Spark instance is created from that Spark pool.
+Using the Azure Synapse Analytics pool management capabilities, you can configure the default set of libraries to install on a given serverless Apache Spark pool. These libraries are installed on top of the [base runtime](./apache-spark-version-support.md).
-To learn more about these capabilities, visit the documentation on [Python pool management](./apache-spark-manage-python-packages.md#pool-libraries).
+Currently, pool management is only supported for Python. For Python, Synapse Spark pools use Conda to install and manage Python package dependencies. When specifying your pool-level libraries, you can now provide a *requirements.txt* or an *environment.yml* file. This environment configuration file is used every time a Spark instance is created from that Spark pool.
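As an illustration of the pool-level environment file described above, a minimal Conda *environment.yml* sketch might look like the following; the environment name and package pins are hypothetical examples, not a recommended set:

```yaml
name: example-pool-env        # hypothetical environment name
dependencies:
  - numpy=1.21                # illustrative Conda package pins
  - pandas=1.3
  - pip:
      - great-expectations    # pip-only packages go under the pip key
```

Equivalently, a plain *requirements.txt* lists pip packages one per line (for example, `pandas==1.3.0`).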
+
+To learn more about these capabilities, see [Python pool management](./apache-spark-manage-python-packages.md#pool-libraries).
> [!IMPORTANT]
-> - If the package you are installing is large or takes a long time to install, this affects the Spark instance start up time.
+>
> - If the package you're installing is large or takes a long time to install, it affects Spark instance startup time.
> - Altering the PySpark, Python, Scala/Java, .NET, or Spark version is not supported.
> - Installing packages from PyPI is not supported within DEP-enabled workspaces.

## Session-scoped packages
-Often, when doing interactive data analysis or machine learning, you may find that you want to try out newer packages or you may need packages that are not already available on your Apache Spark pool. Instead of updating the pool configuration, users can now use session-scoped packages to add, manage, and update session dependencies.
-Session-scoped packages allow users to define package dependencies at the start of their session. When you install a session-scoped package, only the current session has access to the specified packages. As a result, these session-scoped packages will not impact other sessions or jobs using the same Apache Spark pool. In addition, these libraries are installed on top of the base runtime and pool level packages.
+Often, when doing interactive data analysis or machine learning, you might try newer packages or you might need packages that are currently unavailable on your Apache Spark pool. Instead of updating the pool configuration, users can now use session-scoped packages to add, manage, and update session dependencies.
+
+Session-scoped packages allow users to define package dependencies at the start of their session. When you install a session-scoped package, only the current session has access to the specified packages. As a result, these session-scoped packages don't affect other sessions or jobs using the same Apache Spark pool. In addition, these libraries are installed on top of the base runtime and pool level packages.
-To learn more about how to manage session-scoped packages, visit the following how-to guides:
+To learn more about how to manage session-scoped packages, see the following articles:
-- [Python session packages: ](./apache-spark-manage-python-packages.md) At the start of a session, provide a Conda *environment.yml* to install additional Python packages from popular repositories. -- [Scal) At the start of your session, provide a list of jar files to install using `%%configure`.
+- [Python session packages: ](./apache-spark-manage-python-packages.md) At the start of a session, provide a Conda *environment.yml* to install more Python packages from popular repositories.
+- [Scal) At the start of your session, provide a list of *.jar* files to install using `%%configure`.
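As a sketch of the session-scoped `%%configure` magic mentioned above, a notebook cell at the start of the session might look like this; the storage path is hypothetical, and the exact JSON keys should be checked against the linked guide:

```
%%configure -f
{
    "conf": {
        "spark.jars": "abfss://<container>@<account>.dfs.core.windows.net/libs/example-library.jar"
    }
}
```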
## Next steps
+
+- View the default libraries: [Apache Spark version support](apache-spark-version-support.md)
synapse-analytics Sql Data Warehouse How To Find Queries Running Beyond Wlm Elapsed Timeout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-how-to-find-queries-running-beyond-wlm-elapsed-timeout.md
+
+ Title: Identify queries running beyond workload group query execution timeout
+description: Identify queries that are running beyond the workload groups query execution timeout value.
+ Last updated : 06/13/2022
+# Identify queries running beyond workload group query execution timeout
+
+This article explains how to identify queries that are running beyond the `query_execution_timeout_sec` value configured for their workload group.
+
+Azure Synapse Analytics provides the ability to [create workload groups for isolating resources](sql-data-warehouse-workload-isolation.md) and [classify workloads to the appropriate workload group](sql-data-warehouse-workload-classification.md). As part of the workload group definition, `query_execution_timeout_sec` can be configured to set the maximum execution time, in seconds, allowed before the query is canceled. However, to prevent the return of partial results, queries will not be canceled once they have reached the return phase of execution.
+
+If these queries should be stopped, you can manually kill the session associated with the query. The following sections show how to identify these queries.
+
+## Basic troubleshooting information
+
+To find queries that are running longer than the configured execution timeout and are in the return step phase:
+
+- View the [workload groups configuration](#view-workload-groups-configuration)
+- Find workload group [queries running beyond a specific time](#find-queries-running-beyond-specific-time)
+- Check [query's current execution step](#check-query-execution-step) to see if it is in the return operation step
+
+Alternatively, a [combined query](#find-all-queries-running-beyond-workload-group-execution-time) is provided below that finds all requests in the return step phase that are running longer than the maximum execution time set for the workload group to which they're classified.
+
+Once the queries have been identified, they can [manually be terminated with the KILL command](#manually-terminate-queries).
++
+### View workload groups configuration
+
+#### Azure portal
+
+To view the execution timeout configured for a workload group in the Azure portal:
+
+1. Go to the Azure Synapse workspace under which the dedicated SQL Pool of interest has been created.
+2. The left pane lists all pool types created under the workspace. Select **SQL pools** under the **Analytical pools** section.
+3. Select the dedicated SQL pool of interest.
+4. In the left side pane, select **Workload management** under **Settings**.
+5. Under **Workload groups** section, find the workload group of interest.
+6. Select the context menu (**...**) button on the far right, and then select **Settings**.
++
+#### T-SQL
+
+To view workload groups using T-SQL, [connect to the dedicated SQL pool using SQL Server Management Studio (SSMS)](../sql/get-started-ssms.md) and issue the following query:
+
+```sql
+SELECT * FROM sys.workload_management_workload_groups;
+```
+
+For more information, see [sys.workload_management_workload_groups](/sql/relational-databases/system-catalog-views/sys-workload-management-workload-groups-transact-sql).
++
+### Find queries running beyond specific time
+
+#### T-SQL
+
+To view queries running longer than the configured execution timeout, using the timeout value from the workload group above, issue the following query:
+
+```sql
+DECLARE @GROUP_NAME varchar(128);
+DECLARE @TIMEOUT_VALUE_MS INT;
+
+SET @GROUP_NAME = '<group_name>';
+SET @TIMEOUT_VALUE_MS = <execution_timeout_ms>;
+
+SELECT *
+FROM sys.dm_pdw_exec_requests
+WHERE group_name = @GROUP_NAME AND status = 'Running' AND total_elapsed_time > @TIMEOUT_VALUE_MS
+```
+
+For more information, see [sys.dm_pdw_exec_requests](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-exec-requests-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true).
+
+### Check query execution step
+
+To check if the query is in the return operation step using the request ID from the prior step, issue the following query:
+
+```sql
+DECLARE @REQUEST_ID varchar(20);
+SET @REQUEST_ID = '<request_id>';
+
+SELECT * FROM sys.dm_pdw_request_steps
+WHERE request_id = @REQUEST_ID AND status = 'Running' AND operation_type = 'ReturnOperation'
+ORDER BY step_index;
+```
+
+For more information, see [sys.dm_pdw_request_steps](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-request-steps-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true).
+
+### Find all queries running beyond workload group execution time
+
+To find all the requests that are in the return step phase and are running longer than their workload group's configured execution timeout, issue the following query:
+
+```sql
+SELECT DISTINCT ExecRequests.request_id, ExecRequests.session_id, ExecRequests.total_elapsed_time,
+ ExecRequests.group_name, (WorkloadGroups.query_execution_timeout_sec * 1000) AS GroupQueryExecutionTimeoutMs
+FROM sys.dm_pdw_exec_requests AS ExecRequests
+JOIN sys.workload_management_workload_groups AS WorkloadGroups ON WorkloadGroups.name = ExecRequests.group_name
+JOIN sys.dm_pdw_request_steps AS RequestSteps ON ExecRequests.request_id = RequestSteps.request_id
+WHERE ExecRequests.status = 'Running' AND ExecRequests.total_elapsed_time > (WorkloadGroups.query_execution_timeout_sec * 1000)
+ AND RequestSteps.status = 'Running' AND RequestSteps.operation_type = 'ReturnOperation'
+```
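As an illustrative cross-check (not part of the article), the condition the T-SQL above applies — elapsed milliseconds greater than `query_execution_timeout_sec * 1000` — can be replayed client-side over rows you have already fetched, producing the matching `KILL` statements. The row values and helper name here are made up for the sketch:

```python
# Each tuple mirrors columns from the join above:
# (request_id, session_id, total_elapsed_time in ms, group_name, query_execution_timeout_sec)
rows = [
    ("QID1001", "SID1001", 125_000, "wgDataLoads", 60),   # 125 s elapsed vs 60 s timeout
    ("QID1002", "SID1002",  30_000, "wgDataLoads", 60),   # still within its timeout
    ("QID1003", "SID1003", 400_000, "wgAdHoc",    300),   # 400 s elapsed vs 300 s timeout
]

def overdue_kill_statements(rows):
    """Return KILL statements for requests past their group's execution timeout."""
    return [
        f"KILL '{session_id}'"
        for request_id, session_id, elapsed_ms, group_name, timeout_sec in rows
        if elapsed_ms > timeout_sec * 1000   # same seconds-to-milliseconds conversion as the T-SQL
    ]

print(overdue_kill_statements(rows))
```

Only the first and third rows exceed their timeout, so only their sessions are candidates for KILL.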
+
+### Manually terminate queries
+
+If you want to manually terminate these queries, use the KILL command with the session ID(s) identified above.
+
+```sql
+KILL '<session-id>'
+```
+
+For more information, see [KILL (Transact SQL)](/sql/t-sql/language-elements/kill-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true).
+
+## Next steps
+- For more information on workload classification, see [Workload Classification](sql-data-warehouse-workload-classification.md).
+- For more information on workload importance, see [Workload Importance](sql-data-warehouse-workload-importance.md).
+
synapse-analytics How To Pause Resume Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/how-to-pause-resume-pipelines.md
Evaluate the desired state, Pause or Resume, and the current status, Online, or
1. On the **Activities** tab, copy the code below into the **Expression**.

```HTTP
- @concat(activity('CheckState').output.value[0].properties.status,'-',pipeline().parameters.PauseOrResume)
+ @concat(activity('CheckState').output.properties.status,'-',pipeline().parameters.PauseOrResume)
```

where `CheckState` is the name of the preceding Web activity, `output.properties.status` holds the current status, and `pipeline().parameters.PauseOrResume` indicates the desired state.
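Conceptually, the expression builds a single switch key of the form `<status>-<desired state>`, and the pipeline branches on that key. A hedged Python sketch of the same branching logic (the action strings are illustrative, not pipeline syntax):

```python
def next_action(current_status, desired_state):
    """Mimic the pipeline's concat-based switch: '<status>-<desired>' decides the action."""
    key = f"{current_status}-{desired_state}"   # same shape as the @concat expression
    actions = {
        "Online-Pause":  "pause the dedicated SQL pool",
        "Paused-Resume": "resume the dedicated SQL pool",
    }
    # Any other combination (e.g. already paused and asked to pause) is a no-op.
    return actions.get(key, "no action needed")

print(next_action("Online", "Pause"))
```

Only the two state/desire combinations that require a change trigger an action; everything else falls through harmlessly.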
virtual-desktop Azure Monitor Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/azure-monitor-costs.md
Using the default Pay-as-you-go model for [Log Analytics pricing](https://azure.
This section will explain how to measure and manage data ingestion to reduce costs.
-To learn about managing rights and permissions to the workbook, see [Access control](../azure-monitor/visualize/workbooks-access-control.md).
+To learn about managing rights and permissions to the workbook, see [Access control](../azure-monitor/visualize/workbooks-overview.md#access-control).
>[!NOTE] >Removing data points will impact their corresponding visuals in Azure Monitor for Azure Virtual Desktop.
virtual-machines Dav4 Dasv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dav4-dasv4-series.md
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-The Dav4-series and Dasv4-series are new sizes utilizing AMD's 2.35Ghz EPYC<sup>TM</sup> 7452 processor in a multi-threaded configuration with up to 256 MB L3 cache dedicating 8 MB of that L3 cache to every 8 cores increasing customer options for running their general purpose workloads. The Dav4-series and Dasv4-series have the same memory and disk configurations as the D & Dsv3-series.
+The Dav4-series and Dasv4-series run on 2nd Generation AMD EPYC<sup>TM</sup> 7452 or 3rd Generation EPYC<sup>TM</sup> 7763v processors in a multi-threaded configuration. The Dav4-series and Dasv4-series have the same memory and disk configurations as the D & Dsv3-series.
## Dav4-series
-Dav4-series sizes are based on the 2.35Ghz AMD EPYC<sup>TM</sup> 7452 processor that can achieve a boosted maximum frequency of 3.35GHz. The Dav4-series sizes offer a combination of vCPU, memory and temporary storage for most production workloads. Data disk storage is billed separately from virtual machines. To use premium SSD, use the Dasv4 sizes. The pricing and billing meters for Dasv4 sizes are the same as the Dav4-series.
+The Dav4-series runs on 2nd Generation AMD EPYC<sup>TM</sup> 7452 (up to 3.35GHz) or 3rd Generation EPYC<sup>TM</sup> 7763v processors (up to 3.5GHz). The Dav4-series sizes offer a combination of vCPU, memory and temporary storage for most production workloads. Data disk storage is billed separately from virtual machines. To use premium SSD, use the Dasv4 sizes. The pricing and billing meters for Dasv4 sizes are the same as the Dav4-series.
[ACU](acu.md): 230-260<br> [Premium Storage](premium-storage-performance.md): Not Supported<br>
Dav4-series sizes are based on the 2.35Ghz AMD EPYC<sup>TM</sup> 7452 processor
## Dasv4-series
-Dasv4-series sizes are based on the 2.35Ghz AMD EPYC<sup>TM</sup> 7452 processor that can achieve a boosted maximum frequency of 3.35GHz and use premium SSD. The Dasv4-series sizes offer a combination of vCPU, memory and temporary storage for most production workloads.
+The Dasv4-series runs on 2nd Generation AMD EPYC<sup>TM</sup> 7452 (up to 3.35GHz) or 3rd Generation EPYC<sup>TM</sup> 7763v processors (up to 3.5GHz) and uses premium SSD. The Dasv4-series sizes offer a combination of vCPU, memory and temporary storage for most production workloads.
[ACU](acu.md): 230-260<br> [Premium Storage](premium-storage-performance.md): Supported<br>
virtual-machines Eav4 Easv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/eav4-easv4-series.md
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-The Eav4-series and Easv4-series utilize AMD's 2.35Ghz EPYC<sup>TM</sup> 7452 processor in a multi-threaded configuration with up to 256MB L3 cache, increasing options for running most memory optimized workloads. The Eav4-series and Easv4-series have the same memory and disk configurations as the Ev3 & Esv3-series.
+The Eav4-series and Easv4-series run on 2nd Generation AMD EPYC<sup>TM</sup> 7452 or 3rd Generation EPYC<sup>TM</sup> 7763v processors in a multi-threaded configuration. The Eav4-series and Easv4-series have the same memory and disk configurations as the Ev3 & Esv3-series.
## Eav4-series
The Eav4-series and Easv4-series utilize AMD's 2.35Ghz EPYC<sup>TM</sup> 7452 pr
[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br> <br>
-Eav4-series sizes are based on the 2.35Ghz AMD EPYC<sup>TM</sup> 7452 processor that can achieve a boosted maximum frequency of 3.35GHz. The Eav4-series sizes are ideal for memory-intensive enterprise applications. Data disk storage is billed separately from virtual machines. To use premium SSD, use the Easv4-series sizes. The pricing and billing meters for Easv4 sizes are the same as the Eav3-series.
+The Eav4-series runs on 2nd Generation AMD EPYC<sup>TM</sup> 7452 (up to 3.35GHz) or 3rd Generation EPYC<sup>TM</sup> 7763v processors (up to 3.5GHz). The Eav4-series sizes are ideal for memory-intensive enterprise applications. Data disk storage is billed separately from virtual machines. To use premium SSD, use the Easv4-series sizes. The pricing and billing meters for Easv4 sizes are the same as the Eav3-series.
| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS / Read MBps / Write MBps | Max NICs | Expected network bandwidth (Mbps) | | --|--|--|--|--|--|--|--|
Eav4-series sizes are based on the 2.35Ghz AMD EPYC<sup>TM</sup> 7452 processor
[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br> <br>
-Easv4-series sizes are based on the 2.35Ghz AMD EPYC<sup>TM</sup> 7452 processor that can achieve a boosted maximum frequency of 3.35GHz and use premium SSD. The Easv4-series sizes are ideal for memory-intensive enterprise applications.
+The Easv4-series runs on 2nd Generation AMD EPYC<sup>TM</sup> 7452 (up to 3.35GHz) or 3rd Generation EPYC<sup>TM</sup> 7763v processors (up to 3.5GHz) and uses premium SSD. The Easv4-series sizes are ideal for memory-intensive enterprise applications.
| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS / MBps (cache size in GiB) | Max burst cached and temp storage throughput: IOPS / MBps<sup>1</sup> | Max uncached disk throughput: IOPS / MBps | Max burst uncached disk throughput: IOPS/MBps<sup>1</sup> | Max NICs | Expected network bandwidth (Mbps) | |--|--|--|--|--|--|--|--|--|--|--|
virtual-machines Oms Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/oms-linux.md
Previously updated : 11/02/2021 Last updated : 06/15/2022

# Log Analytics virtual machine extension for Linux
The following table provides a mapping of the version of the Log Analytics VM ex
| Log Analytics Linux VM extension version | Log Analytics Agent bundle version | |--|--|
+| 1.14.16 | [1.14.16](https://github.com/microsoft/OMS-Agent-for-Linux/releases/tag/OMSAgent_v1.14.16-0) |
+| 1.14.13 | [1.14.13](https://github.com/microsoft/OMS-Agent-for-Linux/releases/tag/OMSAgent_v1.14.13-0) |
+| 1.14.11 | [1.14.11](https://github.com/microsoft/OMS-Agent-for-Linux/releases/tag/OMSAgent_v1.14.11-0) |
+| 1.14.9 | [1.14.9](https://github.com/microsoft/OMS-Agent-for-Linux/releases/tag/OMSAgent_v1.14.9-0) |
+| 1.13.40 | [1.13.40](https://github.com/microsoft/OMS-Agent-for-Linux/releases/tag/OMSAgent_v1.13.40-0) |
+| 1.13.35 | [1.13.35](https://github.com/microsoft/OMS-Agent-for-Linux/releases/tag/OMSAgent_v1.13.35-0) |
| 1.13.33 | [1.13.33](https://github.com/microsoft/OMS-Agent-for-Linux/releases/tag/OMSAgent_v1.13.33-0) | | 1.13.27 | [1.13.27](https://github.com/microsoft/OMS-Agent-for-Linux/releases/tag/OMSAgent_v1.13.27-0) | | 1.13.15 | [1.13.9-0](https://github.com/microsoft/OMS-Agent-for-Linux/releases/tag/OMSAgent_v1.13.9-0) |
Extension execution output is logged to the following file:
/opt/microsoft/omsagent/bin/stdout ```
+To retrieve the OMS extension version installed on a VM, run the following command using Azure PowerShell.
+
+```powershell
+Get-AzVMExtension -ResourceGroupName my_resource_group -VMName my_vm_name -Name OmsAgentForLinux -Status
+```
+ ### Error codes and their meanings | Error Code | Meaning | Possible Action |
virtual-machines Compute Benchmark Scores https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/compute-benchmark-scores.md
The following CoreMark benchmark scores show compute performance for select Azur
### Ev4 (03/25/2021 PBIID:9198755)
+
| VM Size | CPU | vCPUs | NUMA Nodes | Memory(GiB) | Avg Score | StdDev | StdDev% | #Runs |
| --- | --- | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| Standard_E2_v4 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 2 | 1 | 16.0 | 30,825 | 2,765 | 8.97% | 406 |
virtual-machines Monitor Sap On Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/monitor-sap-on-azure.md
Supported databases:
Azure Monitor for SAP Solutions uses the [Azure Monitor](../../../azure-monitor/overview.md) capabilities of [Log Analytics](../../../azure-monitor/logs/log-analytics-overview.md) and [Workbooks](../../../azure-monitor/visualize/workbooks-overview.md). With it, you can:

-- Create [custom visualizations](../../../azure-monitor/visualize/workbooks-overview.md#getting-started) by editing the default Workbooks provided by Azure Monitor for SAP Solutions.
+- Create [custom visualizations](../../../azure-monitor/visualize/workbooks-getting-started.md) by editing the default Workbooks provided by Azure Monitor for SAP Solutions.
- Write [custom queries](../../../azure-monitor/logs/log-analytics-tutorial.md).
- Create [custom alerts](../../../azure-monitor/alerts/alerts-log.md) by using Azure Log Analytics workspace.
- Take advantage of the [flexible retention period](../../../azure-monitor/logs/data-retention-archive.md) in Azure Monitor Logs/Log Analytics.
virtual-network Nat Gateway Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-gateway-resource.md
For guides on how to enable NSG flow logs, see [Enabling NSG Flow Logs](../../ne
Each NAT gateway can provide up to 50 Gbps of throughput. You can split your deployments into multiple subnets and assign each subnet or group of subnets a NAT gateway to scale out.
-Each NAT gateway public IP address provides 64,512 SNAT ports to make outbound connections. NAT gateway can support up to 50,000 concurrent connections per public IP address to the same destination endpoint over the internet for TCP and UDP. Review the following section for details and the [troubleshooting article](./troubleshoot-nat.md) for specific problem resolution guidance.
+Each NAT gateway public IP address provides 64,512 SNAT ports to make outbound connections. NAT gateway can support up to 50,000 concurrent connections per public IP address to the same destination endpoint over the internet for TCP and UDP. NAT gateway can process 1M packets per second and scale up to 5M packets per second.
+
+Review the following section for details and the [troubleshooting article](./troubleshoot-nat.md) for specific problem resolution guidance.
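Since each public IP address contributes a fixed block of 64,512 SNAT ports, the total port inventory scales linearly with the number of public IPs assigned to the gateway. A quick illustrative calculation:

```python
SNAT_PORTS_PER_IP = 64_512   # SNAT ports per NAT gateway public IP, from the figure above

def total_snat_ports(public_ip_count):
    """Total SNAT port inventory for a NAT gateway with the given number of public IPs."""
    return public_ip_count * SNAT_PORTS_PER_IP

# For example, a gateway with 4 public IP addresses:
print(total_snat_ports(4))
```

With 4 public IPs the gateway has 258,048 SNAT ports to draw from; note that the 50,000 concurrent-connection limit to a single destination endpoint still applies per public IP.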
## Protocols
virtual-network Virtual Network Nsg Manage Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-nsg-manage-log.md
Resource logging is enabled separately for *each* NSG you want to collect diagno
## Enable logging
-You can use the [Azure Portal](#azure-portal), [PowerShell](#powershell), or the [Azure CLI](#azure-cli) to enable resource logging.
+You can use the [Azure portal](#azure-portal), [PowerShell](#powershell), or the [Azure CLI](#azure-cli) to enable resource logging.
-### Azure Portal
+### Azure portal
1. Sign in to the [portal](https://portal.azure.com). 2. Select **All services**, then type *network security groups*. When **Network security groups** appear in the search results, select it.
You can use the [Azure Portal](#azure-portal), [PowerShell](#powershell), or the
You can run the commands that follow in the [Azure Cloud Shell](https://shell.azure.com/powershell), or by running PowerShell from your computer. The Azure Cloud Shell is a free interactive shell. It has common Azure tools preinstalled and configured to use with your account. If you run PowerShell from your computer, you need the Azure PowerShell module, version 1.0.0 or later. Run `Get-Module -ListAvailable Az` on your computer, to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to sign in to Azure with an account that has the [necessary permissions](virtual-network-network-interface.md#permissions).
-To enable resource logging, you need the Id of an existing NSG. If you don't have an existing NSG, you can create one with [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup).
+To enable resource logging, you need the ID of an existing NSG. If you don't have an existing NSG, you can create one with [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup).
Retrieve the network security group that you want to enable resource logging for with [Get-AzNetworkSecurityGroup](/powershell/module/az.network/get-aznetworksecuritygroup). For example, to retrieve an NSG named *myNsg* that exists in a resource group named *myResourceGroup*, enter the following command:
Set-AzDiagnosticSetting `
-Enabled $true ```
-If you only want to log data for one category or the other, rather than both, add the `-Categories` option to the previous command, followed by *NetworkSecurityGroupEvent* or *NetworkSecurityGroupRuleCounter*. If you want to log to a different [destination](#log-destinations) than a Log Analytics workspace, use the appropriate parameters for an Azure [Storage account](../azure-monitor/essentials/resource-logs.md?toc=%2fazure%2fvirtual-network%2ftoc.json#send-to-azure-storage) or [Event Hub](../azure-monitor/essentials/resource-logs.md?toc=%2fazure%2fvirtual-network%2ftoc.json#send-to-azure-event-hubs).
+If you only want to log data for one category or the other, rather than both, add the `-Categories` option to the previous command, followed by *NetworkSecurityGroupEvent* or *NetworkSecurityGroupRuleCounter*. If you want to log to a different [destination](#log-destinations) than a Log Analytics workspace, use the appropriate parameters for an Azure [Storage account](../azure-monitor/essentials/resource-logs.md?toc=%2fazure%2fvirtual-network%2ftoc.json#send-to-azure-storage) or [Event Hubs](../azure-monitor/essentials/resource-logs.md?toc=%2fazure%2fvirtual-network%2ftoc.json#send-to-azure-event-hubs).
View and analyze logs. For more information, see [View and analyze logs](#view-and-analyze-logs).
View and analyze logs. For more information, see [View and analyze logs](#view-a
You can run the commands that follow in the [Azure Cloud Shell](https://shell.azure.com/bash), or by running the Azure CLI from your computer. The Azure Cloud Shell is a free interactive shell. It has common Azure tools preinstalled and configured to use with your account. If you run the CLI from your computer, you need version 2.0.38 or later. Run `az --version` on your computer, to find the installed version. If you need to upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If you are running the CLI locally, you also need to run `az login` to sign in to Azure with an account that has the [necessary permissions](virtual-network-network-interface.md#permissions).
-To enable resource logging, you need the Id of an existing NSG. If you don't have an existing NSG, you can create one with [az network nsg create](/cli/azure/network/nsg#az-network-nsg-create).
+To enable resource logging, you need the ID of an existing NSG. If you don't have an existing NSG, you can create one with [az network nsg create](/cli/azure/network/nsg#az-network-nsg-create).
Retrieve the network security group that you want to enable resource logging for with [az network nsg show](/cli/azure/network/nsg#az-network-nsg-show). For example, to retrieve an NSG named *myNsg* that exists in a resource group named *myResourceGroup*, enter the following command:
If you don't have an existing workspace, you can create one using the [Azure portal](../azure-monitor/logs/quick-create-workspace.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [PowerShell](/powershell/module/az.operationalinsights/new-azoperationalinsightsworkspace). There are two categories of logging you can enable logs for. To enable resource logging, create a diagnostic setting with `az monitor diagnostic-settings create`, referencing the NSG and the workspace.
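Assembled from the pieces above, the full command might look like the following sketch; the setting name `myNsgDiagnostics` and the workspace name `myWorkspace` are illustrative placeholders:

```azurecli
# Capture the NSG and Log Analytics workspace resource IDs (names are illustrative).
nsgId=$(az network nsg show --resource-group myResourceGroup --name myNsg --query id --output tsv)
workspaceId=$(az monitor log-analytics workspace show --resource-group myResourceGroup --workspace-name myWorkspace --query id --output tsv)

# Create a diagnostic setting that sends both NSG log categories to the workspace.
az monitor diagnostic-settings create \
  --name myNsgDiagnostics \
  --resource $nsgId \
  --workspace $workspaceId \
  --logs '[{"category":"NetworkSecurityGroupEvent","enabled":true},{"category":"NetworkSecurityGroupRuleCounter","enabled":true}]'
```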
-If you only want to log data for one category or the other, remove the category you don't want to log data for in the previous command. If you want to log to a different [destination](#log-destinations) than a Log Analytics workspace, use the appropriate parameters for an Azure [Storage account](../azure-monitor/essentials/resource-logs.md?toc=%2fazure%2fvirtual-network%2ftoc.json#send-to-azure-storage) or [Event Hub](../azure-monitor/essentials/resource-logs.md?toc=%2fazure%2fvirtual-network%2ftoc.json#send-to-azure-event-hubs).
+If you only want to log data for one category or the other, remove the category you don't want to log data for in the previous command. If you want to log to a different [destination](#log-destinations) than a Log Analytics workspace, use the appropriate parameters for an Azure [Storage account](../azure-monitor/essentials/resource-logs.md?toc=%2fazure%2fvirtual-network%2ftoc.json#send-to-azure-storage) or [Event Hubs](../azure-monitor/essentials/resource-logs.md?toc=%2fazure%2fvirtual-network%2ftoc.json#send-to-azure-event-hubs).
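As a sketch of one such variant, assuming a storage account named `mystorageaccount` (hypothetical), swap the `--workspace` parameter for `--storage-account`:

```azurecli
# Send both log categories to a storage account instead of a Log Analytics workspace.
az monitor diagnostic-settings create \
  --name myNsgDiagnostics \
  --resource <nsg-resource-id> \
  --storage-account mystorageaccount \
  --logs '[{"category":"NetworkSecurityGroupEvent","enabled":true},{"category":"NetworkSecurityGroupRuleCounter","enabled":true}]'
```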
View and analyze logs. For more information, see [View and analyze logs](#view-and-analyze-logs).
Diagnostics data can be:

- [Written to an Azure Storage account](../azure-monitor/essentials/resource-logs.md?toc=%2fazure%2fvirtual-network%2ftoc.json#send-to-azure-storage), for auditing or manual inspection. You can specify the retention time (in days) using resource diagnostic settings.
- [Streamed to an Event hub](../azure-monitor/essentials/resource-logs.md?toc=%2fazure%2fvirtual-network%2ftoc.json#send-to-azure-event-hubs) for ingestion by a third-party service, or custom analytics solution, such as Power BI.
- [Written to Azure Monitor logs](../azure-monitor/essentials/resource-logs.md?toc=%2fazure%2fvirtual-network%2ftoc.json#send-to-azure-storage).

## Log categories
The rule counter log contains information about each rule applied to resources.
## View and analyze logs

To learn how to view resource log data, see [Azure platform logs overview](../azure-monitor/essentials/platform-logs-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json). If you send diagnostics data to:

- **Azure Monitor logs**: You can use the [network security group analytics](../azure-monitor/insights/azure-networking-analytics.md?toc=%2fazure%2fvirtual-network%2ftoc.json) solution for enhanced insights. The solution provides visualizations for NSG rules that allow or deny traffic, per MAC address, of the network interface in a virtual machine.
- **Azure Storage account**: Data is written to a PT1H.json file. You can find the:
  - Event log in the following path: `insights-logs-networksecuritygroupevent/resourceId=/SUBSCRIPTIONS/[ID]/RESOURCEGROUPS/[RESOURCE-GROUP-NAME-FOR-NSG]/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/[NSG NAME]/y=[YEAR]/m=[MONTH]/d=[DAY]/h=[HOUR]/m=[MINUTE]`