Updates from: 04/27/2023 05:51:52
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Application Provisioning Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-log-analytics.md
Previously updated : 04/25/2023 Last updated : 04/26/2023
active-directory Expression Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/expression-builder.md
Previously updated : 10/20/2022 Last updated : 04/26/2023
In application provisioning, you use expressions for attribute mappings. You acc
To use expression builder, select a function and attribute and then enter a suffix if needed. Then select **Add expression** to add the expression to the code box. To learn more about the functions available and how to use them, see [Reference for writing expressions for attribute mappings](functions-for-customizing-application-data.md).
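For example (the attribute and suffix here are illustrative, not prescriptive), selecting the **Append** function with the **mailNickname** attribute and a domain suffix produces an expression like the following:

```
Append([mailNickname], "@contoso.com")
```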
-Test the expression by searching for a user or providing values and selecting **Test expression**. The output of the expression test will appear in the **View expression output** box.
+Test the expression by searching for a user or providing values and selecting **Test expression**. The output of the expression test appears in the **View expression output** box.
When you're satisfied with the expression, move it to an attribute mapping. Copy and paste it into the expression box for the attribute mapping you're working on. ## Known limitations
-* Extension attributes are not available for selection in the expression builder. However, extension attributes can be used in the attribute mapping expression.
+* Extension attributes aren't available for selection in the expression builder. However, extension attributes can be used in the attribute mapping expression.
## Next steps
-[Reference for writing expressions for attribute mappings](functions-for-customizing-application-data.md)
+[Reference for writing expressions for attribute mappings](functions-for-customizing-application-data.md)
active-directory Sap Successfactors Attribute Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/sap-successfactors-attribute-reference.md
Previously updated : 10/20/2022 Last updated : 04/26/2023
The table below captures the list of SuccessFactors attributes included by defau
- [SuccessFactors to Active Directory User Provisioning](../saas-apps/sap-successfactors-inbound-provisioning-tutorial.md) - [SuccessFactors to Azure AD User Provisioning](../saas-apps/sap-successfactors-inbound-provisioning-cloud-only-tutorial.md)
-Please refer to the [SAP SuccessFactors integration reference](./sap-successfactors-integration-reference.md#retrieving-additional-attributes) to extend the schema for additional attributes.
+Refer to the [SAP SuccessFactors integration reference](./sap-successfactors-integration-reference.md#retrieving-more-attributes) to extend the schema for more attributes.
| \# | SuccessFactors Entity | SuccessFactors Attribute | Operation Type | |-|-||-|
active-directory Sap Successfactors Integration Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/sap-successfactors-integration-reference.md
Previously updated : 10/20/2022 Last updated : 04/26/2023
This article explains how the integration works and how you can customize the pr
## Establishing connectivity Azure AD provisioning service uses basic authentication to connect to Employee Central OData API endpoints. When setting up the SuccessFactors provisioning app, use the *Tenant URL* parameter in the *Admin Credentials* section to configure the [API data center URL](https://apps.support.sap.com/sap/support/knowledge/en/2215682).
-To further secure the connectivity between Azure AD provisioning service and SuccessFactors, you can add the Azure AD IP ranges in the SuccessFactors IP allowlist using the steps described below:
+To further secure the connectivity between Azure AD provisioning service and SuccessFactors, add the Azure AD IP ranges in the SuccessFactors IP allowlist:
1. Download the [latest IP Ranges](https://www.microsoft.com/download/details.aspx?id=56519) for the Azure Public Cloud 1. Open the file and search for tag **AzureActiveDirectory**
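The following is a minimal sketch for locating the **AzureActiveDirectory** ranges in the file from steps 1 and 2 above, assuming the Service Tags JSON has been saved locally (the file name varies by release):

```PowerShell
# Parse the downloaded Azure IP Ranges and Service Tags file (local file name is an assumption)
$serviceTags = Get-Content -Path .\ServiceTags_Public.json -Raw | ConvertFrom-Json

# List the IP ranges for the AzureActiveDirectory tag to add to the SuccessFactors allowlist
($serviceTags.values | Where-Object { $_.name -eq 'AzureActiveDirectory' }).properties.addressPrefixes
```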
To further secure the connectivity between Azure AD provisioning service and Suc
1. Log in to SuccessFactors admin portal to add IP ranges to the allowlist. Refer to SAP [support note 2253200](https://apps.support.sap.com/sap/support/knowledge/en/2253200). You can now [enter IP ranges](https://answers.sap.com/questions/12882263/whitelisting-sap-cloud-platform-ip-address-range-i.html) in this tool. ## Supported entities
-For every user in SuccessFactors, Azure AD provisioning service retrieves the following entities. Each entity is expanded using the OData API *$expand* query parameter. Refer to the *Retrieval rule* column below. Some entities are expanded by default, while some entities are expanded only if a specific attribute is present in the mapping.
+For every user in SuccessFactors, Azure AD provisioning service retrieves the following entities. Each entity is expanded using the OData API *$expand* query parameter as outlined in the *Retrieval rule* column. Some entities are expanded by default, while some entities are expanded only if a specific attribute is present in the mapping.
| \# | SuccessFactors Entity | OData Node | Retrieval rule | |-|-|||
-| 1 | PerPerson | *root node* | Always |
-| 2 | PerPersonal | personalInfoNav | Always |
-| 3 | PerPhone | phoneNav | Always |
-| 4 | PerEmail | emailNav | Always |
-| 5 | EmpEmployment | employmentNav | Always |
-| 6 | User | employmentNav/userNav | Always |
-| 7 | EmpJob | employmentNav/jobInfoNav | Always |
-| 8 | EmpEmploymentTermination | activeEmploymentsCount | Always |
-| 9 | User's manager | employmentNav/userNav/manager/empInfo | Always |
-| 10 | FOCompany | employmentNav/jobInfoNav/companyNav | Only if `company` or `companyId` attribute is mapped |
-| 11 | FODepartment | employmentNav/jobInfoNav/departmentNav | Only if `department` or `departmentId` attribute is mapped |
-| 12 | FOBusinessUnit | employmentNav/jobInfoNav/businessUnitNav | Only if `businessUnit` or `businessUnitId` attribute is mapped |
-| 13 | FOCostCenter | employmentNav/jobInfoNav/costCenterNav | Only if `costCenter` or `costCenterId` attribute is mapped |
-| 14 | FODivision | employmentNav/jobInfoNav/divisionNav | Only if `division` or `divisionId` attribute is mapped |
-| 15 | FOJobCode | employmentNav/jobInfoNav/jobCodeNav | Only if `jobCode` or `jobCodeId` attribute is mapped |
-| 16 | FOPayGrade | employmentNav/jobInfoNav/payGradeNav | Only if `payGrade` attribute is mapped |
-| 17 | FOLocation | employmentNav/jobInfoNav/locationNav | Only if `location` attribute is mapped |
-| 18 | FOCorporateAddressDEFLT | employmentNav/jobInfoNav/addressNavDEFLT | If mapping contains one of the following attributes: `officeLocationAddress, officeLocationCity, officeLocationZipCode` |
-| 19 | FOEventReason | employmentNav/jobInfoNav/eventReasonNav | Only if `eventReason` attribute is mapped |
-| 20 | EmpGlobalAssignment | employmentNav/empGlobalAssignmentNav | Only if `assignmentType` is mapped |
-| 21 | EmploymentType Picklist | employmentNav/jobInfoNav/employmentTypeNav | Only if `employmentType` is mapped |
-| 22 | EmployeeClass Picklist | employmentNav/jobInfoNav/employeeClassNav | Only if `employeeClass` is mapped |
-| 23 | EmplStatus Picklist | employmentNav/jobInfoNav/emplStatusNav | Only if `emplStatus` is mapped |
-| 24 | AssignmentType Picklist | employmentNav/empGlobalAssignmentNav/assignmentTypeNav | Only if `assignmentType` is mapped |
-| 25 | Position | employmentNav/jobInfoNav/positionNav | Only if `positioNav` is mapped |
-| 26 | Manager User | employmentNav/jobInfoNav/managerUserNav | Only if `managerUserNav` is mapped |
+| 1 | `PerPerson` | `*root node*` | Always |
+| 2 | `PerPersonal` | `personalInfoNav` | Always |
+| 3 | `PerPhone` | `phoneNav` | Always |
+| 4 | `PerEmail` | `emailNav` | Always |
+| 5 | `EmpEmployment` | `employmentNav` | Always |
+| 6 | `User` | `employmentNav/userNav` | Always |
+| 7 | `EmpJob` | `employmentNav/jobInfoNav` | Always |
+| 8 | `EmpEmploymentTermination` | `activeEmploymentsCount` | Always |
+| 9 | `User's manager` | `employmentNav/userNav/manager/empInfo` | Always |
+| 10 | `FOCompany` | `employmentNav/jobInfoNav/companyNav` | Only if `company` or `companyId` attribute is mapped |
+| 11 | `FODepartment` | `employmentNav/jobInfoNav/departmentNav` | Only if `department` or `departmentId` attribute is mapped |
+| 12 | `FOBusinessUnit` | `employmentNav/jobInfoNav/businessUnitNav` | Only if `businessUnit` or `businessUnitId` attribute is mapped |
+| 13 | `FOCostCenter` | `employmentNav/jobInfoNav/costCenterNav` | Only if `costCenter` or `costCenterId` attribute is mapped |
+| 14 | `FODivision` | `employmentNav/jobInfoNav/divisionNav` | Only if `division` or `divisionId` attribute is mapped |
+| 15 | `FOJobCode` | `employmentNav/jobInfoNav/jobCodeNav` | Only if `jobCode` or `jobCodeId` attribute is mapped |
+| 16 | `FOPayGrade` | `employmentNav/jobInfoNav/payGradeNav` | Only if `payGrade` attribute is mapped |
+| 17 | `FOLocation` | `employmentNav/jobInfoNav/locationNav` | Only if `location` attribute is mapped |
+| 18 | `FOCorporateAddressDEFLT` | `employmentNav/jobInfoNav/addressNavDEFLT` | If mapping contains one of the following attributes: `officeLocationAddress, officeLocationCity, officeLocationZipCode` |
+| 19 | `FOEventReason` | `employmentNav/jobInfoNav/eventReasonNav` | Only if `eventReason` attribute is mapped |
+| 20 | `EmpGlobalAssignment` | `employmentNav/empGlobalAssignmentNav` | Only if `assignmentType` is mapped |
+| 21 | `EmploymentType Picklist` | `employmentNav/jobInfoNav/employmentTypeNav` | Only if `employmentType` is mapped |
+| 22 | `EmployeeClass Picklist` | `employmentNav/jobInfoNav/employeeClassNav` | Only if `employeeClass` is mapped |
+| 23 | `EmplStatus Picklist` | `employmentNav/jobInfoNav/emplStatusNav` | Only if `emplStatus` is mapped |
+| 24 | `AssignmentType Picklist` | `employmentNav/empGlobalAssignmentNav/assignmentTypeNav` | Only if `assignmentType` is mapped |
+| 25 | `Position` | `employmentNav/jobInfoNav/positionNav` | Only if `positioNav` is mapped |
+| 26 | `Manager User` | `employmentNav/jobInfoNav/managerUserNav` | Only if `managerUserNav` is mapped |
## How full sync works Based on the attribute-mapping, during full sync Azure AD provisioning service sends the following "GET" OData API query to fetch effective data of all active and terminated workers.
Based on the attribute-mapping, during full sync Azure AD provisioning service s
For each SuccessFactors user, the provisioning service looks for an account in the target (Azure AD/on-premises Active Directory) using the matching attribute defined in the mapping. For example: if *personIdExternal* maps to *employeeId* and is set as the matching attribute, then the provisioning service uses the *personIdExternal* value to search for the user with *employeeId* filter. If a user match is found, then it updates the target attributes. If no match is found, then it creates a new entry in the target.
-To validate the data returned by your OData API endpoint for a specific `personIdExternal`, update the `SuccessFactorsAPIEndpoint` in the API query below with your API data center server URL and use a tool like [Postman](https://www.postman.com/downloads/) to invoke the query. If the "in" filter does not work, you can try the "eq" filter.
+To validate the data returned by your OData API endpoint for a specific `personIdExternal`, update the `SuccessFactorsAPIEndpoint` in the API query with your API data center server URL and use a tool like [Postman](https://www.postman.com/downloads/) to invoke the query. If the "in" filter does not work, you can try the "eq" filter.
``` https://[SuccessFactorsAPIEndpoint]/odata/v2/PerPerson?$format=json&
employmentNav/jobInfoNav/employmentTypeNav,employmentNav/jobInfoNav/employeeClas
After full sync, Azure AD provisioning service maintains `LastExecutionTimestamp` and uses it to create delta queries for retrieving incremental changes. The timestamp attributes present in each SuccessFactors entity, such as `lastModifiedDateTime`, `startDate`, `endDate`, and `latestTerminationDate`, are evaluated to see if the change falls between the `LastExecutionTimestamp` and `CurrentExecutionTime`. If yes, then the entry change is considered to be effective and processed for sync.
-Here is the OData API request template that Azure AD uses to query SuccessFactors for incremental changes. You can update the variables `SuccessFactorsAPIEndpoint`, `LastExecutionTimestamp` and `CurrentExecutionTime` in the request template below use a tool like [Postman](https://www.postman.com/downloads/) to check what data is returned. Alternatively, you can also retrieve the actual request payload from SuccessFactors by [enabling OData API Audit logs](#enabling-odata-api-audit-logs-in-successfactors).
+Here is the OData API request template that Azure AD uses to query SuccessFactors for incremental changes. You can update the variables `SuccessFactorsAPIEndpoint`, `LastExecutionTimestamp`, and `CurrentExecutionTime` in the request template and use a tool like [Postman](https://www.postman.com/downloads/) to check what data is returned. Alternatively, you can retrieve the actual request payload from SuccessFactors by [enabling OData API Audit logs](#enabling-odata-api-audit-logs-in-successfactors).
``` https://[SuccessFactorsAPIEndpoint]/odata/v2/PerPerson/$count?$format=json&$filter=(personEmpTerminationInfoNav/activeEmploymentsCount ne null) and
https://[SuccessFactorsAPIEndpoint]/odata/v2/PerPerson/$count?$format=json&$filt
## Reading attribute data
-When Azure AD provisioning service queries SuccessFactors, it retrieves a JSON result set. The JSON result set includes a number of attributes stored in Employee Central. By default, the provisioning schema is configured to retrieve only a subset of those attributes.
+When Azure AD provisioning service queries SuccessFactors, it retrieves a JSON result set. The JSON result set includes many attributes stored in Employee Central. By default, the provisioning schema is configured to retrieve only a subset of those attributes.
-To retrieve additional attributes, follow the steps listed below:
+To retrieve more attributes, follow these steps:
1. Browse to **Enterprise Applications** -> **SuccessFactors App** -> **Provisioning** -> **Edit Provisioning** -> **attribute-mapping page**. 1. Scroll down and click **Show advanced options**.
The next section provides a list of common scenarios for editing the JSONPath va
JSONPath is a query language for JSON that is similar to XPath for XML. Like XPath, JSONPath allows for the extraction and filtration of data out of a JSON payload.
-By using JSONPath transformation, you can customize the behavior of the Azure AD provisioning app to retrieve custom attributes and handle scenarios such as rehire, worker conversion and global assignment.
+By using JSONPath transformation, you can customize the behavior of the Azure AD provisioning app to retrieve custom attributes and handle scenarios such as rehiring, worker conversion and global assignment.
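As a quick illustration (this is a hypothetical, trimmed fragment of the JSON result set, and the values are placeholders), the JSONPath expressions used in the mappings navigate the nested `employmentNav` structure like this:

```PowerShell
# Hypothetical, trimmed fragment of the SuccessFactors OData JSON result set
$json = @'
{
  "employmentNav": {
    "results": [
      {
        "userNav":    { "timeZone": "US/Pacific" },
        "jobInfoNav": { "results": [ { "flsaStatus": "N" } ] }
      }
    ]
  }
}
'@ | ConvertFrom-Json

# The JSONPath $.employmentNav.results[0].userNav.timeZone selects:
$json.employmentNav.results[0].userNav.timeZone                    # US/Pacific

# The JSONPath $.employmentNav.results[0].jobInfoNav.results[0].flsaStatus selects:
$json.employmentNav.results[0].jobInfoNav.results[0].flsaStatus   # N
```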
This section covers how you can customize the provisioning app for the following HR scenarios:
-* [Retrieving additional attributes](#retrieving-additional-attributes)
+* [Retrieving more attributes](#retrieving-more-attributes)
* [Retrieving custom attributes](#retrieving-custom-attributes) * [Mapping employment status to account status](#mapping-employment-status-to-account-status)
-* [Handling worker conversion and rehire scenario](#handling-worker-conversion-and-rehire-scenario)
+* [Handling worker conversion and rehiring scenarios](#handling-worker-conversion-and-rehiring-scenarios)
* [Retrieving current active employment record](#retrieving-current-active-employment-record) * [Handling global assignment scenario](#handling-global-assignment-scenario) * [Handling concurrent jobs scenario](#handling-concurrent-jobs-scenario)
This section covers how you can customize the provisioning app for the following
* [Provisioning users in the Onboarding module](#provisioning-users-in-the-onboarding-module) * [Enabling OData API Audit logs in SuccessFactors](#enabling-odata-api-audit-logs-in-successfactors)
-### Retrieving additional attributes
+### Retrieving more attributes
The default Azure AD SuccessFactors provisioning app schema ships with [90+ pre-defined attributes](sap-successfactors-attribute-reference.md).
-To add more SuccessFactors attributes to the provisioning schema, use the steps listed below:
+To add more SuccessFactors attributes to the provisioning schema, follow these steps:
-1. Use the OData query below to retrieve data for a valid test user from Employee Central.
+1. Use the following OData query to retrieve data for a valid test user from Employee Central.
``` https://[SuccessFactorsAPIEndpoint]/odata/v2/PerPerson?$format=json&
To add more SuccessFactors attributes to the provisioning schema, use the steps
* If the attribute is part of *User* entity, then look for the attribute under *employmentNav/userNav* node. * If the attribute is part of *EmpJob* entity, then look for the attribute under *employmentNav/jobInfoNav* node. 1. Construct the JSON Path associated with the attribute and add this new attribute to the list of SuccessFactors attributes.
- * Example 1: Let's say you want to add the attribute *okToRehire*, which is part of *employmentNav* entity, then use the JSONPath `$.employmentNav.results[0].okToRehire`
+ * Example 1: Let's say you want to add the attribute `okToRehire`, which is part of `employmentNav` entity, then use the JSONPath `$.employmentNav.results[0].okToRehire`
* Example 2: Let's say you want to add the attribute *timeZone*, which is part of *userNav* entity, then use the JSONPath `$.employmentNav.results[0].userNav.timeZone` * Example 3: Let's say you want to add the attribute *flsaStatus*, which is part of *jobInfoNav* entity, then use the JSONPath `$.employmentNav.results[0].jobInfoNav.results[0].flsaStatus` 1. Save the schema.
By default, the following custom attributes are pre-defined in the Azure AD Succ
* *customString1-customString15* from the EmpEmployment (employmentNav) entity called *empNavCustomString1-empNavCustomString15* * *customString1-customString15* from the EmpJobInfo (jobInfoNav) entity called *empJobNavCustomString1-empNavJobCustomString15*
-Let's say, in your Employee Central instance, *customString35* attribute in *EmpJobInfo* stores the location description. You want to flow this value to Active Directory *physicalDeliveryOfficeName* attribute. To configure attribute-mapping for this scenario, use the steps given below:
+Let's say that in your Employee Central instance, the *customString35* attribute in *EmpJobInfo* stores the location description. You want to flow this value to the Active Directory *physicalDeliveryOfficeName* attribute. To configure the attribute mapping for this scenario, follow these steps:
1. Edit the SuccessFactors attribute list to add a new attribute called *empJobNavCustomString35*. 1. Set the JSONPath API expression for this attribute as:
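   Following the *jobInfoNav* pattern shown in the earlier examples, the JSONPath expression for *customString35* would likely be:

   ```
   $.employmentNav.results[0].jobInfoNav.results[0].customString35
   ```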
If you are running into any of these issues or prefer mapping employment status
* R = Retired * T = Terminated
-Use the steps below to update your mapping to retrieve these codes.
+Use the following steps to update your mapping to retrieve these codes.
1. Open the attribute-mapping blade of your SuccessFactors provisioning app. 1. Under **Show advanced options**, click on **Edit SuccessFactors attribute list**.
Use the steps below to update your mapping to retrieve these codes.
| Provisioning Job | Account status attribute | Mapping expression | | - | | |
- | SuccessFactors to Active Directory User Provisioning | accountDisabled | Switch(\[emplStatus\], "True", "A", "False", "U", "False", "P", "False") |
- | SuccessFactors to Azure AD User Provisioning | accountEnabled | Switch(\[emplStatus\], "False", "A", "True", "U", "True", "P", "True") |
+ | SuccessFactors to Active Directory User Provisioning | `accountDisabled` | `Switch(\[emplStatus\], "True", "A", "False", "U", "False", "P", "False")` |
+ | SuccessFactors to Azure AD User Provisioning | `accountEnabled` | `Switch(\[emplStatus\], "False", "A", "True", "U", "True", "P", "True")` |
1. Save the changes. 1. Test the configuration using [provision on demand](provision-on-demand.md). 1. After confirming that sync works as expected, restart the provisioning job.
-### Handling worker conversion and rehire scenario
+### Handling worker conversion and rehiring scenarios
-**About worker conversion scenario:** Worker conversion is the process of converting an existing full-time employee to a contractor or a contractor to full-time. In this scenario, Employee Central adds a new *EmpEmployment* entity along with a new *User* entity for the same *Person* entity. The *User* entity nested under the previous *EmpEmployment* entity is set to null.
+**About worker conversion scenario:** Worker conversion is the process of converting an existing full-time employee to a contractor or a contractor to a full-time employee. In this scenario, Employee Central adds a new *EmpEmployment* entity along with a new *User* entity for the same *Person* entity. The *User* entity nested under the previous *EmpEmployment* entity is set to null.
-**About rehire scenario:** In SuccessFactors, there are two options to process rehires:
+**About rehiring scenarios:** In SuccessFactors, there are two options for processing rehired employees:
* Option 1: Create a new person profile in Employee Central * Option 2: Reuse existing person profile in Employee Central If your HR process uses Option 1, then no changes are required to the provisioning schema. If your HR process uses Option 2, then Employee Central adds a new *EmpEmployment* entity along with a new *User* entity for the same *Person* entity.
-To handle both these scenarios so that the new employment data shows up when a conversion or rehire occurs, you can bulk update the provisioning app schema using the steps listed below:
+To handle both these scenarios so that the new employment data shows up when a conversion or rehire occurs, you can bulk update the provisioning app schema using the following steps:
1. Open the attribute-mapping blade of your SuccessFactors provisioning app. 1. Scroll down and click **Show advanced options**.
To handle both these scenarios so that the new employment data shows up when a c
Using the JSONPath root of `$.employmentNav.results[0]` or `$.employmentNav.results[-1:]` to fetch employment records works in most scenarios and keeps the configuration simple. However, depending on how your SuccessFactors instance is configured, there may be a need to update this configuration to ensure that the connector always fetches the latest active employment record.
-This section describes how you can update the JSONPath settings to definitely retrieve the current active employment record of the user. It also handles worker conversion and rehire scenarios.
+This section describes how you can update the JSONPath settings to reliably retrieve the current active employment record of the user. It also handles worker conversion and rehiring scenarios.
1. Open the attribute-mapping blade of your SuccessFactors provisioning app. 1. Scroll down and click **Show advanced options**.
This section describes how you can update the JSONPath settings to definitely re
| **String to find** | **String to use for replace** | **Purpose** | | | -- | |
- | $.employmentNav.results\[0\].<br>jobInfoNav.results\[0\].emplStatus | $.employmentNav..jobInfoNav..results\[?(@.emplStatusNav.externalCode == 'A' \|\| @.emplStatusNav.externalCode == 'U' \|\| @.emplStatusNav.externalCode == 'P' )\].emplStatusNav.externalCode | With this find-replace, we are adding the ability to expand emplStatusNav OData object. |
- | $.employmentNav.results\[0\].<br>jobInfoNav.results\[0\] | $.employmentNav..jobInfoNav..results\[?(@.emplStatusNav.externalCode == 'A' \|\| @.emplStatusNav.externalCode == 'U' \|\| @.emplStatusNav.externalCode == 'P')\] | With this find-replace, we instruct the connector to always retrieve attributes associated with the active SuccessFactors EmpJobInfo record. Attributes associated with terminated/inactive records in SuccessFactors will be ignored. |
- | $.employmentNav.results\[0\] | $.employmentNav..results\[?(@.jobInfoNav..results\[?(@.emplStatusNav.externalCode == 'A' \|\| @.emplStatusNav.externalCode == 'U' \|\| @.emplStatusNav.externalCode == 'P')\])\] | With this find-replace, we instruct the connector to always retrieve attributes associated with the active SuccessFactors Employment record. Attributes associated with terminated/inactive records in SuccessFactors will be ignored. |
+ | `$.employmentNav.results\[0\].<br>jobInfoNav.results\[0\].emplStatus` | `$.employmentNav..jobInfoNav..results\[?(@.emplStatusNav.externalCode == 'A' \|\| @.emplStatusNav.externalCode == 'U' \|\| @.emplStatusNav.externalCode == 'P' )\].emplStatusNav.externalCode` | With this find-replace, we are adding the ability to expand emplStatusNav OData object. |
+ | `$.employmentNav.results\[0\].<br>jobInfoNav.results\[0\]` | `$.employmentNav..jobInfoNav..results\[?(@.emplStatusNav.externalCode == 'A' \|\| @.emplStatusNav.externalCode == 'U' \|\| @.emplStatusNav.externalCode == 'P')\]` | With this find-replace, we instruct the connector to always retrieve attributes associated with the active SuccessFactors EmpJobInfo record. Attributes associated with terminated/inactive records in SuccessFactors will be ignored. |
+ | `$.employmentNav.results\[0\]` | `$.employmentNav..results\[?(@.jobInfoNav..results\[?(@.emplStatusNav.externalCode == 'A' \|\| @.emplStatusNav.externalCode == 'U' \|\| @.emplStatusNav.externalCode == 'P')\])\]` | With this find-replace, we instruct the connector to always retrieve attributes associated with the active SuccessFactors Employment record. Attributes associated with terminated/inactive records in SuccessFactors will be ignored. |
1. Save the schema. 1. The above process updates all JSONPath expressions.
This section describes how you can update the JSONPath settings to definitely re
| Provisioning Job | Account status attribute | Expression to use if account status is based on "activeEmploymentsCount" | Expression to use if account status is based on "emplStatus" value | | -- | | -- | - |
- | SuccessFactors to Active Directory User Provisioning | accountDisabled | Switch(\[activeEmploymentsCount\], "False", "0", "True") | Switch(\[emplStatus\], "True", "A", "False", "U", "False", "P", "False") |
- | SuccessFactors to Azure AD User Provisioning | accountEnabled | Switch(\[activeEmploymentsCount\], "True", "0", "False") | Switch(\[emplStatus\], "False", "A", "True", "U", "True", "P", "True") |
+ | SuccessFactors to Active Directory User Provisioning | `accountDisabled` | `Switch(\[activeEmploymentsCount\], "False", "0", "True")` | `Switch(\[emplStatus\], "True", "A", "False", "U", "False", "P", "False")` |
+ | SuccessFactors to Azure AD User Provisioning | `accountEnabled` | `Switch(\[activeEmploymentsCount\], "True", "0", "False")` | `Switch(\[emplStatus\], "False", "A", "True", "U", "True", "P", "True")` |
1. Save your changes. 1. Test the configuration using [provision on demand](provision-on-demand.md).
When a user in Employee Central is processed for global assignment, SuccessFacto
* One *EmpEmployment* + *User* entity that corresponds to home assignment with *assignmentClass* set to "ST" and * Another *EmpEmployment* + *User* entity that corresponds to the global assignment with *assignmentClass* set to "GA"
-To fetch attributes belonging to the standard assignment and global assignment user profile, use the steps listed below:
+To fetch attributes belonging to the standard assignment and the global assignment user profile, follow these steps:
1. Open the attribute-mapping blade of your SuccessFactors provisioning app. 1. Scroll down and click **Show advanced options**.
To fetch attributes belonging to the standard assignment and global assignment u
1. Scroll down and click **Show advanced options**. 1. Click on **Edit attribute list for SuccessFactors**. 1. Add new attributes to fetch global assignment data. For example: if you want to fetch the department name associated with a global assignment profile, you can add the attribute *globalAssignmentDepartment* with the JSONPath expression set to `$.employmentNav.results[?(@.assignmentClass == 'GA')].jobInfoNav.results[0].departmentNav.name_localized`.
-1. You can now either flow both department values to Active Directory attributes or selectively flow a value using expression mapping. Example: the below expression sets the value of AD *department* attribute to *globalAssignmentDepartment* if present, else it sets the value to *department* associated with standard assignment.
+1. You can now either flow both department values to Active Directory attributes or selectively flow a value using expression mapping. Example: the following expression sets the value of the AD *department* attribute to *globalAssignmentDepartment* if present; otherwise it sets the value to the *department* associated with the standard assignment.
* `IIF(IsPresent([globalAssignmentDepartment]),[globalAssignmentDepartment],[department])` 1. Save the mapping.
To fetch attributes belonging to the standard assignment and global assignment u
### Handling concurrent jobs scenario When a user in Employee Central has concurrent/multiple jobs, there are two *EmpEmployment* and *User* entities with *assignmentClass* set to "ST".
-To fetch attributes belonging to both jobs, use the steps listed below:
+To fetch attributes belonging to both jobs, follow these steps:
1. Open the attribute-mapping blade of your SuccessFactors provisioning app. 1. Scroll down and click **Show advanced options**.
To fetch attributes belonging to both jobs, use the steps listed below:
### Retrieving position details
-The SuccessFactors connector supports expansion of the position object. To expand and retrieve position object attributes such as job level or position names in a specific language, you can use JSONPath expressions as shown below.
+The SuccessFactors connector supports expansion of the position object. To expand and retrieve position object attributes, such as job level or position names in a specific language, you can use JSONPath expressions as shown in the following table.
| Attribute Name | JSONPath expression | | -- | - |
The SuccessFactors connector supports expansion of the position object. To expan
| positionNameDE | $.employmentNav.results[0].jobInfoNav.results[0].positionNav.externalName_de_DE | ### Provisioning users in the Onboarding module
-Inbound user provisioning from SAP SuccessFactors to on-premises Active Directory and Azure AD now supports advance provisioning of pre-hires present in the SAP SuccessFactors Onboarding 2.0 module. Upon encountering a new hire profile with future start date, the Azure AD provisioning service queries SAP SuccessFactors to get new hires with one of the following status codes: `active`, `inactive`, `active_external_suite`. The status code `active_external_suite` corresponds to pre-hires present in the SAP SuccessFactors Onboarding 2.0 module. For a description of these status codes, refer to [SAP support note 2736579](https://launchpad.support.sap.com/#/notes/0002736579).
+Inbound user provisioning from SAP SuccessFactors to on-premises Active Directory and Azure AD now supports advance provisioning of pre-hires present in the SAP SuccessFactors Onboarding 2.0 module. When the Azure AD provisioning service encounters a new hire profile with a future start date, it queries SAP SuccessFactors to get new hires with one of the following status codes: `active`, `inactive`, `active_external_suite`. The status code `active_external_suite` corresponds to pre-hires present in the SAP SuccessFactors Onboarding 2.0 module. For a description of these status codes, refer to [SAP support note 2736579](https://launchpad.support.sap.com/#/notes/0002736579).
The default behavior of the provisioning service is to process pre-hires in the Onboarding module.
The SuccessFactors Writeback app uses the following logic to update the User obj
* As a first step, it looks for *userId* attribute in the change set. If it is present, then it uses "UserId" for making the SuccessFactors API call. * If *userId* is not found, then it defaults to using the *personIdExternal* attribute value.
-Usually the *personIdExternal* attribute value in SuccessFactors matches the *userId* attribute value. However, in scenarios such as rehire and worker conversion, an employee in SuccessFactors may have two employment records, one active and one inactive. In such scenarios, to ensure that write-back updates the active user profile, please update the configuration of the SuccessFactors provisioning apps as described below. This configuration ensures that *userId* is always present in the change set visible to the connector and is used in the SuccessFactors API call.
+Usually the *personIdExternal* attribute value in SuccessFactors matches the *userId* attribute value. However, in scenarios such as rehiring and worker conversion, an employee in SuccessFactors may have two employment records, one active and one inactive. In such scenarios, to ensure that write-back updates the active user profile, update the configuration of the SuccessFactors provisioning apps as described in the following steps. This configuration ensures that *userId* is always present in the change set visible to the connector and is used in the SuccessFactors API call.
1. Open the SuccessFactors to Azure AD user provisioning app or SuccessFactors to on-premises AD user provisioning app. 1. Ensure that an extensionAttribute *(extensionAttribute1-15)* in Azure AD always stores the *userId* of every worker's active employment record. This can be achieved by mapping SuccessFactors *userId* attribute to an extensionAttribute in Azure AD. > [!div class="mx-imgBorder"] > ![Inbound UserID attribute mapping](./media/sap-successfactors-integration-reference/inbound-userid-attribute-mapping.png)
-1. For guidance regarding JSONPath settings, refer to the section [Handling worker conversion and rehire scenario](#handling-worker-conversion-and-rehire-scenario) to ensure the *userId* value of the active employment record flows into Azure AD.
+1. For guidance regarding JSONPath settings, refer to the section [Handling worker conversion and rehiring scenarios](#handling-worker-conversion-and-rehiring-scenarios) to ensure the *userId* value of the active employment record flows into Azure AD.
1. Save the mapping. 1. Run the provisioning job to ensure that the *userId* values flow into Azure AD. > [!NOTE]
Usually the *personIdExternal* attribute value in SuccessFactors matches the *us
1. Go to *Attribute mapping -> Advanced -> Review Schema* to open the JSON schema editor. 1. Download a copy of the schema as backup. 1. In the schema editor, hit Ctrl-F and search for the JSON node containing the userId mapping, where it is mapped to a source Azure AD attribute.
-1. Update the flowBehavior attribute from "FlowWhenChanged" to "FlowAlways" as shown below.
+1. Update the flowBehavior attribute from "FlowWhenChanged" to "FlowAlways", as shown in the following screenshot.
> [!div class="mx-imgBorder"] > ![Mapping flow behavior update](./media/sap-successfactors-integration-reference/mapping-flow-behavior-update.png) 1. Save the mapping and test the write-back scenario with provisioning-on-demand.
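For reference only (this is an abbreviated sketch, not the full mapping node; the exact layout depends on your schema), the relevant part of the userId mapping node in the schema JSON looks roughly like this after the change:

```
{
  "flowBehavior": "FlowAlways",
  "targetAttributeName": "userId"
}
```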
Usually the *personIdExternal* attribute value in SuccessFactors matches the *us
* [Learn how to configure SuccessFactors to Active Directory provisioning](../saas-apps/sap-successfactors-inbound-provisioning-tutorial.md) * [Learn how to configure writeback to SuccessFactors](../saas-apps/sap-successfactors-writeback-tutorial.md) * [Learn more about supported SuccessFactors Attributes for inbound provisioning](sap-successfactors-attribute-reference.md)
active-directory How To Mfa Authenticator Lite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-authenticator-lite.md
Previously updated : 03/15/2023 Last updated : 04/25/2023
Users receive a notification in Outlook mobile to approve or deny sign-in, or th
## Prerequisites - Your organization needs to enable Microsoft Authenticator (second factor) push notifications for some users or groups by using the Authentication methods policy. You can edit the Authentication methods policy by using the Azure portal or Microsoft Graph API.+
+ >[!TIP]
+ >We recommend that you also enable [system-preferred multifactor authentication (MFA)](concept-system-preferred-multifactor-authentication.md) when you enable Authenticator Lite. With system-preferred MFA enabled, users try to sign in with Authenticator Lite before they try less secure telephony methods like SMS or voice call.
+ - If your organization is using the Active Directory Federation Services (AD FS) adapter or Network Policy Server (NPS) extensions, upgrade to the latest versions for a consistent experience. - Users enabled for shared device mode on Outlook mobile aren't eligible for Authenticator Lite. - Users must run a minimum Outlook mobile version.
If enabled for Authenticator Lite, users are prompted to register their account
:::image type="content" border="true" source="./media/how-to-mfa-authenticator-lite/registration.png" alt-text="Screenshot of how to register Authenticator Lite."::: >[!NOTE]
->Users with no MFA methods registered will be prompted to download the Authenticator App when they begin registration flow. For the most seamless Authenticator Lite registration experience, [provision your users a TAP](https://learn.microsoft.com/azure/active-directory/authentication/howto-authentication-temporary-access-pass) (temporary access pass) which they can use during registration.
+>If they don't have any MFA methods registered, users are prompted to download Authenticator when they begin the registration flow. For the most seamless experience, provision users with a [Temporary Access Pass (TAP)](howto-authentication-temporary-access-pass.md) that they can use during Authenticator Lite registration.
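As a minimal sketch (assuming the Microsoft Graph PowerShell SDK, appropriate admin permissions, and placeholder user and lifetime values), a TAP can be created like this:

```PowerShell
Connect-MgGraph -Scopes "UserAuthenticationMethod.ReadWrite.All"

# Create a Temporary Access Pass for the user (user and lifetime values are placeholders)
New-MgUserAuthenticationTemporaryAccessPassMethod -UserId "user@contoso.com" -BodyParameter @{
    lifetimeInMinutes = 60
    isUsableOnce      = $false
}
```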
## Monitoring Authenticator Lite usage
active-directory Howto Authentication Passwordless Phone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-phone.md
Previously updated : 03/21/2023 Last updated : 04/26/2023 -+
This logic generally prevents a user in a hybrid tenant from being directed to A
### On-premises users
-An end user can be enabled for multifactor authentication (MFA) through an on-premises. The user can still create and utilize a single passwordless phone sign-in credential.
+An end user can be enabled for multifactor authentication (MFA) through an on-premises identity provider. The user can still create and utilize a single passwordless phone sign-in credential.
If the user attempts to upgrade multiple installations (5+) of Microsoft Authenticator with the passwordless phone sign-in credential, this change might result in an error.
active-directory Howto Sspr Authenticationdata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-authenticationdata.md
Previously updated : 01/29/2023 Last updated : 04/26/2023
The following fields can be set through PowerShell:
> [!IMPORTANT] > Azure AD PowerShell is planned for deprecation. You can start using [Microsoft Graph PowerShell](/powershell/microsoftgraph/overview) to interact with Azure AD as you would in Azure AD PowerShell, or use the [Microsoft Graph REST API for managing authentication methods](/graph/api/resources/authenticationmethods-overview).
-### Use Azure AD PowerShell version 1
+### Use Microsoft Graph PowerShell
-To get started, [download and install the Azure AD PowerShell module](/previous-versions/azure/jj151815(v=azure.100)#bkmk_installmodule). After it's installed, use the following steps to configure each field.
+To get started, [download and install the Microsoft Graph PowerShell module](/powershell/microsoftgraph/overview).
-#### Set the authentication data with Azure AD PowerShell version 1
+To quickly install from recent versions of PowerShell that support `Install-Module`, run the following commands. The first line checks to see if the module is already installed:
```PowerShell
-Connect-MsolService
-
-Set-MsolUser -UserPrincipalName user@domain.com -AlternateEmailAddresses @("email@domain.com")
-Set-MsolUser -UserPrincipalName user@domain.com -MobilePhone "+1 4251234567"
-Set-MsolUser -UserPrincipalName user@domain.com -PhoneNumber "+1 4252345678"
-
-Set-MsolUser -UserPrincipalName user@domain.com -AlternateEmailAddresses @("email@domain.com") -MobilePhone "+1 4251234567" -PhoneNumber "+1 4252345678"
+Get-Module Microsoft.Graph
+Install-Module Microsoft.Graph
+Select-MgProfile -Name "beta"
+Connect-MgGraph -Scopes "User.ReadWrite.All"
```
-#### Read the authentication data with Azure AD PowerShell version 1
+After the module is installed, use the following steps to configure each field.
+
+#### Set the authentication data with Microsoft Graph PowerShell
```PowerShell
-Connect-MsolService
+Connect-MgGraph -Scopes "User.ReadWrite.All"
-Get-MsolUser -UserPrincipalName user@domain.com | select AlternateEmailAddresses
-Get-MsolUser -UserPrincipalName user@domain.com | select MobilePhone
-Get-MsolUser -UserPrincipalName user@domain.com | select PhoneNumber
+Update-MgUser -UserId 'user@domain.com' -otherMails @("emails@domain.com")
+Update-MgUser -UserId 'user@domain.com' -mobilePhone "+1 4251234567"
+Update-MgUser -UserId 'user@domain.com' -businessPhones "+1 4252345678"
-Get-MsolUser | select DisplayName,UserPrincipalName,AlternateEmailAddresses,MobilePhone,PhoneNumber | Format-Table
+Update-MgUser -UserId 'user@domain.com' -otherMails @("emails@domain.com") -mobilePhone "+1 4251234567" -businessPhones "+1 4252345678"
```
-#### Read the Authentication Phone and Authentication Email options
-
-To read the **Authentication Phone** and **Authentication Email** when you use PowerShell version 1, use the following commands:
+#### Read the authentication data with Microsoft Graph PowerShell
```PowerShell
-Connect-MsolService
-Get-MsolUser -UserPrincipalName user@domain.com | select -Expand StrongAuthenticationUserDetails | select PhoneNumber
-Get-MsolUser -UserPrincipalName user@domain.com | select -Expand StrongAuthenticationUserDetails | select Email
+Connect-MgGraph -Scopes "User.Read.All"
+
+Get-MgUser -UserId 'user@domain.com' | select otherMails
+Get-MgUser -UserId 'user@domain.com' | select mobilePhone
+Get-MgUser -UserId 'user@domain.com' | select businessPhones
+
+Get-MgUser -UserId 'user@domain.com' | Select businessPhones, mobilePhone, otherMails | Format-Table
```
-### Use Azure AD PowerShell version 2
+### Use Azure AD PowerShell
To get started, [download and install the Azure AD version 2 PowerShell module](/powershell/module/azuread/).
Get-AzureADUser -ObjectID user@domain.com | select TelephoneNumber
Get-AzureADUser | select DisplayName,UserPrincipalName,otherMails,Mobile,TelephoneNumber | Format-Table ```
-### Use Microsoft Graph PowerShell
-
-To get started, [download and install the Microsoft Graph PowerShell module](/powershell/microsoftgraph/overview).
-
-To quickly install from recent versions of PowerShell that support `Install-Module`, run the following commands. The first line checks to see if the module is already installed:
-
-```PowerShell
-Get-Module Microsoft.Graph
-Install-Module Microsoft.Graph
-Select-MgProfile -Name "beta"
-Connect-MgGraph -Scopes "User.ReadWrite.All"
-```
-
-After the module is installed, use the following steps to configure each field.
-
-#### Set the authentication data with Microsoft Graph PowerShell
-
-```PowerShell
-Connect-MgGraph -Scopes "User.ReadWrite.All"
-
-Update-MgUser -UserId 'user@domain.com' -otherMails @("emails@domain.com")
-Update-MgUser -UserId 'user@domain.com' -mobilePhone "+1 4251234567"
-Update-MgUser -UserId 'user@domain.com' -businessPhones "+1 4252345678"
-
-Update-MgUser -UserId 'user@domain.com' -otherMails @("emails@domain.com") -mobilePhone "+1 4251234567" -businessPhones "+1 4252345678"
-```
-
-#### Read the authentication data with Microsoft Graph PowerShell
-
-```PowerShell
-Connect-MgGraph -Scopes "User.Read.All"
-
-Get-MgUser -UserId 'user@domain.com' | select otherMails
-Get-MgUser -UserId 'user@domain.com' | select mobilePhone
-Get-MgUser -UserId 'user@domain.com' | select businessPhones
-
-Get-MgUser -UserId 'user@domain.com' | Select businessPhones, mobilePhone, otherMails | Format-Table
-```
- ## Next steps Once authentication contact information is pre-populated for users, complete the following tutorial to enable self-service password reset:
active-directory Partner List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/partner-list.md
Microsoft verified partners can help you onboard Microsoft Entra Permissions Man
* **Product Expertise**
- Our partners will help you navigate Permissions Management, letting you in on best
+ Our partners help you navigate Permissions Management, letting you in on best
practices and guidance to enhance your security strategy. * **Risk Assessment**
- Partners will guide you through the Entra Permissions Management risk assessment and
+ Partners guide you through the Entra Permissions Management risk assessment and
help you identify top permission risks across your multicloud infrastructure. * **Onboarding and Deployment Support**
If you're a partner and would like to be considered for the Entra Permissions Ma
| ![Screenshot of Ascent Solutions logo.](media/partner-list/partner-ascent-solutions.png) | [Ascent Solutions Microsoft Entra Permissions Management Rapid Risk Assessment](https://www.meetascent.com/resources/microsoft-entra-permissions-rapid-risk-assessment) | ![Screenshot of Synergy Advisors logo.](media/partner-list/partner-synergy-advisors.png) | [Synergy Advisors Identity Optimization](https://synergyadvisors.biz/solutions-item/identity-optimization/) | ![Screenshot of BDO Digital logo.](media/partner-list/partner-bdo-digital.png) | [BDO Digital Managing Permissions Across Multicloud](https://www.bdodigital.com/services/security-compliance/cybersecurity/entra-permissions-management)
+| ![Screenshot of Mazzy Technologies logo.](media/partner-list/partner-mazzy-technologies.png) | [Mazzy Technologies Identity](https://mazzytechnologies.com/identity%3A-microsoft-entra)
## Next steps * For an overview of Permissions Management, see [What's Permissions Management?](overview.md)
active-directory Customize Branding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/customize-branding.md
Adding custom branding requires one of the following licenses:
- Azure AD Premium 2 - Office 365 (for Office apps)
+At least one of the previously listed licenses is sufficient to add and manage the company branding in your tenant.
+ Azure AD Premium editions are available for customers in China using the worldwide instance of Azure AD. Azure AD Premium editions aren't currently supported in the Azure service operated by 21Vianet in China. For more information about licensing and editions, see [Sign up for Azure AD Premium](active-directory-get-started-premium.md). The **Global Administrator** role is required to customize company branding.
active-directory Multi Tenant Common Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/multi-tenant-common-considerations.md
Previously updated : 08/26/2022 Last updated : 04/19/2023 - # Common considerations for multi-tenant user management
-There are many considerations that are relevant to more than one collaboration pattern.
+This article is the third in a series of articles that provide guidance for configuring and providing user lifecycle management in Azure Active Directory (Azure AD) multi-tenant environments. The following articles in the series provide more information.
-## Directory object considerations
+- [Multi-tenant user management introduction](multi-tenant-user-management-introduction.md) is the first in the series of articles that provide guidance for configuring and providing user lifecycle management in Azure Active Directory (Azure AD) multi-tenant environments.
+- [Multi-tenant user management scenarios](multi-tenant-user-management-scenarios.md) describes three scenarios for which you can use multi-tenant user management features: end user-initiated, scripted, and automated.
+- [Common solutions for multi-tenant user management](multi-tenant-common-solutions.md) provides guidance for these challenges when single tenancy doesn't work for your scenario: automatic user lifecycle management and resource allocation across tenants, and sharing on-premises apps across tenants.
-You can use the console to manually create an invitation for a guest user account. When you do, the user object is created with a user type of *Guest*. Using other techniques to create invitations enable you to set the user type to something other than a Guest account. For example, when using the API you can configure whether the account is a member account or a guest account.
+The guidance helps you achieve a consistent state of user lifecycle management. Lifecycle management includes provisioning, managing, and deprovisioning users across tenants using the available Azure tools that include [Azure AD B2B collaboration](../external-identities/what-is-b2b.md) (B2B) and [cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-overview.md).
-* Some of the [limits on Guest functionality can be removed](../external-identities/user-properties.md#guest-user-permissions).
+Synchronization requirements are unique to your organization's specific needs. As you design a solution to meet your organization's requirements, the following considerations in this article help you identify your best options.
-* [You can convert Guest accounts to a user type of Member.](../external-identities/user-properties.md#can-azure-ad-b2b-users-be-added-as-members-instead-of-guests)
+- Cross-tenant synchronization
+- Directory object
+- Azure AD Conditional Access
+- Additional access control
+- Office 365
-> **IMPORTANT**
-> If you convert from a guest account to a user account, there might be issues with how Exchange Online handles B2B accounts. You can't mail-enable accounts invited as guest members. To get a guest member account mail-enabled, the best approach is to:
->* Invite the cross-org users as guest accounts.
->* Show the accounts in the GAL.
->* Set the UserType to Member.
+## Cross-tenant synchronization
-Using this approach, the accounts show up as MailUser in Exchange Online.
+[Cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-overview.md) can address collaboration and access challenges of multi-tenant organizations. The following table shows common synchronization use cases. You can use cross-tenant synchronization, custom development, or both to satisfy use cases when considerations are relevant to more than one collaboration pattern.
-### Issues with using mail-contact objects instead of external users or members
+| Use case | Cross-tenant sync | Custom development |
+| - | - | - |
+| User lifecycle management | ![Checkmark icon](media/multi-tenant-user-management-scenarios/checkmark.svg) | ![Checkmark icon](media/multi-tenant-user-management-scenarios/checkmark.svg) |
+| File sharing and app access | ![Checkmark icon](media/multi-tenant-user-management-scenarios/checkmark.svg) | ![Checkmark icon](media/multi-tenant-user-management-scenarios/checkmark.svg) |
+| Support sync to/from sovereign clouds | | ![Checkmark icon](media/multi-tenant-user-management-scenarios/checkmark.svg) |
+| Control sync from resource tenant | | ![Checkmark icon](media/multi-tenant-user-management-scenarios/checkmark.svg) |
+| Sync Group objects | | ![Checkmark icon](media/multi-tenant-user-management-scenarios/checkmark.svg) |
+| Sync Manager links | | ![Checkmark icon](media/multi-tenant-user-management-scenarios/checkmark.svg) |
+| Attribute level Source of Authority | | ![Checkmark icon](media/multi-tenant-user-management-scenarios/checkmark.svg) |
+| Azure AD write-back to AD | | ![Checkmark icon](media/multi-tenant-user-management-scenarios/checkmark.svg) |
-You can represent users from another tenant using a traditional GAL synchronization. If a GAL synchronization is done rather than using Azure AD B2B collaboration, a mail-contact object is created.
+## Directory object considerations
-* A mail-contact object and a mail-enabled guest user (member or guest) can't coexist in the same tenant with the same email address at the same time.
+### Inviting an external user with UPN versus SMTP Address
-* If a mail-contact object exists for the same mail address as the invited guest user, the guest user will be created, but is NOT mail enabled.
+Azure AD B2B expects that a user's **UserPrincipalName** (UPN) is the primary SMTP (Email) address for sending invitations. When the user's UPN is the same as their primary SMTP address, B2B works as expected. However, if the UPN is different from the external user's primary SMTP address, it may fail to resolve when the user accepts an invitation, which can be a challenge if you don't know the user's real UPN. You need to discover and use the UPN when sending invitations for B2B.
-* If the mail-enabled guest user exists with the same mail, an attempt to create a mail-contact object will throw an exception at creation time.
+The [Microsoft Exchange Online](#microsoft-exchange-online) section of this article explains how to change the default primary SMTP on external users. This technique is useful if you want all email and notifications for an external user to flow to the real primary SMTP address as opposed to the UPN. It may be a requirement if the UPN isn't routable for mail flow.
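As a minimal sketch (assuming the Microsoft Graph PowerShell SDK; the address and redirect URL are placeholders), an invitation that uses the external user's UPN as the invited email address looks like this:

```PowerShell
Connect-MgGraph -Scopes "User.Invite.All"

# Send the B2B invitation to the external user's UPN (placeholder values)
New-MgInvitation -InvitedUserEmailAddress "jdoe@fabrikam.com" `
    -InviteRedirectUrl "https://myapps.microsoft.com" `
    -SendInvitationMessage
```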
-The following are the results of various mail-contact objects and guest user states.
+### Converting an external user's UserType
-| Existing state| Provisioning scenario| Effective result |
-| - |-|-|
-| None| Invite B2B Member| Non-mail enabled member user. See Important note above |
-| None| Invite B2B Guest| Mail-enable guest user |
-| Mail-contact object exists| Invite B2B Member| Error – Conflict of Proxy Addresses |
-| Mail-contact object exists| Invite B2B Guest| Mail-contact and Non-Mail enabled guest user. See Important note above |
-| Mail-enabled B2B Guest user| Create mail-contact object| Error |
-| Mail-enabled B2B Member user exists| Create mail-contact| Error |
+When you use the console to manually create an invitation for an external user account, it creates the user object with a guest user type. Using other techniques to create invitations enables you to set the user type to something other than an external guest account. For example, when using the API, you can configure whether the account is an external member account or an external guest account.
+- Some of the [limits on guest functionality can be removed](../external-identities/user-properties.md#guest-user-permissions).
+- You can [convert guest accounts to member user type.](../external-identities/user-properties.md#can-azure-ad-b2b-users-be-added-as-members-instead-of-guests)
-**Microsoft does not recommend traditional GAL synchronization**. Instead, use Azure AD B2B collaboration to create:
+If you convert from an external guest user to an external member user account, there might be issues with how Exchange Online handles B2B accounts. You can't mail-enable accounts that you invited as external member users. To mail-enable an external member account, use the following best approach.
-* External guest users that you enable to show in the GAL
+- Invite the cross-org users as external guest user accounts.
+- Show the accounts in the GAL.
+- Set the UserType to Member.
-* Create external member users, which show in the GAL by default, but aren't mail-enabled.
+When you use this approach, the accounts show up as MailUser objects in Exchange Online and across Office 365. Also, note that there's a timing challenge. Make sure the user is visible in the GAL by checking that the Azure AD user *ShowInAddressList* property aligns with the Exchange Online PowerShell *HiddenFromAddressListsEnabled* property (they're the reverse of each other). The [Microsoft Exchange Online](#microsoft-exchange-online) section of this article provides more information on changing visibility.
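A minimal sketch of checking the two properties (assuming the Microsoft Graph PowerShell SDK and the Exchange Online PowerShell module; the user identity is a placeholder):

```PowerShell
# Placeholder: the external user's object ID or UPN in your tenant
$userId = "<object ID or UPN of the external user>"

# Azure AD side: check the ShowInAddressList property
Connect-MgGraph -Scopes "User.Read.All"
Get-MgUser -UserId $userId -Property ShowInAddressList | Select-Object ShowInAddressList

# Exchange Online side: check, and if needed change, GAL visibility
Connect-ExchangeOnline
Get-MailUser -Identity $userId | Select-Object HiddenFromAddressListsEnabled
Set-MailUser -Identity $userId -HiddenFromAddressListsEnabled $false
```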
-Some organizations use the mail-contact object to show users in the GAL. This approach integrates a GAL without providing other permissions as mail-contacts aren't security principals.
+It's possible to convert a member user to a guest user, which is useful for internal users that you want to restrict to guest-level permissions. Internal guest users are users who aren't employees of your organization but for whom you manage the user accounts and credentials. This approach may allow you to avoid licensing the internal guest user.
-A better approach to achieve this goal is to:
-* Invite guest users
-* Unhide them from the GAL
-* Disable them by [blocking them from sign in](/powershell/module/azuread/set-azureaduser).
+### Issues with using mail contact objects instead of external users or members
-A mail-contact object can't be converted to a user object. Therefore, any properties associated with a mail-contact object can't be transferred. For example, group memberships and other resource access aren't transferred.
+You can represent users from another tenant using a traditional GAL synchronization. If you perform a GAL synchronization rather than using Azure AD B2B collaboration, it creates a mail contact object.
-Using a mail-contact object to represent a user presents the following challenges.
+- A mail contact object and a mail-enabled external member or guest user can't coexist in the same tenant with the same email address at the same time.
+- If a mail contact object exists for the same mail address as the invited external user, the invitation creates the external user, but the user isn't mail-enabled.
+- If the mail-enabled external user exists with the same mail, an attempt to create a mail contact object throws an exception at creation time.
-* **Office 365 Groups** ΓÇô Office 365 groups support policies governing the types of users allowed to be members of groups and interact with content associated with groups. For example, a group may not allow guest accounts to join. These policies can't govern mail-contact objects.
+> [!NOTE]
+> Using mail contacts requires Active Directory Domain Services (AD DS) or Exchange Online PowerShell. Microsoft Graph doesn't provide an API call for managing contacts.
-* **Azure AD Self-service group management (SSGM)** ΓÇô Mail-contact objects aren't eligible to be members in groups using the SSGM feature. More tools may be needed to manage groups with recipients represented as contacts instead of user objects.
+The following table displays the results of mail contact objects and external user states.
-* **Azure AD Identity Governance - Access Reviews** ΓÇô The access reviews feature can be used to review and attest to membership of Office 365 group. Access reviews are based on user objects. Members represented by mail-contact objects are out of scope of access reviews.
+| Existing state | Provisioning scenario | Effective result |
+| - | - | - |
+| None | Invite B2B Member | Non-mail-enabled member user. See important note above. |
+| None | Invite B2B Guest | Mail-enabled external guest user. |
+| Mail contact object exists | Invite B2B Member | Error. Conflict of proxy addresses. |
+| Mail contact object exists | Invite B2B Guest | Mail contact object and non-mail-enabled external user. See important note above. |
+| Mail-enabled external guest user | Create mail contact object | Error |
+| Mail-enabled external member user exists | Create mail contact object | Error |
-* **Azure AD Identity Governance - Entitlement Management (EM)** ΓÇô When EM is used to enable self-service access requests for external users via the companyΓÇÖs EM portal, a user object is created at the time of request. Mail-contact objects aren't supported.
+Microsoft recommends using Azure AD B2B collaboration (instead of traditional GAL synchronization) to create:
-## Azure AD conditional access considerations
+- External users that you enable to show in the GAL.
+- External member users that show in the GAL by default but aren't mail-enabled.
-The state of the user, device, or network in the user's home tenant isn't conveyed to the resource tenant. Therefore, a guest user account might not satisfy conditional access (CA) policies that use the following controls.
+You can choose to use the mail contact object to show users in the GAL. This approach integrates a GAL without providing other permissions because mail contacts aren't security principals.
-* **Require multi-factor authentication** ΓÇô Guest users will be required to register/respond to MFA in the resource tenant, even if MFA was satisfied in the home tenant, resulting in multiple MFA challenges. Also, if they need to reset their MFA proofs they might not be aware of the multiple MFA proof registrations across tenants. The lack of awareness might require the user to contact an administrator in the home tenant, resource tenant, or both.
+Follow this recommended approach to achieve the goal:
-* **Require device to be marked as compliant** – Device identity isn't registered in the resource tenant, so the guest user will be blocked from accessing resources that require this control.
+- Invite guest users.
+- Unhide them from the GAL.
+- Disable them by [blocking them from sign-in](/powershell/module/azuread/set-azureaduser), as sketched in the example after this list.
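The following is a minimal sketch of the last two steps. It assumes connected Exchange Online and AzureAD PowerShell sessions, and the UPN is a hypothetical placeholder.

```
# Minimal sketch (hypothetical UPN; assumes Connect-ExchangeOnline and Connect-AzureAD were run).
# Unhide the invited guest so the mail user appears in the GAL.
Set-MailUser guestuser1_contoso.com#EXT#@fabricam.onmicrosoft.com -HiddenFromAddressListsEnabled:$false
# Block the guest from signing in while keeping the object visible in the GAL.
Set-AzureADUser -ObjectId "guestuser1_contoso.com#EXT#@fabricam.onmicrosoft.com" -AccountEnabled $false
```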
-* **Require Hybrid Azure AD Joined device** - Device identity isn't registered in the resource tenant (or on-premises Active Directory connected to resource tenant), so the guest user will be blocked from accessing resources that require this control.
+You can't convert a mail contact object to a user object. Therefore, you can't transfer properties associated with a mail contact object (such as group memberships and other resource access). Using a mail contact object to represent a user comes with the following challenges.
-* **Require approved client app or Require app protection policy** ΓÇô External guest users canΓÇÖt apply resource tenant Intune Mobile App Management (MAM) policy because it also requires device registration. Resource tenant Conditional Access (CA) policy using this control doesnΓÇÖt allow home tenant MAM protection to satisfy the policy. External Guest users should be excluded from every MAM-based CA policy.
+- **Office 365 Groups.** Office 365 Groups support policies governing the types of users allowed to be members of groups and interact with content associated with groups. For example, a group may not allow guest users to join. These policies can't govern mail contact objects.
+- **Azure AD Self-service group management (SSGM).** Mail contact objects aren't eligible to be members in groups using the SSGM feature. You may need more tools to manage groups with recipients represented as contacts instead of user objects.
+- **Azure AD Identity Governance, Access Reviews.** You can use the access reviews feature to review and attest to membership of an Office 365 group. Access reviews are based on user objects. Members represented by mail contact objects are out of scope for access reviews.
+- **Azure AD Identity Governance, Entitlement Management (EM).** When you use EM to enable self-service access requests for external users in the company's EM portal, it creates a user object at the time of request. It doesn't support mail contact objects.
-Additionally, while the following CA conditions can be used, be aware of the possible ramifications.
+## Azure AD conditional access considerations
-* **Sign-in risk and user risk** ΓÇô The sign in risk and user risk are determined in part by user behavior in their home tenant. The data and risk score is stored in the home tenant.
-If resource tenant policies block a guest user, a resource tenant admin might not be able to enable access. For more information, see [Identity Protection and B2B users](../identity-protection/concept-identity-protection-b2b.md).
+The state of the user, device, or network in the user's home tenant doesn't convey to the resource tenant. Therefore, an external user might not satisfy conditional access (CA) policies that use the following controls.
-* **Locations** ΓÇô The named location definitions that are defined in the resource tenant are used to determine the scope of the policy. Trusted locations managed in the home tenant aren't evaluated in the scope of the policy. In some scenarios, organizations might want to share trusted locations across tenants. To share trusted locations, the locations must be defined in each tenant where the resources and conditional access policies are defined.
+Where allowed, you can override this behavior with [Cross-Tenant Access Settings (CTAS)](../external-identities/cross-tenant-access-overview.md) that honor MFA and device compliance from the home tenant.
-## Other access control considerations
+- **Require multi-factor authentication.** Without CTAS configured, an external user must register/respond to MFA in the resource tenant (even if MFA was satisfied in the home tenant), which results in multiple MFA challenges. If they need to reset their MFA proofs, they might not be aware of the multiple MFA proof registrations across tenants. The lack of awareness might require the user to contact an administrator in the home tenant, resource tenant, or both.
+- **Require device to be marked as compliant.** Without CTAS configured, device identity isn't registered in the resource tenant, so the external user can't access resources that require this control.
+- **Require Hybrid Azure AD Joined device.** Without CTAS configured, device identity isn't registered in the resource tenant (or on-premises Active Directory connected to resource tenant), so the external user can't access resources that require this control.
+- **Require approved client app or Require app protection policy.** Without CTAS configured, external users can't apply the resource tenant Intune Mobile App Management (MAM) policy because it also requires device registration. Resource tenant Conditional Access (CA) policy, using this control, doesn't allow home tenant MAM protection to satisfy the policy. Exclude external users from every MAM-based CA policy.
-More considerations when configuring access control.
+Additionally, while you can use the following CA conditions, be aware of the possible ramifications.
-* Define [access control policies](../external-identities/authentication-conditional-access.md) to control access to resources.
-* Design CA policies with guest users in mind.
-* Create policies specifically for guest users.
-* If your organization is using the [All Users] condition in your existing CA policy, this policy will affect guest users because [Guest] users are in scope of [All Users].
-* Create dedicated CA policies for [Guest] accounts.
+- **Sign-in risk and user risk.** User behavior in their home tenant determines, in part, the sign-in risk and user risk. The home tenant stores the data and risk score. If resource tenant policies block an external user, a resource tenant admin might not be able to enable access. [Identity Protection and B2B users](../identity-protection/concept-identity-protection-b2b.md) explains how Identity Protection detects compromised credentials for Azure AD users.
+- **Locations.** The named location definitions in the resource tenant determine the scope of the policy. The scope of the policy doesn't evaluate trusted locations managed in the home tenant. If your organization wants to share trusted locations across tenants, define the locations in each tenant where you define the resources and conditional access policies.
-For information on hardening dynamic groups that utilize the [All Users] expression, see [Dynamic groups and Azure AD B2B collaboration](../external-identities/use-dynamic-groups.md).
+## Other access control considerations
-### Require User Assignment
+The following are considerations for configuring access control.
-If an application has the [User assignment required?] property set to [No], guest users can access the application. Application admins must understand access control impacts, especially if the application contains sensitive information. For more information, see [How to restrict your Azure AD app to a set of users](../develop/howto-restrict-your-app-to-a-set-of-users.md).
+- Define [access control policies](../external-identities/authentication-conditional-access.md) to control access to resources.
+- Design CA policies with external users in mind.
+- Create policies specifically for external users.
+- If your organization is using the [**all users** dynamic group](../external-identities/use-dynamic-groups.md) condition in your existing CA policy, this policy affects external users because they are in scope of **all users**.
+- Create dedicated CA policies for external accounts.
-### Terms and Conditions
+### Require user assignment
-[Azure AD terms of use](../conditional-access/terms-of-use.md) provides a simple method that organizations can use to present information to end users. You can use terms of use to require guest users to approve terms of use before accessing your resources.
+If an application has the **User assignment required?** property set to **No**, external users can access the application. Application admins must understand access control impacts, especially if the application contains sensitive information. [Restrict your Azure AD app to a set of users in an Azure AD tenant](../develop/howto-restrict-your-app-to-a-set-of-users.md) explains how registered applications in an Azure Active Directory (Azure AD) tenant are, by default, available to all users of the tenant who successfully authenticate.
-### Licensing considerations for guest users with Azure AD Premium features
+### Terms and conditions
+
+[Azure AD terms of use](../conditional-access/terms-of-use.md) provides a simple method that organizations can use to present information to end users. You can use terms of use to require external users to approve terms of use before accessing your resources.
-Azure AD External Identities (guest user) pricing is based on monthly active users (MAU). The number of active users is the count of unique users with authentication activity within a calendar month. MAU billing helps you reduce costs by offering a free tier and flexible, predictable pricing. In addition, the first 50,000 MAUs per month are free for both Premium P1 and Premium P2 features. Premium features include Conditional Access Policies and Azure AD Multi-Factor Authentication for guest users.
+### Licensing considerations for guest users with Azure AD Premium features
-For more information, see [MAU billing model for Azure AD External Identities](../external-identities/external-identities-pricing.md).
+Azure AD External Identities pricing is based on monthly active users (MAU): the count of unique users with authentication activity within a calendar month. For details, see [Billing model for Azure AD External Identities](../external-identities/external-identities-pricing.md).
## Office 365 considerations
-The following information addresses Office 365 in the context of this paper's scenarios. Detailed information is available at [Office 365 inter-tenant collaboration](/office365/enterprise/office-365-inter-tenant-collaboration).
+The following information addresses Office 365 in the context of this paper's scenarios. [Microsoft 365 inter-tenant collaboration](/office365/enterprise/office-365-inter-tenant-collaboration) provides detailed information and describes options that include using a central location for files and conversations, sharing calendars, using IM, audio/video calls for communication, and securing access to resources and applications.
### Microsoft Exchange Online
-Exchange online limits certain functionality for guest users. The limits may be lessened by creating external members instead of external guests. However, none of the following are supported for external users at this time.
-
-* A guest user can be assigned an Exchange Online license. However, they're prevented from being issued a token for Exchange Online. The results are that they aren't able to access the resource.
-
- * Guest users can't use shared or delegated Exchange Online mailboxes in the resource tenant.
-
- * A guest user can be assigned to a shared mailbox, but can't access it.
+Exchange Online limits certain functionality for external users. You can lessen the limits by creating external member users instead of external guest users. However, support for external users has the following limitations.
-* Guest users need to be unhidden in order to be included in the GAL. By default, they're hidden.
-
- * Hidden guest users are created at invite time. The creation is independent of whether the user has redeemed their invitation. So, if all guest users are unhidden, the list includes user objects of guest users who haven't redeemed an invitation. Based on your scenario, you may or may not want the objects listed.
-
- * Guest users may be unhidden using [Exchange Online PowerShell](/powershell/exchange/exchange-online-powershell-v2?view=exchange-ps&preserve-view=true) only. You may execute the [Set-MailUser](/powershell/module/exchange/set-mailuser?view=exchange-ps&preserve-view=true) PowerShell cmdlet to set the HiddenFromAddressListsEnabled property to a value of $false.
-
`Set-MailUser [GuestUserUPN] -HiddenFromAddressListsEnabled:$false`
-
-Where [GuestUserUPN] is the calculated UserPrincipalName. Example:
-
`Set-MailUser guestuser1_contoso.com#EXT#@fabricam.onmicrosoft.com -HiddenFromAddressListsEnabled:$false`
-
-* Updates to Exchange-specific properties, such as the PrimarySmtpAddress, ExternalEmailAddress, EmailAddresses, and MailTip, can only be set using [Exchange Online PowerShell](/powershell/exchange/exchange-online-powershell-v2?view=exchange-ps&preserve-view=true). The Exchange Online Admin Center doesn't allow you to modify the attributes using the GUI.
+- You can assign an Exchange Online license to an external user. However, Exchange Online doesn't issue them a token, so they can't access the resource.
+ - External users can't use shared or delegated Exchange Online mailboxes in the resource tenant.
+ - You can assign an external user to a shared mailbox but they can't access it.
+- You need to unhide external users to include them in the GAL. By default, they're hidden.
+ - Hidden external users are created at invite time. The creation is independent of whether the user has redeemed their invitation. So, if all external users are unhidden, the list includes user objects of external users who haven't redeemed an invitation. Based on your scenario, you may or may not want the objects listed.
+ - External users may be unhidden using [Exchange Online PowerShell](/powershell/exchange/exchange-online-powershell-v2). You can execute the [Set-MailUser](/powershell/module/exchange/set-mailuser) PowerShell cmdlet to set the **HiddenFromAddressListsEnabled** property to a value of **$false**.
+
+For example:
-As shown above, you can use the [Set-MailUser](/powershell/module/exchange/set-mailuser?view=exchange-ps&preserve-view=true) PowerShell cmdlet for mail-specific properties. More user properties you can modify with the [Set-User](/powershell/module/exchange/set-user?view=exchange-ps&preserve-view=true) PowerShell cmdlet. Most of the properties can also be modified using the Azure AD Graph APIs.
```Set-MailUser [ExternalUserUPN] -HiddenFromAddressListsEnabled:$false```
-### Microsoft SharePoint Online
+Where **ExternalUserUPN** is the calculated **UserPrincipalName.**
-SharePoint Online has its own service-specific permissions depending on if the user is a member of guest in the Azure Active Directory tenant.
+For example:
-For more information, see [Office 365 external sharing and Azure Active Directory B2B collaboration](../external-identities/o365-external-user.md).
+```Set-MailUser externaluser1_contoso.com#EXT#@fabricam.onmicrosoft.com -HiddenFromAddressListsEnabled:$false```
-After enabling external sharing in SharePoint Online, the ability to search for guest users in the SharePoint Online people picker is OFF by default. This setting prohibits guest users from being discoverable when they're hidden from the Exchange Online GAL. You can enable guest users to become visible in two ways (not mutually exclusive):
+- External users may be unhidden using [Azure AD PowerShell](/powershell/module/azuread). You can execute the [Set-AzureADUser](/powershell/module/azuread/set-azureaduser) PowerShell cmdlet to set the **ShowInAddressList** property to a value of **$true**.
+
+For example:
-* You can enable the ability to search for guest users in a few ways:
- * Modify the setting 'ShowPeoplePickerSuggestionsForGuestUsers' at the tenant and site collection level.
- * Set the feature using the [Set-SPOTenant](/powershell/module/sharepoint-online/Set-SPOTenant?view=sharepoint-ps&preserve-view=true) and [Set-SPOSite](/powershell/module/sharepoint-online/set-sposite?view=sharepoint-ps&preserve-view=true) [SharePoint Online PowerShell](/powershell/sharepoint/sharepoint-online/connect-sharepoint-online?view=sharepoint-ps&preserve-view=true) cmdlets.
-
+```Set-AzureADUser -ObjectId [ExternalUserUPN] -ShowInAddressList:$true```
-* Guest users that are visible in the Exchange Online GAL are also visible in the SharePoint Online people picker. The accounts are visible regardless of the setting for 'ShowPeoplePickerSuggestionsForGuestUsers'.
+Where **ExternalUserUPN** is the calculated **UserPrincipalName.**
-### Microsoft Teams
+For example:
-Microsoft Teams has features to limit access and based on user type. Changes to user type might affect content access and features available.
+```Set-AzureADUser -ObjectId externaluser1_contoso.com#EXT#@fabricam.onmicrosoft.com -ShowInAddressList:$true```
-* The "tenant switching" mechanism for Microsoft Teams might require users to manually switch the context of their Teams client when working in Teams outside their home tenant.
+- There's a timing delay after you update attributes and before you can perform additional automation, which is a result of the backend sync that occurs between Azure AD and Exchange Online. Before continuing operations, make sure the user is visible in the GAL by checking that the Azure AD user property **ShowInAddressList** aligns with the Exchange Online property **HiddenFromAddressListsEnabled** (they're the reverse of each other), as sketched in the example after this list.
+- You can only set updates to Exchange-specific properties (such as the **PrimarySmtpAddress**, **ExternalEmailAddress**, **EmailAddresses**, and **MailTip**) using [Exchange Online PowerShell](/powershell/exchange/exchange-online-powershell-v2). The Exchange Online Admin Center doesn't allow you to modify the attributes using the GUI.
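The following is a minimal sketch of one way to wait for that alignment before continuing. The UPN is a hypothetical placeholder and the polling interval is arbitrary.

```
# Minimal sketch (hypothetical UPN): poll until the Azure AD and Exchange Online
# visibility properties agree (they're the reverse of each other) before continuing.
$upn = "externaluser1_contoso.com#EXT#@fabricam.onmicrosoft.com"
do {
    Start-Sleep -Seconds 30
    $showInGal  = (Get-AzureADUser -ObjectId $upn).ShowInAddressList
    $hiddenInExo = (Get-MailUser -Identity $upn).HiddenFromAddressListsEnabled
} until ($showInGal -eq (-not $hiddenInExo))
```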
-* You can enable Teams users from another entire external domain to find, call, chat, and set up meetings with your users with Teams Federation. For more information, see [Manage external access in Microsoft Teams](/microsoftteams/manage-external-access).
+As shown above, you can use the [Set-MailUser](/powershell/module/exchange/set-mailuser) PowerShell cmdlet for mail-specific properties. There are user properties that you can modify with the [Set-User](/powershell/module/exchange/set-user) PowerShell cmdlet. You can modify most properties with the Azure AD Graph APIs.
-
+One of the most useful features of **Set-MailUser** is the ability to manipulate the **EmailAddresses** property. This multi-valued attribute may contain multiple proxy addresses for the external user (such as SMTP, X500, SIP). By default, an external user has the primary SMTP address stamped correlating to the **UserPrincipalName** (UPN). If you want to change the primary SMTP and/or add SMTP addresses, you can set this property. You can't use the Exchange Admin Center; you must use Exchange Online PowerShell. [Add or remove email addresses for a mailbox in Exchange Online](/exchange/recipients-in-exchange-online/manage-user-mailboxes/add-or-remove-email-addresses) shows different ways to modify a multivalued property such as **EmailAddresses.**
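The following is a minimal sketch with hypothetical addresses: the routable address becomes the primary SMTP address (uppercase SMTP prefix) and the UPN-derived address remains as a secondary proxy address (lowercase smtp prefix).

```
# Minimal sketch (hypothetical addresses): replace the proxy address list so that a
# routable address is the primary SMTP and the UPN-derived address stays as secondary.
Set-MailUser externaluser1_contoso.com#EXT#@fabricam.onmicrosoft.com -EmailAddresses `
    "SMTP:external.user1@contoso.com","smtp:externaluser1_contoso.com#EXT#@fabricam.onmicrosoft.com"
```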
-### Licensing considerations for guest users in Teams
+### Microsoft SharePoint Online
-When using Azure B2B with Office 365 workloads, there are some key considerations. There are instances in which guest accounts don't have the same experience as a member account.
+SharePoint Online has its own service-specific permissions depending on whether the user (internal or external) is of type member or guest in the Azure Active Directory tenant. [Office 365 external sharing and Azure Active Directory B2B collaboration](../external-identities/o365-external-user.md) describes how you can enable integration with SharePoint and OneDrive to share files, folders, list items, document libraries, and sites with people outside your organization, while using Azure B2B for authentication and management.
-**Microsoft groups**. See [Adding guests to office 365 Groups](https://support.office.com/article/adding-guests-to-office-365-groups-bfc7a840-868f-4fd6-a390-f347bf51aff6) to better understand the guest account experience in Microsoft Groups.
+After you enable external sharing in SharePoint Online, the ability to search for guest users in the SharePoint Online people picker is **OFF** by default. This setting prohibits guest users from being discoverable when they're hidden from the Exchange Online GAL. You can enable guest users to become visible in two ways (not mutually exclusive):
-**Microsoft Teams**. See [Team owner, member, and guest capabilities in Teams](https://support.office.com/article/team-owner-member-and-guest-capabilities-in-teams-d03fdf5b-1a6e-48e4-8e07-b13e1350ec7b?ui=en-US&rs=en-US&ad=US) to better understand the guest account experience in Microsoft Teams.
+- You can enable the ability to search for guest users in these ways:
+ - Modify the **ShowPeoplePickerSuggestionsForGuestUsers** setting at the tenant and site collection level.
+ - Set the feature using the [Set-SPOTenant](/powershell/module/sharepoint-online/Set-SPOTenant) and [Set-SPOSite](/powershell/module/sharepoint-online/set-sposite) [SharePoint Online PowerShell](/powershell/sharepoint/sharepoint-online/connect-sharepoint-online) cmdlets, as sketched in the example after this list.
+- Guest users that are visible in the Exchange Online GAL are also visible in the SharePoint Online people picker. The accounts are visible regardless of the setting for **ShowPeoplePickerSuggestionsForGuestUsers**.
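The following is a minimal sketch of the tenant-level and site collection-level settings mentioned in the list above. It assumes a connected SharePoint Online Management Shell session, and the URLs are hypothetical placeholders.

```
# Minimal sketch (hypothetical URLs): allow the people picker to suggest guest users.
Connect-SPOService -Url "https://contoso-admin.sharepoint.com"
# Tenant-wide setting.
Set-SPOTenant -ShowPeoplePickerSuggestionsForGuestUsers $true
# Or scope the setting to a single site collection.
Set-SPOSite -Identity "https://contoso.sharepoint.com/sites/Sales" -ShowPeoplePickerSuggestionsForGuestUsers $true
```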
-You can enable a full fidelity experience in Teams by using B2B External Members. Office 365 recently clarified its licensing policy for Multi-tenant organizations.
+### Microsoft Teams
-* Users that are licensed in their home tenant may access resources in another tenant within the same legal entity. The access is granted using **External Members** setting with no extra licensing fees. The setting applies for SharePoint, OneDrive for Business, Teams, and Groups.
+Microsoft Teams has features to limit access based on user type. Changes to user type can affect content access and available features.
- * Engineering work is underway to automatically check the license status of a user in their home tenant and enable them to participate as a Member with no extra license assignment or configuration. However, for customers who wish to use External Members now, there's a licensing workaround that requires the Account Executive to work with the Microsoft Business Desk.
+The tenant switching mechanism for Microsoft Teams might require users to manually switch the context of their Teams client when working in Teams outside their home tenant.
- * From now until the engineered licensing solution is enabled, customers can utilize a *Teams Trial license*. The license can be assigned to each user in their foreign tenant. The license has a one-year duration and enables all of the workloads listed above.
+You can enable Teams users from another entire external domain to find, call, chat, and set up meetings with your users with Teams Federation. [Manage external meetings and chat with people and organizations using Microsoft identities](/microsoftteams/manage-external-access) describes how you can allow users in your organization to chat and meet with people outside the organization who are using Microsoft as an identity provider.
- * For customers that wish to convert B2B Guests into B2B Members there are several known issues with Microsoft Teams such as the inability to create new channels and the ability to add applications to an existing Team.
+### Licensing considerations for guest users in Teams
-* **Identity Governance** features (Entitlement Management, Access Reviews) may require other licenses for guest users or external members. Work with the Account Team or Business Desk to get right answer for your organization.
+When you use Azure B2B with Office 365 workloads, key considerations include instances in which guest users (internal or external) don't have the same experience as member users.
-**Other products** (like Dynamics CRM) may require licensing in every tenant in which a user is represented. Work with your account team to get the right answer for your organization.
+- **Microsoft Groups.** [Adding guests to Office 365 Groups](https://support.office.com/article/adding-guests-to-office-365-groups-bfc7a840-868f-4fd6-a390-f347bf51aff6) describes how guest access in Microsoft 365 Groups lets you and your team collaborate with people from outside your organization by granting them access to group conversations, files, calendar invitations, and the group notebook.
+- **Microsoft Teams.** [Team owner, member, and guest capabilities in Teams](https://support.office.com/article/team-owner-member-and-guest-capabilities-in-teams-d03fdf5b-1a6e-48e4-8e07-b13e1350ec7b) describes the guest account experience in Microsoft Teams. You can enable a full fidelity experience in Teams by using external member users. Office 365 recently clarified its licensing policy for multi-tenant organizations. Users licensed in their home tenant may access resources in another tenant within the same legal entity. You can grant access using the external members setting with no extra licensing fees. The setting applies for SharePoint, OneDrive for Business, Teams, and Groups.
+- **Identity Governance features.** Entitlement Management and access reviews may require other licenses for external users.
+- **Other products.** Products such as Dynamics CRM may require licensing in every tenant in which a user is represented.
## Next steps
-[Multi-tenant user management introduction](multi-tenant-user-management-introduction.md)
-
-[Multi-tenant end user management scenarios](multi-tenant-user-management-scenarios.md)
-[Multi-tenant common solutions](multi-tenant-common-solutions.md)
+- [Multi-tenant user management introduction](multi-tenant-user-management-introduction.md) is the first in the series of articles that provide guidance for configuring and providing user lifecycle management in Azure Active Directory (Azure AD) multi-tenant environments.
+- [Multi-tenant user management scenarios](multi-tenant-user-management-scenarios.md) describes three scenarios for which you can use multi-tenant user management features: end user-initiated, scripted, and automated.
+- [Common solutions for multi-tenant user management](multi-tenant-common-solutions.md) provides guidance for when single tenancy doesn't work for your scenario, addressing these challenges: automatic user lifecycle management and resource allocation across tenants, and sharing on-premises apps across tenants.
active-directory Multi Tenant Common Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/multi-tenant-common-solutions.md
Previously updated : 08/26/2022 Last updated : 04/19/2023 - # Common solutions for multi-tenant user management
-There are two specific challenges our customers have solved using current tools. Their solutions are detailed below. Microsoft recommends a single tenant wherever possible and is working on tools to resolve these challenges more easily. If single tenancy does not work for your scenario, these solutions have worked for customers today.
-
-## Automatic User Lifecycle Management and resource allocation across tenants
-
-A customer acquires a competitor they previously had close business relationships with. The organizations will maintain their corporate identities.
+This article is the fourth in a series of articles that provide guidance for configuring and providing user lifecycle management in Azure Active Directory (Azure AD) multi-tenant environments. The following articles in the series provide more information as described.
-### Current state
-
-Currently, the organizations are synchronizing each other's users as contact-mail objects so that they show in each other's directories.
+- [Multi-tenant user management introduction](multi-tenant-user-management-introduction.md) is the first in the series.
+- [Multi-tenant user management scenarios](multi-tenant-user-management-scenarios.md) describes three scenarios for which you can use multi-tenant user management features: end user-initiated, scripted, and automated.
+- [Common considerations for multi-tenant user management](multi-tenant-common-considerations.md) provides guidance for these considerations: cross-tenant synchronization, directory object, Azure AD Conditional Access, additional access control, and Office 365.
-* Each resource tenant has a mail-contact object enabled for all users in the other tenant.
+The guidance helps you achieve a consistent state of user lifecycle management. Lifecycle management includes provisioning, managing, and deprovisioning users across tenants using the available Azure tools that include [Azure AD B2B collaboration](../external-identities/what-is-b2b.md) (B2B) and [cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-overview.md).
-* No access to applications is possible across tenants.
+Microsoft recommends a single tenant wherever possible. If single tenancy doesn't work for your scenario, reference the following solutions that Microsoft customers have successfully implemented for these challenges:
-### Goals
+- Automatic user lifecycle management and resource allocation across tenants
+- Sharing on-premises apps across tenants
-This customer had the following goals:
+## Automatic user lifecycle management and resource allocation across tenants
-* Every user continues to be shown in each organization's GAL.
+A customer acquires a competitor with whom they previously had close business relationships. The organizations want to maintain their corporate identities.
- * User account lifecycle changes in the home tenant automatically reflected in the resource tenant GAL.
+### Current state
- * Attribute changes in home tenants (such as department, name, SMTP address) automatically reflected in resource tenant GAL and the home GAL.
+Currently, the organizations are synchronizing each other's users as mail contact objects so that they show in each other's directories. Each resource tenant has enabled mail contact objects for all users in the other tenant. Across tenants, no access to applications is possible.
-* Users can access applications and resources in the resource tenant.
+### Goals
-* Users can self-serve access requests to resources.
+The customer has the following goals.
-### Solution architecture
+- Every user appears in each organization's GAL.
+ - User account lifecycle changes in the home tenant automatically reflected in the resource tenant GAL.
+ - Attribute changes in home tenants (such as department, name, SMTP address) automatically reflected in resource tenant GAL and the home GAL.
+- Users can access applications and resources in the resource tenant.
+- Users can self-serve access requests to resources.
-The organizations will use a point-to-point architecture with a synchronization engine such as MIM.
+### Solution architecture
-![Example of a point-to-point architecture](media/multi-tenant-common-solutions/point-to-point-sync.png)
+The organizations use a point-to-point architecture with a synchronization engine such as Microsoft Identity Manager (MIM). The following diagram illustrates an example of point-to-point architecture for this solution.
-Each tenant admin does the following to create the user objects:
+ Diagram Title: Point-to-point architecture solution. On the left, a box labeled Company A contains internal users and external users. On the right, a box labeled Company B contains internal users and external users. Between Company A and Company B, sync engine interactions go from Company A to Company B and from Company B to Company A.
-1. Ensure that their database of users is up to date.
+Each tenant admin performs the following steps to create the user objects.
+1. Ensure that their user database is up to date.
1. [Deploy and configure MIM](/microsoft-identity-manager/microsoft-identity-manager-deploy).
- 1. Address existing contact objects.
-
- 1. Create B2B External Member objects for the other tenantΓÇÖs members.
-
- 1. Synchronize user object attributes.
-
+ 1. Address existing contact objects.
+ 1. Create external member user objects for the other tenant's internal member users.
+ 1. Synchronize user object attributes.
1. Deploy and configure [Entitlement Management](../governance/entitlement-management-overview.md) access packages.
- 1. Resources to be shared
-
- 1. Expiration and access review policies
+ 1. Resources to be shared.
+ 1. Expiration and access review policies.
## Sharing on-premises apps across tenants
-This customer, with multiple peer organizations, has a need to share on-premises applications from one of the tenants.
+A customer with multiple peer organizations needs to share on-premises applications from one of the tenants.
### Current state
-Multiple peer organizations are synchronizing B2B Guest users in a mesh topology, enabling resource allocation to their cloud applications across tenants. They currently
-
-* Share applications in Azure AD.
+Peer organizations are synchronizing external users in a mesh topology, enabling resource allocation to cloud applications across tenants. The customer offers the following functionality.
-* Ensure user Lifecycle Management in resource tenant is automated based on home tenant. That is, add, modify, delete is reflected.
+- Share applications in Azure AD.
+- Automated user lifecycle management in the resource tenant based on the home tenant (reflecting add, modify, and delete).
-* Only member users in Company A access Company A's on-premises apps.
+The following diagram illustrates this scenario, where only internal users in Company A access Company A's on-premises apps.
-![Multi-tenant scenario](media/multi-tenant-user-management-scenarios/mesh.png)
+ Diagram Title: Mesh topology. On the top left, a box labeled Company A contains internal users and external users. On the top right, a box labeled Company B contains internal users and external users. On the bottom left, a box labeled Company C contains internal users and external users. On the bottom right, a box labeled Company D contains internal users and external users. Between Company A and Company B and between Company C and Company D, sync engine interactions go between the companies on the left and the companies on the right.
### Goals
-Along with the current functionality, they would like to
-
-* Provide access to Company A's on-premises resources for the external guest users.
+Along with the current functionality, they want to offer the following.
-* Apps with SAML authentication
-
-* Apps with Integrated Windows Authentication and Kerberos
+- Provide access to Company A's on-premises resources for the external users.
+- Apps with SAML authentication.
+- Apps with Integrated Windows Authentication and Kerberos.
### Solution architecture
-Company A is currently providing SSO to on premises apps for its own members via Azure Application Proxy.
+Company A provides SSO to on-premises apps for its own internal users using Azure Application Proxy as illustrated in the following diagram.
-![Example of appliction access](media/multi-tenant-common-solutions/app-access-scenario.png)
 Diagram Title: Azure Application Proxy architecture solution. On the top left, a box labeled https://sales.contoso.com contains a globe icon to represent a website. Below it, a group of icons represent the User and are connected by an arrow from the User to the website. On the top right, a cloud shape labeled Azure Active Directory contains an icon labeled Application Proxy Service. An arrow connects the website to the cloud shape. On the bottom right, a box labeled DMZ has the subtitle On-premises. An arrow connects the cloud shape to the DMZ box, splitting in two to point to icons labeled Connector. Below the Connector icon on the left, an arrow points down and splits in two to point to icons labeled App 1 and App 2. Below the Connector icon on the right, an arrow points down to an icon labeled App 3.
-To enable their guest users to access the same on-premises applications Admins in tenet A will:
+Admins in tenant A perform the following steps to enable their external users to access the same on-premises applications.
1. [Configure access to SAML apps](../external-identities/hybrid-cloud-to-on-premises.md#access-to-saml-apps).
+1. [Configure access to other applications](../external-identities/hybrid-cloud-to-on-premises.md#access-to-iwa-and-kcd-apps).
+1. Create on-premises users through [MIM](../external-identities/hybrid-cloud-to-on-premises.md#create-b2b-guest-user-objects-through-mim) or [PowerShell](https://www.microsoft.com/download/details.aspx?id=51495) (an illustrative sketch follows this list).
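The following is only an illustrative sketch of the kind of AD DS object that the MIM solution or the downloadable script creates. The names and OU are hypothetical, and those solutions handle the full attribute mapping that Application Proxy requires.

```
# Illustrative only (hypothetical values; requires the ActiveDirectory module): create an
# AD DS account that represents an invited external user so Kerberos-based apps can
# resolve an on-premises identity.
New-ADUser -Name "Guest User 1 (B2B)" `
    -UserPrincipalName "guestuser1@contoso.com" `
    -Path "OU=B2BGuests,DC=contoso,DC=com" `
    -Enabled $true `
    -AccountPassword (Read-Host -AsSecureString "Temporary password")
```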
-2. [Configure access to other applications](../external-identities/hybrid-cloud-to-on-premises.md#access-to-iwa-and-kcd-apps).
-
-3. Create on-premises guest users through [MIM](../external-identities/hybrid-cloud-to-on-premises.md#create-b2b-guest-user-objects-through-mim) or [PowerShell](https://www.microsoft.com/en-us/download/details.aspx?id=51495).
-
-For more information about B2B collaboration, see
+The following articles provide additional information about B2B collaboration.
-[Grant B2B users in Azure AD access to your on-premises resources](../external-identities/hybrid-cloud-to-on-premises.md)
-
-[Azure Active Directory B2B collaboration for hybrid organizations](../external-identities/hybrid-organizations.md)
+- [Grant B2B users in Azure AD access to your on-premises resources](../external-identities/hybrid-cloud-to-on-premises.md) describes how you can provide B2B users access to on-premises apps.
+- [Azure Active Directory B2B collaboration for hybrid organizations](../external-identities/hybrid-organizations.md) describes how you can give your external partners access to apps and resources in your organization.
## Next steps
-[Multi-tenant user management introduction](multi-tenant-user-management-introduction.md)
-
-[Multi-tenant end user management scenarios](multi-tenant-user-management-scenarios.md)
-[Multi-tenant common considerations](multi-tenant-common-considerations.md)
+- [Multi-tenant user management introduction](multi-tenant-user-management-introduction.md) is the first in the series of articles that provide guidance for configuring and providing user lifecycle management in Azure Active Directory (Azure AD) multi-tenant environments.
+- [Multi-tenant user management scenarios](multi-tenant-user-management-scenarios.md) describes three scenarios for which you can use multi-tenant user management features: end user-initiated, scripted, and automated.
+- [Common considerations for multi-tenant user management](multi-tenant-common-considerations.md) provides guidance for these considerations: cross-tenant synchronization, directory object, Azure AD Conditional Access, additional access control, and Office 365.
active-directory Multi Tenant User Management Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/multi-tenant-user-management-introduction.md
Previously updated : 09/25/2021 Last updated : 04/19/2023
+# Multi-tenant user management introduction
-# Multi-tenant user management
+This article is the first in a series of articles that provide guidance for configuring and providing user lifecycle management in Azure Active Directory (Azure AD) multi-tenant environments. The following articles in the series provide more information as described.
-Provisioning users into a single Azure Active Directory (Azure AD) tenant provides a unified view of resources and a single set of policies and controls. This approach enables consistent user lifecycle management.
+- [Multi-tenant user management scenarios](multi-tenant-user-management-scenarios.md) describes three scenarios for which you can use multi-tenant user management features: end user-initiated, scripted, and automated.
+- [Common considerations for multi-tenant user management](multi-tenant-common-considerations.md) provides guidance for these considerations: cross-tenant synchronization, directory object, Azure AD Conditional Access, additional access control, and Office 365.
+- [Common solutions for multi-tenant user management](multi-tenant-common-solutions.md) provides guidance for when single tenancy doesn't work for your scenario, addressing these challenges: automatic user lifecycle management and resource allocation across tenants, and sharing on-premises apps across tenants.
-**Microsoft recommends a single tenant when possible**. However, immediate consolidation to a single Azure AD tenant isn't always possible. Multi-tenant organizations may span two or more Azure AD tenants. This can result in unique cross-tenant collaboration and management requirements.
+The guidance helps you achieve a consistent state of user lifecycle management. Lifecycle management includes provisioning, managing, and deprovisioning users across tenants using the available Azure tools that include [Azure AD B2B collaboration](../external-identities/what-is-b2b.md) (B2B) and [cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-overview.md).
-Organizations may have identity and access management (IAM) requirements that are complicated by:
+Provisioning users into a single Azure Active Directory (Azure AD) tenant provides a unified view of resources and a single set of policies and controls. This approach enables consistent user lifecycle management.
-* mergers, acquisitions, and divestitures.
+Microsoft recommends a single tenant when possible. Having multiple tenants can result in unique cross-tenant collaboration and management requirements. When consolidation to a single Azure AD tenant isn't possible, multi-tenant organizations may span two or more Azure AD tenants for reasons that include the following.
-* collaboration across public, sovereign, and or regional clouds.
+- Mergers
+- Acquisitions
+- Divestitures
+- Collaboration across public, sovereign, and regional clouds
+- Political or organizational structures that prohibit consolidation to a single Azure AD tenant
-* political or organizational structures prohibiting consolidation to a single Azure AD tenant.
+## Azure AD B2B collaboration
-The guidance also provides guidance to help you achieve a consistent state of user lifecycle management. That is, provisioning, managing, and deprovisioning users across tenants using the tools available with Azure. Specifically, by using [Azure AD B2B collaboration](../external-identities/what-is-b2b.md).
+Azure AD B2B collaboration (B2B) enables you to securely share your company's applications and services with external users. The users can come from any organization, and B2B helps you maintain control over access to your IT environment and data.
-## Azure AD B2B collaboration
+You can use B2B collaboration to give your organization's users access to multiple tenants that you manage. Traditionally, B2B external user access authorizes access for users that your own organization doesn't manage. However, you can also use external user access to manage access across multiple tenants that your organization manages.
-Azure AD collaboration enables you to securely share your company's applications and services with external guest users. The users can come from any organization. Using Azure AD B2B collaboration helps you maintain control over access to your IT environment and data.
-Azure AD B2B collaboration can also be used to provide guest access to internal users. Traditionally, B2B guest user access is used to authorize access to external users that aren't managed by your own organization. However, guest user access can also be used to manage access across multiple tenants managed by your organization. While not truly a B2B solution, Azure AD B2B collaboration can be used to manage internal users across your multi-tenant scenario.
+An area of confusion with Azure AD B2B collaboration surrounds the [properties of a B2B guest user](../external-identities/user-properties.md). The difference between internal versus external user accounts and member versus guest user types contributes to confusion. Initially, all internal users are member users with **UserType** attribute set to *Member* (member users). An internal user has an account in your Azure AD that is authoritative and authenticates to the tenant where the user resides. A member user is a licensed user with default [member-level permissions](../fundamentals/users-default-permissions.md) in the tenant. Treat member users as employees of your organization.
-The following links provide additional information you can visit to find out more about Azure AD B2B collaboration:
+You can invite an internal user of one tenant into another tenant as an external user. An external user signs in with an external Azure AD account, social identity, or other external identity provider. External users authenticate outside the tenant to which you invite the external user. When B2B was first released, all external users were of **UserType** *Guest* (guest users). Guest users have [restricted permissions](../fundamentals/users-default-permissions.md) in the tenant. For example, guest users can't enumerate the list of all users or groups in the tenant directory.
-| Article| Description |
-| - |-|
-| **Conceptual articles**| |
-| [B2B best practices](../external-identities/b2b-fundamentals.md)| Recommendations for the smoothest experience for your users and administrators.|
-| [B2B and Office 365 external sharing](../external-identities/o365-external-user.md)| Explains the similarities and differences among sharing resources through B2B, office 365, and SharePoint/OneDrive.|
-| [Properties on an Azure AD B2B collaboration user](../external-identities/user-properties.md)| Describes the properties and states of the B2B guest user object in Azure Active Directory (Azure AD). The description provides details before and after invitation redemption.|
-| [B2B user tokens](../external-identities/user-token.md)| Provides examples of the bearer tokens for B2B a B2B guest user.|
-| [Conditional access for B2B](../external-identities/authentication-conditional-access.md)| Describes how conditional access and MFA work for guest users.|
-| **How-to articles**| |
-| [Use PowerShell to bulk invite Azure AD B2B collaboration users](../external-identities/bulk-invite-powershell.md)| Learn how to use PowerShell to send bulk invitations to external users.|
-| [Enforce multifactor authentication for B2B guest users](../external-identities/b2b-tutorial-require-mfa.md)|Use conditional access and MFA policies to enforce tenant, app, or individual guest user authentication levels. |
-| [Email one-time passcode authentication](../external-identities/one-time-passcode.md)| The Email one-time passcode feature authenticates B2B guest users when they can't be authenticated through other means like Azure AD, a Microsoft account (MSA), or Google federation.|
+For the **UserType** property on users, B2B supports flipping the value from member to guest, and vice versa, which contributes to the confusion.
-## Terminology
+You can change an internal user from member user to guest user. For example, you can have an unlicensed internal guest user with guest-level permissions in the tenant, which is useful when you provide a user account and credentials to a person that isn't an employee of your organization.
-These terms are used throughout this content:
+You can change an external user from guest user to member user, giving member-level permissions to the external user. Making this change is useful when you manage multiple tenants for your organization and need to give member-level permissions to a user across all tenants. This need may occur regardless of whether the user is internal or external in any given tenant. Member users may require more [licenses](../external-identities/external-identities-pricing.md).
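A minimal sketch of that conversion, assuming the AzureAD PowerShell module and a hypothetical UPN:

```
# Minimal sketch (hypothetical UPN): promote an external guest user to a member user
# so the account gets member-level permissions in this tenant.
Set-AzureADUser -ObjectId "externaluser1_contoso.com#EXT#@fabricam.onmicrosoft.com" -UserType "Member"
```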
-* **Resource tenant**: The Azure AD tenant containing the resources that users want to share with others.
+Most documentation for B2B refers to an external user as a guest user. It conflates the **UserType** property in a way that assumes all guest users are external. When documentation calls out a guest user, it assumes that it's an external guest user. This article specifically and intentionally refers to external versus internal and member user versus guest user.
-* **Home tenant**: The Azure AD tenant containing users requiring access to the resources in the resource tenant.
+## Cross-tenant synchronization
-* **User lifecycle management**: The process of provisioning, managing, and deprovisioning user access to resources.
+[Cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-overview.md) enables multi-tenant organizations to provide seamless access and collaboration experiences to end users, leveraging existing B2B external collaboration capabilities. The feature doesn't allow cross-tenant synchronization across Microsoft sovereign clouds (such as Microsoft 365 US Government GCC High, DOD or Office 365 in China). See [Common considerations for multi-tenant user management](multi-tenant-common-considerations.md#cross-tenant-synchronization) for help with automated and custom cross-tenant synchronization scenarios.
-* **Unified GAL**: Each user in each tenant can see users from each organization in their Global Address List (GAL).
+Watch Arvind Harinder talk about the cross-tenant sync capability in Azure AD (embedded below).
-## Deciding how to meet your requirements
+> [!VIDEO https://www.youtube.com/embed/7B-PQwNfGBc]
+
+The following conceptual and how-to articles provide information about Azure AD B2B collaboration and cross-tenant synchronization.
+
+### Conceptual articles
+
+- [B2B best practices](../external-identities/b2b-fundamentals.md) features recommendations for providing the smoothest experience for users and administrators.
+- [B2B and Office 365 external sharing](../external-identities/o365-external-user.md) explains the similarities and differences among sharing resources through B2B, Office 365, and SharePoint/OneDrive.
+- [Properties on an Azure AD B2B collaboration user](../external-identities/user-properties.md) describes the properties and states of the external user object in Azure AD. The description provides details before and after invitation redemption.
+- [B2B user tokens](../external-identities/user-token.md) provides examples of the bearer tokens for B2B for an external user.
+- [Conditional access for B2B](../external-identities/authentication-conditional-access.md) describes how conditional access and MFA work for external users.
+- [Cross-tenant access settings](../external-identities/cross-tenant-access-overview.md) provides granular control over how external Azure AD organizations collaborate with you (inbound access) and how your users collaborate with external Azure AD organizations (outbound access).
+- [Cross-tenant synchronization overview](../multi-tenant-organizations/cross-tenant-synchronization-overview.md) explains how to automate creating, updating, and deleting Azure AD B2B collaboration users across tenants in an organization.
+
+### How-to articles
+
+- [Use PowerShell to bulk invite Azure AD B2B collaboration users](../external-identities/bulk-invite-powershell.md) describes how to use PowerShell to send bulk invitations to external users.
+- [Enforce multifactor authentication for B2B guest users](../external-identities/b2b-tutorial-require-mfa.md) explains how you can use conditional access and MFA policies to enforce tenant, app, or individual external user authentication levels.
+- [Email one-time passcode authentication](../external-identities/one-time-passcode.md) describes how the Email one-time passcode feature authenticates external users when they can't authenticate through other means like Azure AD, a Microsoft account (MSA), or Google Federation.
-Your organization's unique requirements will determine your strategy for managing your users across tenants. To create an effective strategy, you must consider:
+## Terminology
+
+The following terms in Microsoft content refer to multi-tenant collaboration in Azure AD.
+
+- **Resource tenant:** The Azure AD tenant containing the resources that users want to share with others.
+- **Home tenant:** The Azure AD tenant containing users that require access to the resources in the resource tenant.
+- **Internal user:** An internal user has an account that is authoritative and authenticates to the tenant where the user resides.
+- **External user:** An external user signs in with an account from an external Azure AD tenant, a social identity, or another external identity provider. The external user authenticates outside the tenant to which you invited them.
+- **Member user:** An internal or external member user is a licensed user with default member-level permissions in the tenant. Treat member users as employees of your organization.
+- **Guest user:** An internal or external guest user has restricted permissions in the tenant. Guest users aren't employees of your organization (such as users for partners). Most B2B documentation refers to B2B Guests, which primarily refers to external guest user accounts.
+- **User lifecycle management:** The process of provisioning, managing, and deprovisioning user access to resources.
+- **Unified GAL:** Each user in each tenant can see users from each organization in their Global Address List (GAL).
+
+## Deciding how to meet your requirements
-* Number of tenants
+Your organization's unique requirements influence your strategy for managing users across tenants. To create an effective strategy, consider the following requirements.
-* Type of organization
+- Number of tenants
+- Type of organization
+- Current topologies
+- Specific user synchronization needs
-* Current topologies
+### Common requirements
-* Specific user synchronization needs
+Organizations initially focus on requirements that they want in place for immediate collaboration. Sometimes called *Day One* requirements, they focus on enabling end users to smoothly merge without interrupting their ability to generate value. As you define Day One and administrative requirements, consider including the following requirements and needs.
-### Common Requirements
+### Communications requirements
-Many organizations initially focus on requirements they want in place for immediate collaboration. Sometimes known as Day One requirements, these requirements focus on enabling end users to merge smoothly without interrupting their ability to generate value for the company. As you define your Day One and administrative requirements, consider including these goals:
+- **Unified global address list:** Each user can see all other users in the GAL in their home tenant.
+- **Free/busy information:** Enable users to discover each other's availability. You can do so with [Organization relationships in Exchange Online](/exchange/sharing/organization-relationships/create-an-organization-relationship), as shown in the sketch after this list.
+- **Chat and presence:** Enable users to determine others' presence and initiate instant messaging. Configure through [external access in Microsoft Teams](/microsoftteams/trusted-organizations-external-meetings-chat).
+- **Book resources such as meeting rooms:** Enable users to book conference rooms or other resources across the organization. Cross-tenant conference room booking isn't currently available.
+- **Single email domain:** Enable all users to send and receive mail from a single email domain (for example, `users@contoso.com`). Sending requires an email address rewrite solution.
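The following is a minimal sketch of the free/busy piece, assuming Exchange Online PowerShell and a hypothetical partner domain (`fabrikam.com`); treat it as a starting point rather than a complete configuration.

```powershell
# Minimal sketch: share free/busy information with a partner organization.
# Assumes Exchange Online PowerShell; "Fabrikam" and fabrikam.com are placeholders.
Connect-ExchangeOnline

New-OrganizationRelationship -Name "Fabrikam" `
    -DomainNames "fabrikam.com" `
    -FreeBusyAccessEnabled $true `
    -FreeBusyAccessLevel LimitedDetails
```

Create a matching organization relationship in the partner tenant so availability flows in both directions.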
-| Requirement categories| Common needs|
-| | - |
-| **Communications Requirements**| |
-| Unified global address list| Each user can see all other users in the GAL in their home tenant. |
-| Free/Busy information| Enable users to discover each other's availability. You can do this with [Organization relationships in Exchange Online](/exchange/sharing/organization-relationships/create-an-organization-relationship).|
-| Chat and presence| Enable users to determine others' presence and initiate instant messaging. This can be configured through [external access in Microsoft Teams](/microsoftteams/manage-external-access).|
-| Book resources such as meeting rooms| Enable users to book conference rooms or other resources across the organization. Cross-tenant conference room booking isn't possible today.|
-Single email domain| Enable all users to send and receive mail from a single email domain, for example *users@contoso.com*. Sending requires a third party address rewrite solution today.|
-| **Access requirements**| |
-| Document access| Enable users to share documents from SharePoint, OneDrive, and Teams |
-| Administration| Allow administrators to manage configuration of subscriptions and services deployed across multiple tenants |
-| Application access| Allow end users to access applications across the organization |
-| Single Sign-on| Enable users to access resources across the organization without the need to enter more credentials.|
+### Access requirements
+- **Document access:** Enable users to share documents from SharePoint, OneDrive, and Teams.
+- **Administration:** Allow administrators to manage configuration of subscriptions and services deployed across multiple tenants.
+- **Application access:** Allow end users to access applications across the organization.
+- **Single Sign On:** Enable users to access resources across the organization without the need to enter more credentials.
### Patterns for account creation
-There are several mechanisms available for creating and managing the lifecycle of your guest user accounts. Microsoft has distilled three common patterns. You can use the patterns to help define and implement your requirements. Choose which best aligns with your scenario and then focus on the details for that pattern.
+Microsoft mechanisms for creating and managing the lifecycle of your external user accounts follow three common patterns. You can use these patterns to help define and implement your requirements. Choose the pattern that best aligns with your scenario and then focus on the pattern details.
| Mechanism | Description | Best when |
| - | - | - |
-| [End-user-initiated](multi-tenant-user-management-scenarios.md#end-user-initiated-scenario) | Resource tenant admins delegate the ability to invite guest users to the tenant, an app, or a resource to users within the resource tenant. Users from the home tenant are invited or sign up individually. | <li>Users need improvised access to resources. <li>No automatic synchronization of user attributes is necessary.<li>Unified GAL is not needed. |
-|[Scripted](multi-tenant-user-management-scenarios.md#scripted-scenario) | Resource tenant administrators deploy a scripted "pull" process to automate discovery and provisioning of guest users to support sharing scenarios. | <li>No more than two tenants.<li>No automatic synchronization of user attributes is necessary.<li>Users need pre-configured (not improvised) access to resources.|
-|[Automated](multi-tenant-user-management-scenarios.md#automated-scenario)|Resource tenant admins use an identity provisioning system to automate the provisioning and deprovisioning processes. | <li>Full identity lifecycle management with provisioning and deprovisioning must be automated.<li>Attribute syncing is required to populate the GAL details and support dynamic entitlement scenarios.<li>Users need pre-configured (not ad hoc) access to resources on "Day One".|
-
+| [End user-initiated](multi-tenant-user-management-scenarios.md#end-user-initiated-scenario) | Resource tenant admins delegate the ability to invite external users to the tenant, an app, or a resource to users within the resource tenant. You can invite users from the home tenant or they can individually sign up. | Unified Global Address List on Day One not required. |
+|[Scripted](multi-tenant-user-management-scenarios.md#scripted-scenario) | Resource tenant administrators deploy a scripted *pull* process to automate discovery and provisioning of external users to support sharing scenarios. | Small number of tenants (such as two). |
+|[Automated](multi-tenant-user-management-scenarios.md#automated-scenario)| Resource tenant admins use an identity provisioning system to automate the provisioning and deprovisioning processes. | You need Unified Global Address List across tenants. |
## Next steps
-[Multi-tenant user management scenarios](multi-tenant-user-management-scenarios.md)
-
-[Multi-tenant common considerations](multi-tenant-common-considerations.md)
-
-[Multi-tenant common solutions](multi-tenant-common-solutions.md)
-
-[Multi-tenant synchronization from Active Directory](../hybrid/plan-connect-topologies.md#multiple-azure-ad-tenants)
+- [Multi-tenant user management scenarios](multi-tenant-user-management-scenarios.md) describes three scenarios for which you can use multi-tenant user management features: end user-initiated, scripted, and automated.
+- [Common considerations for multi-tenant user management](multi-tenant-common-considerations.md) provides guidance for these considerations: cross-tenant synchronization, directory object, Azure AD Conditional Access, additional access control, and Office 365.
+- [Common solutions for multi-tenant user management](multi-tenant-common-solutions.md) when single tenancy doesn't work for your scenario, this article provides guidance for these challenges: automatic user lifecycle management and resource allocation across tenants, sharing on-premises apps across tenants.
+- [Multi-tenant synchronization from Active Directory](../hybrid/plan-connect-topologies.md) describes various on-premises and Azure Active Directory (Azure AD) topologies that use Azure AD Connect sync as the key integration solution.
active-directory Multi Tenant User Management Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/multi-tenant-user-management-scenarios.md
Previously updated : 08/26/2022 Last updated : 04/19/2023
# Multi-tenant user management scenarios
-## End-user initiated scenario
-For the end-user initiated scenario, resource tenant administrators delegate certain abilities to users in the tenant. Administrators enable end users to invite guest users to the tenant, an app, or a resource. Users from the home tenant are invited or sign up individually.
+This article is the second in a series of articles that provide guidance for configuring and providing user lifecycle management in Azure Active Directory (Azure AD) multi-tenant environments. The following articles in the series provide more information as described.
-An example use case would be for a global professional services firm who works with subcontractors on a project. Subcontractor users require access to the firm's applications and documents. Admins at the firm can delegate to firm end users the ability to invite subcontractors or configure self-service for subcontractor resource access.
+- [Multi-tenant user management introduction](multi-tenant-user-management-introduction.md) is the first in the series of articles that provide guidance for configuring and providing user lifecycle management in Azure Active Directory (Azure AD) multi-tenant environments.
+- [Common considerations for multi-tenant user management](multi-tenant-common-considerations.md) provides guidance for these considerations: cross-tenant synchronization, directory object, Azure AD Conditional Access, additional access control, and Office 365.
+- [Common solutions for multi-tenant user management](multi-tenant-common-solutions.md) when single tenancy doesn't work for your scenario, this article provides guidance for these challenges: automatic user lifecycle management and resource allocation across tenants, sharing on-premises apps across tenants.
-### Provision accounts
+The guidance helps you achieve a consistent state of user lifecycle management. Lifecycle management includes provisioning, managing, and deprovisioning users across tenants using the available Azure tools that include [Azure AD B2B collaboration](../external-identities/what-is-b2b.md) (B2B) and [cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-overview.md).
-There are many ways end users can get invited to access resource tenant resources. Here are five of the most widely used:
+This article describes three scenarios for which you can use multi-tenant user management features.
-* [Application-based invitations](../external-identities/o365-external-user.md). Microsoft applications may enable invitation of guest users. B2B invitation settings must be configured both in Azure AD B2B and in the relevant application or applications.
+- End user-initiated
+- Scripted
+- Automated
-* [MyApps](../manage-apps/my-apps-deployment-plan.md). Users invite and assign a guest user to an application using MyApps. The user account must have [application self-service sign up](../manage-apps/manage-self-service-access.md) approver permissions. They can invite guest users to a group if they're a group owner.
+## End user-initiated scenario
-* [Entitlement Management](../governance/entitlement-management-overview.md): Enables admins or resource owners to tie resources, allowed external organizations, guest user expiration, and access policies together in access packages. Access packages can be published to enable self-service sign-up for resource access by guest users.
+In end user-initiated scenarios, resource tenant administrators delegate certain abilities to users in the tenant. Administrators enable end users to invite external users to the tenant, an app, or a resource. You can invite users from the home tenant or they can individually sign up.
-* [Azure portal ](../external-identities/add-users-administrator.md) End users given the [Guest Inviter role](../external-identities/external-collaboration-settings-configure.md) can sign in to the Azure portal and invite guest users from the Users menu in Azure Active Directory.
+For example, a global professional services firm collaborates with subcontractors on projects. Subcontractors (external users) require access to the firm's applications and documents. Firm admins can delegate to its end users the ability to invite subcontractors or configure self-service for subcontractor resource access.
-* [Programmatic (PowerShell, Graph API)](../external-identities/customize-invitation-api.md) End users given the [Guest Inviter role](../external-identities/external-collaboration-settings-configure.md) can invite guest users via PowerShell or Graph API.
+### Provisioning accounts
-### Redeem invitations
+Here are the most widely used ways to invite end users to access tenant resources.
-As part of provisioning accounts to access a resource, email invitations are sent to the invited users email address. When an invited user receives an invitation, they can:
-* Follow the link contained in the email to the redemption URL.
-* Try to access the resource directly.
+- [**Application-based invitations.**](../external-identities/o365-external-user.md) Microsoft applications (such as Teams and SharePoint) can enable external user invitations. Configure B2B invitation settings in both Azure AD B2B and in the relevant applications.
+- [**MyApps.**](../manage-apps/my-apps-deployment-plan.md) Users can invite and assign external users to applications using MyApps. The user account must have [application self-service sign up](../manage-apps/manage-self-service-access.md) approver permissions. Group owners can invite external users to their groups.
+- [**Entitlement management.**](../governance/entitlement-management-overview.md) Enable admins or resource owners to create access packages with resources, allowed external organizations, external user expiration, and access policies. Publish access packages to enable external user self-service sign-up for resource access.
+- [**Azure portal.**](../external-identities/add-users-administrator.md) End users with the [Guest Inviter role](../external-identities/external-collaboration-settings-configure.md) can sign in to the Azure portal and invite external users from the **Users** menu in Azure AD.
+- [**Programmatic (PowerShell, Graph API).**](../external-identities/customize-invitation-api.md) End users with the [Guest Inviter role](../external-identities/external-collaboration-settings-configure.md) can use PowerShell or Graph API to invite external users.
-When the user tries to access the resource directly, it is named just-in-time (JIT) redemption. The following are the user experiences for each redemption method.
+### Redeeming invitations
-#### Redemption URL
+When you provision accounts to access a resource, email invitations go to the invited user's email address.
-By accessing the [redemption URL](../external-identities/redemption-experience.md) in the email, the invited user can approve or deny the invitation (creating a guest user account if necessary).
+When an invited user receives an invitation, they can follow the link contained in the email to the redemption URL. In doing so, the invited user can approve or deny the invitation and, if necessary, create an external user account.
-#### Just-In-Time Redemption
+Invited users can also try to directly access the resource, referred to as just-in-time (JIT) redemption, if either of the following scenarios is true.
-The user can access the resource URL directly for just-in-time redemption if:
+- The invited user already has an Azure AD or Microsoft account, or
+- Admins have enabled [email one-time passcodes](../external-identities/one-time-passcode.md).
-* The invited user already has an Azure AD or Microsoft account
--or-
+During JIT redemption, the following considerations may apply.
-* If [email one-time passcodes](../external-identities/one-time-passcode.md) is enabled
-
-A few points during JIT redemption:
-
-* If administrators have not suppressed accepting privacy terms, the user must accept the Privacy Terms agreement page before accessing the resource.
-
-* PowerShell allows control over whether an email is sent when inviting [via PowerShell](/powershell/module/azuread/new-azureadmsinvitation?view=azureadps-2.0&preserve-view=true).
-
-* You can allow or block invitations to guest users from specific organizations by using an [allowlist or a blocklist](../external-identities/allow-deny-list.md).
+- If administrators haven't [suppressed consent prompts](../external-identities/cross-tenant-access-settings-b2b-collaboration.md), the user must consent before accessing the resource.
+- PowerShell allows control over whether an email is sent when [using PowerShell](/powershell/module/azuread/new-azureadmsinvitation?view=azureadps-2.0&preserve-view=true) to invite users, as shown in the sketch after this list.
+- You can allow or block invitations to external users from specific organizations by using an [allowlist or a blocklist](../external-identities/allow-deny-list.md).
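As a rough sketch of the PowerShell option, the following invites an external user without sending the invitation email, so the user redeems just-in-time on first access. It assumes the AzureAD PowerShell module; the email address, display name, and redirect URL are placeholders.

```powershell
# Minimal sketch: invite an external user without sending the invitation email.
# Assumes the AzureAD PowerShell module; all values below are placeholders.
Connect-AzureAD

New-AzureADMSInvitation `
    -InvitedUserEmailAddress "partner.user@fabrikam.com" `
    -InvitedUserDisplayName "Partner User" `
    -InviteRedirectUrl "https://myapps.microsoft.com" `
    -SendInvitationMessage $false
```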
For more information, see [Azure Active Directory B2B collaboration invitation redemption](../external-identities/redemption-experience.md).
-#### Important – enable one-time passcode authentication
+### Enabling one-time passcode authentication
-We strongly recommend enabling [email one time passcode authentication](../external-identities/one-time-passcode.md). This feature authenticates guest users when they can't be authenticated through other means, such as:
+In scenarios where you allow for ad hoc B2B, enable [email one-time passcode authentication](../external-identities/one-time-passcode.md). This feature authenticates external users when you can't authenticate them through other means, such as:
-* Azure AD
+- Azure AD.
+- Microsoft account (MSA).
+- Gmail account through Google Federation.
+- Account from a SAML/WS-Fed IDP through Direct Federation.
-* A Microsoft account (MSA)
+With one-time passcode authentication, there's no need to create a Microsoft account. When the external user redeems an invitation or accesses a shared resource, they receive a temporary code at their email address. They then enter the code to continue signing in.
-* A Gmail account through Google federation
+### Managing accounts
-* An account from a SAML/WS-Fed IDP through Direct Federation
+In the end user-initiated scenario, the resource tenant administrator manages external user accounts in the resource tenant (the accounts aren't updated when values change in the home tenant). The only visible attributes received are the email address and display name.
-With one-time passcode authentication, there's no need to create a Microsoft account. When the guest user redeems an invitation or accesses a shared resource, they receive a temporary code. The code is sent to their email address and then they enter the code to continue signing in.
+You can configure more attributes on external user objects to facilitate different scenarios (such as entitlement scenarios). You can include populating the address book with contact details. For example, consider the following attributes.
-Without email one-time passcode authentication enabled, a Microsoft Account or a just-in-time "unmanaged" Azure AD tenant may be created.
+- **HiddenFromAddressListsEnabled** [ShowInAddressList]
+- **FirstName** [GivenName]
+- **LastName** [SurName]
+- **Title**
+- **Department**
+- **TelephoneNumber**
->**Important**: Microsoft is deprecating the creation of unmanaged tenants and their users as this feature becomes Generally Available (GA) in each cloud environment.
+You might set these attributes to add external users to the global address list (GAL) and to people search (such as SharePoint People Picker). Other scenarios may require different attributes (such as setting entitlements and permissions for Access Packages, Dynamic Group Membership, and SAML Claims).
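For illustration, the following sketch sets a few of these attributes on an external user and unhides the account from Exchange Online address lists so it appears in the unified GAL. It assumes the Microsoft Graph PowerShell SDK and Exchange Online PowerShell; the identifiers and attribute values are placeholders.

```powershell
# Minimal sketch: populate profile attributes on an external user and unhide it
# from address lists. Assumes Microsoft Graph PowerShell and Exchange Online
# PowerShell; the identifiers and values are placeholders.
Connect-MgGraph -Scopes "User.ReadWrite.All"
Connect-ExchangeOnline

$externalUserId  = "00000000-0000-0000-0000-000000000000"                    # placeholder object ID
$externalUserUpn = "partner.user_fabrikam.com#EXT#@contoso.onmicrosoft.com"  # placeholder UPN

Update-MgUser -UserId $externalUserId `
    -GivenName "Partner" -Surname "User" `
    -JobTitle "Consultant" -Department "Engineering" `
    -BusinessPhones @("+1 555 0100")

# Include the external user in Exchange Online address lists (the GAL).
Set-MailUser -Identity $externalUserUpn -HiddenFromAddressListsEnabled $false
```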
-### Manage accounts
+By default, the GAL hides invited external users. Set external user attributes to be unhidden to include them in the unified GAL. The Microsoft Exchange Online section of [Common considerations for multi-tenant user management](multi-tenant-common-considerations.md) describes how you can lessen limits by creating external member users instead of external guest users.
-The resource tenant administrator manages guest user accounts in the resource tenant. Guest users accounts aren't updated based on the updated values in the home tenant. In fact, the only visible attributes received include the email address and display name.
+### Deprovisioning accounts
-You can configure more attributes on guest user objects to facilitate scenarios. For example, you can include populating the address book with contact details, or in entitlement scenarios. For example, consider:
+End user-initiated scenarios decentralize access decisions, which can create the challenge of deciding when to remove an external user and their associated access. [Entitlement management](../governance/entitlement-management-overview.md) and [access reviews](../governance/manage-guest-access-with-access-reviews.md) let you review and remove existing external users and their resource access.
-* HiddenFromAddressListsEnabled
+When you invite users outside of entitlement management, you must create a separate process to review and manage their access. For example, if you directly invite an external user through SharePoint Online, it isn't in your entitlement management process.
-* GivenName
-
-* Surname
+## Scripted scenario
-* Title
+In the scripted scenario, resource tenant administrators deploy a scripted pull process to automate discovery and external user provisioning.
-* Department
+For example, a company acquires a competitor. Each company has a single Azure AD tenant. They want the following Day One scenarios to work without users having to perform any invitation or redemption steps. All users must be able to:
-* TelephoneNumber
+- Use single sign-on to all provisioned resources.
+- Find each other and resources in a unified GAL.
+- Determine each other's presence and initiate chat.
+- Access applications based on dynamic group membership.
-These attributes might be set to [add guests to the global address list](/microsoft-365/admin/create-groups/manage-guest-access-in-groups?view=o365-worldwide&preserve-view=true). Other scenarios may require different attributes, such as for setting entitlements and permissions for Access Packages, Dynamic Group Membership, SAML Claims, etc.
+In this scenario, each organization's tenant is the home tenant for its existing employees while being the resource tenant for the other organization's employees.
-Note: Invited guest users are hidden from the global address list (GAL) by default. Set guest user attributes to be unhidden for them to be included in the unified GAL. For more information, see the [Microsoft Exchange Online](multi-tenant-common-considerations.md#microsoft-exchange-online) documentation.
+### Provisioning accounts
-### Deprovision accounts
+With [Delta Query](/graph/delta-query-overview), tenant admins can deploy a scripted pull process to automate discovery and identity provisioning to support resource access. This process checks the home tenant for new users. It uses the B2B Graph APIs to provision new users as external users in the resource tenant as illustrated in the following multi-tenant topology diagram.
-End-user initiated scenarios decentralize access decisions. However, decentralizing access decisions creates the challenge of deciding when to remove a guest user and its associated access. [Entitlement Management](../governance/entitlement-management-overview.md) and [access reviews](../governance/manage-guest-access-with-access-reviews.md) provide a way to review and remove existing guest users and their access to resources.
+ Diagram Title: Multi-tenant topology diagram. On the left, a box labeled Company A contains internal users and external users. On the right, a box labeled Company B contains internal users and external users. Between Company A and Company B, an interaction goes from Company A to Company B with the label, Script to pull A users to B. Another interaction goes from Company B to Company A with the label, Script to pull B users to A.
-Note: If users are invited outside of entitlement management, you must create a separate process to review and manage those guest users' access. For example, if the guest user is invited directly through SharePoint Online, it is not included in your entitlement management process.
+- Tenant administrators prearrange credentials and consent to allow each tenant to read.
+- Tenant administrators automate enumeration and pulling scoped users to the resource tenant.
+- Use Microsoft Graph API with consented permissions to read and provision users with the invitation API.
+- Initial provisioning can read source attributes and apply them to the target user object.
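The following is a minimal sketch of such a pull process, assuming Microsoft Graph PowerShell, consent already granted in both tenants, and placeholder tenant names and redirect URL. It ignores paging and error handling.

```powershell
# Minimal sketch of a scripted pull: read users from the home tenant with a delta
# query, then provision them as external users in the resource tenant through the
# invitation API. Tenant names and the redirect URL are placeholders.
Connect-MgGraph -TenantId "home.onmicrosoft.com" -Scopes "User.Read.All"
$delta = Invoke-MgGraphRequest -Method GET `
    -Uri 'https://graph.microsoft.com/v1.0/users/delta?$select=displayName,mail'
# Persist $delta.'@odata.deltaLink' so later runs pick up only changes.

Connect-MgGraph -TenantId "resource.onmicrosoft.com" -Scopes "User.Invite.All"
foreach ($user in $delta.value) {
    if ($user.mail) {
        Invoke-MgGraphRequest -Method POST -Uri "https://graph.microsoft.com/v1.0/invitations" -Body @{
            invitedUserEmailAddress = $user.mail
            invitedUserDisplayName  = $user.displayName
            inviteRedirectUrl       = "https://myapps.microsoft.com"
            sendInvitationMessage   = $false
        }
    }
}
```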
-## Scripted scenario
+### Managing accounts
-For the scripted scenario, resource tenant administrators deploy a scripted pull process to automate discovery and provisioning of guest users. This approach is common for customers using a scripted mechanism.
+The resource organization may augment profile data to support sharing scenarios by updating the user's metadata attributes in the resource tenant. However, if ongoing synchronization is necessary, then a synchronized solution might be a better option.
-An example use case would be a global shipping company that is acquired a competitor. Each company has a single Azure AD tenant. They want the following "day one" scenarios to work, without users having to perform any invitation or redemption steps. All users must be able to:
+### Deprovisioning accounts
-* Use single sign-on to all resources to which they are provisioned
+[Delta Query](/graph/delta-query-overview) can signal when an external user needs to be deprovisioned. [Entitlement management](../governance/entitlement-management-overview.md) and [access reviews](../governance/manage-guest-access-with-access-reviews.md) can provide a way to review and remove existing external users and their access to resources.
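As a rough sketch, deleted users show up in a delta response with an `@removed` annotation; the script below assumes a `$storedDeltaLink` saved from a previous run and a hypothetical `$mapping` table from home tenant user IDs to the matching external user IDs in the resource tenant.

```powershell
# Minimal sketch: detect deletions in the home tenant and deprovision the matching
# external users. $storedDeltaLink and $mapping are hypothetical values maintained
# by your own provisioning process.
$delta = Invoke-MgGraphRequest -Method GET -Uri $storedDeltaLink

foreach ($user in $delta.value) {
    if ($user.'@removed') {
        Remove-MgUser -UserId $mapping[$user.id]   # remove the external user in the resource tenant
    }
}
```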
-* Find each other and also find other resources in a unified GAL
+If you invite users outside of entitlement management, create a separate process to review and manage external user access. For example, if you invite the external user directly through SharePoint Online, it isn't in your entitlement management process.
-* Determine each other's presence and be able to initiate instant messages
+## Automated scenario
-* Access an application based on dynamic group membership
+Synchronized sharing across tenants is the most complex of the patterns described in this article. This pattern enables more automated management and deprovisioning options than end user-initiated or scripted.
-In this case, each organization's tenant is the home tenant for its existing employees, and the resource tenant for the other organization's employees.
+In automated scenarios, resource tenant admins use an identity provisioning system to automate provisioning and deprovisioning processes. In scenarios within Microsoft's Commercial Cloud instance, we have [cross-tenant synchronization](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/seamless-application-access-and-lifecycle-management-for-multi/ba-p/3728752). In scenarios that span Microsoft Sovereign Cloud instances, you need other approaches because cross-tenant synchronization doesn't yet support cross-cloud.
-### Provision accounts
+For example, within a Microsoft Commercial Cloud instance, a multi-national conglomeration has multiple subsidiaries with the following requirements.
-With [Delta Query](/graph/delta-query-overview), tenant admins can deploy a scripted pull process to automate discovery and provisioning of identities to support resource access. This process checks the home tenant for new users and uses the B2B Graph APIs to provision those users as invited users in the resource tenant. The following diagram shows the components.
-### Multi-tenant topology
+- Each has their own Azure AD tenant and need to work together.
+- In addition to synchronizing new users among tenants, automatically synchronize attribute updates and automate deprovisioning.
+- If an employee is no longer at a subsidiary, remove their account from all other tenants during the next synchronization.
-![Multi-tenant scenario](media\multi-tenant-user-management-scenarios\multi-tenant-scripted-scenario.png)
+In an expanded, cross-cloud scenario, a Defense Industrial Base (DIB) contractor has a defense-based and commercial-based subsidiary. These have competing regulation requirements:
-* Administrators of each tenant pre-arrange credentials and consent to allow read of each tenant.
+- The US defense business resides in a US Sovereign Cloud tenant such as Microsoft 365 US Government GCC High and Azure Government.
+- The commercial business resides in a separate Azure AD tenant in Commercial such as an Azure AD environment running on the global Azure cloud.
-* Allows tenant administrators to automate enumeration and "pulling" scoped users to resource tenant.
+To act like a single company deployed into a cross-cloud architecture, all users synchronize to both tenants. This approach enables unified GAL availability across both tenants and may ensure that users automatically synchronized to both tenants include entitlements and restrictions to applications and content. Example requirements include:
-* Use MS Graph API with consented permissions to read and provision users via the invitation API.
+- US employees may have ubiquitous access to both tenants.
+- Non-US employees show in the unified GAL of both tenants but don't have access to protected content in the GCC High tenant.
-* Initial provisioning can read source attributes and apply them to the target user object.
+This scenario requires automatic synchronization and identity management to configure users in both tenants while associating them with the proper entitlement and data protection policies.
-### Manage accounts
+[Cross-cloud B2B](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/collaborate-securely-across-organizational-boundaries-and/ba-p/3094109) requires you to configure [Cross-Tenant Access Settings](../external-identities/cross-cloud-settings.md) for each organization with which you want to collaborate in the remote cloud instance.
-The resource organization may choose to augment profile data to support sharing scenarios by updating the user's metadata attributes in the resource tenant. However, if ongoing synchronization is necessary, then a synchronized solution might be a better option.
+### Provisioning accounts
-### Deprovision accounts
+This section describes three techniques for automating account provisioning in the automated scenario.
-[Delta Query](/graph/delta-query-overview) can signal when a guest user needs to be deprovisioned. [Entitlement Management](../governance/entitlement-management-overview.md) and [access reviews](../governance/manage-guest-access-with-access-reviews.md) can also provide a way to review and remove existing guest users and their access to resources.
+#### Technique 1: Use the [built-in cross-tenant synchronization capability in Azure AD](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/seamless-application-access-and-lifecycle-management-for-multi/ba-p/3728752)
-Note: If users are invited outside of entitlement management, you must create a separate process to review and manage those guest users' access. For example, if the guest user is invited directly through SharePoint Online, it is not included in your entitlement management process.
+This approach only works when all tenants that you need to synchronize are in the same cloud instance (such as Commercial to Commercial).
-## Automated Scenario
+#### Technique 2: Provision accounts with Microsoft Identity Manager
-By far, the most complex pattern is synchronized sharing across tenants. This pattern enables more automated management and deprovisioning scenarios than user-initiated or scripted. For automated scenarios, resource tenant admins use an identity provisioning system to automate the provisioning and deprovisioning processes.
+Use an external Identity and Access Management (IAM) solution such as [Microsoft Identity Manager](/microsoft-identity-manager/microsoft-identity-manager-2016) (MIM) as a synchronization engine.
-An example use case would be a multinational conglomeration that has multiple subsidiaries. Each has their own Azure AD tenant, but need to work together. In addition to synchronizing new users among tenants, attribute updates must be automatically synchronized. Deprovisioning must be automated. For example, if an employee is no longer at a subsidiary, their account should be removed from all other tenants during the next synchronization.
+This advanced deployment uses MIM as a synchronization engine. MIM calls the [Microsoft Graph API](https://developer.microsoft.com/graph) and [Exchange Online PowerShell](/powershell/exchange/exchange-online/exchange-online-powershell?view=exchange-ps&preserve-view=true). Alternative implementations can include the cloud-hosted [Active Directory Synchronization Service](/windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview) (ADSS) managed service offering from [Microsoft Industry Solutions](https://www.microsoft.com/industrysolutions). There are non-Microsoft offerings that you can create from scratch with other IAM offerings (such as SailPoint, Omada, and OKTA).
-Or, consider the following expanded scenario. A Defense Industrial Base (DIB) contractor has a defence-based and commercial-based subsidiary. These have competing regulation requirements:
+You perform a cloud-to-cloud synchronization of identity (users, contacts, and groups) from one tenant to another as illustrated in the following diagram.
-* The US defense business resides in a US sovereign cloud tenant. For example, Microsoft 365 US Government GCC High.
+ Diagram Title: Cloud-to-cloud identity synchronization. On the left, a box labeled Company A contains internal users and external users. On the right, a box labeled Company B contains internal users and external users. Between Company A and Company B, sync engine interactions go from Company A to Company B and from Company B to Company A.
-* The commercial business resides in a separate Azure AD tenant in the public. For example, an Azure AD environment running on the global Azure cloud.
+Considerations that are outside the scope of this article include integration of on-premises applications.
- To act like a single company deployed into a "cross-sovereign cloud" architecture, all users are synchronized to both tenants. This enables a unified GAL available across both tenants. It may also ensure that users automatically synchronized to both tenants include entitlements and restrictions to applications and content. For example:
+#### Technique 3: Provision accounts with Azure AD Connect
-* US employees may have ubiquitous access to both tenants.
+This technique only applies for complex organizations that manage all identity in traditional Windows Server-based Active Directory Domain Services (AD DS). The approach uses Azure AD Connect as the synchronization engine as illustrated in the following diagram.
-* Non-US employees show in the unified GAL of both tenants but does not have access to protected content in the GCC High tenant.
+ Diagram Title: Provision accounts with Azure AD Connect. The diagram shows four main components. A box on the left represents the Customer. A cloud shape on the right represents B2B Conversion. At the top center, a box containing a cloud shape represents Microsoft Commercial Cloud. At the bottom center, a box containing a cloud shape represents Microsoft US Government Sovereign Cloud. Inside the Customer box, a Windows Server Active Directory icon connects to two boxes, each with an Azure AD Connect label. The connections are dashed red lines with arrows at both ends and a refresh icon. Inside the Microsoft Commercial Cloud shape is another cloud shape that represents Microsoft Azure Commercial. Inside is another cloud shape that represents Azure Active Directory. To the right of the Microsoft Azure Commercial cloud shape is a box that represents Office 365 with a label, Public Multi-Tenant. A solid red line with arrows at both ends connects the Office 365 box with the Microsoft Azure Commercial cloud shape and a label, Hybrid Workloads. Two dashed lines connect from the Office 365 box to the Azure Active Directory cloud shape. One has an arrow on the end that connects to Azure Active Directory. The other has arrows on both ends. A dashed line with arrows on both ends connects the Azure Active Directory cloud shape to the top Customer Azure AD Connect box. A dashed line with arrows on both ends connects the Microsoft Commercial Cloud shape to the B2B Conversion cloud shape. Inside the Microsoft US Government Sovereign Cloud box is another cloud shape that represents Microsoft Azure Government. Inside is another cloud shape that represents Azure Active Directory. To the right of the Microsoft Azure Commercial cloud shape is a box that represents Office 365 with a label, US Gov GCC-High L4. A solid red line with arrows at both ends connects the Office 365 box with the Microsoft Azure Government cloud shape and a label, Hybrid Workloads. Two dashed lines connect from the Office 365 box to the Azure Active Directory cloud shape. One has an arrow on the end that connects to Azure Active Directory. The other has arrows on both ends. A dashed line with arrows on both ends connects the Azure Active Directory cloud shape to the bottom Customer Azure AD Connect box. A dashed line with arrows on both ends connects the Microsoft Commercial Cloud shape to the B2B Conversion cloud shape.
-This will require automatic synchronization and identity management to configure users in both tenants while associating them with the proper entitlement and data protection policies.
+Unlike the MIM technique, all identity sources (users, contacts, and groups) come from traditional Windows Server-based Active Directory Domain Services (AD DS). The AD DS directory is typically an on-premises deployment for a complex organization that manages identity for multiple tenants. Cloud-only identity isn't in scope for this technique. All identity must be in AD DS to include them in scope for synchronization.
-### Provision accounts
+Conceptually, this technique synchronizes a user into a home tenant as an internal member user (default behavior). Alternatively, it may synchronize a user into a resource tenant as an external user (customized behavior).
-This advanced deployment uses [Microsoft Identity Manager](/microsoft-identity-manager/microsoft-identity-manager-2016) (MIM) as a synchronization engine. MIM calls the [MS Graph API](https://developer.microsoft.com/graph) and [Exchange Online PowerShell](/powershell/exchange/exchange-online/exchange-online-powershell?view=exchange-ps&preserve-view=true). Alternative implementations can include the cloud hosted [Active Directory Synchronization Services](/windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview) (ADSS) managed service offering from [Microsoft Consulting Services](https://www.microsoft.com/en-us/msservices). There are also non-Microsoft offerings that can be created from scratch with other identity management offerings.
+Microsoft supports this dual sync user technique with careful considerations to what modifications occur in the Azure AD Connect configuration. For example, if you make modifications to the wizard-driven setup configuration, you need to document the changes if you must rebuild the configuration during a support incident.
-These are complex scenarios and we recommend you work with your partners, Microsoft account team, and any other available resources throughout your planning and execution.
+Out of the box, Azure AD Connect can't synchronize an external user. You must augment it with an external process (such as a PowerShell script) to convert the users from internal to external accounts.
-Note: There are considerations that are outside the scope of this document. For example, [integration of on-premises applications](../app-proxy/what-is-application-proxy.md).
+Benefits of this technique include Azure AD Connect synchronizing identity with attributes stored in AD DS. Synchronization may include address book attributes, manager attributes, group memberships, and other hybrid identity attributes into all tenants within scope. It deprovisions identity in alignment with AD DS. It doesn't require a more complex IAM solution to manage the cloud identity for this specific task.
-### Choose the right topology
+There's a one-to-one relationship of Azure AD Connect per tenant. Each tenant has its own configuration of Azure AD Connect that you can individually alter to support member and/or external user account synchronization.
-Most customers use one of two topologies in automated scenarios.
+### Choosing the right topology
-* A mesh topology enables sharing of all resources in all tenants. Users from other tenants are created in each resource tenant as guest users.
+Most customers use one of the following topologies in automated scenarios.
-* A single resource tenant topology uses a single tenant (the resource tenant), in which users from other companies are external guest users.
+- A mesh topology enables sharing of all resources in all tenants. You create users from other tenants in each resource tenant as external users.
+- A single resource tenant topology uses a single tenant (the resource tenant), in which users from other tenants are external users.
-The following table can be used s a decision tree while you are designing your solution. We illustrate both topologies following the table. To help you determine which is right for your organization, consider the following.
+Reference the following table as a decision tree while you design your solution. Following the table, diagrams illustrate both topologies to help you determine which is right for your organization.
-Comparison of mesh versus single resource tenant topologies
+#### Comparison of mesh versus single resource tenant topologies
-| Consideration| Mesh topology| Single resource tenant |
+| Consideration | Mesh topology | Single resource tenant |
| - | - |-|
-| Each company has separate Azure AD tenant with users and resources| Yes| Yes |
-| **Resource location and collaboration**| | |
-| Shared apps and other resources remain in their current home tenant| Yes| No - only resources in the resource tenant are shared. |
-| All viewable in individual companyΓÇÖs GALs (Unified GAL)| Yes| No |
-| **Resource access and administration**| | |
-| ALL applications connected to Azure AD can be shared among all companies| Yes| No - only those in the resource tenant are shared. Those remaining in other tenants aren't. |
-| Global resource administration | Continue at tenant level| Consolidated in the resource tenant |
-| Licensing – Office 365 <br>SharePoint Online, unified GAL, Teams access all support guests; however, other Exchange Online scenarios do not| Continues at tenant level| Continues at tenant level |
-| Licensing – [Azure AD (premium)](../external-identities/external-identities-pricing.md)| First 50 K Monthly Active Users are free (per tenant).| First 50 K Monthly Active Users are free. |
-| Licensing – SaaS apps| Remain in individual tenants, may require licenses per user per tenant| All shared resources reside in the single resource tenant. You can investigate consolidating licenses to the single tenant if appropriate. |
+| Each company has separate Azure AD tenant with users and resources | Yes | Yes |
+| **Resource location and collaboration** | | |
+| Shared apps and other resources remain in their current home tenant | Yes | No. You can share only apps and other resources in the resource tenant. You can't share apps and other resources remaining in other tenants. |
+| All viewable in individual company's GALs (Unified GAL) | Yes| No |
+| **Resource access and administration** | | |
+| You can share ALL applications connected to Azure AD among all companies. | Yes | No. Only applications in the resource tenant are shared. You can't share applications remaining in other tenants. |
+| Global resource administration | Continue at tenant level. | Consolidated in the resource tenant. |
+| Licensing: Office 365 SharePoint Online, unified GAL, Teams access all support guests; however, other Exchange Online scenarios don't. | Continues at tenant level. | Continues at tenant level. |
+| Licensing: [Azure AD (premium)](../external-identities/external-identities-pricing.md) | First 50 K Monthly Active Users are free (per tenant). | First 50 K Monthly Active Users are free. |
+| Licensing: SaaS apps | Remain in individual tenants, may require licenses per user per tenant. | All shared resources reside in the single resource tenant. You can investigate consolidating licenses to the single tenant if appropriate. |
#### Mesh topology
-![Mesh topology](media/multi-tenant-user-management-scenarios/mesh.png)
+The following diagram illustrates mesh topology.
-In a mesh topology, every user in each home tenant is synchronized to each of the other tenants, which become resource tenants.
+ Diagram Title: Mesh topology. On the top left, a box labeled Company A contains internal users and external users. On the top right, a box labeled Company B contains internal users and external users. On the bottom left, a box labeled Company C contains internal users and external users. On the bottom right, a box labeled Company D contains internal users and external users. Between Company A and Company B and between Company C and Company D, sync engine interactions go between the companies on the left and the companies on the right.
-* This enables any resource within a tenant to be shared with guest users.
+In a mesh topology, every user in each home tenant synchronizes to each of the other tenants, which become resource tenants.
-* This enables each organization to see all users in the conglomerate. In the illustration above there are four unified GALs, each of which contains the home users and the guest users from the other three tenants.
+- You can share any resource within a tenant with external users.
+- Each organization can see all users in the conglomerate. In the above diagram, there are four unified GALs, each of which contains the home users and the external users from the other three tenants.
-See the [common considerations](multi-tenant-common-considerations.md#directory-object-considerations) section of this document for additional information on provisioning, managing, and deprovisioning users in this scenario.
+[Common considerations for multi-tenant user management](multi-tenant-common-considerations.md) provides information on provisioning, managing, and deprovisioning users in this scenario.
-#### Mesh topology for cross-sovereign cloud
+#### Mesh topology for cross-cloud
-The mesh topology can be used in as few as two tenants, such as in the scenario for the DIB defense contractor straddling a cross-sovereign cloud solution. As with the mesh topology, every user in each home tenant is synchronized to the other tenant, that effectively becomes a resource tenant. In the illustration above, the public Commercial tenant member user is synchronized to the US sovereign GCC High tenant as a guest user account. At the same time, the GCC High member user is synchronized to Commercial as a guest user account.
+You can use the mesh topology in as few as two tenants, such as in the scenario for a DIB defense contractor straddling a cross-sovereign cloud solution. As with the mesh topology, each user in each home tenant synchronizes to the other tenant, which becomes a resource tenant. In the [Technique 3 section](#technique-3-provision-accounts-with-azure-ad-connect) diagram, the public Commercial tenant internal user synchronizes to the US sovereign GCC High tenant as an external user account. At the same time, the GCC High internal user synchronizes to Commercial as an external user account.
->**Note**: The illustration also describes where the data is stored. Data categorization and compliance is outside the scope of this whitepaper, but demonstrates that you can include entitlements and restrictions to applications and content. Content may include where a member user's 'personal data' resides. For example, data stored in their Exchange Online mailbox or OneDrive for Business. The content might only be in their home tenant, not in the resource tenant. Shared data might reside in either tenant. You can restrict access to the content through access control and conditional access policies.
+The diagram also illustrates data storage locations. Data categorization and compliance is outside the scope of this article, but you can include entitlements and restrictions to applications and content. Content may include locations where an internal user's user-owned data resides (such as data stored in an Exchange Online mailbox or OneDrive for Business). The content may be in their home tenant and not in the resource tenant. Shared data may reside in either tenant. You can restrict access to the content through access control and conditional access policies.
#### Single resource tenant topology
-![Single resource tenant](media/multi-tenant-user-management-scenarios/single-resource-tenant-scenario.png)
+The following diagram illustrates single resource tenant topology.
-In a single resource tenant topology, users and their attributes are synchronized to the resource tenant (Company A in the illustration above).
+ Diagram Title: Single resource tenant topology. At the top, a box that represents Company A contains three boxes. On the left, a box represents all shared resources. In the middle, a box represents internal users. On the right, a box represents external users. Below the Company A box is a box that represents the sync engine. Three arrows connect the sync engine to Company A. Below the sync engine box, at the bottom of the diagram, are three boxes that represent Company B, Company C, and Company D. An arrow connects each of them to the sync engine box. Inside each of the bottom company boxes is a label, Microsoft Graph API Exchange online PowerShell, and icons that represent internal users.
-* All resources shared among the member organizations must reside in the single resource tenant.
- * If multiple subsidiaries have subscriptions to the same SaaS apps, this could be an opportunity to consolidate those subscriptions.
+In a single resource tenant topology, users and their attributes synchronize to the resource tenant (Company A in the above diagram).
-* Only the GAL in the resource tenant displays users from all companies.
+- All resources shared among the member organizations must reside in the single resource tenant. If multiple subsidiaries have subscriptions to the same SaaS apps, there's an opportunity to consolidate those subscriptions.
+- Only the GAL in the resource tenant displays users from all companies.
-### Manage accounts
+### Managing accounts
-This solution detects and syncs attribute changes from source tenant users to resource tenant guest users. These attributes can be used to make authorization decisions. For example, when using dynamic groups.
+This solution detects and syncs attribute changes from source tenant users to resource tenant external users. You can use these attributes to make authorization decisions (such as when you're using dynamic groups).
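For example, the following sketch creates a dynamic group whose membership rule keys off attributes kept current by synchronization. It assumes the Microsoft Graph PowerShell SDK; the group name and rule are placeholders.

```powershell
# Minimal sketch: a dynamic group whose membership follows synchronized attributes.
# Assumes Microsoft Graph PowerShell; the display name and rule are placeholders.
Connect-MgGraph -Scopes "Group.ReadWrite.All"

New-MgGroup -DisplayName "Finance collaborators" `
    -MailEnabled:$false -MailNickname "finance-collaborators" -SecurityEnabled `
    -GroupTypes @("DynamicMembership") `
    -MembershipRule '(user.userType -eq "Guest") and (user.department -eq "Finance")' `
    -MembershipRuleProcessingState "On"
```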
-### Deprovision accounts
+### Deprovisioning accounts
-Automation detects deletion of the object in source environment and deletes the associated guest user object in the target environment.
+Automation detects object deletion in the source environment and deletes the associated external user object in the target environment.
-See the [Common considerations](multi-tenant-common-considerations.md) section of this document for additional information on provisioning, managing, and deprovisioning users in this scenario.
+[Common considerations for multi-tenant user management](multi-tenant-common-considerations.md) provides additional information on provisioning, managing, and deprovisioning users in this scenario.
## Next steps
-[Multi-tenant user management introduction](multi-tenant-user-management-introduction.md)
-
-[Multi-tenant common considerations](multi-tenant-common-considerations.md)
-[Multi-tenant common solutions](multi-tenant-common-solutions.md)
+- [Multi-tenant user management introduction](multi-tenant-user-management-introduction.md) is the first in the series of articles that provide guidance for configuring and providing user lifecycle management in Azure Active Directory (Azure AD) multi-tenant environments.
+- [Common considerations for multi-tenant user management](multi-tenant-common-considerations.md) provides guidance for these considerations: cross-tenant synchronization, directory object, Azure AD Conditional Access, additional access control, and Office 365.
+- [Common solutions for multi-tenant user management](multi-tenant-common-solutions.md) when single tenancy doesn't work for your scenario, this article provides guidance for these challenges: automatic user lifecycle management and resource allocation across tenants, sharing on-premises apps across tenants.
active-directory Howto Manage Inactive User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-manage-inactive-user-accounts.md
This article explains a method to handle obsolete user accounts in Azure Active
## What are inactive user accounts?
-Inactive accounts are user accounts that aren't required anymore by members of your organization to gain access to your resources. One key identifier for inactive accounts is that they haven't been used *for a while* to sign in to your environment. Because inactive accounts are tied to the sign-in activity, you can use the timestamp of the last sign-in that was successful to detect them.
+Inactive accounts are user accounts that aren't required anymore by members of your organization to gain access to your resources. One key identifier for inactive accounts is that they haven't been used *for a while* to sign in to your environment. Because inactive accounts are tied to the sign-in activity, you can use the timestamp of the last time an account attempted to sign in to detect inactive accounts.
The challenge of this method is to define what *for a while* means for your environment. For example, users might not sign in to an environment *for a while*, because they are on vacation. When defining what your delta for inactive user accounts is, you need to factor in all legitimate reasons for not signing in to your environment. In many organizations, the delta for inactive user accounts is between 90 and 180 days.
-The last successful sign-in provides potential insights into a user's continued need for access to resources. It can help with determining if group membership or app access is still needed or could be removed. For external user management, you can understand if an external user is still active within the tenant or should be cleaned up.
+The last sign-in provides potential insights into a user's continued need for access to resources. It can help with determining if group membership or app access is still needed or could be removed. For external user management, you can understand if an external user is still active within the tenant or should be cleaned up.
## Detect inactive user accounts with Microsoft Graph <a name="how-to-detect-inactive-user-accounts"></a>
-You can detect inactive accounts by evaluating the `lastSignInDateTime` property exposed by the `signInActivity` resource type of the **Microsoft Graph API**. The `lastSignInDateTime` property shows the last time a user made a successful interactive sign-in to Azure AD. Using this property, you can implement a solution for the following scenarios:
+You can detect inactive accounts by evaluating the `lastSignInDateTime` property exposed by the `signInActivity` resource type of the **Microsoft Graph API**. The `lastSignInDateTime` property shows the last time a user made an interactive sign-in attempt in Azure AD. Using this property, you can implement a solution for the following scenarios:
- **Last sign-in date and time for all users**: In this scenario, you need to generate a report of the last sign-in date of all users. You request a list of all users, and the last `lastSignInDateTime` for each respective user: - `https://graph.microsoft.com/v1.0/users?$select=displayName,signInActivity`
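As an illustration, both kinds of queries can be run from the Azure CLI with `az rest`. This is a sketch only; it assumes you're signed in with an account that has the required Microsoft Graph permissions, and the cutoff date in the filter is a placeholder:

```azurecli-interactive
# Sketch: list users with their last sign-in activity
az rest --method GET \
    --url 'https://graph.microsoft.com/v1.0/users?$select=displayName,signInActivity'

# Sketch: list users whose last interactive sign-in is older than an example cutoff date
az rest --method GET \
    --url 'https://graph.microsoft.com/v1.0/users?$filter=signInActivity/lastSignInDateTime%20le%202023-01-01T00:00:00Z'
```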
The following details relate to the `lastSignInDateTime` property.
- AuditLog.Read.All - User.Read.All -- Each interactive sign-in that was successful results in an update of the underlying data store. Typically, successful sign-ins show up in the related sign-in report within 10 minutes.
+- Each interactive sign-in attempt results in an update of the underlying data store. Typically, sign-ins show up in the related sign-in report within 6 hours.
-- To generate a `lastSignInDateTime` timestamp, you need a successful sign-in. The value of the `lastSignInDateTime` property may be blank if:
- - The last successful sign-in of a user took place before April 2020.
- - The affected user account was never used for a successful sign-in.
+- To generate a `lastSignInDateTime` timestamp, you need an attempted sign-in. The value of the `lastSignInDateTime` property may be blank if:
+ - The last attempted sign-in of a user took place before April 2020.
+ - The affected user account was never used for a sign-in attempt.
- The last sign-in date is associated with the user object. The value is retained until the next sign-in of the user.
active-directory Ideo Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ideo-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md). * A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (e.g. Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
-* [A IDEO tenant](https://www.shape.space/product/pricing)
+* [An IDEO tenant](https://www.saasworthy.com/product/shape-space/pricing)
* A user account on IDEO | Shape with Admin permissions.
active-directory Infor Cloudsuite Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/infor-cloudsuite-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
> [!TIP] > You may also choose to enable SAML-based single sign-on for Infor CloudSuite, following the instructions provided in the [Infor CloudSuite Single sign-on tutorial](./infor-cloud-suite-tutorial.md). Single sign-on can be configured independently of automatic user provisioning, though these two features complement each other.
-> [!NOTE]
-> To learn more about Infor CloudSuite's SCIM endpoint, refer [this](https://docs.infor.com/mingle/12.0.x/en-us/minceolh/jho1449382121585.html#).
- ### To configure automatic user provisioning for Infor CloudSuite in Azure AD: 1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
advisor Advisor How To Improve Reliability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-how-to-improve-reliability.md
+
+ Title: Improve reliability of your business-critical applications using Azure Advisor.
+description: Use Azure Advisor to evaluate the reliability posture of your business-critical applications, assess risks and plan improvements.
+ Last updated : 04/25/2023+++
+# Improve the reliability of your business-critical applications using Azure Advisor
+
+Azure Advisor helps you assess and improve the reliability of your business-critical applications.
+
+## Reliability recommendations
+
+You can get reliability recommendations on the **Reliability** tab on the Advisor dashboard.
+
+1. Sign in to the [**Azure portal**](https://portal.azure.com).
+
+1. Search for and select [**Advisor**](https://aka.ms/azureadvisordashboard) from any page.
+
+1. On the **Advisor** dashboard, select the **Reliability** tab.
+
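+If you prefer the command line, a rough equivalent is to list Advisor recommendations by category with the Azure CLI. This is a sketch only; it assumes the Azure CLI `advisor` commands are available in your environment, and reliability recommendations are surfaced under the `HighAvailability` category:
+
+```azurecli-interactive
+# Sketch: list reliability (HighAvailability) recommendations for the current subscription
+az advisor recommendation list --category HighAvailability --output table
+```
+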
+## Reliability workbook
+
+You can evaluate the reliability posture of your applications, assess risks, and plan improvements using the new Reliability workbook template, which is available in Azure Advisor.
+
+1. Sign in to the [**Azure portal**](https://portal.azure.com).
+
+1. Search for and select [**Advisor**](https://aka.ms/azureadvisordashboard) from any page.
+
+1. Select the **Workbooks** item in the left menu.
+
+1. Open the **Reliability** workbook template.
+++
+## Next steps
+
+For more information about Advisor recommendations, see:
+* [Reliability recommendations](advisor-reference-reliability-recommendations.md)
+* [Introduction to Advisor](advisor-overview.md)
+* [Get started with Advisor](advisor-get-started.md)
++
aks Aks Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-diagnostics.md
Title: Azure Kubernetes Service (AKS) Diagnostics Overview
+ Title: Azure Kubernetes Service (AKS) Diagnose and Solve Problems Overview
description: Learn about self-diagnosing clusters in Azure Kubernetes Service.++ Previously updated : 11/15/2022 Last updated : 03/10/2023+
-# Azure Kubernetes Service Diagnostics (preview) overview
+# Azure Kubernetes Service Diagnose and Solve Problems overview
-Troubleshooting Azure Kubernetes Service (AKS) cluster issues plays an important role in maintaining your cluster, especially if your cluster is running mission-critical workloads. AKS Diagnostics (preview) is an intelligent, self-diagnostic experience with the following features:
+Troubleshooting Azure Kubernetes Service (AKS) cluster issues plays an important role in maintaining your cluster, especially if your cluster is running mission-critical workloads. AKS Diagnose and Solve Problems is an intelligent, self-diagnostic experience that:
+* Helps you identify and resolve problems in your cluster.
+* Requires no extra configuration or billing cost.
+
+## Open AKS Diagnose and Solve Problems
-* Helps you identify and resolve problems in your cluster.
-* Is cloud-native.
-* Requires no extra configuration or billing costs.
+To access AKS Diagnose and Solve Problems:
+1. Navigate to your Kubernetes cluster in the [Azure portal](https://portal.azure.com).
+2. Click on **Diagnose and solve problems** in the left navigation, which opens AKS Diagnose and Solve Problems.
+3. Choose a category that best describes the issue of your cluster by:
+ * Referring to the keywords in each tile description on the homepage.
+ * Typing a keyword that best describes your issue in the search bar.
-## Open AKS Diagnostics
+![screenshot of AKS Diagnose and Solve Problems Homepage.](./media/concepts-diagnostics/aks-diagnostics-homepage.PNG)
-To access AKS Diagnostics:
-
-1. Sign in to the [Azure portal](https://portal.azure.com)
-1. From **All services** in the Azure portal, select **Kubernetes Service**.
-1. Select **Diagnose and solve problems** in the left navigation, which opens AKS Diagnostics.
-1. Choose a category that best describes the issue of your cluster, like _Cluster Node Issues_, using the keywords in the homepage tile or typing a keyword that best describes your issue in the search bar.
-
-![Homepage](./media/concepts-diagnostics/aks-diagnostics-homepage.png)
## View a diagnostic report
-After selecting a category, you can view a diagnostic report specific to your cluster. Diagnostic reports intelligently call out any issues in your cluster with status icons. You can drill down on each topic by clicking **More Info** to see a detailed description of:
+To run the diagnostics, click the tile that matches your issue. The left navigation pane has an _Overview_ option, which runs all the diagnostics in that category; any issues found with the cluster are displayed in the right panel. To get a full picture of an issue, click **View details** on its tile for a detailed description of:
-* Issues
+* Issue summary
+* Error details
* Recommended actions * Links to helpful docs * Related-metrics * Logging data
-Diagnostic reports generate based on the current state of your cluster after running various checks. They can be useful for pinpointing the problem of your cluster and understanding next steps to resolve the issue.
+Based on the outcome, follow the detailed instructions or use the linked documentation to resolve the issue.
-![Diagnostic Report](./media/concepts-diagnostics/diagnostic-report.png)
+**Example scenario 1**: I observed that my application is getting disconnected or experiencing intermittent connection issues. In response, I click the **Connectivity Issues** tile to investigate the potential causes.
-![Expanded Diagnostic Report](./media/concepts-diagnostics/node-issues.png)
+![screenshot of AKS Diagnose and solve problems Results - Networking Tile.](./media/concepts-diagnostics/aks-diagnostics-tile.png)
-## Cluster insights
+I received a diagnostic alert indicating that the disconnection may be related to my *Cluster DNS*. To gather more information, I clicked on *View details*.
-The following diagnostic checks are available in **Cluster Insights**.
+![Screenshot of AKS Diagnose and solve problems - Networking.](./media/concepts-diagnostics/aks-diagnostics-results.png)
-### Cluster Node Issues
+Based on the diagnostic result, it appears that the issue may be related to known DNS issues or VNET configuration. Thankfully, I can use the documentation links provided to resolve the issue.
-Cluster Node Issues checks for node-related issues that cause your cluster to behave unexpectedly. Specifically:
+![Screenshot of AKS Diagnose and Solve Problems Results - Networking - Cluster DNS.](./media/concepts-diagnostics/aks-diagnostics-network.png)
-- Node readiness issues-- Node failures-- Insufficient resources-- Node missing IP configuration-- Node CNI failures-- Node not found-- Node power off-- Node authentication failure-- Node kube-proxy stale
+Furthermore, if the recommended documentation based on the diagnostic results does not resolve the issue, you can return to the previous step in Diagnostics and refer to additional documentation.
-### Create, read, update & delete (CRUD) operations
+![Screenshot of AKS Diagnose and solve problem result - Additional - Docs.](./media/concepts-diagnostics/aks-diagnostics-doc.png)
-CRUD Operations checks for any CRUD operations that cause issues in your cluster. Specifically:
+## Use AKS Diagnose and Solve Problems for Best Practices
-- In-use subnet delete operation error-- Network security group delete operation error-- In-use route table delete operation error-- Referenced resource provisioning error-- Public IP address delete operation error-- Deployment failure due to deployment quota-- Operation error due to organization policy-- Missing subscription registration-- VM extension provisioning error-- Subnet capacity-- Quota exceeded error
+Deploying applications on AKS requires adherence to best practices to guarantee optimal performance, availability, and security. The AKS Diagnose and Solve Problems **Best Practices** tile provides an array of best practices that can assist in managing areas such as VM resource provisioning, cluster upgrades, scaling operations, subnet configuration, and other essential aspects of a cluster's configuration. Using AKS Diagnose and Solve Problems helps ensure that your cluster adheres to best practices and that any potential issues are identified and resolved in a timely and effective manner. By incorporating AKS Diagnose and Solve Problems into your operational practices, you can be confident in the reliability and security of your application in production.
-### Identity and security management
+**Example scenario 2**: My cluster seems to be in good health. All nodes are ready, and my application runs without any issues. However, I am curious about the best practices I can follow to prevent potential problems. So, I click the **Best Practices** tile. After reviewing the recommendations, I discover that even though my cluster appears healthy at the moment, there are still some things I can do to avoid latency, throttling, or VM uptime issues in the future.
-Identity and Security Management detects authentication and authorization errors that prevent communication to your cluster. Specifically,
+![Screenshot of AKS Diagnose and solve problem - Best - Practice.](./media/concepts-diagnostics/aks-diagnostics-best.png)
-- Node authorization failures-- 401 errors-- 403 errors
+![Screenshot of AKS Diagnose and solve problem - Best - result.](./media/concepts-diagnostics/aks-diagnostics-practice.png)
## Next steps
aks Azure Csi Files Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-files-storage-provision.md
Last updated 01/18/2023
# Create and use a volume with Azure Files in Azure Kubernetes Service (AKS)
-A persistent volume represents a piece of storage that has been provisioned for use with Kubernetes pods. A persistent volume can be used by one or many pods, and can be dynamically or statically provisioned. If multiple pods need concurrent access to the same storage volume, you can use Azure Files to connect using the [Server Message Block (SMB) protocol][smb-overview]. This article shows you how to dynamically create an Azure Files share for use by multiple pods in an Azure Kubernetes Service (AKS) cluster.
+A persistent volume represents a piece of storage that has been provisioned for use with Kubernetes pods. A persistent volume can be used by one or many pods, and can be dynamically or statically provisioned. If multiple pods need concurrent access to the same storage volume, you can use Azure Files to connect using the [Server Message Block (SMB) protocol][smb-overview]. This article shows you how to dynamically create an Azure file share for use by multiple pods in an Azure Kubernetes Service (AKS) cluster.
This article shows you how to:
kubectl apply -f azurefiles-mount-options-pvc.yaml
## Next steps
-For Azure File CSI driver parameters, see [CSI driver parameters][CSI driver parameters].
+For Azure Files CSI driver parameters, see [CSI driver parameters][CSI driver parameters].
For associated best practices, see [Best practices for storage and backups in AKS][operator-best-practices-storage].
aks Azure Files Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-csi.md
To create an AKS cluster with CSI drivers support, see [Enable CSI drivers on AK
> [!NOTE] > *In-tree drivers* refers to the current storage drivers that are part of the core Kubernetes code versus the new CSI drivers, which are plug-ins.
-## Azure File CSI driver new features
+## Azure Files CSI driver new features
-In addition to the original in-tree driver features, Azure File CSI driver supports the following new features:
+In addition to the original in-tree driver features, Azure Files CSI driver supports the following new features:
- Network File System (NFS) version 4.1 - [Private endpoint][private-endpoint-overview]
For more information on Kubernetes volumes, see [Storage options for application
## Dynamically create Azure Files PVs by using the built-in storage classes
-A storage class is used to define how an Azure file share is created. A storage account is automatically created in the [node resource group][node-resource-group] for use with the storage class to hold the Azure Files shares. Choose one of the following [Azure storage redundancy SKUs][storage-skus] for *skuName*:
+A storage class is used to define how an Azure file share is created. A storage account is automatically created in the [node resource group][node-resource-group] for use with the storage class to hold the Azure file shares. Choose one of the following [Azure storage redundancy SKUs][storage-skus] for *skuName*:
* **Standard_LRS**: Standard locally redundant storage * **Standard_GRS**: Standard geo-redundant storage
A storage class is used to define how an Azure file share is created. A storage
> [!NOTE] > Azure Files supports Azure Premium Storage. The minimum premium file share capacity is 100 GiB.
-When you use storage CSI drivers on AKS, there are two more built-in `StorageClasses` that uses the Azure File CSI storage drivers. The other CSI storage classes are created with the cluster alongside the in-tree default storage classes.
+When you use storage CSI drivers on AKS, there are two more built-in `StorageClasses` that use the Azure Files CSI storage drivers. The other CSI storage classes are created with the cluster alongside the in-tree default storage classes.
-- `azurefile-csi`: Uses Azure Standard Storage to create an Azure Files share.-- `azurefile-csi-premium`: Uses Azure Premium Storage to create an Azure Files share.
+- `azurefile-csi`: Uses Azure Standard Storage to create an Azure file share.
+- `azurefile-csi-premium`: Uses Azure Premium Storage to create an Azure file share.
-The reclaim policy on both storage classes ensures that the underlying Azure Files share is deleted when the respective PV is deleted. The storage classes also configure the file shares to be expandable, you just need to edit the [persistent volume claim][persistent-volume-claim-overview] (PVC) with the new size.
+The reclaim policy on both storage classes ensures that the underlying Azure file share is deleted when the respective PV is deleted. The storage classes also configure the file shares to be expandable; you just need to edit the [persistent volume claim][persistent-volume-claim-overview] (PVC) with the new size.
-To use these storage classes, create a PVC and respective pod that references and uses them. A PVC is used to automatically provision storage based on a storage class. A PVC can use one of the pre-created storage classes or a user-defined storage class to create an Azure Files share for the desired SKU and size. When you create a pod definition, the PVC is specified to request the desired storage.
+To use these storage classes, create a PVC and a respective pod that references and uses them. A PVC is used to automatically provision storage based on a storage class. A PVC can use one of the pre-created storage classes or a user-defined storage class to create an Azure file share for the desired SKU and size. When you create a pod definition, the PVC is specified to request the desired storage.
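As a quick illustration, a minimal PVC that requests an Azure file share from the built-in `azurefile-csi` class might look like the following sketch (the claim name and size are placeholders):

```bash
# Sketch: request a 100Gi Azure file share through the built-in azurefile-csi storage class
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-azurefile-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile-csi
  resources:
    requests:
      storage: 100Gi
EOF
```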
Create an [example PVC and pod that prints the current date into an `outfile`](https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/deploy/example/statefulset.yaml) by running the [kubectl apply][kubectl-apply] commands:
The output of the command resembles the following example:
storageclass.storage.k8s.io/my-azurefile created ```
-The Azure File CSI driver supports creating [snapshots of persistent volumes](https://kubernetes-csi.github.io/docs/snapshot-restore-feature.html) and the underlying file shares.
+The Azure Files CSI driver supports creating [snapshots of persistent volumes](https://kubernetes-csi.github.io/docs/snapshot-restore-feature.html) and the underlying file shares.
> [!NOTE] > This driver only supports snapshot creation, restore from snapshot is not supported by this driver. Snapshots can be restored from Azure portal or CLI. For more information about creating and restoring a snapshot, see [Overview of share snapshots for Azure Files][share-snapshots-overview].
accountname.file.core.windows.net:/accountname/pvc-fa72ec43-ae64-42e4-a8a2-55660
## Windows containers
-The Azure File CSI driver also supports Windows nodes and containers. To use Windows containers, follow the [Windows containers quickstart](./learn/quick-windows-container-deploy-cli.md) to add a Windows node pool.
+The Azure Files CSI driver also supports Windows nodes and containers. To use Windows containers, follow the [Windows containers quickstart](./learn/quick-windows-container-deploy-cli.md) to add a Windows node pool.
After you have a Windows node pool, use the built-in storage classes like `azurefile-csi` or create a custom one. You can deploy an example [Windows-based stateful set](https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/deploy/example/windows/statefulset.yaml) that saves timestamps into a file `data.txt` by running the [kubectl apply][kubectl-apply] command:
aks Azure Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-netapp-files.md
This section walks you through the installation of Astra Trident using the opera
1. Before creating a backend, you need to update [backend-anf.yaml][backend-anf.yaml] to include details about the Azure NetApp Files subscription, such as: * `subscriptionID` for the Azure subscription where Azure NetApp Files will be enabled.
- * `tenantID`, `clientID`, and `clientSecret` from an [App Registration][azure-ad-app-registration] in Azure Active Directory (AD) with sufficient permissions for the Azure NetApp Files service. The App Registration include the `Owner` or `Contributor` role that's predefined by Azure.
+ * `tenantID`, `clientID`, and `clientSecret` from an [App Registration][azure-ad-app-registration] in Azure Active Directory (AD) with sufficient permissions for the Azure NetApp Files service. The App Registration includes the `Owner` or `Contributor` role that's predefined by Azure.
* An Azure location that contains at least one delegated subnet. In addition, you can choose to provide a different service level. Azure NetApp Files provides three [service levels](../azure-netapp-files/azure-netapp-files-service-levels.md): Standard, Premium, and Ultra.
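If you don't already have an app registration with these values, one possible way to create a service principal and capture the `tenantID`, `clientID`, and `clientSecret` is sketched below. The name is a placeholder, and the role and scope should match what your environment requires:

```azurecli-interactive
# Sketch: create a service principal with Contributor rights on the subscription;
# the output includes appId (clientID), password (clientSecret), and tenant (tenantID)
az ad sp create-for-rbac \
    --name trident-anf-sp \
    --role Contributor \
    --scopes /subscriptions/<subscription-id>
```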
A storage class is used to define how a unit of storage is dynamically created w
kubectl apply -f anf-storageclass.yaml ```
- The output of the command resembles the following example::
+ The output of the command resembles the following example:
```console storageclass/azure-netapp-files created
After the PVC is created, a pod can be spun up to access the Azure NetApp Files
Normal Started 10s kubelet Started container nginx ```
-## Using Azure tags
-
-For more details on using Azure tags, see [Use Azure tags in Azure Kubernetes Service (AKS)][use-tags].
- ## Next steps Astra Trident supports many features with Azure NetApp Files. For more information, see:
aks Cis Ubuntu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cis-ubuntu.md
Title: Azure Kubernetes Service (AKS) Ubuntu image alignment with Center for Int
description: Learn how AKS applies the CIS benchmark Last updated 04/19/2023-+
aks Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-storage.md
Title: Concepts - Storage in Azure Kubernetes Services (AKS) description: Learn about storage in Azure Kubernetes Service (AKS), including volumes, persistent volumes, storage classes, and claims Previously updated : 01/18/2023 Last updated : 04/26/2023
This article introduces the core concepts that provide storage to your applicati
Kubernetes typically treats individual pods as ephemeral, disposable resources. Applications have different approaches available to them for using and persisting data. A *volume* represents a way to store, retrieve, and persist data across pods and through the application lifecycle.
-Traditional volumes are created as Kubernetes resources backed by Azure Storage. You can manually create data volumes to be assigned to pods directly, or have Kubernetes automatically create them. Data volumes can use: [Azure Disks][disks-types], [Azure Files][storage-files-planning], [Azure NetApp Files][azure-netapp-files-service-levels], or [Azure Blobs][storage-account-overview].
+Traditional volumes are created as Kubernetes resources backed by Azure Storage. You can manually create data volumes to be assigned to pods directly, or have Kubernetes automatically create them. Data volumes can use: [Azure Disk][disks-types], [Azure Files][storage-files-planning], [Azure NetApp Files][azure-netapp-files-service-levels], or [Azure Blobs][storage-account-overview].
> [!NOTE]
-> Depending on the VM SKU that's being used, the Azure Disks CSI driver might have a per-node volume limit. For some powerful VMs (for example, 16 cores), the limit is 64 volumes per node. To identify the limit per VM SKU, review the **Max data disks** column for each VM SKU offered. For a list of VM SKUs offered and their corresponding detailed capacity limits, see [General purpose virtual machine sizes][general-purpose-machine-sizes].
+> Depending on the VM SKU that's being used, the Azure Disk CSI driver might have a per-node volume limit. For some powerful VMs (for example, 16 cores), the limit is 64 volumes per node. To identify the limit per VM SKU, review the **Max data disks** column for each VM SKU offered. For a list of VM SKUs offered and their corresponding detailed capacity limits, see [General purpose virtual machine sizes][general-purpose-machine-sizes].
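For example, you can check the data disk limit for a given VM size with a quick query like the following sketch (the size and region are examples):

```azurecli-interactive
# Sketch: show the maximum number of data disks supported by a specific VM size in a region
az vm list-sizes --location eastus \
    --query "[?name=='Standard_D16s_v3'].{name:name, maxDataDiskCount:maxDataDiskCount}" \
    --output table
```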
-### Azure Disks
+To help determine best fit for your workload between Azure Files and Azure NetApp Files, review the information provided in the article [Azure Files and Azure NetApp Files comparison][azure-files-azure-netapp-comparison].
-Use [Azure Disks][azure-disk-csi] to create a Kubernetes *DataDisk* resource. Disks types include:
+### Azure Disk
+
+Use [Azure Disk][azure-disk-csi] to create a Kubernetes *DataDisk* resource. Disks types include:
* Ultra Disks * Premium SSDs
Use [Azure Disks][azure-disk-csi] to create a Kubernetes *DataDisk* resource. Di
> [!TIP] > For most production and development workloads, use Premium SSD.
-Because Azure Disks are mounted as *ReadWriteOnce*, they're only available to a single node. For storage volumes that can be accessed by pods on multiple nodes simultaneously, use Azure Files.
+Because Azure Disk volumes are mounted as *ReadWriteOnce*, they're only available to a single node. For storage volumes that can be accessed by pods on multiple nodes simultaneously, use Azure Files.
### Azure Files
Like using a secret:
Volumes defined and created as part of the pod lifecycle only exist until you delete the pod. Pods often expect their storage to remain if a pod is rescheduled on a different host during a maintenance event, especially in StatefulSets. A *persistent volume* (PV) is a storage resource created and managed by the Kubernetes API that can exist beyond the lifetime of an individual pod.
-You can use [Azure Disks](azure-csi-disk-storage-provision.md) or [Azure Files](azure-csi-files-storage-provision.md) to provide the PersistentVolume. As noted in the [Volumes](#volumes) section, the choice of Disks or Files is often determined by the need for concurrent access to the data or the performance tier.
+You can use [Azure Disk](azure-csi-disk-storage-provision.md) or [Azure Files](azure-csi-files-storage-provision.md) to provide the PersistentVolume. As noted in the [Volumes](#volumes) section, the choice of Disks or Files is often determined by the need for concurrent access to the data or the performance tier.
![Persistent volumes in an Azure Kubernetes Services (AKS) cluster](media/concepts-storage/persistent-volumes.png)
-A PersistentVolume can be *statically* created by a cluster administrator, or *dynamically* created by the Kubernetes API server. If a pod is scheduled and requests currently unavailable storage, Kubernetes can create the underlying Azure Disk or Files storage and attach it to the pod. Dynamic provisioning uses a *StorageClass* to identify what type of Azure storage needs to be created.
+A PersistentVolume can be *statically* created by a cluster administrator, or *dynamically* created by the Kubernetes API server. If a pod is scheduled and requests currently unavailable storage, Kubernetes can create the underlying Azure Disk or File storage and attach it to the pod. Dynamic provisioning uses a *StorageClass* to identify what type of Azure storage needs to be created.
> [!IMPORTANT] > Persistent volumes can't be shared by Windows and Linux pods due to differences in file system support between the two operating systems.
For clusters using the [Container Storage Interface (CSI) drivers][csi-storage-d
||| | `managed-csi` | Uses Azure StandardSSD locally redundant storage (LRS) to create a Managed Disk. The reclaim policy ensures that the underlying Azure Disk is deleted when the persistent volume that used it is deleted. The storage class also configures the persistent volumes to be expandable, you just need to edit the persistent volume claim with the new size. | | `managed-csi-premium` | Uses Azure Premium locally redundant storage (LRS) to create a Managed Disk. The reclaim policy again ensures that the underlying Azure Disk is deleted when the persistent volume that used it is deleted. Similarly, this storage class allows for persistent volumes to be expanded. |
-| `azurefile-csi` | Uses Azure Standard storage to create an Azure File Share. The reclaim policy ensures that the underlying Azure File Share is deleted when the persistent volume that used it is deleted. |
-| `azurefile-csi-premium` | Uses Azure Premium storage to create an Azure File Share. The reclaim policy ensures that the underlying Azure File Share is deleted when the persistent volume that used it is deleted.|
+| `azurefile-csi` | Uses Azure Standard storage to create an Azure file share. The reclaim policy ensures that the underlying Azure file share is deleted when the persistent volume that used it is deleted. |
+| `azurefile-csi-premium` | Uses Azure Premium storage to create an Azure file share. The reclaim policy ensures that the underlying Azure file share is deleted when the persistent volume that used it is deleted.|
| `azureblob-nfs-premium` | Uses Azure Premium storage to create an Azure Blob storage container and connect using the NFS v3 protocol. The reclaim policy ensures that the underlying Azure Blob storage container is deleted when the persistent volume that used it is deleted. | | `azureblob-fuse-premium` | Uses Azure Premium storage to create an Azure Blob storage container and connect using BlobFuse. The reclaim policy ensures that the underlying Azure Blob storage container is deleted when the persistent volume that used it is deleted. |
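You can confirm which of these classes are present in your cluster with a quick check:

```bash
# List the storage classes available in the cluster, including the built-in CSI classes
kubectl get storageclass
```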
For associated best practices, see [Best practices for storage and backups in AK
To see how to use CSI drivers, see the following how-to articles: -- [Enable Container Storage Interface (CSI) drivers for Azure Disks, Azure Files, and Azure Blob storage on Azure Kubernetes Service][csi-storage-drivers]-- [Use Azure Disks CSI driver in Azure Kubernetes Service][azure-disk-csi]
+- [Enable Container Storage Interface (CSI) drivers for Azure Disk, Azure Files, and Azure Blob storage on Azure Kubernetes Service][csi-storage-drivers]
+- [Use Azure Disk CSI driver in Azure Kubernetes Service][azure-disk-csi]
- [Use Azure Files CSI driver in Azure Kubernetes Service][azure-files-csi] - [Use Azure Blob storage CSI driver (preview) in Azure Kubernetes Service][azure-blob-csi] - [Integrate Azure NetApp Files with Azure Kubernetes Service][azure-netapp-files]
For more information on core Kubernetes and AKS concepts, see the following arti
[csi-storage-drivers]: csi-storage-drivers.md [azure-blob-csi]: azure-blob-csi.md [general-purpose-machine-sizes]: ../virtual-machines/sizes-general.md
+[azure-files-azure-netapp-comparison]: ../storage/files/storage-files-netapp-comparison.md
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
For the past release history, see [Kubernetes history](https://en.wikipedia.org/
## Alias minor version > [!NOTE]
-> Alias minor version requires Azure CLI version 2.37 or above as well as API version 20220201 or above. Use `az upgrade` to install the latest version of the CLI.
+> Alias minor version requires Azure CLI version 2.37 or above as well as API version 20220401 or above. Use `az upgrade` to install the latest version of the CLI.
AKS allows you to create a cluster without specifying the exact patch version. When you create a cluster without designating a patch, the cluster runs the minor version's latest GA patch. For example, if you create a cluster with **`1.21`**, your cluster will run **`1.21.7`**, which is the latest GA patch version of *1.21*.
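For example, the following sketch creates a cluster pinned to a minor version only, and AKS resolves it to that minor version's latest GA patch (resource names are placeholders):

```azurecli-interactive
# Sketch: create a cluster by alias minor version; AKS picks the latest GA patch of 1.21
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --kubernetes-version 1.21 \
    --generate-ssh-keys
```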
aks Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-managed-identity.md
Title: Use a managed identity in Azure Kubernetes Service
-description: Learn how to use a system-assigned or user-assigned managed identity in Azure Kubernetes Service (AKS)
+ Title: Use a managed identity in Azure Kubernetes Service (AKS)
+description: Learn how to use a system-assigned or user-assigned managed identity in Azure Kubernetes Service (AKS).
Previously updated : 11/08/2022 Last updated : 04/26/2023
-# Use a managed identity in Azure Kubernetes Service
+# Use a managed identity in Azure Kubernetes Service (AKS)
-An Azure Kubernetes Service (AKS) cluster requires an identity to access Azure resources like load balancers and managed disks. This identity can be either a managed identity or a service principal. By default, when you create an AKS cluster a system-assigned managed identity is automatically created. The identity is managed by the Azure platform and doesn't require you to provision or rotate any secrets. For more information about managed identities in Azure AD, see [Managed identities for Azure resources][managed-identity-resources-overview].
+Azure Kubernetes Service (AKS) clusters require an identity to access Azure resources like load balancers and managed disks. This identity can be a *managed identity* or *service principal*. A system-assigned managed identity is automatically created when you create an AKS cluster. This identity is managed by the Azure platform and doesn't require you to provision or rotate any secrets. For more information about managed identities in Azure AD, see [Managed identities for Azure resources][managed-identity-resources-overview].
-To use a [service principal](kubernetes-service-principal.md), you have to create one, as AKS does not create one automatically. Clusters using a service principal eventually expire and the service principal must be renewed to keep the cluster working. Managing service principals adds complexity, thus it's easier to use managed identities instead. The same permission requirements apply for both service principals and managed identities.
+To use a [service principal](kubernetes-service-principal.md), you have to create one yourself because AKS doesn't create one automatically. Clusters that use a service principal eventually expire, and the service principal must be renewed to keep the cluster working. Managing service principals adds complexity, so it's easier to use managed identities instead. The same permission requirements apply for both service principals and managed identities. Managed identities use certificate-based authentication. Each managed identity's credentials have an expiration of *90 days* and are rolled after *45 days*. AKS uses both system-assigned and user-assigned managed identity types, and these identities are immutable.
-Managed identities are essentially a wrapper around service principals, and make their management simpler. Managed identities use certificate-based authentication, and each managed identities credential has an expiration of 90 days and it's rolled after 45 days. AKS uses both system-assigned and user-assigned managed identity types, and these identities are immutable.
+> [!NOTE]
+> If you're considering implementing [Azure AD pod-managed identity][aad-pod-identity] on your AKS cluster, we recommend you first review the [Azure AD workload identity overview][workload-identity-overview]. This authentication method replaces Azure AD pod-managed identity (preview) and is the recommended method.
-## Prerequisites
+## Before you begin
-Azure CLI version 2.23.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+Make sure you have Azure CLI version 2.23.0 or later installed. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
## Limitations
-* Tenants move or migrate a managed identity-enabled cluster isn't supported.
-* If the cluster has Azure AD pod-managed identity (`aad-pod-identity`) enabled, Node-Managed Identity (NMI) pods modify the nodes'
- iptables to intercept calls to the Azure Instance Metadata (IMDS) endpoint. This configuration means any
- request made to the Metadata endpoint is intercepted by NMI even if the pod doesn't use
- `aad-pod-identity`. AzurePodIdentityException CRD can be configured to inform `aad-pod-identity`
- that any requests to the Metadata endpoint originating from a pod that matches labels defined in
- CRD should be proxied without any processing in NMI. The system pods with
- `kubernetes.azure.com/managedby: aks` label in _kube-system_ namespace should be excluded in
- `aad-pod-identity` by configuring the AzurePodIdentityException CRD. For more information, see
- [Disable aad-pod-identity for a specific pod or application](https://azure.github.io/aad-pod-identity/docs/configure/application_exception).
- To configure an exception, install the
- [mic-exception YAML](https://github.com/Azure/aad-pod-identity/blob/master/deploy/infra/mic-exception.yaml).
-
-> [!NOTE]
-> If you are considering implementing [Azure AD pod-managed identity][aad-pod-identity] on your AKS cluster,
-> we recommend you first review the [workload identity overview][workload-identity-overview] article to understand our
-> recommendations and options to set up your cluster to use an Azure AD workload identity (preview).
-> This authentication method replaces pod-managed identity (preview), which integrates with the Kubernetes native capabilities
-> to federate with any external identity providers.
+* Moving or migrating a managed identity-enabled cluster between tenants isn't supported.
+* If the cluster has Azure AD pod-managed identity (`aad-pod-identity`) enabled, Node-Managed Identity (NMI) pods modify the iptables of the nodes to intercept calls to the Azure Instance Metadata (IMDS) endpoint. This configuration means any request made to the Metadata endpoint is intercepted by NMI, even if the pod doesn't use `aad-pod-identity`. The AzurePodIdentityException CRD can be configured to inform `aad-pod-identity` that any requests to the Metadata endpoint originating from a pod that matches labels defined in the CRD should be proxied without any processing in NMI. The system pods with the `kubernetes.azure.com/managedby: aks` label in the *kube-system* namespace should be excluded in `aad-pod-identity` by configuring the AzurePodIdentityException CRD.
+ * For more information, see [Disable aad-pod-identity for a specific pod or application](https://azure.github.io/aad-pod-identity/docs/configure/application_exception).
+ * To configure an exception, install the [mic-exception YAML](https://github.com/Azure/aad-pod-identity/blob/master/deploy/infra/mic-exception.yaml).
+* AKS doesn't support the use of a system-assigned managed identity if using a custom private DNS zone.
## Summary of managed identities
AKS uses several managed identities for built-in services and add-ons.
| Identity | Name | Use case | Default permissions | Bring your own identity |-|--|-|
-| Control plane | AKS Cluster Name | Used by AKS control plane components to manage cluster resources including ingress load balancers and AKS managed public IPs, Cluster Autoscaler, Azure Disk & File CSI drivers | Contributor role for Node resource group | Supported
-| Kubelet | AKS Cluster Name-agentpool | Authentication with Azure Container Registry (ACR) | NA (for kubernetes v1.15+) | Supported
-| Add-on | AzureNPM | No identity required | NA | No
-| Add-on | AzureCNI network monitoring | No identity required | NA | No
-| Add-on | azure-policy (gatekeeper) | No identity required | NA | No
-| Add-on | azure-policy | No identity required | NA | No
-| Add-on | Calico | No identity required | NA | No
-| Add-on | Dashboard | No identity required | NA | No
-| Add-on | HTTPApplicationRouting | Manages required network resources | Reader role for node resource group, contributor role for DNS zone | No
-| Add-on | Ingress application gateway | Manages required network resources| Contributor role for node resource group | No
-| Add-on | omsagent | Used to send AKS metrics to Azure Monitor | Monitoring Metrics Publisher role | No
-| Add-on | Virtual-Node (ACIConnector) | Manages required network resources for Azure Container Instances (ACI) | Contributor role for node resource group | No
-| OSS project | aad-pod-identity | Enables applications to access cloud resources securely with Microsoft Azure Active Directory (Azure AD) | NA | Steps to grant permission at [Azure AD Pod Identity Role Assignment configuration](https://azure.github.io/aad-pod-identity/docs/getting-started/role-assignment/).
-
-## Create an AKS cluster using a managed identity
+| Control plane | AKS Cluster Name | Used by AKS control plane components to manage cluster resources including ingress load balancers and AKS-managed public IPs, Cluster Autoscaler, Azure Disk & File CSI drivers. | Contributor role for Node resource group | Supported
+| Kubelet | AKS Cluster Name-agentpool | Authentication with Azure Container Registry (ACR). | N/A (for kubernetes v1.15+) | Supported
+| Add-on | AzureNPM | No identity required. | N/A | No
+| Add-on | AzureCNI network monitoring | No identity required. | N/A | No
+| Add-on | azure-policy (gatekeeper) | No identity required. | N/A | No
+| Add-on | azure-policy | No identity required. | N/A | No
+| Add-on | Calico | No identity required. | N/A | No
+| Add-on | Dashboard | No identity required. | N/A | No
+| Add-on | HTTPApplicationRouting | Manages required network resources. | Reader role for node resource group, contributor role for DNS zone | No
+| Add-on | Ingress application gateway | Manages required network resources. | Contributor role for node resource group | No
+| Add-on | omsagent | Used to send AKS metrics to Azure Monitor. | Monitoring Metrics Publisher role | No
+| Add-on | Virtual-Node (ACIConnector) | Manages required network resources for Azure Container Instances (ACI). | Contributor role for node resource group | No
+| OSS project | aad-pod-identity | Enables applications to access cloud resources securely with Microsoft Azure Active Directory (Azure AD). | N/A | Steps to grant permission at [Azure AD Pod Identity Role Assignment configuration](https://azure.github.io/aad-pod-identity/docs/getting-started/role-assignment/).
+
+## Enable managed identities on a new AKS cluster
> [!NOTE]
-> AKS will create a system-assigned kubelet identity in the Node resource group if you do not [specify your own kubelet managed identity][Use a pre-created kubelet managed identity].
+> AKS creates a system-assigned kubelet identity in the node resource group if you don't [specify your own kubelet managed identity][Use a pre-created kubelet managed identity].
-You can create an AKS cluster using a system-assigned managed identity by running the following CLI command.
+1. Create an Azure resource group using the [`az group create`][az-group-create] command.
-First, create an Azure resource group:
+ ```azurecli-interactive
+ az group create --name myResourceGroup --location westus2
+ ```
-```azurecli-interactive
-# Create an Azure resource group
-az group create --name myResourceGroup --location westus2
-```
+2. Create an AKS cluster using the [`az aks create`][az-aks-create] command.
-Then, create an AKS cluster:
+ ```azurecli-interactive
+ az aks create -g myResourceGroup -n myManagedCluster --enable-managed-identity
+ ```
-```azurecli-interactive
-az aks create -g myResourceGroup -n myManagedCluster --enable-managed-identity
-```
+3. Get credentials to access the cluster using the [`az aks get-credentials`][az-aks-get-credentials] command.
-Once the cluster is created, you can then deploy your application workloads to the new cluster and interact with it just as you've done with service-principal-based AKS clusters.
+ ```azurecli-interactive
+ az aks get-credentials --resource-group myResourceGroup --name myManagedCluster
+ ```
-Finally, get credentials to access the cluster:
+## Enable managed identities on an existing AKS cluster
-```azurecli-interactive
-az aks get-credentials --resource-group myResourceGroup --name myManagedCluster
-```
-
-## Update an AKS cluster to use a managed identity
-
-> [!NOTE]
-> If AKS has custom private DNS zone, AKS does not support to use system-assigned managed identity.
+* Update an existing AKS cluster currently using a service principal to work with a system-assigned managed identity using the [`az aks update`][az-aks-update] command.
-To update an AKS cluster currently using a service principal to work with a system-assigned managed identity, run the following CLI command.
+ ```azurecli-interactive
+ az aks update -g myResourceGroup -n myManagedCluster --enable-managed-identity
+ ```
-```azurecli-interactive
-az aks update -g <RGName> -n <AKSName> --enable-managed-identity
-```
+After updating your cluster, the control plane and pods use the managed identity. kubelet continues using a service principal until you upgrade your agent pool. You can use the `az aks nodepool upgrade --node-image-only` command on your nodes to update to a managed identity. A node pool upgrade causes downtime for your AKS cluster as the nodes in the node pools are cordoned/drained and reimaged.
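For example, a sketch of the node pool upgrade (resource and node pool names are placeholders):

```azurecli-interactive
# Sketch: reimage an existing node pool so kubelet switches from the service principal
# to the managed identity
az aks nodepool upgrade \
    --resource-group myResourceGroup \
    --cluster-name myManagedCluster \
    --name nodepool1 \
    --node-image-only
```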
> [!NOTE]
-> An update will only work if there is an actual VHD update to consume. If you are running the latest VHD, you'll need to wait until the next VHD is available in order to perform the update.
>-
-> [!NOTE]
-> After updating, your cluster's control plane and addon pods, they use the managed identity, but kubelet will continue using a service principal until you upgrade your agentpool. Perform an `az aks nodepool upgrade --node-image-only` on your nodes to complete the update to a managed identity.
+> * Keep the following information in mind when updating your cluster:
>
-> If your cluster was using `--attach-acr` to pull from image from Azure Container Registry, after updating your cluster to a managed identity, you need to rerun `az aks update --attach-acr <ACR Resource ID>` to let the newly created kubelet used for managed identity get the permission to pull from ACR. Otherwise, you won't be able to pull from ACR after the upgrade.
+> * An update only works if there's a VHD update to consume. If you're running the latest VHD, you need to wait until the next VHD is available in order to perform the update.
>
-> The Azure CLI will ensure your addon's permission is correctly set after migrating, if you're not using the Azure CLI to perform the migrating operation, you'll need to handle the addon identity's permission by yourself. Here is one example using an [Azure Resource Manager](../role-based-access-control/role-assignments-template.md) template.
-
-> [!WARNING]
-> A nodepool upgrade will cause downtime for your AKS cluster as the nodes in the nodepools will be cordoned/drained and then reimaged.
+> * The Azure CLI ensures your addon's permission is correctly set after migrating. If you're not using the Azure CLI to perform the migrating operation, you need to handle the addon identity's permission by yourself. For an example using an Azure Resource Manager (ARM) template, see [Assign Azure roles using ARM templates](../role-based-access-control/role-assignments-template.md).
+>
+> * If your cluster was using `--attach-acr` to pull images from Azure Container Registry, you need to run the `az aks update --attach-acr <ACR resource ID>` command after updating your cluster so the newly created kubelet managed identity gets permission to pull from ACR. Otherwise, you won't be able to pull from ACR after the update.
## Add role assignment for control plane identity
-When creating and using your own VNet, attached Azure disk, static IP address, route table or user-assigned kubelet identity where the resources are outside of the worker node resource group, the Azure CLI adds the role assignment automatically. If you are using an ARM template or other method, you need to use the Principal ID of the cluster managed identity to perform a role assignment.
+When you create and use your own VNet, attached Azure disk, static IP address, route table, or user-assigned kubelet identity where the resources are outside of the worker node resource group, the Azure CLI adds the role assignment automatically. If you're using an ARM template or another method, you need to use the Principal ID of the cluster managed identity to perform a role assignment.
-> [!NOTE]
-> If you are not using the Azure CLI but using your own VNet, attached Azure disk, static IP address, route table or user-assigned kubelet identity that are outside of the worker node resource group, it's recommended to use [user-assigned control plane identity][Bring your own control plane managed identity]. For system-assigned control plane identity, we cannot get the identity ID before creating cluster, which delays role assignment from taking effect.
+If you're not using the Azure CLI, but you're using your own VNet, attached Azure disk, static IP address, route table, or user-assigned kubelet identity that's outside of the worker node resource group, we recommend using [user-assigned control plane identity][Bring your own control plane managed identity]. For system-assigned control plane identity, we can't get the identity ID before creating cluster, which delays the role assignment from taking effect.
-### Get the Principal ID of control plane identity
+### Get the principal ID of control plane identity
-You can find existing identity's Principal ID by running the following command:
+* Get the existing identity's principal ID using the [`az identity show`][az-identity-show] command.
-```azurecli-interactive
-az identity show --ids <identity-resource-id>
-```
+ ```azurecli-interactive
+ az identity show --ids <identity-resource-id>
+ ```
-The output should resemble the following:
+ Your output should resemble the following example output:
-```output
-{
- "clientId": "<client-id>",
- "id": "/subscriptions/<subscriptionid>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myIdentity",
- "location": "eastus",
- "name": "myIdentity",
- "principalId": "<principal-id>",
- "resourceGroup": "myResourceGroup",
- "tags": {},
- "tenantId": "<tenant-id>",
- "type": "Microsoft.ManagedIdentity/userAssignedIdentities"
-}
-```
+ ```output
+ {
+ "clientId": "<client-id>",
+ "id": "/subscriptions/<subscriptionid>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myIdentity",
+ "location": "eastus",
+ "name": "myIdentity",
+ "principalId": "<principal-id>",
+ "resourceGroup": "myResourceGroup",
+ "tags": {},
+ "tenantId": "<tenant-id>",
+ "type": "Microsoft.ManagedIdentity/userAssignedIdentities"
+ }
+ ```
### Add role assignment
-For VNet, attached Azure disk, static IP address, route table which are outside the default worker node resource group, you need to assign the `Contributor` role on custom resource group.
-
-```azurecli-interactive
-az role assignment create --assignee <control-plane-identity-principal-id> --role "Contributor" --scope "<custom-resource-group-resource-id>"
-```
+For a VNet, attached Azure disk, static IP address, or route table outside the default worker node resource group, you need to assign the `Contributor` role on the custom resource group.
-Example:
+* Assign the `Contributor` role on the custom resource group using the [`az role assignment create`][az-role-assignment-create] command.
-```azurecli-interactive
-az role assignment create --assignee 22222222-2222-2222-2222-222222222222 --role "Contributor" --scope "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/custom-resource-group"
-```
+ ```azurecli-interactive
+ az role assignment create --assignee <control-plane-identity-principal-id> --role "Contributor" --scope "<custom-resource-group-resource-id>"
+ ```
-For user-assigned kubelet identity which is outside the default worker node resource group, you need to assign the `Managed Identity Operator`on kubelet identity.
+For a user-assigned kubelet identity outside the default worker node resource group, you need to assign the `Managed Identity Operator` role on the kubelet identity.
-```azurecli-interactive
-az role assignment create --assignee <kubelet-identity-principal-id> --role "Managed Identity Operator" --scope "<kubelet-identity-resource-id>"
-```
+* Assign the `Managed Identity Operator` role on the kubelet identity using the [`az role assignment create`][az-role-assignment-create] command.
-Example:
-
-```azurecli-interactive
-az role assignment create --assignee 22222222-2222-2222-2222-222222222222 --role "Managed Identity Operator" --scope "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myKubeletIdentity"
-```
+ ```azurecli-interactive
+ az role assignment create --assignee <kubelet-identity-principal-id> --role "Managed Identity Operator" --scope "<kubelet-identity-resource-id>"
+ ```
> [!NOTE]
-> Permission granted to your cluster's managed identity used by Azure may take up 60 minutes to populate.
+> It may take up to 60 minutes for the permissions granted to your cluster's managed identity to populate.
## Bring your own control plane managed identity
-A custom control plane managed identity enables access to be granted to the existing identity prior to cluster creation. This feature enables scenarios such as using a custom VNET or outboundType of UDR with a pre-created managed identity.
+A custom control plane managed identity enables access to be granted to the existing identity before cluster creation. This feature enables scenarios such as using a custom VNet or outboundType of UDR with a pre-created managed identity.
> [!NOTE]
-> USDOD Central, USDOD East, USGov Iowa regions in Azure US Government cloud aren't currently supported.
->
-> AKS will create a system-assigned kubelet identity in the Node resource group if you do not [specify your own kubelet managed identity][Use a pre-created kubelet managed identity].
-
-If you don't have a managed identity, you should create one by running the [az identity][az-identity-create] command.
-
-```azurecli-interactive
-az identity create --name myIdentity --resource-group myResourceGroup
-```
-
-The output should resemble the following:
-
-```output
-{
- "clientId": "<client-id>",
- "clientSecretUrl": "<clientSecretUrl>",
- "id": "/subscriptions/<subscriptionid>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myIdentity",
- "location": "westus2",
- "name": "myIdentity",
- "principalId": "<principal-id>",
- "resourceGroup": "myResourceGroup",
- "tags": {},
- "tenantId": "<tenant-id>",
- "type": "Microsoft.ManagedIdentity/userAssignedIdentities"
-}
-```
-
-Before creating the cluster, you need to [add the role assignment for control plane identity][add role assignment for control plane identity].
-
-Run the following command to create a cluster with your existing identity:
-
-```azurecli-interactive
-az aks create \
- --resource-group myResourceGroup \
- --name myManagedCluster \
- --network-plugin azure \
- --vnet-subnet-id <subnet-id> \
- --docker-bridge-address 172.17.0.1/16 \
- --dns-service-ip 10.2.0.10 \
- --service-cidr 10.2.0.0/24 \
- --enable-managed-identity \
- --assign-identity <identity-resource-id>
-```
-
-A successful cluster creation using your own managed identity should resemble the following **userAssignedIdentities** profile information:
-
-```output
- "identity": {
- "principalId": null,
- "tenantId": null,
- "type": "UserAssigned",
- "userAssignedIdentities": {
- "/subscriptions/<subscriptionid>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myIdentity": {
- "clientId": "<client-id>",
- "principalId": "<principal-id>"
- }
- }
- },
-```
+>
+> USDOD Central, USDOD East, and USGov Iowa regions in Azure US Government cloud aren't supported.
+>
+> AKS creates a system-assigned kubelet identity in the node resource group if you don't [specify your own kubelet managed identity][Use a pre-created kubelet managed identity].
+
+* If you don't have a managed identity, create one using the [`az identity create`][az-identity-create] command.
+
+ ```azurecli-interactive
+ az identity create --name myIdentity --resource-group myResourceGroup
+ ```
+
+ Your output should resemble the following example output:
+
+ ```output
+ {
+ "clientId": "<client-id>",
+ "clientSecretUrl": "<clientSecretUrl>",
+ "id": "/subscriptions/<subscriptionid>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myIdentity",
+ "location": "westus2",
+ "name": "myIdentity",
+ "principalId": "<principal-id>",
+ "resourceGroup": "myResourceGroup",
+ "tags": {},
+ "tenantId": "<tenant-id>",
+ "type": "Microsoft.ManagedIdentity/userAssignedIdentities"
+ }
+ ```
+
+* Before creating the cluster, [add the role assignment for control plane identity][add role assignment for control plane identity]. Then, create a cluster with your existing identity using the [`az aks create`][az-aks-create] command.
+
+ ```azurecli-interactive
+ az aks create \
+ --resource-group myResourceGroup \
+ --name myManagedCluster \
+ --network-plugin azure \
+ --vnet-subnet-id <subnet-id> \
+ --docker-bridge-address 172.17.0.1/16 \
+ --dns-service-ip 10.2.0.10 \
+ --service-cidr 10.2.0.0/24 \
+ --enable-managed-identity \
+ --assign-identity <identity-resource-id>
+ ```
## Use a pre-created kubelet managed identity
-A Kubelet identity enables access granted to the existing identity prior to cluster creation. This feature enables scenarios such as connection to ACR with a pre-created managed identity.
+A kubelet identity allows access to be granted to the existing identity before cluster creation. This feature enables scenarios such as connection to ACR with a pre-created managed identity.
### Prerequisites
-Azure CLI version 2.26.0 or later installed. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+Make sure you have Azure CLI version 2.26.0 or later installed. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
-### Limitations
+### Pre-created kubelet identity limitations
* Only works with a user-assigned managed cluster.
-* China East and China North regions in Azure China 21Vianet aren't currently supported.
+* The China East and China North regions in Azure China 21Vianet aren't supported.
### Create user-assigned managed identities
-If you don't have a control plane managed identity, you can create by running the following [az identity create][az-identity-create] command:
-
-```azurecli-interactive
-az identity create --name myIdentity --resource-group myResourceGroup
-```
-
-The output should resemble the following:
-
-```output
-{
- "clientId": "<client-id>",
- "clientSecretUrl": "<clientSecretUrl>",
- "id": "/subscriptions/<subscriptionid>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myIdentity",
- "location": "westus2",
- "name": "myIdentity",
- "principalId": "<principal-id>",
- "resourceGroup": "myResourceGroup",
- "tags": {},
- "tenantId": "<tenant-id>",
- "type": "Microsoft.ManagedIdentity/userAssignedIdentities"
-}
-```
-
-If you don't have a kubelet managed identity, you can create one by running the following [az identity create][az-identity-create] command:
-
-```azurecli-interactive
-az identity create --name myKubeletIdentity --resource-group myResourceGroup
-```
-
-The output should resemble the following:
-
-```output
-{
- "clientId": "<client-id>",
- "clientSecretUrl": "<clientSecretUrl>",
- "id": "/subscriptions/<subscriptionid>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myKubeletIdentity",
- "location": "westus2",
- "name": "myKubeletIdentity",
- "principalId": "<principal-id>",
- "resourceGroup": "myResourceGroup",
- "tags": {},
- "tenantId": "<tenant-id>",
- "type": "Microsoft.ManagedIdentity/userAssignedIdentities"
-}
-```
+#### Control plane managed identity
-### Create a cluster using user-assigned kubelet identity
+* If you don't have a control plane managed identity, create one using the [`az identity create`][az-identity-create] command.
+
+ ```azurecli-interactive
+ az identity create --name myIdentity --resource-group myResourceGroup
+ ```
+
+ Your output should resemble the following example output:
-Now you can use the following command to create your AKS cluster with your existing identities. Provide the control plane identity resource ID via `assign-identity` and the kubelet managed identity via `assign-kubelet-identity`:
-
-```azurecli-interactive
-az aks create \
- --resource-group myResourceGroup \
- --name myManagedCluster \
- --network-plugin azure \
- --vnet-subnet-id <subnet-id> \
- --docker-bridge-address 172.17.0.1/16 \
- --dns-service-ip 10.2.0.10 \
- --service-cidr 10.2.0.0/24 \
- --enable-managed-identity \
- --assign-identity <identity-resource-id> \
- --assign-kubelet-identity <kubelet-identity-resource-id>
-```
-
-A successful AKS cluster creation using your own kubelet managed identity should resemble the following output:
-
-```output
- "identity": {
- "principalId": null,
- "tenantId": null,
- "type": "UserAssigned",
- "userAssignedIdentities": {
- "/subscriptions/<subscriptionid>/resourcegroups/resourcegroups/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myIdentity": {
- "clientId": "<client-id>",
- "principalId": "<principal-id>"
- }
+ ```output
+ {
+ "clientId": "<client-id>",
+ "clientSecretUrl": "<clientSecretUrl>",
+ "id": "/subscriptions/<subscriptionid>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myIdentity",
+ "location": "westus2",
+ "name": "myIdentity",
+ "principalId": "<principal-id>",
+ "resourceGroup": "myResourceGroup",
+ "tags": {},
+ "tenantId": "<tenant-id>",
+ "type": "Microsoft.ManagedIdentity/userAssignedIdentities"
}
- },
- "identityProfile": {
- "kubeletidentity": {
+ ```
+
+#### Kubelet managed identity
+
+* If you don't have a kubelet managed identity, create one using the [`az identity create`][az-identity-create] command.
+
+ ```azurecli-interactive
+ az identity create --name myKubeletIdentity --resource-group myResourceGroup
+ ```
+
+ Your output should resemble the following example output:
+
+ ```output
+ {
"clientId": "<client-id>",
- "objectId": "<object-id>",
- "resourceId": "/subscriptions/<subscriptionid>/resourcegroups/resourcegroups/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myKubeletIdentity"
+ "clientSecretUrl": "<clientSecretUrl>",
+ "id": "/subscriptions/<subscriptionid>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myKubeletIdentity",
+ "location": "westus2",
+ "name": "myKubeletIdentity",
+ "principalId": "<principal-id>",
+ "resourceGroup": "myResourceGroup",
+ "tags": {},
+ "tenantId": "<tenant-id>",
+ "type": "Microsoft.ManagedIdentity/userAssignedIdentities"
}
- },
-```
+ ```
-### Update an existing cluster using kubelet identity
+### Create a cluster using user-assigned kubelet identity
-Update kubelet identity on an existing AKS cluster with your existing identities.
+Now you can create your AKS cluster with your existing identities. Make sure to provide the control plane identity resource ID via `--assign-identity` and the kubelet managed identity via `--assign-kubelet-identity`.
+
+* Create an AKS cluster with your existing identities using the [`az aks create`][az-aks-create] command.
+
+ ```azurecli-interactive
+ az aks create \
+ --resource-group myResourceGroup \
+ --name myManagedCluster \
+ --network-plugin azure \
+ --vnet-subnet-id <subnet-id> \
+ --docker-bridge-address 172.17.0.1/16 \
+ --dns-service-ip 10.2.0.10 \
+ --service-cidr 10.2.0.0/24 \
+ --enable-managed-identity \
+ --assign-identity <identity-resource-id> \
+ --assign-kubelet-identity <kubelet-identity-resource-id>
+ ```
+
+ A successful AKS cluster creation using your own kubelet managed identity should resemble the following example output:
+
+ ```output
+ "identity": {
+ "principalId": null,
+ "tenantId": null,
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "/subscriptions/<subscriptionid>/resourcegroups/resourcegroups/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myIdentity": {
+ "clientId": "<client-id>",
+ "principalId": "<principal-id>"
+ }
+ }
+ },
+ "identityProfile": {
+ "kubeletidentity": {
+ "clientId": "<client-id>",
+ "objectId": "<object-id>",
+ "resourceId": "/subscriptions/<subscriptionid>/resourcegroups/resourcegroups/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myKubeletIdentity"
+ }
+ },
+ ```
+
+### Update an existing cluster using kubelet identity
> [!WARNING]
-> Updating kubelet managed identity upgrades Nodepool, which causes downtime for your AKS cluster as the nodes in the nodepools will be cordoned/drained and then reimaged.
+> Updating kubelet managed identity upgrades node pools, which causes downtime for your AKS cluster as the nodes in the node pools will be cordoned/drained and reimaged.
> [!NOTE]
-> If your cluster was using `--attach-acr` to pull from image from Azure Container Registry, after updating your cluster kubelet identity, you need to rerun `az aks update --attach-acr <ACR Resource ID>` to let the newly created kubelet used for managed identity get the permission to pull from ACR. Otherwise, you won't be able to pull from ACR after the upgrade.
+> If your cluster was using `--attach-acr` to pull images from Azure Container Registry, you need to run the `az aks update --attach-acr <ACR Resource ID>` command after updating your cluster so the newly created kubelet managed identity gets permission to pull from ACR. Otherwise, you won't be able to pull from ACR after the upgrade.
+
+#### Make sure your CLI version is updated
+
+1. Check your CLI version using the [`az version`][az-version] command.
-#### Make sure the CLI version is 2.37.0 or later
+ ```azurecli-interactive
+ az version
+ ```
-```azurecli-interactive
-# Check the version of Azure CLI modules
-az version
+2. Upgrade your CLI version to 2.37.0 or later using the [`az upgrade`][az-upgrade] command.
-# Upgrade the version to make sure it is 2.37.0 or later
-az upgrade
-```
+ ```azurecli-interactive
+ az upgrade
+ ```
#### Get the current control plane identity for your AKS cluster
-Confirm your AKS cluster is using user-assigned control plane identity with the following CLI command:
+1. Confirm your AKS cluster is using the user-assigned control plane identity using the [`az aks show`][az-aks-show] command.
-```azurecli-interactive
-az aks show -g <RGName> -n <ClusterName> --query "servicePrincipalProfile"
-```
+ ```azurecli-interactive
+ az aks show -g <RGName> -n <ClusterName> --query "servicePrincipalProfile"
+ ```
-If the cluster is using a managed identity, the output shows `clientId` with a value of **msi**. A cluster using a service principal shows an object ID. For example:
+ If your cluster is using a managed identity, the output shows `clientId` with a value of **msi**. A cluster using a service principal shows an object ID. For example:
-```output
-{
- "clientId": "msi"
-}
-```
+ ```output
+ {
+ "clientId": "msi"
+ }
+ ```
-After verifying the cluster is using a managed identity, you can find the control plane identity's resource ID by running the following command:
+2. After confirming your cluster is using a managed identity, find the control plane identity's resource ID using the [`az aks show`][az-aks-show] command.
-```azurecli-interactive
-az aks show -g <RGName> -n <ClusterName> --query "identity"
-```
+ ```azurecli-interactive
+ az aks show -g <RGName> -n <ClusterName> --query "identity"
+ ```
-For user-assigned control plane identity, the output should look like:
+ For a user-assigned control plane identity, your output should look similar to the following example output:
-```output
-{
- "principalId": null,
- "tenantId": null,
- "type": "UserAssigned",
- "userAssignedIdentities": <identity-resource-id>
- "clientId": "<client-id>",
- "principalId": "<principal-id>"
-},
-```
-
-#### Updating your cluster with kubelet identity
-
-If you don't have a kubelet managed identity, you can create one by running the following [az identity create][az-identity-create] command:
-
-```azurecli-interactive
-az identity create --name myKubeletIdentity --resource-group myResourceGroup
-```
-
-The output should resemble the following:
-
-```output
-{
- "clientId": "<client-id>",
- "clientSecretUrl": "<clientSecretUrl>",
- "id": "/subscriptions/<subscriptionid>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myKubeletIdentity",
- "location": "westus2",
- "name": "myKubeletIdentity",
- "principalId": "<principal-id>",
- "resourceGroup": "myResourceGroup",
- "tags": {},
- "tenantId": "<tenant-id>",
- "type": "Microsoft.ManagedIdentity/userAssignedIdentities"
-}
-```
-
-Now you can use the following command to update your cluster with your existing identities. Provide the control plane identity resource ID via `assign-identity` and the kubelet managed identity via `assign-kubelet-identity`:
-
-```azurecli-interactive
-az aks update \
- --resource-group myResourceGroup \
- --name myManagedCluster \
- --enable-managed-identity \
- --assign-identity <identity-resource-id> \
- --assign-kubelet-identity <kubelet-identity-resource-id>
-```
-
-A successful cluster update using your own kubelet managed identity contains the following output:
-
-```output
- "identity": {
- "principalId": null,
- "tenantId": null,
- "type": "UserAssigned",
- "userAssignedIdentities": {
- "/subscriptions/<subscriptionid>/resourcegroups/resourcegroups/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myIdentity": {
- "clientId": "<client-id>",
- "principalId": "<principal-id>"
- }
- }
- },
- "identityProfile": {
- "kubeletidentity": {
+ ```output
+ {
+ "principalId": null,
+ "tenantId": null,
+ "type": "UserAssigned",
+ "userAssignedIdentities": <identity-resource-id>
+ "clientId": "<client-id>",
+ "principalId": "<principal-id>"
+ },
+ ```
+
+#### Update your cluster with kubelet identity
+
+1. If you don't have a kubelet managed identity, create one using the [`az identity create`][az-identity-create] command.
+
+ ```azurecli-interactive
+ az identity create --name myKubeletIdentity --resource-group myResourceGroup
+ ```
+
+ Your output should resemble the following example output:
+
+ ```output
+ {
"clientId": "<client-id>",
- "objectId": "<object-id>",
- "resourceId": "/subscriptions/<subscriptionid>/resourcegroups/resourcegroups/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myKubeletIdentity"
+ "clientSecretUrl": "<clientSecretUrl>",
+ "id": "/subscriptions/<subscriptionid>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myKubeletIdentity",
+ "location": "westus2",
+ "name": "myKubeletIdentity",
+ "principalId": "<principal-id>",
+ "resourceGroup": "myResourceGroup",
+ "tags": {},
+ "tenantId": "<tenant-id>",
+ "type": "Microsoft.ManagedIdentity/userAssignedIdentities"
}
- },
-```
+ ```
+
+2. Update your cluster with your existing identities using the [`az aks update`][az-aks-update] command. Make sure you provide the control plane identity resource ID for `--assign-identity` and the kubelet managed identity for `--assign-kubelet-identity`.
+
+ ```azurecli-interactive
+ az aks update \
+ --resource-group myResourceGroup \
+ --name myManagedCluster \
+ --enable-managed-identity \
+ --assign-identity <identity-resource-id> \
+ --assign-kubelet-identity <kubelet-identity-resource-id>
+ ```
+
+ Your output for a successful cluster update using your own kubelet managed identity should resemble the following example output:
+
+ ```output
+ "identity": {
+ "principalId": null,
+ "tenantId": null,
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "/subscriptions/<subscriptionid>/resourcegroups/resourcegroups/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myIdentity": {
+ "clientId": "<client-id>",
+ "principalId": "<principal-id>"
+ }
+ }
+ },
+ "identityProfile": {
+ "kubeletidentity": {
+ "clientId": "<client-id>",
+ "objectId": "<object-id>",
+ "resourceId": "/subscriptions/<subscriptionid>/resourcegroups/resourcegroups/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myKubeletIdentity"
+ }
+ },
+ ```
## Next steps
-Use [Azure Resource Manager templates ][aks-arm-template] to create a managed identity-enabled cluster.
+Use [Azure Resource Manager templates][aks-arm-template] to create a managed identity-enabled cluster.
<!-- LINKS - external --> [aks-arm-template]: /azure/templates/microsoft.containerservice/managedclusters
Use [Azure Resource Manager templates ][aks-arm-template] to create a managed id
<!-- LINKS - internal --> [install-azure-cli]: /cli/azure/install-azure-cli [az-identity-create]: /cli/azure/identity#az_identity_create
-[az-identity-list]: /cli/azure/identity#az_identity_list
-[az-feature-list]: /cli/azure/feature#az_feature_list
-[az-provider-register]: /cli/azure/provider#az_provider_register
+[az-identity-show]: /cli/azure/identity#az_identity_show
[managed-identity-resources-overview]: ../active-directory/managed-identities-azure-resources/overview.md [Bring your own control plane managed identity]: use-managed-identity.md#bring-your-own-control-plane-managed-identity [Use a pre-created kubelet managed identity]: use-managed-identity.md#use-a-pre-created-kubelet-managed-identity [workload-identity-overview]: workload-identity-overview.md [aad-pod-identity]: use-azure-ad-pod-identity.md [add role assignment for control plane identity]: use-managed-identity.md#add-role-assignment-for-control-plane-identity
+[az-group-create]: /cli/azure/group#az_group_create
+[az-aks-create]: /cli/azure/aks#az_aks_create
+[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
+[az-aks-update]: /cli/azure/aks#az_aks_update
+[az-aks-show]: /cli/azure/aks#az_aks_show
+[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create
+[az-version]: /cli/azure/reference-index#az_version
+[az-upgrade]: /cli/azure/reference-index#az_upgrade
aks Use Premium V2 Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-premium-v2-disks.md
+
+ Title: Enable Premium SSD v2 Disk support on Azure Kubernetes Service (AKS)
+description: Learn how to enable and configure Premium SSD v2 Disks in an Azure Kubernetes Service (AKS) cluster.
+ Last updated : 04/25/2023
+# Use Azure Premium SSD v2 disks on Azure Kubernetes Service
+
+[Azure Premium SSD v2 disks][azure-premium-v2-disk-overview] offer consistent submillisecond disk latency and high IOPS and throughput for IO-intensive enterprise workloads. The performance (capacity, throughput, and IOPS) of Premium SSD v2 disks can be independently configured at any time, making it easier for more scenarios to be cost efficient while meeting performance needs.
+
+This article describes how to configure a new or existing AKS cluster to use Azure Premium SSD v2 disks.
+
+## Before you begin
+
+Before you create or upgrade an AKS cluster that can use Azure Premium SSD v2 disks, you need to create an AKS cluster in a region and availability zone that supports Premium storage, and attach the disks following the steps below.
+
+For an existing AKS cluster, you can enable Premium SSD v2 disks by adding a new node pool to your cluster and then attaching the disks following the steps below.
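+
+As a rough sketch only (the resource group, cluster, node pool name, VM size, and zone below are placeholders, not values required by this article), adding a node pool in an availability zone of a supported region might look like this:
+
+```azurecli
+# Example only: add a node pool in an availability zone of a region that supports Premium SSD v2 disks
+az aks nodepool add \
+    --resource-group myResourceGroup \
+    --cluster-name myAKSCluster \
+    --name pv2pool \
+    --node-vm-size Standard_D2s_v3 \
+    --zones 1 \
+    --node-count 2
+```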
+
+> [!IMPORTANT]
+> Azure Premium SSD v2 disks require node pools deployed in regions that support these disks. For a list of supported regions, see [Premium SSD v2 disk supported regions][premium-v2-regions].
+
+### Limitations
+
+- Azure Premium SSD v2 disks have certain limitations that you need to be aware of. For a complete list, see [Premium SSD v2 limitations][premium-v2-limitations].
+
+## Use Premium SSD v2 disks dynamically with a storage class
+
+To use Premium SSD v2 disks in a deployment or stateful set, you can use a [storage class for dynamic provisioning][azure-disk-volume].
+
+### Create the storage class
+
+A storage class is used to define how a unit of storage is dynamically created with a persistent volume. For more information on Kubernetes storage classes, see [Kubernetes Storage Classes][kubernetes-storage-classes].
+
+In this example, you create a storage class that references Premium SSD v2 disks. Create a file named `azure-pv2-disk-sc.yaml`, and copy in the following manifest.
+
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: premium2-disk-sc
+parameters:
+ cachingMode: None
+ skuName: PremiumV2_LRS
+ DiskIOPSReadWrite: "4000"
+ DiskMBpsReadWrite: "1000"
+provisioner: disk.csi.azure.com
+reclaimPolicy: Delete
+volumeBindingMode: Immediate
+allowVolumeExpansion: true
+```
+
+Create the storage class with the [kubectl apply][kubectl-apply] command and specify your *azure-pv2-disk-sc.yaml* file:
+
+```bash
+kubectl apply -f azure-pv2-disk-sc.yaml
+```
+
+The output from the command resembles the following example:
+
+```console
+storageclass.storage.k8s.io/premium2-disk-sc created
+```
+
+## Create a persistent volume claim
+
+A persistent volume claim (PVC) is used to automatically provision storage based on a storage class. In this case, a PVC can use the previously created storage class to create a Premium SSD v2 disk.
+
+Create a file named `azure-pv2-disk-pvc.yaml`, and copy in the following manifest. The claim, named `premium2-disk`, requests a disk that is *1000 GB* in size with *ReadWriteOnce* access. The *premium2-disk-sc* storage class is specified as the storage class.
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: premium2-disk
+spec:
+ accessModes:
+ - ReadWriteOnce
+ storageClassName: premium2-disk-sc
+ resources:
+ requests:
+ storage: 1000Gi
+```
+
+Create the persistent volume claim with the [kubectl apply][kubectl-apply] command and specify your *azure-pv2-disk-pvc.yaml* file:
+
+```bash
+kubectl apply -f azure-pv2-disk-pvc.yaml
+```
+
+The output from the command resembles the following example:
+
+```console
+persistentvolumeclaim/premium2-disk created
+```
+
+## Use the persistent volume
+
+Once the persistent volume claim has been created and the disk successfully provisioned, a pod can be created with access to the disk. The following manifest creates a basic NGINX pod that uses the persistent volume claim named *premium2-disk* to mount the Azure disk at the path `/mnt/azure`.
+
+Create a file named `nginx-premium2.yaml`, and copy in the following manifest.
+
+```yaml
+kind: Pod
+apiVersion: v1
+metadata:
+ name: nginx-premium2
+spec:
+ containers:
+ - name: nginx-premium2
+ image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ volumeMounts:
+ - mountPath: "/mnt/azure"
+ name: volume
+ volumes:
+ - name: volume
+ persistentVolumeClaim:
+ claimName: premium2-disk
+```
+
+Create the pod with the [kubectl apply][kubectl-apply] command, as shown in the following example:
+
+```bash
+kubectl apply -f nginx-premium2.yaml
+```
+
+The output from the command resembles the following example:
+
+```bash
+pod/nginx-premium2 created
+```
+
+You now have a running pod with your Azure disk mounted in the `/mnt/azure` directory. This configuration can be seen when inspecting your pod via `kubectl describe pod nginx-premium2`, as shown in the following condensed example:
+
+```bash
+kubectl describe pod nginx-premium2
+
+[...]
+Volumes:
+ volume:
+ Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
+ ClaimName: premium2-disk
+ ReadOnly: false
+ kube-api-access-sh59b:
+ Type: Projected (a volume that contains injected data from multiple sources)
+ TokenExpirationSeconds: 3607
+ ConfigMapName: kube-root-ca.crt
+ ConfigMapOptional: <nil>
+ DownwardAPI: true
+QoS Class: Burstable
+Node-Selectors: <none>
+Tolerations: node.kubernetes.io/memory-pressure:NoSchedule op=Exists
+ node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
+ node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
+Events:
+ Type Reason Age From Message
+  ----    ------                   ----   ----                     -------
+ Normal Scheduled 7m58s default-scheduler Successfully assigned default/nginx-premium2 to aks-agentpool-12254644-vmss000006
+ Normal SuccessfulAttachVolume 7m46s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-ff39fb64-1189-4c52-9a24-e065b855b886"
+ Normal Pulling 7m39s kubelet Pulling image "mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine"
+ Normal Pulled 7m38s kubelet Successfully pulled image "mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine" in 1.192915667s
+ Normal Created 7m38s kubelet Created container nginx-premium2
+ Normal Started 7m38s kubelet Started container nginx-premium2
+[...]
+```
+
+## Set IOPS and throughput limits
+
+Input/Output Operations Per Second (IOPS) and throughput limits for Azure Premium SSD v2 disks aren't currently supported through AKS. To adjust performance, use the Azure CLI command [az disk update][az-disk-update] and include the `--disk-iops-read-write` and `--disk-mbps-read-write` parameters.
+
+The following example updates the disk IOPS read/write to **5000** and the throughput to **200** MB/s. For `--resource-group`, the value must be the second resource group automatically created to store the AKS worker nodes with the naming convention *MC_resourcegroupname_clustername_location*. For more information, see [Why are two resource groups created with AKS?][aks-two-resource-groups].
+
+The value for the `--name` parameter is the name of the volume created using the StorageClass, and it starts with `pvc-`. To identify the disk name, you can run `kubectl get pvc` or navigate to the secondary resource group in the portal to find it. See [manage resources from the Azure portal][manage-resources-azure-portal] to learn more.
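+
+For example, one quick way to get the generated volume name (which starts with `pvc-`) is to read the `volumeName` field from the claim created earlier. This is a minimal sketch; adjust the claim name if yours differs.
+
+```bash
+# Print the name of the persistent volume backing the claim; this matches the disk name in the node resource group
+kubectl get pvc premium2-disk -o jsonpath='{.spec.volumeName}'
+```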
+
+```azurecli
+az disk update --subscription subscriptionName --resource-group myResourceGroup --name diskName --disk-iops-read-write=5000 --disk-mbps-read-write=200
+```
+
+## Next steps
+
+- For more about Premium SSD v2 disks, see [Using Azure Premium SSD v2 disks](../virtual-machines/disks-deploy-premium-v2.md).
+- For more about storage best practices, see [Best practices for storage and backups in Azure Kubernetes Service (AKS)][operator-best-practices-storage].
+
+<!-- LINKS - external -->
+[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
+[kubernetes-storage-classes]: https://kubernetes.io/docs/concepts/storage/storage-classes/
+
+<!-- LINKS - internal -->
+[azure-premium-v2-disk-overview]: ../virtual-machines/disks-types.md#premium-ssd-v2
+[premium-v2-regions]: ../virtual-machines/disks-types.md#regional-availability
+[premium-v2-limitations]: ../virtual-machines/disks-types.md#premium-ssd-v2-limitations
+[azure-disk-volume]: azure-disk-csi.md
+[use-tags]: use-tags.md
+[operator-best-practices-storage]: operator-best-practices-storage.md
+[az-disk-update]: /cli/azure/disk#az-disk-update
+[manage-resources-azure-portal]: ../azure-resource-manager/management/manage-resources-portal.md#open-resources
+[aks-two-resource-groups]: faq.md#why-are-two-resource-groups-created-with-aks
aks Use Ultra Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-ultra-disks.md
Title: Enable Ultra Disk support on Azure Kubernetes Service (AKS) description: Learn how to enable and configure Ultra Disks in an Azure Kubernetes Service (AKS) cluster Previously updated : 03/28/2023 Last updated : 04/10/2023 # Use Azure ultra disks on Azure Kubernetes Service
-[Azure ultra disks](../virtual-machines/disks-enable-ultra-ssd.md) offer high throughput, high IOPS, and consistent low latency disk storage for your stateful applications. With ultra disks, you can dynamically change the performance of the SSD along with your workloads without the need to restart your agent nodes. Ultra disks are suited for data-intensive workloads.
+[Azure ultra disks][ultra-disk-overview] offer high throughput, high IOPS, and consistent low latency disk storage for your stateful applications. One major benefit of ultra disks is the ability to dynamically change the performance of the SSD along with your workloads without the need to restart your agent nodes. Ultra disks are suited for data-intensive workloads.
-## Before you begin
+This article describes how to configure a new or existing AKS cluster to use Azure ultra disks.
-This feature can only be set at cluster or node pool creation time.
+## Before you begin
-> [!IMPORTANT]
-> Azure ultra disks require node pools deployed in availability zones and regions that support these disks and specific VM series. For more information, see the [**Ultra disks GA scope and limitations**](../virtual-machines/disks-enable-ultra-ssd.md#ga-scope-and-limitations).
+This feature can only be set at cluster creation or when creating a node pool.
### Limitations -- Ultra disks can't be used with certain features, such as availability sets or Azure Disk encryption. Review [**Ultra disks GA scope and limitations**](../virtual-machines/disks-enable-ultra-ssd.md#ga-scope-and-limitations) before proceeding.
+- Azure ultra disks require node pools deployed in availability zones and regions that support these disks, and are only supported by specific VM series. Review the corresponding table under the [Ultra disk limitations][ultra-disk-limitations] section for more information.
+- Ultra disks can't be used with some features and functionality, such as availability sets or Azure Disk Encryption. Review the [Ultra disk limitations][ultra-disk-limitations] for the latest information.
- The supported size range for ultra disks is between *100* and *1500*. ## Create a cluster that can use ultra disks
-Create an AKS cluster that can use ultra disks by enabling the `EnableUltraSSD` feature.
+Create an AKS cluster that can use Azure ultra disks by using the following CLI command. Use the `--enable-ultra-ssd` parameter to set the `EnableUltraSSD` feature.
-1. Create an Azure resource group using the [`az group create`][az-group-create] command.
-
- ```azurecli-interactive
- az group create --name myResourceGroup --location westus2
- ```
+```azurecli-interactive
+az aks create -g MyResourceGroup -n myAKSCluster -l westus2 --node-vm-size Standard_D2s_v3 --zones 1 2 --node-count 2 --enable-ultra-ssd
+```
-2. Create an AKS-managed Azure AD cluster with support for ultra disks using the [`az aks create`][az-aks-create] command with the `--enable-ultra-ssd` flag.
-
- ```azurecli-interactive
- az aks create -g MyResourceGroup -n myAKSCluster -l westus2 --node-vm-size Standard_D2s_v3 --zones 1 2 --node-count 2 --enable-ultra-ssd
- ```
+If you want to create a cluster without ultra disk support, you can do so by omitting the `--enable-ultra-ssd` parameter.
## Enable ultra disks on an existing cluster
-You can enable ultra disks on existing clusters by adding a new node pool to your cluster that support ultra disks.
+You can enable ultra disks on an existing cluster by adding a new node pool that supports ultra disks to your cluster. Configure the new node pool to use ultra disks by specifying the `--enable-ultra-ssd` parameter with the [`az aks nodepool add`][az-aks-nodepool-add] command, as shown in the example below.
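+
+For example, a node pool configured for ultra disks might look like the following (names and sizes are placeholders consistent with the cluster created earlier):
+
+```azurecli-interactive
+# Add a node pool that supports ultra disks to an existing cluster
+az aks nodepool add --name ultradisk --cluster-name myAKSCluster --resource-group myResourceGroup --node-vm-size Standard_D2s_v3 --zones 1 2 --node-count 2 --enable-ultra-ssd
+```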
-- Configure a new node pool to use ultra disks using the [`az aks nodepool add`][az-aks-nodepool-add] command with the `--enable-ultra-ssd` flag.-
- ```azurecli
- az aks nodepool add --name ultradisk --cluster-name myAKSCluster --resource-group myResourceGroup --node-vm-size Standard_D2s_v3 --zones 1 2 --node-count 2 --enable-ultra-ssd
- ```
+If you want to create new node pools without support for ultra disks, you can do so by excluding the `--enable-ultra-ssd` parameter.
## Use ultra disks dynamically with a storage class
To use ultra disks in your deployments or stateful sets, you can use a [storage
### Create the storage class
-A storage class is used to define how a unit of storage is dynamically created with a persistent volume. For more information on Kubernetes storage classes, see [Kubernetes storage classes][kubernetes-storage-classes]. In this case, we'll create a storage class that references ultra disks.
+A storage class is used to define how a unit of storage is dynamically created with a persistent volume. For more information on Kubernetes storage classes, see [Kubernetes storage classes][kubernetes-storage-classes]. In this example, we'll create a storage class that references ultra disks.
1. Create a file named `azure-ultra-disk-sc.yaml` and copy in the following manifest:
A storage class is used to define how a unit of storage is dynamically created w
2. Create the storage class using the [`kubectl apply`][kubectl-apply] command and specify your `azure-ultra-disk-sc.yaml` file.
- ```console
+ ```bash
kubectl apply -f azure-ultra-disk-sc.yaml ```
A persistent volume claim (PVC) is used to automatically provision storage based
2. Create the persistent volume claim using the [`kubectl apply`][kubectl-apply] command and specify your `azure-ultra-disk-pvc.yaml` file.
- ```console
+ ```bash
kubectl apply -f azure-ultra-disk-pvc.yaml ```
Once the persistent volume claim has been created and the disk successfully prov
3. See your configuration details using the `kubectl describe pod` command and specify your `nginx-ultra.yaml` file.
- ```console
+ ```bash
kubectl describe pod nginx-ultra ```
Once the persistent volume claim has been created and the disk successfully prov
[...] ```
-## Using Azure tags
-
-For more details on using Azure tags, see [Use Azure tags in AKS][use-tags].
- ## Next steps - For more about ultra disks, see [Using Azure ultra disks](../virtual-machines/disks-enable-ultra-ssd.md).
For more details on using Azure tags, see [Use Azure tags in AKS][use-tags].
[kubernetes-storage-classes]: https://kubernetes.io/docs/concepts/storage/storage-classes/ <!-- LINKS - internal -->
+[ultra-disk-overview]: ../virtual-machines/disks-types.md#ultra-disks
+[ultra-disk-limitations]: ../virtual-machines/disks-types.md#ultra-disk-limitations
[azure-disk-volume]: azure-disk-csi.md [operator-best-practices-storage]: operator-best-practices-storage.md [use-tags]: use-tags.md
-[az-group-create]: /cli/azure/group#az_group_create
-[az-aks-create]: /cli/azure/aks#az_aks_create
[az-aks-nodepool-add]: /cli/azure/aks/nodepool#az_aks_nodepool_add
app-service Manage Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-backup.md
Title: Back up an app
description: Learn how to restore backups of your apps in Azure App Service or configure custom backups. Customize backups by including the linked database. ms.assetid: 6223b6bd-84ec-48df-943f-461d84605694 Previously updated : 10/24/2022 -- Last updated : 04/25/2023 # Back up and restore your app in Azure App Service In [Azure App Service](overview.md), you can easily restore app backups. You can also make on-demand custom backups or configure scheduled custom backups. You can restore a backup by overwriting an existing app by restoring to a new app or slot. This article shows you how to restore a backup and make custom backups.
-Backup and restore are supported in **Basic**, **Standard**, **Premium**, and **Isolated** tiers. For **Basic** tier, only the production slot can be backed up and restored. For more information about scaling your App Service plan to use a higher tier, see [Scale up an app in Azure](manage-scale-up.md).
+Back up and restore are supported in **Basic**, **Standard**, **Premium**, and **Isolated** tiers. For **Basic** tier, only the production slot can be backed up and restored. For more information about scaling your App Service plan to use a higher tier, see [Scale up an app in Azure](manage-scale-up.md).
> [!NOTE] > For App Service environments:
Backup and restore are supported in **Basic**, **Standard**, **Premium**, and **
> - Custom backups can be restored to a target app in another ASE, such as from a V2 ASE to a V3 ASE. > - Backups can be restored to target app of the same OS platform as the source app.
-## Automatic vs custom backups
+
+## Automatic vs. custom backups
There are two types of backups in App Service. Automatic backups are made for your app regularly as long as it's in a supported pricing tier. Custom backups require initial configuration, and can be made on-demand or on a schedule. The following table shows the differences between the two types.
There are two types of backups in App Service. Automatic backups made for your a
1. In **Storage account**, select an existing storage account (in the same subscription) or select **Create new**. Do the same with **Container**.
- To back up the linked database(s), select **Next: Advanced** > **Include database**, and select the database(s) to back up.
+ To back up the linked databases, select **Next: Advanced** > **Include database**, and select the databases to back up.
> [!NOTE] > For a supported database to appear in this list, its connection string must exist in the **Connection strings** section of the **Configuration** page for your app.
Automatic backups are simple and stored in the same datacenter as the App Servic
#### How do I stop the automatic backup?
-You cannot stop automatic backup. The automatic backup is stored on the platform and has no effect on the underlying app instance or itΓÇÖs storage.
+You cannot stop automatic backup. The automatic backup is stored on the platform and has no effect on the underlying app instance or its storage.
<a name="nextsteps"></a>
app-service Overview Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-disaster-recovery.md
Steps to create a passive-cold region without GRS and GZRS are summarized as fol
- Aside from Azure Front Door, which is proposed in this article, Azure provides other load balancing options, such as Azure Traffic Manager. For a comparison of the various options, see [Load-balancing options - Azure Architecture Center](/azure/architecture/guide/technology-choices/load-balancing-overview). - It's also recommended to set up monitoring and alerts for your web apps for timely notifications during a disaster. For more information, see [Application Insights availability tests](../azure-monitor/app/availability-overview.md). + ## Next steps [Tutorial: Create a highly available multi-region app in Azure App Service](tutorial-multi-region-app.md)
application-gateway Application Gateway Troubleshooting 502 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-troubleshooting-502.md
If all the instances of BackendAddressPool are unhealthy, then the application g
Ensure that the instances are healthy and the application is properly configured. Check if the backend instances can respond to a ping from another VM in the same VNet. If configured with a public end point, ensure a browser request to the web application is serviceable.
+## Upstream SSL certificate does not match
+
+### Cause
+
+The TLS certificate installed on the backend server(s) doesn't match the hostname received in the Host request header.
+
+In scenarios where end-to-end TLS is enabled (a configuration achieved by editing the appropriate **Backend HTTP Settings** and changing the **Backend protocol** setting to HTTPS), you must ensure that the Common Name (CN) of the TLS certificate installed on the backend servers matches the hostname sent to the backend in the HTTP Host header.
+
+As a reminder, setting the protocol to HTTPS rather than HTTP in the **Backend HTTP Settings** means that the second part of the communication, between the Application Gateway instances and the backend servers, is encrypted with TLS.
+
+Because Application Gateway by default sends the same HTTP Host header to the backend as it receives from the client, you need to ensure that the TLS certificate installed on the backend server is issued with a Common Name (CN) that matches the host name received by that backend server in the HTTP Host header.
+Remember that, unless specified otherwise, this hostname is the same as the one received from the client.
+
+For example:
+
+Imagine that you have an Application Gateway to serve HTTPS requests for the domain www.contoso.com.
+You could have the domain contoso.com delegated to an Azure DNS public zone, and an A record in that zone pointing www.contoso.com to the public IP of the Application Gateway that serves the requests.
+
+On that Application Gateway, you should have a listener for the host www.contoso.com with a rule that has the **Backend HTTP Settings** configured to use protocol HTTPS (ensuring end-to-end TLS). That same rule could have a backend pool configured with two VMs running IIS as web servers.
+
+As noted, enabling HTTPS in the **Backend HTTP Settings** of the rule makes the second part of the communication, between the Application Gateway instances and the backend servers, use TLS.
+
+If the backend servers don't have a TLS certificate issued for the Common Name (CN) www.contoso.com or *.contoso.com, the request fails with **Server Error: 502 - Web server received an invalid response while acting as a gateway or proxy server** because the upstream SSL certificate (the certificate installed on the backend servers) doesn't match the hostname in the Host header, and the TLS negotiation fails.
++
+www.contoso.com --> Application Gateway frontend IP --> Listener with a rule that configures **Backend HTTP Settings** to use protocol HTTPS --> Backend pool --> Web server (needs to have a TLS certificate installed for www.contoso.com)
+
+### Solution
+
+The Common Name (CN) of the TLS certificate installed on the backend server must match the host name configured in the HTTP backend settings. Otherwise, the second part of the end-to-end communication, between the instances of the Application Gateway and the backend, fails with "Upstream SSL certificate does not match" and throws back a **Server Error: 502 - Web server received an invalid response while acting as a gateway or proxy server**.
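+
+One way to check which certificate a backend server presents for a given hostname is with `openssl`, run from a VM that can reach the backend. This is a hedged example; the backend IP, port, and hostname are placeholders.
+
+```bash
+# Connect with SNI set to the expected host name and print the certificate subject and validity dates
+echo | openssl s_client -connect <backend-server-ip>:443 -servername www.contoso.com 2>/dev/null | openssl x509 -noout -subject -dates
+```
+
+If the subject (or subject alternative names) doesn't include the hostname sent in the Host header, either install a matching certificate on the backend or override the host name in the backend HTTP settings.
+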
++ ## Next steps If the preceding steps don't resolve the issue, open a [support ticket](https://azure.microsoft.com/support/options/).
applied-ai-services Onboard Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/how-tos/onboard-your-data.md
After signing into your Metrics Advisor portal and choosing your workspace, clic
#### 1. Basic settings Next you'll input a set of parameters to connect your time-series data source. * **Source Type**: The type of data source where your time series data is stored.
-* **Granularity**: The interval between consecutive data points in your time series data. Currently Metrics Advisor supports: Yearly, Monthly, Weekly, Daily, Hourly, and Custom. The lowest interval the customization option supports is 300 seconds.
+* **Granularity**: The interval between consecutive data points in your time series data. Currently Metrics Advisor supports: Yearly, Monthly, Weekly, Daily, Hourly, per minute, and Custom. The lowest interval the customization option supports is 60 seconds.
* **Seconds**: The number of seconds when *granularityName* is set to *Customize*. * **Ingest data since (UTC)**: The baseline start time for data ingestion. `startOffsetInSeconds` is often used to add an offset to help with data consistency.
automation Automation Graphical Authoring Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-graphical-authoring-intro.md
Title: Author graphical runbooks in Azure Automation
description: This article tells how to author a graphical runbook without working with code. Previously updated : 03/07/2023 Last updated : 04/25/2023
Select an activity on the canvas to configure its properties and parameters in t
A parameter set defines the mandatory and optional parameters that accept values for a particular cmdlet. All cmdlets have at least one parameter set, and some have several sets. If a cmdlet has multiple parameter sets, you must select the one to use before you can configure parameters. You can change the parameter set used by an activity by selecting **Parameter Set** and choosing another set. In this case, any parameter values that you have already configured are lost.
-In the following example, the [Get-AzVM](/powershell/module/az.compute/get-azvm) cmdlet has three parameter sets. The example uses one set called **ListVirtualMachineInResourceGroupParamSet**, with a single optional parameter, for returning all virtual machines in a resource group. The example also uses the **GetVirtualMachineInResourceGroupParamSet** parameter set for specifying the virtual machine to return. This set has two mandatory parameters and one optional parameter.
+In the following example, the [Get-AzVM](/powershell/module/az.compute/get-azvm) cmdlet has three parameter sets. The example uses one set called **ListLocationVirtualMachinesParamSet**, with a single optional parameter, for specifying the location of the virtual machines to list. The example also uses the **GetVirtualMachineInResourceGroupParamSet** parameter set for specifying the virtual machine to return. This set has two mandatory parameters and one optional parameter.
+ ![Parameter set](media/automation-graphical-authoring-intro/get-azvm-parameter-sets.png)
automation Automation Linux Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-linux-hrw-install.md
sudo python /opt/microsoft/omsconfig/modules/nxOMSAutomationWorker/DSCResources/
Run the following commands as root on the agent-based Linux Hybrid Worker:
-1. ```python
+1. ```bash
sudo bash ```
-1. ```python
+1. ```bash
rm -r /home/nxautomation ``` 1. Under **Process Automation**, select **Hybrid worker groups** and then your hybrid worker group to go to the **Hybrid Worker Group** page.
Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookW
## Check version of Hybrid Worker To check the version of agent-based Linux Hybrid Runbook Worker, go to the following path:
-`vi/opt/microsoft/omsconfig/modules/nxOMSAutomationWorker/VERSION`
-
+```bash
+ sudo cat /opt/microsoft/omsconfig/modules/nxOMSAutomationWorker/VERSION
+```
The file *VERSION* has the version number of Hybrid Runbook Worker. ## Next steps
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/overview.md
Change Tracking and Inventory now support Python 2 and Python 3. If your machine
> To use the OMS agent compatible with Python 3, ensure that you first uninstall Python 2; otherwise, the OMS agent will continue to run with python 2 by default. #### [Python 2](#tab/python-2) -- Red Hat, CentOS, Oracle: `yum install -y python2`-- Ubuntu, Debian: `apt-get install -y python2`-- SUSE: `zypper install -y python2`
+- Red Hat, CentOS, Oracle:
+
+```bash
+ sudo yum install -y python2
+```
+- Ubuntu, Debian:
+
+```bash
+ sudo apt-get update
+ sudo apt-get install -y python2
+```
+- SUSE:
+
+```bash
+ sudo zypper install -y python2
+```
+ > [!NOTE] > The Python 2 executable must be aliased to *python*. #### [Python 3](#tab/python-3) -- Red Hat, CentOS, Oracle: `yum install -y python3`-- Ubuntu, Debian: `apt-get install -y python3`-- SUSE: `zypper install -y python3`-
+- Red Hat, CentOS, Oracle:
+
+```bash
+ sudo yum install -y python3
+```
+- Ubuntu, Debian:
+
+```bash
+ sudo apt-get update
+ sudo apt-get install -y python3
+```
+- SUSE:
+
+```bash
+ sudo zypper install -y python3
+```
## Network requirements
automation Dsc Linux Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/dsc-linux-powershell.md
Register the Azure Linux VM as a Desired State Configuration (DSC) node for the
```cmd ssh user@IP
+ ```
+ ```bash
sudo apt-get update sudo apt-get install -y python ```
automation Migrate Existing Agent Based Hybrid Worker To Extension Based Workers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md
To install and use Hybrid Worker extension using REST API, follow these steps. T
#### [Azure CLI](#tab/cli)
-You can use Azure CLI to create a new Hybrid Worker group, create a new Azure VM, add it to an existing Hybrid Worker Group and install the Hybrid Worker extension. Learn more aboutΓÇ»[Azure CLI](https://learn.microsoft.com/cli/azure/what-is-azure-cli).
+You can use Azure CLI to create a new Hybrid Worker group, create a new Azure VM, add it to an existing Hybrid Worker Group and install the Hybrid Worker extension. Learn more aboutΓÇ»[Azure CLI](/cli/azure/what-is-azure-cli).
Follow the steps mentioned below as an example:
Follow the steps mentioned below as an example:
**Manage Hybrid Worker Extension** -- To create, delete, and manage extension-based Hybrid Runbook Worker groups, see [az automation hrwg | Microsoft Docs](/cli/azure/automation/hrwg?view=azure-cli-latest)-- To create, delete, and manage extension-based Hybrid Runbook Worker, see [az automation hrwg hrw | Microsoft Docs](/cli/azure/automation/hrwg/hrw?view=azure-cli-latest)
+- To create, delete, and manage extension-based Hybrid Runbook Worker groups, see [az automation hrwg | Microsoft Docs](/cli/azure/automation/hrwg?view=azure-cli-latest&preserve-view=true)
+- To create, delete, and manage extension-based Hybrid Runbook Worker, see [az automation hrwg hrw | Microsoft Docs](/cli/azure/automation/hrwg/hrw?view=azure-cli-latest&preserve-view=true)
-After creating new Hybrid Runbook Worker, you must install the extension on the Hybrid Worker using [az vm extension set](/cli/azure/vm/extension?view=azure-cli-latest#az-vm-extension-set).
+After creating a new Hybrid Runbook Worker, you must install the extension on the Hybrid Worker using [az vm extension set](/cli/azure/vm/extension?view=azure-cli-latest&preserve-view=true#az-vm-extension-set).
#### [PowerShell](#tab/ps)
New-AzConnectedMachineExtension -ResourceGroupName <VMResourceGroupName> -Locati
Run the following commands on agent-based Linux Hybrid Worker:
-1. ```python
+1. ```bash
sudo bash ```
-1. ```python
+1. ```bash
rm -r /home/nxautomation ``` 1. Under **Process Automation**, select **Hybrid worker groups** and then your hybrid worker group to go to the **Hybrid Worker Group** page.
automation Remove Node And Configuration Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/state-configuration/remove-node-and-configuration-package.md
To find the package names and other relevant details, see the [PowerShell Desire
### RPM-based systems ```bash
-RPM -e <package name>
+rpm -e <package name>
``` ### dpkg-based systems
dpkg -P <package name>
- If you want to re-register the node, or register a new one, see [Register a VM to be managed by State Configuration](../tutorial-configure-servers-desired-state.md#register-a-vm-to-be-managed-by-state-configuration). -- If you want to add the configuration back and recompile, see [Compile DSC configurations in Azure Automation State Configuration](../automation-dsc-compile.md).
+- If you want to add the configuration back and recompile, see [Compile DSC configurations in Azure Automation State Configuration](../automation-dsc-compile.md).
automation Extension Based Hybrid Runbook Worker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/extension-based-hybrid-runbook-worker.md
Title: Troubleshoot extension-based Hybrid Runbook Worker issues in Azure Automation description: This article tells how to troubleshoot and resolve issues that arise with Azure Automation extension-based Hybrid Runbook Workers. Previously updated : 02/09/2023 Last updated : 04/26/2023
To help troubleshoot issues with extension-based Hybrid Runbook Workers:
Logs are in `C:\HybridWorkerExtensionLogs`. - For Linux: Logs are in folders </br>`/var/log/azure/Microsoft.Azure.Automation.HybridWorker.HybridWorkerForLinux` and `/home/hweautomation`.
+### Scenario: Job failed to start as the Hybrid Worker was not available when the scheduled job started
+
+#### Issue
+Job fails to start on a Hybrid Worker and you see the following error:
+
+*Failed to start, as hybrid worker was not available when scheduled job started, the hybrid worker was last active at mm/dd/yyyy*.
+
+#### Cause
+This error can occur due to the following reasons:
+- The machine doesn't exist anymore.
+- The machine is turned off and is unreachable.
+- The machine has a network connectivity issue.
+- The Hybrid Runbook Worker extension has been uninstalled from the machine.
+
+#### Resolution
+- Ensure that the machine exists, and Hybrid Runbook Worker extension is installed on it. The Hybrid Worker should be healthy and should give a heartbeat. Troubleshoot any network issues by checking the Microsoft-SMA event logs on the Workers in the Hybrid Runbook Worker Group that tried to run this job.
+- You can also monitor the [HybridWorkerPing](../../azure-monitor/essentials/metrics-supported.md#microsoftautomationautomationaccounts) metric, which provides the number of pings from a Hybrid Worker and can help check ping-related issues. A sketch of querying this metric follows this list.
+
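+The following is a minimal sketch of querying that metric with the Azure CLI; the Automation account resource ID is a placeholder, and the metric name comes from the linked metrics reference.
+
+```azurecli
+# Query the HybridWorkerPing metric for an Automation account (placeholder resource ID)
+az monitor metrics list \
+  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Automation/automationAccounts/<automation-account>" \
+  --metric "HybridWorkerPing" \
+  --interval PT1H
+```
+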
+### Scenario: Job was suspended as it exceeded the job limit for a Hybrid Worker
+
+#### Issue
+Job gets suspended with the following error message:
+
+*Job was suspended as it exceeded the job limit for a Hybrid Worker. Add more Hybrid Workers to the Hybrid Worker group to overcome this issue.*
+
+#### Cause
+Jobs might get suspended due to any of the following reasons:
+- Each active Hybrid Worker in the group will poll for jobs every 30 seconds to see if any jobs are available. The Worker picks jobs on a first-come, first-serve basis. Depending on when a job was pushed, whichever Hybrid Worker within the Hybrid Worker Group pings the Automation service first picks up the job. A single hybrid worker can generally pick up four jobs per ping (that is, every 30 seconds). If your rate of pushing jobs is higher than four per 30 seconds and no other Worker picks up the job, the job might get suspended.
+- Hybrid Worker might not be polling as expected every 30 seconds. This could happen if the Worker is not healthy or there are network issues.
+
+#### Resolution
+- If the job limit for a Hybrid Worker exceeds four jobs per 30 seconds, you can add more Hybrid Workers to the Hybrid Worker group for high availability and load balancing. You can also schedule jobs so they do not exceed the limit of four jobs per 30 seconds. The processing time of the jobs queue depends on the Hybrid worker hardware profile and load. Ensure that the Hybrid Worker is healthy and gives a heartbeat.
+- Troubleshoot any network issues by checking the Microsoft-SMA event logs on the Workers in the Hybrid Runbook Worker Group that tried to run this job.
+- You can also monitor the [HybridWorkerPing](../../azure-monitor/essentials/metrics-supported.md#microsoftautomationautomationaccounts) metric, which provides the number of pings from a Hybrid Worker and can help you check ping-related issues.
+ ### Scenario: Hybrid Worker deployment fails with Private Link error #### Issue
automation Hybrid Runbook Worker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/hybrid-runbook-worker.md
Title: Troubleshoot agent-based Hybrid Runbook Worker issues in Azure Automation description: This article tells how to troubleshoot and resolve issues that arise with Azure Automation agent-based Hybrid Runbook Workers. Previously updated : 03/15/2023 Last updated : 04/26/2023
The Hybrid Runbook Worker jobs failed to refresh when communicating through a Lo
Verify the Log Analytics Gateway server is online and is accessible from the machine hosting the Hybrid Runbook Worker role. For additional troubleshooting information, see [Troubleshoot Log Analytics Gateway](../../azure-monitor/agents/gateway.md#troubleshooting). +
+### Scenario: Job failed to start as the Hybrid Worker was not available when the scheduled job started
+
+#### Issue
+Job fails to start on a Hybrid Worker and you see the following error:
+
+*Failed to start, as hybrid worker was not available when scheduled job started, the hybrid worker was last active at mm/dd/yyyy*.
+
+#### Cause
+This error can occur for the following reasons:
+- The machine doesn't exist anymore.
+- The machine is turned off and is unreachable.
+- The machine has a network connectivity issue.
+- The Hybrid Runbook Worker extension has been uninstalled from the machine.
+
+#### Resolution
+- Ensure that the machine exists and that the Hybrid Runbook Worker extension is installed on it. The Hybrid Worker should be healthy and emitting a heartbeat. Troubleshoot any network issues by checking the Microsoft-SMA event logs on the Workers in the Hybrid Runbook Worker Group that tried to run this job.
+- You can also monitor the [HybridWorkerPing](../../azure-monitor/essentials/metrics-supported.md#microsoftautomationautomationaccounts) metric, which provides the number of pings from a Hybrid Worker and can help you check ping-related issues.
+
+### Scenario: Job was suspended as it exceeded the job limit for a Hybrid Worker
+
+#### Issue
+Job gets suspended with the following error message:
+
+*Job was suspended as it exceeded the job limit for a Hybrid Worker. Add more Hybrid Workers to the Hybrid Worker group to overcome this issue.*
+
+#### Cause
+Jobs might get suspended for any of the following reasons:
+- Each active Hybrid Worker in the group polls for jobs every 30 seconds to see if any jobs are available. The worker picks up jobs on a first-come, first-served basis. Depending on when a job was pushed, whichever Hybrid Worker within the Hybrid Worker group pings the Automation service first picks up the job. A single Hybrid Worker can generally pick up four jobs per ping (that is, every 30 seconds). If your rate of pushing jobs is higher than four per 30 seconds and no other worker picks up the job, the job might get suspended.
+- The Hybrid Worker might not be polling every 30 seconds as expected. This can happen if the worker is unhealthy or there are network issues.
+
+#### Resolution
+- If the rate of pushed jobs exceeds the limit of four jobs per 30 seconds for a Hybrid Worker, add more Hybrid Workers to the Hybrid Worker group for high availability and load balancing. You can also schedule jobs so they don't exceed the limit of four jobs per 30 seconds. The processing time of the job queue depends on the Hybrid Worker's hardware profile and load. Ensure that the Hybrid Worker is healthy and emits a heartbeat.
+- Troubleshoot any network issues by checking the Microsoft-SMA event logs on the Workers in the Hybrid Runbook Worker Group that tried to run this job.
+- You can also monitor the [HybridWorkerPing](../../azure-monitor/essentials/metrics-supported.md#microsoftautomationautomationaccounts) metric, which provides the number of pings from a Hybrid Worker and can help you check ping-related issues.
++++++ ### <a name="cannot-connect-signalr"></a>Scenario: Event 15011 in the Hybrid Runbook Worker #### Issue
If the agent isn't running, it prevents the Linux Hybrid Runbook Worker from com
Verify the agent is running by entering the command `ps -ef | grep python`. You should see output similar to the following. The Python processes with the **nxautomation** user account. If the Azure Automation feature isn't enabled, none of the following processes are running.
-```bash
+```output
nxautom+ 8567 1 0 14:45 ? 00:00:00 python /opt/microsoft/omsconfig/modules/nxOMSAutomationWorker/DSCResources/MSFT_nxOMSAutomationWorkerResource/automationworker/worker/main.py /var/opt/microsoft/omsagent/state/automationworker/oms.conf rworkspace:<workspaceId> <Linux hybrid worker version> nxautom+ 8593 1 0 14:45 ? 00:00:02 python /opt/microsoft/omsconfig/modules/nxOMSAutomationWorker/DSCResources/MSFT_nxOMSAutomationWorkerResource/automationworker/worker/hybridworker.py /var/opt/microsoft/omsagent/state/automationworker/worker.conf managed rworkspace:<workspaceId> rversion:<Linux hybrid worker version> nxautom+ 8595 1 0 14:45 ? 00:00:02 python /opt/microsoft/omsconfig/modules/nxOMSAutomationWorker/DSCResources/MSFT_nxOMSAutomationWorkerResource/automationworker/worker/hybridworker.py /var/opt/microsoft/omsagent/<workspaceId>/state/automationworker/diy/worker.conf managed rworkspace:<workspaceId> rversion:<Linux hybrid worker version>
To resolve this issue:
1. Run these commands:
- ```
+ ```bash
    sudo mv -f /home/nxautomation/state/worker.conf /home/nxautomation/state/worker.conf_old
    sudo mv -f /home/nxautomation/state/worker_diy.crt /home/nxautomation/state/worker_diy.crt_old
    sudo mv -f /home/nxautomation/state/worker_diy.key /home/nxautomation/state/worker_diy.key_old
automation Update Agent Issues Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/update-agent-issues-linux.md
The operating system check verifies if the Hybrid Runbook Worker is running one
To verify whether a VM is an Azure VM, check the asset tag value by using the following command:
-```
+```bash
sudo dmidecode ```
This task checks if the folder is present -
To fix this issue, you must start the OMS Agent service by using the following command:
-```
+```bash
sudo /opt/microsoft/omsagent/bin/service_control restart ``` To validate, you can perform a process check by using the following command:
-```
+```bash
process_name="omsagent" ps aux | grep %s | grep -v grep" % (process_name)" ```
As they are the directories of workspaces, the number of directories equals the
### Hybrid Runbook Worker To fix the issue, run the following command:
-```
+```bash
sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/PerformRequiredConfigurationChecks.py' ```
This command forces the omsconfig agent to talk to Azure Monitor and retrieve th
Validate that the following two paths exist:
-```
+```bash
/opt/microsoft/omsconfig/modules/nxOMSAutomationWorker/VERSION
/opt/microsoft/omsconfig/modules/nxOMSAutomationWorker/DSCResources/MSFT_nxOMSAutomationWorkerResource/automationworker/worker/configuration.py
``` ### Hybrid Runbook Worker status This check makes sure the Hybrid Runbook Worker is running on the machine. The processes in the example below should be present if the Hybrid Runbook Worker is running correctly.
-```
+```bash
ps -ef | grep python ```
-```
+```output
nxautom+ 8567 1 0 14:45 ? 00:00:00 python /opt/microsoft/omsconfig/modules/nxOMSAutomationWorker/DSCResources/MSFT_nxOMSAutomationWorkerResource/automationworker/worker/main.py /var/opt/microsoft/omsagent/state/automationworker/oms.conf rworkspace:<workspaceId> <Linux hybrid worker version> nxautom+ 8593 1 0 14:45 ? 00:00:02 python /opt/microsoft/omsconfig/modules/nxOMSAutomationWorker/DSCResources/MSFT_nxOMSAutomationWorkerResource/automationworker/worker/hybridworker.py /var/opt/microsoft/omsagent/state/automationworker/worker.conf managed rworkspace:<workspaceId> rversion:<Linux hybrid worker version> nxautom+ 8595 1 0 14:45 ? 00:00:02 python /opt/microsoft/omsconfig/modules/nxOMSAutomationWorker/DSCResources/MSFT_nxOMSAutomationWorkerResource/automationworker/worker/hybridworker.py /var/opt/microsoft/omsagent/<workspaceId>/state/automationworker/diy/worker.conf managed rworkspace:<workspaceId> rversion:<Linux hybrid worker version>
Update Management downloads Hybrid Runbook Worker packages from the operations e
To fix this issue, run the following command:
-```
+```bash
sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/PerformRequiredConfigurationChecks.py' ```
To fix the issue, either remove the proxy or make sure that the proxy address is
You can validate the task by running the following command:
-```
+```bash
echo "$HTTP_PROXY" ```
To fix this issue, allow access to IP **169.254.169.254**. For more information,
After the network changes, you can either rerun the Troubleshooter or run the following command to validate:
-```
+```bash
curl -H \"Metadata: true\" http://169.254.169.254/metadata/instance?api-version=2018-02-01 ```
Curl on software repositories configured in package manager.
Refreshing the repos helps confirm the communication.
-```
+```bash
# Debian/Ubuntu
sudo apt-get check
# RHEL/CentOS
sudo yum check-update
```
azure-app-configuration Quickstart Python Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-python-provider.md
Add the following key-values to the App Configuration store. For more informatio
| *test.message* | *Hello test* | Leave empty | Leave empty | | *my_json* | *{"key":"value"}* | Leave empty | *application/json* |
-## Set up the Python app
+## Console applications
+In this section, you create a console application and load data from your App Configuration store.
+### Connect to App Configuration
1. Create a new directory for the project named *app-configuration-quickstart*. ```console
Add the following key-values to the App Configuration store. For more informatio
print("test.message found: " + str("test.message" in config)) ```
-## Configure your App Configuration connection string
+### Run the application
1. Set an environment variable named **AZURE_APPCONFIG_CONNECTION_STRING**, and set it to the connection string of your App Configuration store. At the command line, run the following command:
- ### [Windows command prompt](#tab/windowscommandprompt)
+ #### [Windows command prompt](#tab/windowscommandprompt)
To build and run the app locally using the Windows command prompt, run the following command and replace `<app-configuration-store-connection-string>` with the connection string of your app configuration store:
Add the following key-values to the App Configuration store. For more informatio
setx AZURE_APPCONFIG_CONNECTION_STRING "connection-string-of-your-app-configuration-store" ```
- ### [PowerShell](#tab/powershell)
+ #### [PowerShell](#tab/powershell)
If you use Windows PowerShell, run the following command and replace `<app-configuration-store-connection-string>` with the connection string of your app configuration store:
Add the following key-values to the App Configuration store. For more informatio
$Env:AZURE_APPCONFIG_CONNECTION_STRING = "<app-configuration-store-connection-string>" ```
- ### [macOS](#tab/unix)
+ #### [macOS](#tab/unix)
If you use macOS, run the following command and replace `<app-configuration-store-connection-string>` with the connection string of your app configuration store:
Add the following key-values to the App Configuration store. For more informatio
export AZURE_APPCONFIG_CONNECTION_STRING='<app-configuration-store-connection-string>' ```
- ### [Linux](#tab/linux)
+ #### [Linux](#tab/linux)
If you use Linux, run the following command and replace `<app-configuration-store-connection-string>` with the connection string of your app configuration store:
Add the following key-values to the App Configuration store. For more informatio
1. Restart the command prompt to allow the change to take effect. Print out the value of the environment variable to validate that it is set properly with the command below.
- ### [Windows command prompt](#tab/windowscommandprompt)
+ #### [Windows command prompt](#tab/windowscommandprompt)
Using the Windows command prompt, run the following command:
Add the following key-values to the App Configuration store. For more informatio
printenv AZURE_APPCONFIG_CONNECTION_STRING ```
- ### [PowerShell](#tab/powershell)
+ #### [PowerShell](#tab/powershell)
If you use Windows PowerShell, run the following command:
Add the following key-values to the App Configuration store. For more informatio
$Env:AZURE_APPCONFIG_CONNECTION_STRING ```
- ### [macOS](#tab/unix)
+ #### [macOS](#tab/unix)
If you use macOS, run the following command:
Add the following key-values to the App Configuration store. For more informatio
echo "$AZURE_APPCONFIG_CONNECTION_STRING" ```
- ### [Linux](#tab/linux)
+ #### [Linux](#tab/linux)
If you use Linux, run the following command:
Add the following key-values to the App Configuration store. For more informatio
test.message found: False ```
+## Web applications
+The App Configuration provider loads data into a `Mapping` object, accessible as a dictionary, which can be used in combination with the existing configuration of various Python frameworks. This section shows how to use the App Configuration provider in popular web frameworks like Flask and Django.
+
+### [Flask](#tab/flask)
+You can use Azure App Configuration in your existing Flask web apps by updating their built-in configuration. You can do this by passing your App Configuration provider object to the `update` function of your Flask app instance in `app.py`:
+
+```python
+import os
+from azure.appconfiguration.provider import load
+
+azure_app_config = load(connection_string=os.environ.get("AZURE_APPCONFIG_CONNECTION_STRING"))
+
+# NOTE: This will override all existing configuration settings with the same key name.
+app.config.update(azure_app_config)
+
+# Access a configuration setting directly from within Flask configuration
+message = app.config.get("message")
+```
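+
+With the configuration merged in, values loaded from App Configuration can be read anywhere Flask configuration is available. For example, a route might look like the following sketch (the route and fallback text are illustrative only):
+
+```python
+@app.route("/")
+def index():
+    # Values loaded from App Configuration behave like any other Flask configuration entry.
+    return app.config.get("message", "message not found")
+```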
+
+### [Django](#tab/django)
+You can use Azure App Configuration in your existing Django web apps by adding the following lines of code to your `settings.py` file:
+
+```python
+import os
+from azure.appconfiguration.provider import load
+
+CONFIG = load(connection_string=os.environ.get("AZURE_APPCONFIG_CONNECTION_STRING"))
+```
+
+To access individual configuration settings in the Django views, you can reference them from the provider object created in Django settings. For example, in `views.py`:
+```python
+# Import Django settings
+from django.conf import settings
+
+# Access a configuration setting from Django settings instance.
+MESSAGE = settings.CONFIG.get("message")
+```
++
+Full code samples on how to use Azure App Configuration in Python web applications can be found in the [Azure App Configuration](https://github.com/Azure/AppConfiguration/tree/main/examples/Python) GitHub repo.
+ ## Clean up resources [!INCLUDE [azure-app-configuration-cleanup](../../includes/azure-app-configuration-cleanup.md)]
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md
For complete release version information, see [Version log](version-log.md#febru
- Set `--readable-secondaries` to any value between 0 and the number of replicas minus 1. - `--readable-secondaries` only applies to Business Critical tier. - Automatic backups are taken on the primary instance in a Business Critical service tier when there are multiple replicas. When a failover happens, backups move to the new primary. -- [ReadWriteMany (RWX) capable storage class](../../aks/concepts-storage.md#azure-disks) is required for backups, for both General Purpose and Business Critical service tiers. Specifying a non-ReadWriteMany storage class will cause the SQL Managed Instance to be stuck in "Pending" status during deployment.
+- [ReadWriteMany (RWX) capable storage class](../../aks/concepts-storage.md#azure-disk) is required for backups, for both General Purpose and Business Critical service tiers. Specifying a non-ReadWriteMany storage class will cause the SQL Managed Instance to be stuck in "Pending" status during deployment.
- Billing support when using multiple read replicas. For additional information about service tiers, see [High Availability with Azure Arc-enabled SQL Managed Instance (preview)](managed-instance-high-availability.md).
azure-arc Conceptual Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-extensions.md
Title: "Cluster extensions - Azure Arc-enabled Kubernetes" Previously updated : 03/08/2023 Last updated : 04/27/2023 description: "This article provides a conceptual overview of the Azure Arc-enabled Kubernetes cluster extensions capability."
Both the `config-agent` and `extensions-manager` components running in the clust
> Protected configuration settings for an extension instance are stored for up to 48 hours in the Azure Arc-enabled Kubernetes services. As a result, if the cluster remains disconnected during the 48 hours after the extension resource was created on Azure, the extension changes from a `Pending` state to `Failed` state. To prevent this, we recommend bringing clusters online regularly. > [!IMPORTANT]
-> Currently, Azure Arc-enabled Kubernetes cluster extensions aren't supported on ARM64-based clusters. To [install and use cluster extensions](extensions.md), the cluster must have at least one node of operating system and architecture type `linux/amd64`.
+> Currently, Azure Arc-enabled Kubernetes cluster extensions aren't supported on ARM64-based clusters, except for [Flux (GitOps)](conceptual-gitops-flux2.md). To [install and use other cluster extensions](extensions.md), the cluster must have at least one node of operating system and architecture type `linux/amd64`.
## Extension scope
azure-arc Conceptual Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-gitops-flux2.md
Title: "GitOps Flux v2 configurations with AKS and Azure Arc-enabled Kubernetes" description: "This article provides a conceptual overview of GitOps in Azure for use in Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters." Previously updated : 03/20/2023 Last updated : 04/27/2023
GitOps is enabled in an Azure Arc-enabled Kubernetes or AKS cluster as a `Micros
### Version support
-The most recent version of the Flux v2 extension and the two previous versions (N-2) are supported. We generally recommend that you use the [most recent version](extensions-release.md#flux-gitops) of the extension.
+The most recent version of the Flux v2 extension (`microsoft.flux`) and the two previous versions (N-2) are supported. We generally recommend that you use the [most recent version](extensions-release.md#flux-gitops) of the extension.
+
+Starting with [`microsoft.flux` version 1.7.0](extensions-release.md#170-march-2023), ARM64-based clusters are supported.
> [!NOTE] > If you have been using Flux v1, we recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible.
azure-arc Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions.md
Title: "Deploy and manage Azure Arc-enabled Kubernetes cluster extensions" Previously updated : 04/14/2023 Last updated : 04/27/2023 description: "Create and manage extension instances on Azure Arc-enabled Kubernetes clusters."
Before you begin, read the [conceptual overview of Arc-enabled Kubernetes cluste
az extension update --name k8s-extension ```
-* An existing Azure Arc-enabled Kubernetes connected cluster, with at least one node of operating system and architecture type `linux/amd64`.
+* An existing Azure Arc-enabled Kubernetes connected cluster, with at least one node of operating system and architecture type `linux/amd64`. If deploying [Flux (GitOps)](extensions-release.md#flux-gitops), you can use an ARM64-based cluster without a `linux/amd64` node.
* If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md). * [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to the latest version.
azure-arc System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/system-requirements.md
Title: "Azure Arc-enabled Kubernetes system requirements" Previously updated : 03/08/2023 Last updated : 04/27/2023 description: Learn about the system requirements to connect Kubernetes clusters to Azure Arc.
The cluster must have at least one node with operating system and architecture t
> [!IMPORTANT] > Many Arc-enabled Kubernetes features and scenarios are supported on ARM64 nodes, such as [cluster connect](cluster-connect.md) and [viewing Kubernetes resources in the Azure portal](kubernetes-resource-view.md). However, if using Azure CLI to enable these scenarios, [Azure CLI must be installed](/cli/azure/install-azure-cli) and run from an AMD64 machine. >
-> Currently, Azure Arc-enabled Kubernetes [cluster extensions](conceptual-extensions.md) aren't supported on ARM64-based clusters. To [install and use cluster extensions](extensions.md), the cluster must have at least one node of operating system and architecture type `linux/amd64`.
+> Currently, Azure Arc-enabled Kubernetes [cluster extensions](conceptual-extensions.md) aren't supported on ARM64-based clusters, except for [Flux (GitOps)](conceptual-gitops-flux2.md). To [install and use other cluster extensions](extensions.md), the cluster must have at least one node of operating system and architecture type `linux/amd64`.
## Compute and memory requirements
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
Title: "Tutorial: Deploy applications using GitOps with Flux v2" description: "This tutorial shows how to use GitOps with Flux v2 to manage configuration and application deployment in Azure Arc and AKS clusters." Previously updated : 03/15/2023 Last updated : 04/27/2023
To deploy applications using GitOps with Flux v2, you need the following:
#### For Azure Arc-enabled Kubernetes clusters
-* An Azure Arc-enabled Kubernetes connected cluster that's up and running.
+* An Azure Arc-enabled Kubernetes connected cluster that's up and running. Starting with [`microsoft.flux` version 1.7.0](extensions-release.md#170-march-2023), ARM64-based clusters are supported.
[Learn how to connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md). If you need to connect through an outbound proxy, make sure you [install the Arc agents with proxy settings](./quickstart-connect-cluster.md?tabs=azure-cli#connect-using-an-outbound-proxy-server).
False whl k8s-extension C:\Users\somename\.azure\c
#### For Azure Arc-enabled Kubernetes clusters
-* An Azure Arc-enabled Kubernetes connected cluster that's up and running.
+* An Azure Arc-enabled Kubernetes connected cluster that's up and running. Starting with [`microsoft.flux` version 1.7.0](extensions-release.md#170-march-2023), ARM64-based clusters are supported.
[Learn how to connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md). If you need to connect through an outbound proxy, make sure you [install the Arc agents with proxy settings](./quickstart-connect-cluster.md?tabs=azure-cli#connect-using-an-outbound-proxy-server).
azure-functions Create First Function Cli Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-python.md
Use the following commands to create these items. Both Azure CLI and PowerShell
-1. When you're using the Azure CLI, you can turn on the `param-persist` option that automatically tracks the names of your created resources. For more information, see [Azure CLI persisted parameter](/cli/azure/param-persist-howto).
-
- # [Azure CLI](#tab/azure-cli)
- ```azurecli
- az config param-persist on
- ```
-
- # [Azure PowerShell](#tab/azure-powershell)
-
- This feature isn't available in Azure PowerShell.
-
-
1. Create a resource group named `AzureFunctionsQuickstart-rg` in your chosen region.
Use the following commands to create these items. Both Azure CLI and PowerShell
# [Azure CLI](#tab/azure-cli) ```azurecli
- az storage account create --name <STORAGE_NAME> --sku Standard_LRS
+ az storage account create --name <STORAGE_NAME> --location <REGION> --resource-group AzureFunctionsQuickstart-rg --sku Standard_LRS
``` The [az storage account create](/cli/azure/storage/account#az-storage-account-create) command creates the storage account.
Use the following commands to create these items. Both Azure CLI and PowerShell
# [Azure CLI](#tab/azure-cli) ```azurecli
- az functionapp create --consumption-plan-location westeurope --runtime python --runtime-version 3.9 --functions-version 4 --name <APP_NAME> --os-type linux --storage-account <STORAGE_NAME>
+ az functionapp create --resource-group AzureFunctionsQuickstart-rg --consumption-plan-location westeurope --runtime python --runtime-version 3.9 --functions-version 4 --name <APP_NAME> --os-type linux --storage-account <STORAGE_NAME>
``` The [az functionapp create](/cli/azure/functionapp#az-functionapp-create) command creates the function app in Azure. If you're using Python 3.9, 3.8, or 3.7, change `--runtime-version` to `3.9`, `3.8`, or `3.7`, respectively. You must supply `--os-type linux` because Python functions can't run on Windows, which is the default.
azure-maps Tutorial Search Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-search-location.md
The next tutorial demonstrates how to display a route between two locations.
[MapControlCredential]: /javascript/api/azure-maps-rest/atlas.service.mapcontrolcredential [pipeline]: /javascript/api/azure-maps-rest/atlas.service.pipeline [searchURL]: /javascript/api/azure-maps-rest/atlas.service.searchurl
-[Search API] /rest/api/maps/search
+[Search API]: /rest/api/maps/search
[Fuzzy Search service]: /rest/api/maps/search/get-search-fuzzy [setCamera]: /javascript/api/azure-maps-control/atlas.map#setcamera-cameraoptionscameraboundsoptionsanimationoptions-
azure-monitor Data Collection Rule Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md
You can define a data collection rule to send data from multiple machines to mul
1. On the **Destination** tab, add one or more destinations for the data source. You can select multiple destinations of the same or different types. For instance, you can select multiple Log Analytics workspaces, which is also known as multihoming.
- You can send Windows event and Syslog data sources to Azure Monitor Logs only. You can send performance counters to both Azure Monitor Metrics and Azure Monitor Logs.
+ You can send Windows event and Syslog data sources to Azure Monitor Logs only. You can send performance counters to both Azure Monitor Metrics and Azure Monitor Logs. At this time, hybrid compute (Arc for Server) resources **do not** support the Azure Monitor Metrics (Preview) destination.
[ ![Screenshot that shows the Azure portal form to add a data source in a data collection rule.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png) ](media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png#lightbox)
azure-monitor Java Get Started Supplemental https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-get-started-supplemental.md
ENV APPLICATIONINSIGHTS_CONNECTION_STRING="CONNECTION-STRING"
ENTRYPOINT["java", "-javaagent:applicationinsights-agent-3.4.12.jar", "-jar", "app.jar"] ```
+In this example, the `applicationinsights-agent-3.4.12.jar` and `applicationinsights.json` files are copied from an `agent` folder (you can choose any folder on your machine). The two files must be in the same folder inside the Docker container.
+ ### Third-party container images If you're using a third-party container image that you can't modify, mount the Application Insights Java agent jar into the container from outside. Set the environment variable for the container
azure-monitor Azure Monitor Operations Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/azure-monitor-operations-manager.md
Title: Azure Monitor for existing Operations Manager customers
-description: Guidance for existing users of Operations Manager to transition monitoring of certain workloads to Azure Monitor as part of a transition to the cloud.
+ Title: Migrate from System Center Operations Manager (SCOM) to Azure Monitor
+description: Guidance for existing users of System Center Operations Manager (SCOM) to transition monitoring of workloads to Azure Monitor as part of a transition to the cloud.
Previously updated : 04/05/2022 Last updated : 04/21/2023
-# Azure Monitor for existing Operations Manager customers
-This article provides guidance for customers who currently use [System Center Operations Manager](/system-center/scom/welcome) and are planning a transition to [Azure Monitor](overview.md) as they migrate business applications and other resources into Azure. It assumes that your ultimate goal is a full transition into the cloud, replacing as much Operations Manager functionality as possible with Azure Monitor, without compromising your business and IT operational requirements.
+# Migrate from System Center Operations Manager (SCOM) to Azure Monitor
+This article provides guidance for customers who currently use [System Center Operations Manager (SCOM)](/system-center/scom/welcome) and are planning a transition to cloud-based monitoring with [Azure Monitor](overview.md) as they migrate business applications and other resources into Azure.
-The specific recommendations made in this article will change as Azure Monitor and Operations Manager add features. The fundamental strategy though will remain consistent.
+There's no standard process for migrating from SCOM, and you may rely on SCOM management packs for an extended time as opposed to performing a quick migration. This article describes the different options available and decision criteria you can use to determine the best strategy for your particular environment.
-> [!IMPORTANT]
-> There is a cost to implementing several Azure Monitor features described here, so you should evaluate their value before deploying across your entire environment. See [Cost optimization and Azure Monitor](best-practices-cost.md) for strategies for reducing your cost for Azure Monitor.
-## Prerequisites
-This article assumes that you already use [Operations Manager](/system-center/scom) and at least have a basic understanding of [Azure Monitor](overview.md). For a complete comparison between the two, see [Cloud monitoring guide: Monitoring platforms overview](/azure/cloud-adoption-framework/manage/monitor/platform-overview). That article details specific feature differences between to the two to help you understand some of the recommendations made here.
+## Hybrid cloud monitoring
+Most customers use a [hybrid cloud monitoring](/azure/cloud-adoption-framework/manage/monitor/cloud-models-monitor-overview#hybrid-cloud-monitoring) strategy that allows a gradual transition to the cloud. This approach lets you maintain your existing business processes as you become more familiar with the new platform, moving away from SCOM functionality only as you're able to replace it with Azure Monitor. Using multiple monitoring tools adds complexity, but it lets you take advantage of Azure Monitor's ability to monitor next-generation cloud workloads while retaining SCOM's ability to monitor server software and workloads.
-## General strategy
-Your migration will instead constitute a [standard Azure Monitor implementation](best-practices.md) while you continue to use Operations Manager. As you customize Azure Monitor to meet your requirements for different applications and components and as it gains more features, then you can start to retire different management packs and agents in Operations Manager.
+Your environment prior to moving any components into Azure is based on virtual and physical machines located on-premises or with a managed service provider. It relies on SCOM to monitor business applications, server software, and other infrastructure components in your environment such as physical servers and networks. You use standard management packs for server software such as IIS, SQL Server, and various vendor software, and you tune those management packs for your specific requirements. You create custom management packs for your business applications and components that can't be monitored with existing management packs, and you also configure SCOM to support your business processes.
-> [!IMPORTANT]
-> [Azure Monitor SCOM Managed Instance (preview)](vm/scom-managed-instance-overview.md) is now in public preview. This allows you to move your existing SCOM environment into the Azure portal with Azure Monitor while continuing to use the same management packs. The rest of the recommendations in this article still apply as you migrate your monitoring logic into Azure Monitor.
+As you move services into the cloud, Azure Monitor starts collecting [platform metrics](essentials/data-platform-metrics.md) and the [activity log](essentials/activity-log.md) for each of your resources. You create [diagnostic settings](essentials/diagnostic-settings.md) to collect [resource logs](essentials/resource-logs.md) so you can interactively analyze all available telemetry using [log queries](logs/log-query-overview.md) and [insights](insights/insights-overview.md).
-The general strategy recommended in this article is the same as in the [Cloud Monitoring Guide](/azure/cloud-adoption-framework/manage/monitor/), which recommends a [Hybrid cloud monitoring](/azure/cloud-adoption-framework/manage/monitor/cloud-models-monitor-overview#hybrid-cloud-monitoring) strategy that allows you to make a gradual transition to the cloud. Even though some features may overlap, this strategy will allow you to maintain your existing business processes as you become more familiar with the new platform. Only move away from Operations Manager functionality as you can replace it with Azure Monitor. Using multiple monitoring tools does add complexity, but it allows you to take advantage of Azure Monitor's ability to monitor next generation cloud workloads while retaining Operations Manager's ability to monitor server software and infrastructure components that may be on-premises or in other clouds.
+During this period of transition, you have two independent monitoring tools. You use insights and workbooks to analyze your cloud telemetry in the Azure portal while still using the Operations console to analyze your data collected by SCOM. Since each system has its own alerting, you need to create action groups in Azure Monitor equivalent to your notification groups in SCOM.
+The following table describes the different features and strategies that are available for a hybrid monitoring environment using SCOM and Azure Monitor.
-## Components to monitor
-It helps to categorize the different types of workloads that you need to monitor in order to determine a distinct monitoring strategy for each. [Cloud monitoring guide: Formulate a monitoring strategy](/azure/cloud-adoption-framework/strategy/monitoring-strategy#high-level-modeling) provides a detailed breakdown of the different layers in your environment that need monitoring as you progress from legacy enterprise applications to modern applications in the cloud.
+| Method | Description |
+|:|:|
+| Dual-homed agents | SCOM uses the Microsoft Monitoring Agent (MMA), which is the same agent as the [Log Analytics agent](agents/log-analytics-agent.md) used by Azure Monitor. You can configure this agent to have a single machine connect to both SCOM and Azure Monitor simultaneously. This configuration does require that your Azure VMs have a connection to your on-premises management servers.<br><br>The [Log Analytics agent](agents/log-analytics-agent.md) has been replaced with the [Azure Monitor agent](agents/agents-overview.md), which provides significant advantages including simpler management and better control over data collection. The two agents can coexist on the same machine allowing you to connect to both Azure Monitor and SCOM. This configuration is a better option than dual-homing the legacy agent because of the significant [advantages of the Azure Monitor agent](agents/agents-overview.md#benefits). |
+| Connected management group | [Connect your SCOM management group to Azure Monitor](agents/om-agents.md) to forward data collected from your SCOM agents to Azure Monitor. This is similar to using dual-homed agents, but doesn't require each agent to be configured to connect to Azure Monitor. This strategy requires the legacy agent, so you can't specify monitoring with data collection rules. You also can't use VM insights unless you connect each VM directly to Azure Monitor. |
+| SCOM Managed instance (preview) | [SCOM managed instance (preview)](vm/scom-managed-instance-overview.md) is a full implementation of SCOM in Azure allowing you to continue running the same management packs that you run in your on-premises SCOM environment. There's no current integration between the data and alerts from SCOM and Azure Monitor, and you continue to use the same Operations console for analyzing your health and alerts.<br><br>SCOM MI is similar to maintaining your existing SCOM environment and dual-homing agents, although you can consolidate your monitoring configuration in Azure and retire your on-premises components such as database and management servers. Agents from Azure VMs can connect to the SCOM managed instance in Azure rather than connecting to management servers in your own data center. |
+| Azure management pack | The [Azure management pack](https://www.microsoft.com/download/details.aspx?id=50013) allows Operations Manager to discover Azure resources and monitor their health based on a particular set of monitoring scenarios. This management pack does require you to perform extra configuration for each resource in Azure. It may be helpful though to provide some visibility of your Azure resources in the Operations Console until you evolve your business processes to focus on Azure Monitor. |
-Before the cloud, you used Operations Manager to monitor all layers. As you start your transition with Infrastructure as a Service (IaaS), you continue to use Operations Manager for your virtual machines but start to use Azure Monitor for your cloud resources. As you further transition to modern applications using Platform as a Service (PaaS), you can focus more on Azure Monitor and start to retire Operations Manager functionality.
--
-![Cloud Models](/azure/cloud-adoption-framework/strategy/media/monitoring-strategy/cloud-models.png)
-
-These layers can be simplified into the following categories, which are further described in the rest of this article. While every monitoring workload in your environment may not fit neatly into one of these categories, each should be close enough to a particular category for the general recommendations to apply.
-
-**Business applications.** Applications that provide functionality specific to your business. They may be internal or external and are often developed internally using custom code. Your legacy applications will typically be hosted on virtual or physical machines running either Windows or Linux, while your newer applications will be based on application services in Azure such as Azure Web Apps and Azure Functions.
-
-**Azure services.** Resources in Azure that support your business applications that have migrated to the cloud. This includes services such as Azure Storage, Azure SQL, and Azure IoT. This also includes Azure virtual machines since they are monitored like other Azure services, but the applications and software running on the guest operating system of those virtual machines require more monitoring beyond the host.
-
-**Server software.** Software running on virtual and physical machines that support your business applications or packaged applications that provide general functionality to your business. Examples include Internet Information Server (IIS), SQL Server, Exchange, and SharePoint. This also includes the Windows or Linux operating system on your virtual and physical machines.
-
-**Local infrastructure.** Components specific to your on-premises environment that require monitoring. This includes such resources as physical servers, storage, and network components. These are the components that are virtualized when you move to the cloud.
-
-## Sample walkthrough
-The following is a hypothetical walkthrough of a migration from Operations Manager to Azure Monitor. This is not intended to represent the full complexity of an actual migration, but it does at least provide the basic steps and sequence. The sections below describe each of these steps in more detail.
-
-Your environment prior to moving any components into Azure is based on virtual and physical machines located on-premises or with a managed service provider. It relies on Operations Manager to monitor business applications, server software, and other infrastructure components in your environment such as physical servers and networks. You use standard management packs for server software such as IIS, SQL Server, and various vendor software, and you tune those management packs for your specific requirements. You create custom management packs for your business applications and other components that can't be monitored with existing management packs and configure Operations Manager to support your business processes.
-
-Your migration to Azure starts with IaaS, moving virtual machines supporting business applications into Azure. The monitoring requirements for these applications and the server software they depend on don't change, and you continue using Operations Manager on these servers with your existing management packs.
-
-Azure Monitor is enabled for your Azure services as soon as you create an Azure subscription. It automatically collects platform metrics and the Activity log, and you configure resource logs to be collected so you can interactively analyze all available telemetry using log queries. You enable VM insights on your virtual machines to analyze monitoring data across your entire environment together and to discover relationships between machines and processes. You extend your use of Azure Monitor to your on-premises physical and virtual machines by enabling Azure Arc-enabled servers on them.
-
-You enable Application Insights for each of your business applications. It identifies the different components of each application, begins to collect usage and performance data, and identifies any errors that occur in the code. You create availability tests to proactively test your external applications and alert you to any performance or availability problems. While Application Insights gives you powerful features that you don't have in Operations Manager, you continue to rely on custom management packs that you developed for your business applications since they include monitoring scenarios not yet covered by Azure Monitor.
-
-As you gain familiarity with Azure Monitor, you start to create alert rules that are able to replace some management pack functionality and start to evolve your business processes to use the new monitoring platform. This allows you to start removing machines and management packs from the Operations Manager management group. You continue to use management packs for critical server software and on-premises infrastructure but continue to watch for new features in Azure Monitor that will allow you to retire additional functionality.
-
-## Monitor Azure services
-Azure services actually require Azure Monitor to collect telemetry, and it's enabled the moment that you create an Azure subscription. The [Activity log](essentials/activity-log.md) is automatically collected for the subscription, and [platform metrics](essentials/data-platform-metrics.md) are automatically collected from any Azure resources you create. You can immediately start using [metrics explorer](essentials/metrics-getting-started.md), which is similar to performance views in the Operations console, but it provides interactive analysis and [advanced aggregations](essentials/metrics-charts.md) of data. [Create a metric alert](alerts/alerts-metric.md) to be notified when a value crosses a threshold or [save a chart to a dashboard or workbook](essentials/metrics-charts.md#saving-to-dashboards-or-workbooks) for visibility.
-
-[![Metrics explorer](media/azure-monitor-operations-manager/metrics-explorer.png)](media/azure-monitor-operations-manager/metrics-explorer.png#lightbox)
-
-[Create a diagnostic setting](essentials/diagnostic-settings.md) for each Azure resource to send metrics and [resource logs](essentials/resource-logs.md), which provide details about the internal operation of each resource, to a Log Analytics workspace. This gives you all available telemetry for your resources and allows you to use [Log Analytics](logs/log-analytics-overview.md) to interactively analyze log and performance data using an advanced query language that has no equivalent in Operations Manager. You can also create [log query alerts](alerts/alerts-log-query.md), which can use complex logic to determine alerting conditions and correlate data across multiple resources.
+## Monitor business applications
+You typically require custom management packs to monitor your business applications with SCOM, using agents installed on each virtual machine. Application Insights in Azure Monitor monitors web-based applications whether they're in Azure, other clouds, or on-premises. It can be used for all of your applications whether or not they've been migrated to Azure.
-[![Logs Analytics](media/azure-monitor-operations-manager/log-analytics.png)](media/azure-monitor-operations-manager/log-analytics.png#lightbox)
+If your monitoring of a business application is limited to functionality provided by the [.NET app performance template](/system-center/scom/net-application-performance-monitoring-template) in SCOM, then you can most likely migrate to Application Insights with no loss of functionality. In fact, Application Insights includes a significant number of other features including the following:
-[Insights](monitor-reference.md) in Azure Monitor are similar to management packs in that they provide unique monitoring for a particular Azure service. Insights are currently available for several services including networking, storage, and containers, and others are continuously being added.
+- Automatically discover and monitor application components.
+- Collect detailed application usage and performance data such as response time, failure rates, and request rates.
+- Collect browser data such as page views and load performance.
+- Detect exceptions and drill into stack trace and related requests.
+- Perform advanced analysis using features such as [distributed tracing](app/distributed-tracing-telemetry-correlation.md) and [smart detection](alerts/proactive-diagnostics.md).
+- Use [metrics explorer](essentials/metrics-getting-started.md) to interactively analyze performance data.
+- Use [log queries](logs/log-query-overview.md) to interactively analyze collected telemetry together with data collected for Azure services and VM insights.
-[![Insight example](media/azure-monitor-operations-manager/insight.png)](media/azure-monitor-operations-manager/insight.png#lightbox)
+There are certain scenarios though where you may need to continue using SCOM in addition to Application Insights until you're able to achieve required functionality. Examples where you may need to continue with SCOM include the following:
+- [Availability tests](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability), which allow you to monitor and alert on the availability and responsiveness of your applications require incoming requests from the IP addresses of web test agents. If your policy doesn't allow such access, you may need to keep using [Web Application Availability Monitors](/system-center/scom/web-application-availability-monitoring-template) in SCOM.
+- In SCOM you can set any polling interval for availability tests, with many customers checking every 60-120 seconds. Application Insights has a minimum polling interval of five minutes, which may be too long for some customers.
+- A significant amount of monitoring in SCOM is performed by collecting events generated by applications and by running scripts on the local agent. These aren't standard options in Application Insights, so you could require custom work to achieve your business requirements. This might include custom alert rules using event data stored in a Log Analytics workspace (a minimal query sketch follows this list) and scripts launched in a virtual machine's guest operating system using a [hybrid runbook worker](../automation/automation-hybrid-runbook-worker.md).
+- Depending on the language that your application is written in, you may be limited in the [instrumentation you can use with Application Insights](app/app-insights-overview.md#supported-languages).
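+
+As a minimal sketch of working with event data already collected into a Log Analytics workspace (one of the custom approaches mentioned in the list above), the following uses the Azure Monitor query library for Python. The workspace ID is a placeholder, the KQL query is only an example, and the `azure-monitor-query` and `azure-identity` packages are assumed:
+
+```python
+from datetime import timedelta
+
+from azure.identity import DefaultAzureCredential
+from azure.monitor.query import LogsQueryClient
+
+client = LogsQueryClient(DefaultAzureCredential())
+
+# Count error events by source over the last 24 hours; adjust the table and filter to your needs.
+query = 'Event | where EventLevelName == "Error" | summarize ErrorCount = count() by Source'
+
+response = client.query_workspace(
+    workspace_id="<workspace-id>",
+    query=query,
+    timespan=timedelta(days=1),
+)
+
+for table in response.tables:
+    for row in table.rows:
+        print(row)
+```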
-Insights are based on [workbooks](visualize/workbooks-overview.md) in Azure Monitor, which combine metrics and log queries into rich interactive reports. Create your own workbooks to combine data from multiple services similar to how you might create custom views and reports in the Operations console.
+Following the basic strategy in the other sections of this guide, continue to use SCOM for your business applications, but take advantage of additional features provided by Application Insights. As you're able to replace critical functionality with Azure Monitor, you can start to retire your custom management packs.
-### Azure management pack
-The [Azure management pack](https://www.microsoft.com/download/details.aspx?id=50013) allows Operations Manager to discover Azure resources and monitor their health based on a particular set of monitoring scenarios. This management pack does require you to perform additional configuration for each resource in Azure, but it may be helpful to provide some visibility of your Azure resources in the Operations Console until you evolve your business processes to focus on Azure Monitor.
-[![Azure management pack](media/azure-monitor-operations-manager/operations-console.png)](media/azure-monitor-operations-manager/operations-console.png#lightbox)
- You may choose to use the Azure Management pack if you want visibility for certain Azure resources in the Operations console and to integrate some basic alerting with your existing processes. It actually uses data collected by Azure Monitor. You should look to Azure Monitor though for long-term complete monitoring of your Azure resources.
+## Monitor virtual machines
+Monitoring the software on your virtual machines in a hybrid environment often uses a combination of Azure Monitor and SCOM, depending on the requirements of the workloads running on your VMs. As soon as a virtual machine is created in Azure, [platform metrics](essentials/data-platform-metrics.md) and [activity logs](essentials/activity-log.md) for the VM host automatically start being collected. [Enable recommended alerts](vm/tutorial-monitor-vm-alert-recommended.md) to notify you of common errors for the VM host, such as server down and high CPU utilization.
-## Monitor server software and local infrastructure
-When you move machines to the cloud, the monitoring requirements for their software don't change. You no longer need to monitor their physical components since they're virtualized, but the guest operating system and its workloads have the same requirements regardless of their environment.
+Enable [VM insights](vm/vminsights-overview.md) to install the Azure Monitor agent and begin collecting common performance data from the guest operating system. This may overlap with some data that you're already collecting in SCOM, but it allows you to start viewing trends over time and to monitor your Azure VMs alongside other cloud resources. You may also choose to enable the [map feature](vm/vminsights-maps.md), which gives you insight into the processes running on your virtual machines and their dependencies on other services.
-The [Azure Monitor agent](agents/agents-overview.md) uses [data collection rules](essentials/data-collection-rule-overview.md) to collect data from the guest operating system of virtual machines. This is the same performance and event data typically used by management packs for analysis and alerting. [VM insights](vm/vminsights-overview.md) allows you to easily deploy and manage the agent and gets you started with preexisting data collection rules and performance views.
+Continue to use management packs for functionality that can't be provided by other features in Azure Monitor. This includes management packs for critical server software like IIS, SQL Server, or Exchange. You may also have custom management packs developed for on-premises infrastructure that can't be reached with Azure Monitor. Also continue to use SCOM if it's tightly integrated into your operational processes, until you can modernize your service operations so that Azure Monitor and other Azure services can augment or replace them.
> [!NOTE]
-> Azure Monitor previously used the same Microsoft Management Agent (referred to as the Log Analytics agent in Azure Monitor) as Operations Manager. The Azure Monitor agent can coexist with this agent on the same machine during migration.
+> If you enable VM Insights with the Log Analytics agent instead of the Azure Monitor agent, then no additional agent needs to be installed on the VM. Azure Monitor agent is recommended though because of its significant improvements in monitoring the VM in the cloud. The complexity from maintaining multiple agents is offset by the ability to define monitoring in data collection rules which allow you to configure different data collection for different sets of VMs, similar to your strategy for designing management packs.
+### Migrate management pack logic for VM workloads
+There are no migration tools to convert SCOM management packs to Azure Monitor because their logic is fundamentally different than Azure Monitor data collection. Migrating management pack logic will typically focus on analyzing the data collected by SCOM and identifying those monitoring scenarios that can be replicated by Azure Monitor. As you customize Azure Monitor to meet your requirements for different applications and components, then you can start to retire different management packs and legacy agents in SCOM.
-[![VM insights performance](media/azure-monitor-operations-manager/vm-insights-performance.png)](media/azure-monitor-operations-manager/vm-insights-performance.png#lightbox)
+Management packs in SCOM contain rules and monitors that combine collection of data and the resulting alert into a single end-to-end workflow. Data already collected by SCOM is rarely used for alerting. Azure Monitor separates data collection and alerts into separate processes. Alert rules access data from Azure Monitor Logs and Azure Monitor Metrics that has already been collected from agents. Also, rules and monitors are typically narrowly focused on specific data, such as a particular event or performance counter. Data collection rules in Azure Monitor are typically broader, collecting multiple sets of events and performance counters in a single DCR.
+See the following content for guidance on creating data collection and alerting for common monitoring scenarios:
-Examples of features unique to Azure Monitor include the following:
+- Data that you need to collect to support alerting, analysis, and visualization. See [Monitor virtual machines with Azure Monitor: Data collection](vm/monitor-virtual-machine-data-collection.md).
+- Alert rules that analyze the collected data to proactively notify you of issues. See [Monitor virtual machines with Azure Monitor: Alerts](vm/monitor-virtual-machine-alerts.md).
-- Discover and monitor relationships between virtual machines and their external dependencies.-- View aggregated performance data across multiple virtual machines in interactive charts and workbooks.-- Use [log queries](logs/log-query-overview.md) to interactively analyze telemetry from your virtual machines with data from your other Azure resources.-- Create [log alert rules](alerts/alerts-log-query.md) based on complex logic across multiple virtual machines.
+Instead of attempting to replicate the entire functionality of a management pack, analyze the critical monitoring that each pack provides. Decide whether you can replicate those monitoring requirements by using alternate methods. In many cases, you can configure data collection and alert rules in Azure Monitor that replicate enough functionality for you to retire a particular management pack. Management packs can often include hundreds or even thousands of rules and monitors.
-In addition to Azure virtual machines, Azure Monitor can monitor machines on-premises and in other clouds using [Azure Arc-enabled servers](../azure-arc/servers/overview.md). Azure Arc-enabled servers allow you to manage your Windows and Linux machines hosted outside of Azure, on your corporate network, or other cloud provider consistent with how you manage native Azure virtual machines.
+One strategy is to focus on those monitors and rules that triggered alerts in your environment. Refer to [existing reports available in Operations Manager](/system-center/scom/manage-reports-installed-during-setup), such as **Alerts** and **Most Common Alerts**, which can help you identify alerts over time. You can also run the following query on the Operations Database to evaluate the most common recent alerts.
-[![VM insights map](media/azure-monitor-operations-manager/vm-insights-map.png)](media/azure-monitor-operations-manager/vm-insights-map.png#lightbox)
+```sql
+select AlertName, COUNT(AlertName) as 'Total Alerts' from
+Alert.vAlertResolutionState ars
+inner join Alert.vAlertDetail adt on ars.AlertGuid = adt.AlertGuid
+inner join Alert.vAlert alt on ars.AlertGuid = alt.AlertGuid
+group by AlertName
+order by 'Total Alerts' DESC
+```
+Evaluate the output to identify specific alerts for migration. Ignore any alerts that were tuned out or are known to be problematic. Review your management packs to identify any critical alerts of interest that never fired.
-
-Azure Monitor though doesn't have preexisting rules to identify and alert on issues for the business applications and server software running in your virtual machines. You must create your own alert rules to be proactively notified of any detected issues.
--
-Azure Monitor also doesn't measure the health of different applications and services running on a virtual machine. Metric alerts can automatically resolve when a value drops below a threshold, but Azure Monitor doesn't currently have the ability to define health criteria for applications and services running on the machine, nor does it provide health rollup to group the health of related components.
-
-Monitoring the software on your machines in a hybrid environment will often use a combination of Azure Monitor and Operations Manager, depending on the requirements of each machine and on your maturity developing operational processes around Azure Monitor.
-
-Continue to use Operations Manager for functionality that cannot yet be provided by Azure Monitor. This includes management packs for critical server software like IIS, SQL Server, or Exchange. You may also have custom management packs developed for on-premises infrastructure that can't be reached with Azure Monitor. Also continue to use Operations Manager if it is tightly integrated into your operational processes until you can transition to modernizing your service operations where Azure Monitor and other Azure services can augment or replace. Use Azure Monitor to enhance your current monitoring even if it doesn't immediately replace Operations Manager.
--
-## Monitor business applications
-You typically require custom management packs to monitor your business applications with Operations Manager, leveraging agents installed on each virtual machine. Application Insights in Azure Monitor monitors web-based applications whether they're in Azure, other clouds, or on-premises, so it can be used for all of your applications whether or not they've been migrated to Azure.
-
-If your monitoring of a business application is limited to functionality provided by the [.NET app performance template]() in Operations Manager, then you can most likely migrate to Application Insights with no loss of functionality. In fact, Application Insights will include a significant number of additional features including the following:
--- Automatically discover and monitor application components.-- Collect detailed application usage and performance data such as response time, failure rates, and request rates.-- Collect browser data such as page views and load performance.-- Detect exceptions and drill into stack trace and related requests.-- Perform advanced analysis using features such as [distributed tracing](app/distributed-tracing-telemetry-correlation.md) and [smart detection](alerts/proactive-diagnostics.md).-- Use [metrics explorer](essentials/metrics-getting-started.md) to interactively analyze performance data.-- Use [log queries](logs/log-query-overview.md) to interactively analyze collected telemetry together with data collected for Azure services and VM insights.-
-[![Application Insights](media/azure-monitor-operations-manager/application-insights.png)](media/azure-monitor-operations-manager/application-insights.png#lightbox)
-
-There are certain scenarios though where you may need to continue using Operations Manager in addition to Application Insights until you're able to achieve required functionality. Examples where you may need to continue with Operations Manager include the following:
--- [Availability tests](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability), which allow you to monitor and alert on the availability and responsiveness of your applications require incoming requests from the IP addresses of web test agents. If your policy won't allow such access, you may need to keep using [Web Application Availability Monitors](/system-center/scom/web-application-availability-monitoring-template) in Operations Manager.-- In Operations Manager you can set any polling interval for availability tests, with many customers checking every 60-120 seconds. Application Insights has a minimum polling interval of 5 minutes which may be too long for some customers.-- A significant amount of monitoring in Operations Manager is performed by collecting events generated by applications and by running scripts on the local agent. These aren't standard options in Application Insights, so you could require custom work to achieve your business requirements. This might include custom alert rules using event data stored in a Log Analytics workspace and scripts launched in a virtual machines guest using [hybrid runbook worker](../automation/automation-hybrid-runbook-worker.md).-- Depending on the language that your application is written in, you may be limited in the [instrumentation you can use with Application Insights](app/app-insights-overview.md#supported-languages).-
-Following the basic strategy in the other sections of this guide, continue to use Operations Manager for your business applications, but take advantage of additional features provided by Application Insights. As you're able to replace critical functionality with Azure Monitor, you can start to retire your custom management packs.
-
+### Synthetic transactions
+Management packs often make use of synthetic transactions that connect to an application or service running on a machine to simulate a user connection or actual user traffic. If the application is available, you can assume that the machine is running properly. [Application Insights availability tests](app/availability-overview.md) in Azure Monitor provide this functionality. They only work for applications that are accessible from the internet. For internal applications, you must open a firewall to allow access from the specific Microsoft URLs that perform the test, or you can continue to use your existing management pack.
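For internal applications that availability tests can't reach, a scheduled probe of the application endpoint run from a host inside the network is a rough stand-in for the same idea. The following bash sketch uses a placeholder URL and isn't an Azure Monitor feature.

```bash
# Rough stand-in for a synthetic transaction against an internal application.
# The URL is a placeholder; run this on a schedule from a host that can reach the app
# and feed the result into whatever alerting mechanism you use.
if curl --silent --fail --max-time 10 --output /dev/null "https://app.internal.contoso.com/health"; then
  echo "application responded"
else
  echo "application check failed" >&2
  exit 1
fi
```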
## Next steps - See the [Cloud Monitoring Guide](/azure/cloud-adoption-framework/manage/monitor/) for a detailed comparison of Azure Monitor and System Center Operations Manager and more details on designing and implementing a hybrid monitoring environment.-- Read more about [monitoring Azure resources in Azure Monitor](essentials/monitor-azure-resource.md). - Read more about [monitoring Azure virtual machines in Azure Monitor](vm/monitor-vm-azure.md). - Read more about [VM insights](vm/vminsights-overview.md). - Read more about [Application Insights](app/app-insights-overview.md).+
azure-monitor Container Insights Cost Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost-config.md
Reference the [Limitations](./container-insights-cost-config.md#limitations) sec
## Pre-requisites - AKS Cluster MUST be using either System or User Assigned Managed Identity
- - If the AKS Cluster is using Service Principal, you must upgrade to [Managed Identity](../../aks/use-managed-identity.md#update-an-aks-cluster-to-use-a-managed-identity)
+ - If the AKS Cluster is using Service Principal, you must upgrade to [Managed Identity](../../aks/use-managed-identity.md#enable-managed-identities-on-an-existing-aks-cluster)
- Azure CLI: Minimum version required for Azure CLI is 2.45.0. Run az --version to find the version, and run az upgrade to upgrade the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli] - For AKS clusters, aks-preview version 0.5.125 or higher
azure-monitor Container Insights Logging V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-logging-v2.md
This applies to the scenario where you have already enabled container insights f
>* The configuration change can take a few minutes to complete before it takes effect. All ama-logs pods in the cluster will restart. >* The restart is a rolling restart for all ama-logs pods. It won't restart all of them at the same time.
+## Multi-line logging in Container Insights
+Azure Monitor - Container insights now supports multiline logging. With this feature enabled, previously split container logs are stitched together and sent as single entries to the ContainerLogV2 table. Customers can see container log lines up to 64 KB (up from the existing 16 KB limit). If the stitched log line is larger than 64 KB, it gets truncated due to Log Analytics limits.
+The feature also adds support for .NET and Go stack traces, which appear as single entries instead of being split into multiple entries in the ContainerLogV2 table.
+
+### Pre-requisites
+Customers must enable *ContainerLogV2* for multi-line logging to work. See [Enable ContainerLogV2](/containers/container-insights-logging-v2#enable-the-containerlogv2-schema) in Container Insights.
+
+### How to enable (preview)
+Multi-line logging can be enabled by setting the *enable_multiline_logs* flag to "true" in [the config map](https://github.com/microsoft/Docker-Provider/blob/ci_prod/kubernetes/container-azm-ms-agentconfig.yaml#L49).
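A minimal sketch of the workflow, assuming the flag appears in the log collection settings of the template config map; verify the exact section and default value in the linked file before applying.

```bash
# Sketch only: download the Container insights template config map, set
# enable_multiline_logs to "true", then apply it to the cluster.
curl -LO https://raw.githubusercontent.com/microsoft/Docker-Provider/ci_prod/kubernetes/container-azm-ms-agentconfig.yaml
# Assumes the flag is present in the file as: enable_multiline_logs = "false"
# If the file differs, edit it manually instead of using sed.
sed -i 's/enable_multiline_logs = "false"/enable_multiline_logs = "true"/' container-azm-ms-agentconfig.yaml
kubectl apply -f container-azm-ms-agentconfig.yaml
```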
+
+### Next steps for Multi-line logging
+* Read more about the [ContainerLogV2 schema](https://aka.ms/ContainerLogv2)
+ ## Next steps * Configure [Basic Logs](../logs/basic-logs-configure.md) for ContainerLogv2.
azure-monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/overview.md
Last updated 02/28/2023
+uid: azure_monitor_logs_api_overview
# Azure Monitor Log Analytics API overview
azure-monitor Snapshot Debugger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger.md
For bug reports and feedback, [open an issue on GitHub](https://github.com/micro
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
+### [1.4.4](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.4.4)
+A point release to address user-reported bugs.
+#### Bug fixes
+- Fix [Exception during native component extraction when using a single file application.](https://github.com/microsoft/ApplicationInsights-SnapshotCollector/issues/21)
+#### Changes
+- Lowered PDB scan failure messages from Error to Warning.
+- Update msdia140.dll.
+- Avoid making a service connection if the debugger is disabled via site extension settings.
+ ### [1.4.3](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.4.3) A point release to address user-reported bugs. #### Bug fixes-- Fix [Hide the IDMS dependency from dependency tracker.](https://github.com/microsoft/ApplicationInsights-SnapshotCollector/issues/17)
+- Fix [Hide the IMDS dependency from dependency tracker.](https://github.com/microsoft/ApplicationInsights-SnapshotCollector/issues/17)
- Fix [ArgumentException: telemetryProcessorTypedoes not implement ITelemetryProcessor.](https://github.com/microsoft/ApplicationInsights-SnapshotCollector/issues/19) <br>Snapshot Collector used via SDK is not supported when Interop feature is enabled. [See more not supported scenarios.](snapshot-debugger-troubleshoot.md#not-supported-scenarios)
azure-monitor Monitor Virtual Machine Management Packs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-management-packs.md
- Title: 'Monitor virtual machines with Azure Monitor: Migrate management pack logic'
-description: Includes a general approach that existing customers of System Center Operations Manager (SCOM) might take to translate critical logic in their management packs to Azure Monitor.
---- Previously updated : 01/10/2023----
-# Monitor virtual machines with Azure Monitor: Migrate management pack logic
-This article is part of the guide [Monitor virtual machines and their workloads in Azure Monitor](monitor-virtual-machine.md). It discusses a general approach that existing customers of System Center Operations Manager (SCOM) might take to translate critical logic in their management packs to Azure Monitor.
-
-> [!NOTE]
-> [Azure Monitor SCOM Managed Instance (preview)](scom-managed-instance-overview.md) is now in public preview. This allows you to move your existing SCOM environment into the Azure portal with Azure Monitor while continuing to use the same management packs. The rest of the recommendations in this article still apply as you migrate your monitoring logic into Azure Monitor.
--
-## Translating logic
-You may currently use SCOM to monitor your virtual machines and their workloads and are starting to consider which monitoring you can move to Azure Monitor. As described in [Azure Monitor for existing Operations Manager customer](../azure-monitor-operations-manager.md), you may continue using SCOM for some period of time until you no longer require the extensive monitoring that SCOM provides. See [Cloud monitoring guide: Monitoring platforms overview](/azure/cloud-adoption-framework/manage/monitor/platform-overview) for a complete comparison of Azure Monitor and SCOM.
-
-There are no migration tools to convert SCOM management packs to Azure Monitor because the platforms are fundamentally different. Your migration instead constitutes a standard Azure Monitor implementation while you continue to use SCOM. As you customize Azure Monitor to meet your requirements for different applications and components and as it gains more features, then you can start to retire different management packs and agents in Operations Manager.
-
-Management packs in SCOM contain rules and monitors that combine collection of data and the resulting alert into a single end-to-end workflow. Data that's already been collected by SCOM is rarely used for alerting. Azure Monitor separates data collection and alerts into separate processes. Alert rules access data from Azure Monitor Logs and Azure Monitor Metrics that has already been collected from agents. Also, rules and monitors are typically narrowly focused on very specific data such as a particular event or performance counter. Data collection rules in Azure Monitor are typically more broad collecting multiple sets of events and performance counters in a single DCR.
----- Data that you need to collect to support alerting, analysis, and visualization. See [Monitor virtual machines with Azure Monitor: Data collection](monitor-virtual-machine-data-collection.md)-- Alerts rules that analyze the collected data to proactively notify of you of issues. See [Monitor virtual machines with Azure Monitor: Alerts](monitor-virtual-machine-alerts.md)--
-## Identify critical management pack logic
-
-Instead of attempting to replicate the entire functionality of a management pack, analyze the critical monitoring provided by the management pack. Decide whether you can replicate those monitoring requirements by using the methods described in the previous sections. In many cases, you can configure data collection and alert rules in Azure Monitor that replicate enough functionality that you can retire a particular management pack. Management packs can often include hundreds and even thousands of rules and monitors.
-
-In most scenarios, Operations Manager combines data collection and alerting conditions in the same rule or monitor. In Azure Monitor, you must configure data collection and an alert rule for any alerting scenarios.
-
-One strategy is to focus on those monitors and rules that triggered alerts in your environment. Refer to [existing reports available in Operations Manager](/system-center/scom/manage-reports-installed-during-setup), such as **Alerts** and **Most Common Alerts**, which can help you identify alerts over time. You can also run the following query on the Operations Database to evaluate the most common recent alerts.
-
-```sql
-select AlertName, COUNT(AlertName) as 'Total Alerts' from
-Alert.vAlertResolutionState ars
-inner join Alert.vAlertDetail adt on ars.AlertGuid = adt.AlertGuid
-inner join Alert.vAlert alt on ars.AlertGuid = alt.AlertGuid
-group by AlertName
-order by 'Total Alerts' DESC
-```
-
-Evaluate the output to identify specific alerts for migration. Ignore any alerts that were tuned out or are known to be problematic. Review your management packs to identify any critical alerts of interest that never fired.
---
-## Synthetic transactions
-Management packs often make use of synthetic transactions that connect to an application or service running on a machine to simulate a user connection or actual user traffic. If the application is available, you can assume that the machine is running properly. [Application insights](../app/app-insights-overview.md) in Azure Monitor provides this functionality. It only works for applications that are accessible from the internet. For internal applications, you must open a firewall to allow access from specific Microsoft URLs performing the test. Or you can use an alternate monitoring solution, such as System Center Operations Manager.
-
-|Method | Description |
-|:|:|
-| [URL test](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability) | Ensures that HTTP is available and returning a web page |
-| [Multistep test](/previous-versions/azure/azure-monitor/app/availability-multistep) | Simulates a user session |
-
-## Next steps
-
-* [Learn how to analyze data in Azure Monitor logs using log queries](../logs/get-started-queries.md)
-* [Learn about alerts using metrics and logs in Azure Monitor](../alerts/alerts-overview.md)
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
The following table describes whatΓÇÖs supported for each network features confi
| Azure NetApp Files delegated subnets per VNet | 1 | 1 | | [Network Security Groups](../virtual-network/network-security-groups-overview.md) (NSGs) on Azure NetApp Files delegated subnets | Yes | No | | [User-defined routes](../virtual-network/virtual-networks-udr-overview.md#user-defined) (UDRs) on Azure NetApp Files delegated subnets | Yes | No |
-| Connectivity to [Private Endpoints](../private-link/private-endpoint-overview.md) | No | No |
-| Connectivity to [Service Endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) | No | No |
+| Connectivity to [Private Endpoints](../private-link/private-endpoint-overview.md) | Yes* | No |
+| Connectivity to [Service Endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) | Yes | No |
| Azure policies (for example, custom naming policies) on the Azure NetApp Files interface | No | No | | Load balancers for Azure NetApp Files traffic | No | No | | Dual stack (IPv4 and IPv6) VNet | No <br> (IPv4 only supported) | No <br> (IPv4 only supported) |
+\* Applying Azure network security groups on the private link subnet to Azure Key Vault isn't supported for Azure NetApp Files customer-managed keys. Network security groups don't affect connectivity to Private Link unless Private endpoint network policy is enabled on the subnet. It's recommended to keep this option disabled.
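If you need to turn that policy off on the subnet that hosts the private endpoint, a minimal Azure CLI sketch (with placeholder resource names) looks like this:

```azurecli-interactive
# Sketch only: disable private endpoint network policies on the subnet that hosts
# the private endpoint. Resource group, VNet, and subnet names are placeholders.
az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name myVNet \
  --name myPrivateEndpointSubnet \
  --disable-private-endpoint-network-policies true
```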
+ > [!IMPORTANT] > Conversion between Basic and Standard networking features in either direction is not currently supported. >
azure-netapp-files Configure Virtual Wan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-virtual-wan.md
Previously updated : 03/24/2023- Last updated : 04/26/2023+
-# Configure Virtual WAN for Azure NetApp Files (preview)
+# Configure Virtual WAN for Azure NetApp Files
You can configure Azure NetApp Files volumes with Standard network features in one or more Virtual WAN spoke virtual networks (VNets). Virtual WAN spoke VNets allow access to the file storage service globally across your Virtual WAN environment.
azure-netapp-files Performance Linux Nfs Read Ahead https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-linux-nfs-read-ahead.md
Read-ahead can be defined either dynamically per NFS mount using the following s
To show the current read-ahead value (the returned value is in KiB), run the following command:
-`$ ./readahead.sh show <mount-point>`
+```bash
+ ./readahead.sh show <mount-point>
+```
To set a new value for read-ahead, run the following command:
-`$ ./readahead.sh set <mount-point> [read-ahead-kb]`
+```bash
+./readahead.sh set <mount-point> [read-ahead-kb]
+```
### Example
To persistently set read-ahead for NFS mounts, `udev` rules can be written as fo
1. Create and test `/etc/udev/rules.d/99-nfs.rules`:
- `SUBSYSTEM=="bdi", ACTION=="add", PROGRAM="<absolute_path>/awk -v bdi=$kernel 'BEGIN{ret=1} {if ($4 == bdi) {ret=0}} END{exit ret}' /proc/fs/nfsfs/volumes", ATTR{read_ahead_kb}="15380"`
+ ```config
+ SUBSYSTEM=="bdi", ACTION=="add", PROGRAM="<absolute_path>/awk -v bdi=$kernel 'BEGIN{ret=1} {if ($4 == bdi) {ret=0}} END{exit ret}' /proc/fs/nfsfs/volumes", ATTR{read_ahead_kb}="15380"
+ ```
2. Apply the `udev` rule:
- `$udevadm control --reload`
+ ```bash
+ sudo udevadm control --reload
+ ```
## Next steps
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 03/16/2023 Last updated : 04/26/2023 # What's new in Azure NetApp Files Azure NetApp Files is updated regularly. This article provides a summary about the latest new features and enhancements.
+## April 2023
+
+* [Azure Virtual WAN](configure-virtual-wan.md) is now generally available in [all regions](azure-netapp-files-network-topologies.md#supported-regions) that support standard network features
+ ## March 2023 * [Disable `showmount`](disable-showmount.md) (Preview)
azure-resource-manager Linter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter.md
The default set of linter rules is minimal and taken from [arm-ttk test cases](.
- [outputs-should-not-contain-secrets](./linter-rule-outputs-should-not-contain-secrets.md) - [prefer-interpolation](./linter-rule-prefer-interpolation.md) - [prefer-unquoted-property-names](./linter-rule-prefer-unquoted-property-names.md)-- [protect-commandtoexecute-secrets](./linter-rule-protect-commandtoexecute-secrets.md) - [secure-parameter-default](./linter-rule-secure-parameter-default.md) - [secure-params-in-nested-deploy](./linter-rule-secure-params-in-nested-deploy.md) - [secure-secrets-in-params](./linter-rule-secure-secrets-in-parameters.md)
azure-resource-manager Linked Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/linked-templates.md
Title: Link templates for deployment description: Describes how to use linked templates in an Azure Resource Manager template (ARM template) to create a modular template solution. Shows how to pass parameters values, specify a parameter file, and dynamically created URLs. Previously updated : 01/06/2022 Last updated : 04/26/2023
For a tutorial, see [Tutorial: Deploy a linked template](./deployment-tutorial-l
> [!NOTE] > For linked or nested templates, you can only set the deployment mode to [Incremental](deployment-modes.md). However, the main template can be deployed in complete mode. If you deploy the main template in the complete mode, and the linked or nested template targets the same resource group, the resources deployed in the linked or nested template are included in the evaluation for complete mode deployment. The combined collection of resources deployed in the main template and linked or nested templates is compared against the existing resources in the resource group. Any resources not included in this combined collection are deleted. >
-> If the linked or nested template targets a different resource group, that deployment uses incremental mode.
+> If the linked or nested template targets a different resource group, that deployment uses incremental mode. For more information, see [Deployment Scope](./deploy-to-resource-group.md#deployment-scopes).
> > [!TIP]
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
To stay up-to-date with the most recent Azure Video Indexer developments, this a
### Resource Health support
-Azure Video Indexer is now integrated with Azure Resource Health enabling you to see the health and availability of each of your Video Indexer resources and if needed, help with diagnosing and solving problems. You can also set alerts to be notified when your resources are affected. For more information, see [Azure Resource Health overview](../service-health/resource-health-overview.md).
+Azure Video Indexer is now integrated with Azure Resource Health, enabling you to see the health and availability of each of your Azure Video Indexer resources. If needed, Azure Resource Health helps with diagnosing and solving problems. You can also set alerts to be notified whenever your resources are affected. For more information, see [Azure Resource Health overview](../service-health/resource-health-overview.md).
### The animation character recognition model has been retired
azure-vmware Configure Storage Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-storage-policy.md
You'll run the `Set-ClusterDefaultStoragePolicy` cmdlet to specify default stora
1. Check **Notifications** to see the progress.
+## Create custom AVS storage policy
+You'll run the `New-AVSStoragePolicy` cmdlet to create a new storage policy or overwrite an existing one.
+This function supports non-vSAN-based, vSAN-only, VM encryption-only, and tag-only policies, or any combination of these policy types.
+> [!NOTE]
+> You cannot modify existing AVS default storage policies.
+> Certain options enabled in storage policies will produce warnings to associated risks.
+
+1. Select **Run command** > **Packages** > **New-AVSStoragePolicy**.
+
+1. Provide the required values or change the default values, and then select **Run**.
+
+ | **Field** | **Value** |
+ | | |
+    | **Overwrite** | Overwrite existing Storage Policy. <ul><li>Default is $false. <li>Passing $true overwrites an existing policy exactly as defined. <li>Values not passed are removed or set to default values. </ul></li>|
+    | **NotTags** | Match to datastores that do NOT have these tags. <ul><li>Tags are case sensitive. <li>Comma separate multiple tags. <li>Example: Tag1,Tag 2,Tag_3 </ul></li>|
+    | **Tags** | Match to datastores that do have these tags. <ul><li>Tags are case sensitive. <li>Comma separate multiple tags. <li>Example: Tag1,Tag 2,Tag_3 </ul></li>|
+ | **vSANForceProvisioning** | Default is $false. <ul><li> Force provisioning for the policy. <li> Valid values are $true or $false <li>**WARNING** - vSAN Force Provisioned Objects are not covered under Microsoft SLA. Data LOSS and vSAN instability may occur. <li>Recommended value is $false.</ul></li> |
+ | **vSANChecksumDisabled** | Default is $false. <ul><li> Enable or disable checksum for the policy. <li>Valid values are $true or $false. <li> **WARNING** - Disabling checksum may lead to data LOSS and/or corruption. <li> Recommended value is $false.</ul></li> |
+ | **vSANCacheReservation** | Default is 0. <ul><li>Valid values are 0..100. <li>Percentage of cache reservation for the policy.</ul></li> |
+ | **vSANIOLimit** | Default is unset. <ul><li>Valid values are 0..2147483647. <li>IOPS limit for the policy.</ul></li> |
+    | **vSANDiskStripesPerObject** | Default is 1. Valid values are 1..12. <ul><li>The number of HDDs across which each replica of a storage object is striped. <li>A value higher than 1 may result in better performance (for example, when flash read cache misses need to be serviced from HDD), but also results in higher use of system resources.</ul></li> |
+ | **vSANObjectSpaceReservation** | Default is 0. Valid values are 0..100. <ul><li>Object Reservation. <li>0=Thin Provision <li>100=Thick Provision</ul></li> |
+ | **VMEncryption** | Default is None. <ul><li> Valid values are None, PreIO, PostIO. <li>PreIO allows VAIO filtering solutions to capture data prior to VM encryption. <li>PostIO allows VAIO filtering solutions to capture data after VM encryption.</ul></li> |
+    | **vSANFailuresToTolerate** | Default is "R1FTT1". <ul><li> Valid values are "None", "R1FTT1", "R1FTT2", "R1FTT3", "R5FTT1", "R6FTT2" <li> None = No Data Redundancy<li> R1FTT1 = 1 failure - RAID-1 (Mirroring)<li> R1FTT2 = 2 failures - RAID-1 (Mirroring)<li> R1FTT3 = 3 failures - RAID-1 (Mirroring)<li> R5FTT1 = 1 failure - RAID-5 (Erasure Coding) <li> R6FTT2 = 2 failures - RAID-6 (Erasure Coding) <li> No Data Redundancy options are not covered under Microsoft SLA. </li></ul>|
+ | **vSANSiteDisasterTolerance** | Default is "None". <ul><li> Valid Values are "None", "Dual", "Preferred", "Secondary", "NoneStretch" <li> None = No Site Redundancy (Recommended Option for Non-Stretch Clusters, NOT recommended for Stretch Clusters) <li> Dual = Dual Site Redundancy (Recommended Option for Stretch Clusters) <li> Preferred = No site redundancy - keep data on Preferred (stretched cluster) <li> Secondary = No site redundancy - Keep data on Secondary Site (stretched cluster) <li>NoneStretch = No site redundancy - Not Recommended (https://kb.vmware.com/s/article/88358)<li> Only valid for stretch clusters.</li></ul> |
+ | **Description** | Description of Storage Policy you are creating, free form text. |
+ | **Name** | Name of the storage policy to set. For example, **RAID-FTT-1**. |
+ | **Retain up to** | Retention period of the cmdlet output. The default value is 60. |
+ | **Specify name for execution** | Alphanumeric name, for example, **New-AVSStoragePolicy-Exec1**. |
+ | **Timeout** | The period after which a cmdlet exits if taking too long to finish. |
+
+1. Check **Notifications** to see the progress.
+
+## Remove AVS Storage Policy
+You'll run the `Remove-AVSStoragePolicy` cmdlet to delete an existing storage policy.
++
+1. Select **Run command** > **Packages** > **Remove-AVSStoragePolicy**.
+
+1. Provide the required values or change the default values, and then select **Run**.
+
+ | **Field** | **Value** |
+ | | |
+ | **Name** | Name of Storage Policy. Wildcards are not supported and will be stripped. |
+ | **Retain up to** | Retention period of the cmdlet output. The default value is 60. |
+ | **Specify name for execution** | Alphanumeric name, for example, **Remove-AVSStoragePolicy-Exec1**. |
+ | **Timeout** | The period after which a cmdlet exits if taking too long to finish. |
+
+1. Check **Notifications** to see the progress.
## Next steps
azure-vmware Vmware Hcx Mon Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/vmware-hcx-mon-guidance.md
Last updated 3/24/2023
[HCX Mobility Optimized Networking (MON)](https://docs.vmware.com/en/VMware-HCX/4.2/hcx-user-guide/GUID-0E254D74-60A9-479C-825D-F373C41F40BC.html) is an optional feature to enable when using [HCX Network Extensions (NE)](configure-hcx-network-extension.md). MON provides optimal traffic routing under certain scenarios to prevent network tromboning between the on-premises and cloud-based resources on extended networks.
-As MON is an enterprise capability of the NE feature, make sure you've enabled the [VMware HCX Enterprise](https://cloud.vmware.com/community/2019/08/08/introducing-hcx-enterprise/) add-on through a [support request](https://portal.azure.com/#create/Microsoft.Support).
+As MON is an enterprise capability of the NE feature, make sure you've [enabled VMware HCX Enterprise](/azure/azure-vmware/install-vmware-hcx#hcx-license-edition) through the Azure portal.
Throughout the migration cycle, MON optimizes application mobility for:
backup Azure Kubernetes Service Cluster Backup Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup-concept.md
To enable backup for an AKS cluster, see the following prerequisites: .
- Before installing Backup Extension in the AKS cluster, ensure that the CSI drivers and snapshots are enabled for your cluster. If disabled, see [these steps to enable them](../aks/csi-storage-drivers.md#enable-csi-storage-drivers-on-an-existing-cluster). -- Backup Extension uses the AKS clusterΓÇÖs Managed System Identity to perform backup operations. So, ASK backup doesn't support AKS clusters using Service Principal. You can [update your AKS cluster to use Managed System Identity](../aks/use-managed-identity.md#update-an-aks-cluster-to-use-a-managed-identity).
+- Backup Extension uses the AKS cluster's Managed System Identity to perform backup operations. So, AKS backup doesn't support AKS clusters using Service Principal. You can [update your AKS cluster to use Managed System Identity](../aks/use-managed-identity.md#enable-managed-identities-on-an-existing-aks-cluster).
>[!Note] >Only Managed System Identity based AKS clusters are supported by AKS backup. The support for User Identity based AKS clusters is currently not available.
backup Azure Kubernetes Service Cluster Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup-support-matrix.md
AKS backup is available in all the Azure public cloud regions, East US, North Eu
- Before you install the Backup Extension in the AKS cluster, ensure that the *CSI drivers*, and *snapshot* are enabled for your cluster. If disabled, [enable these settings](../aks/csi-storage-drivers.md#enable-csi-storage-drivers-on-an-existing-cluster). -- The Backup Extension uses the AKS cluster's Managed System Identity to perform backup operations. So, AKS clusters using *Service Principal* aren't supported by ASK backup. You can [update your AKS cluster to use Managed System Identity](../aks/use-managed-identity.md#update-an-aks-cluster-to-use-a-managed-identity).
+- The Backup Extension uses the AKS cluster's Managed System Identity to perform backup operations. So, AKS clusters using *Service Principal* aren't supported by AKS backup. You can [update your AKS cluster to use Managed System Identity](../aks/use-managed-identity.md#enable-managed-identities-on-an-existing-aks-cluster).
- You must install Backup Extension in the AKS cluster. If you're using Azure CLI to install the Backup Extension, ensure that the CLI version is to *2.41* or later. Use `az upgrade` command to upgrade Azure CLI.
backup Azure Kubernetes Service Cluster Manage Backups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-manage-backups.md
Title: Manage Azure Kubernetes Service (AKS) backups using Azure Backup
description: This article explains how to manage Azure Kubernetes Service (AKS) backups using Azure Backup. Previously updated : 04/21/2023 Last updated : 04/26/2023
This section provides the set of Azure CLI commands to perform create, update, o
To install the Backup Extension, run the following command: ```azurecli-interactive
- az k8s-extension create --name azure-aks-backup --extension-type Microsoft.DataProtection.Kubernetes --scope cluster --cluster-type managedClusters --cluster-name <aksclustername> --resource-group <aksclusterrg> --release-train stable --configuration-settings blobContainer=<containername> storageAccount=<storageaccountname> storageAccountResourceGroup=<storageaccountrg> storageAccountSubscriptionId=<subscriptionid>
+ az k8s-extension create --name azure-aks-backup --extension-type microsoft.dataprotection.kubernetes --scope cluster --cluster-type managedClusters --cluster-name <aksclustername> --resource-group <aksclusterrg> --release-train stable --configuration-settings blobContainer=<containername> storageAccount=<storageaccountname> storageAccountResourceGroup=<storageaccountrg> storageAccountSubscriptionId=<subscriptionid>
``` ### View Backup Extension installation status
To enable Trusted Access between Backup vault and AKS cluster, use the following
```azurecli-interactive az aks trustedaccess rolebinding create \
- --resource-group <backupvaultrg> \
- --cluster-name <aksclustername> \
- --name <randomRoleBindingName> \
- --source-resource-id /subscriptions/<subscriptionid>/resourcegroups/<backupvaultrg>/providers/Microsoft.DataProtection/BackupVaults/<backupvaultname> \
- --roles Microsoft.DataProtection/backupVaults/backup-operator
+    -g $myResourceGroup \
+    --cluster-name $myAKSCluster \
+    -n <randomRoleBindingName> \
+    --source-resource-id <vaultID> \
+    --roles Microsoft.DataProtection/backupVaults/backup-operator
``` Learn more about [other commands related to Trusted Access](../aks/trusted-access-feature.md#trusted-access-feature-overview).
backup Backup Azure Mabs Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-mabs-troubleshoot.md
Title: Troubleshoot Azure Backup Server
description: Troubleshoot installation, registration of Azure Backup Server, and backup and restore of application workloads. Previously updated : 10/21/2022 Last updated : 04/26/2023
MABS is compatible with most popular antivirus software products. We recommend t
- `\Windows\Microsoft.net\Framework\v2.0.50727\csc.exe` - `\Windows\Microsoft.NET\Framework\v4.0.30319\csc.exe` - For the MARS agent installed on the MABS server, we recommend that you exclude the following files and locations:
- - `C:\Program Files\Microsoft Azure Backup Server\DPM\MARS\Microsoft Azure Recovery Services Agent\bin\cbengine.exe` as a process
- - `C:\Program Files\Microsoft Azure Backup Server\DPM\MARS\Microsoft Azure Recovery Services Agent\folder`
- - Scratch location (if you're not using the standard location)
+ - `C:\Program Files\Microsoft Azure Backup Server\DPM\MARS\Microsoft Azure Recovery Services Agent\bin\cbengine.exe` as a process.
+ - `C:\Program Files\Microsoft Azure Backup Server\DPM\MARS\Microsoft Azure Recovery Services Agent` as a folder.
+ - Scratch location (if you're not using the standard location).
2. **Disable real-time monitoring on the protected server**: Disable the real-time monitoring of **dpmra.exe**, which is located in the folder `C:\Program Files\Microsoft Data Protection Manager\DPM\bin`, on the protected server. 3. **Configure anti-virus software to delete the infected files on protected servers and the MABS server**: To prevent data corruption of replicas and recovery points, configure the antivirus software to delete infected files, rather than automatically cleaning or quarantining them. Automatic cleaning and quarantining might cause the antivirus software to modify files, making changes that MABS can't detect.
backup Backup Azure Private Endpoints Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-private-endpoints-concept.md
Title: Private endpoints for Azure Backup - Overview
description: This article explains about the concept of private endpoints for Azure Backup that helps to perform backups while maintaining the security of your resources. Previously updated : 03/08/2023 Last updated : 04/26/2023
The workload extension running on Azure VM requires connection to at least two s
For a private endpoint enabled vault, the Azure Backup service creates private endpoint for these storage accounts. This prevents any network traffic related to Azure Backup (control plane traffic to service and backup data to storage blob) from leaving the virtual network. In addition to the Azure Backup cloud services, the workload extension and agent require connectivity to the Azure Storage accounts and Azure Active Directory (Azure AD).
-As a pre-requisite, Recovery Services vault requires permissions for creating additional private endpoints in the same Resource Group. We also recommend providing the Recovery Services vault the permissions to create DNS entries in the private DNS zones (`privatelink.blob.core.windows.net`, `privatelink.queue.core.windows.net`). Recovery Services vault searches for private DNS zones in the resource groups where VNet and private endpoint are created. If it has the permissions to add DNS entries in these zones, theyΓÇÖll be created by the vault; otherwise, you must create them manually.
- The following diagram shows how the name resolution works for storage accounts using a private DNS zone. :::image type="content" source="./media/private-endpoints-overview/name-resolution-works-for-storage-accounts-using-private-dns-zone-inline.png" alt-text="Diagram showing how the name resolution works for storage accounts using a private DNS zone." lightbox="./media/private-endpoints-overview/name-resolution-works-for-storage-accounts-using-private-dns-zone-expanded.png":::
backup Backup Azure Private Endpoints Configure Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-private-endpoints-configure-manage.md
Title: How to create and manage private endpoints (with v2 experience) for Azure
description: This article explains how to configure and manage private endpoints for Azure Backup. Previously updated : 03/08/2023 Last updated : 04/26/2023
You'll see an entry for the virtual network for which you've created the private
|Zone |Service | | | |
- |`privatelink.<geo>.backup.windowsazure.com` |Backup |
- |`privatelink.blob.core.windows.net` |Blob |
- |`privatelink.queue.core.windows.net` |Queue |
+ |`*.privatelink.<geo>.backup.windowsazure.com` |Backup |
+ |`*.blob.core.windows.net` |Blob |
+ |`*.queue.core.windows.net` |Queue |
+ |`*.storage.azure.net` |Blob |
>[!NOTE] > In the above text, `<geo>` refers to the region code (for example *eus* and *ne* for East US and North Europe respectively). Refer to the following lists for regions codes:
backup Private Endpoints Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/private-endpoints-overview.md
Title: Private endpoints overview description: Understand the use of private endpoints for Azure Backup and the scenarios where using private endpoints helps maintain the security of your resources. Previously updated : 03/01/2023 Last updated : 04/26/2023
This article will help you understand how private endpoints for Azure Backup wor
- A private endpoint connection for Backup uses a total of 11 private IPs in your subnet, including those used by Azure Backup for storage. This number may be higher for certain Azure regions. So we suggest that you have enough private IPs (/26) available when you attempt to create private endpoints for Backup. - While a Recovery Services vault is used by (both) Azure Backup and Azure Site Recovery, this article discusses use of private endpoints for Azure Backup only. - Private endpoints for Backup donΓÇÖt include access to Azure Active Directory (Azure AD) and the same needs to be ensured separately. So, IPs and FQDNs required for Azure AD to work in a region will need outbound access to be allowed from the secured network when performing backup of databases in Azure VMs and backup using the MARS agent. You can also use NSG tags and Azure Firewall tags for allowing access to Azure AD, as applicable.-- Virtual networks with Network Policies aren't supported for Private Endpoints. You'll need to [disable Network Polices](../private-link/disable-private-endpoint-network-policy.md) before continuing. - You need to re-register the Recovery Services resource provider with the subscription if you registered it before May 1 2020. To re-register the provider, go to your subscription in the Azure portal, navigate to **Resource provider** on the left navigation bar, then select **Microsoft.RecoveryServices** and select **Re-register**. - [Cross-region restore](backup-create-rs-vault.md#set-cross-region-restore) for SQL and SAP HANA database backups aren't supported if the vault has private endpoints enabled. - When you move a Recovery Services vault already using private endpoints to a new tenant, you'll need to update the Recovery Services vault to recreate and reconfigure the vaultΓÇÖs managed identity and create new private endpoints as needed (which should be in the new tenant). If this isn't done, the backup and restore operations will start failing. Also, any Azure role-based access control (Azure RBAC) permissions set up within the subscription will need to be reconfigured.
In addition to these connections when the workload extension or MARS agent is in
| Service | Domain names | | | | | Azure Backup | `*.backup.windowsazure.com` |
-| Azure Storage | `*.blob.core.windows.net` <br><br> `*.queue.core.windows.net` <br><br> `*.blob.storage.azure.net` |
+| Azure Storage | `*.blob.core.windows.net` <br><br> `*.queue.core.windows.net` <br><br> `*.blob.storage.azure.net` <br><br> `*.storage.azure.net` |
| Azure Active Directory (Azure AD) | [Allow access to FQDNs under sections 56 and 59](/microsoft-365/enterprise/urls-and-ip-address-ranges?view=o365-worldwide&preserve-view=true#microsoft-365-common-and-office-online). | When the workload extension or MARS agent is installed for Recovery Services vault with private endpoint, the following endpoints are hit:
When the workload extension or MARS agent is installed for Recovery Services vau
| Service | Domain name | | | | | Azure Backup | `*.privatelink.<geo>.backup.windowsazure.com` |
-| Azure Storage | `*.blob.core.windows.net` <br><br> `*.queue.core.windows.net` <br><br> `*.blob.storage.azure.net` |
+| Azure Storage | `*.blob.core.windows.net` <br><br> `*.queue.core.windows.net` <br><br> `*.blob.storage.azure.net` <br><br> `*.storage.azure.net` |
| Azure Active Directory (Azure AD) | [Allow access to FQDNs under sections 56 and 59](/microsoft-365/enterprise/urls-and-ip-address-ranges?view=o365-worldwide&preserve-view=true#microsoft-365-common-and-office-online). | >[!Note]
backup Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/private-endpoints.md
Title: Create and use private endpoints for Azure Backup description: Understand the process to creating private endpoints for Azure Backup where using private endpoints helps maintain the security of your resources. Previously updated : 02/20/2023 Last updated : 04/26/2023
For **each private DNS** zone listed above (for Backup, Blobs and Queues), do th
![Add virtual network link](./media/private-endpoints/add-virtual-network-link.png)
-### When using custom DNS server or host files
-
-If you're using your custom DNS servers, you'll need to add the DNS records needed by the private endpoints to your DNS servers. You can also use conditional forwarders and redirect the DNS request for the FQDN to Azure DNS. Azure DNS redirects the DNS requests to private DNS zone and resolve them.
-
-#### For the Backup service
-
-1. In your DNS server, create a DNS zone for Backup according to the following naming convention:
-
- |Zone |Service |
- |||
- |`privatelink.<geo>.backup.windowsazure.com` | Backup |
-
- >[!NOTE]
- > In the above text, `<geo>` refers to the region code (for example *eus* and *ne* for East US and North Europe respectively). Refer to the following lists for regions codes:
- >
- > - [All public clouds](https://download.microsoft.com/download/1/2/6/126a410b-0e06-45ed-b2df-84f353034fa1/AzureRegionCodesList.docx)
- > - [China](/azure/china/resources-developer-guide#check-endpoints-in-azure)
- > - [Germany](../germany/germany-developer-guide.md#endpoint-mapping)
- > - [US Gov](../azure-government/documentation-government-developer-guide.md)
- > - [Geo-code list - sample XML](scripts/geo-code-list.md)
-
-1. Next, we need to add the required DNS records. To view the records that need to be added to the Backup DNS zone, navigate to the private endpoint you created above, and go to the **DNS configuration** option under the left navigation bar.
-
- ![DNS configuration for custom DNS server](./media/private-endpoints/custom-dns-configuration.png)
-
-1. Add one entry for each FQDN and IP displayed as A type records in your DNS zone for Backup. If you're using a host file for name resolution, make corresponding entries in the host file for each IP and FQDN according to the following format:
-
- `<private ip><space><backup service privatelink FQDN>`
-
->[!NOTE]
->As shown in the screenshot above, the FQDNs depict `xxxxxxxx.<geo>.backup.windowsazure.com` and not `xxxxxxxx.privatelink.<geo>.backup.windowsazure.com`. In such cases, ensure you include (and if required, add) the `.privatelink.` according to the stated format.
-
-#### For Blob and Queue services
-
-For blobs and queues, you can either use conditional forwarders or create DNS zones in your DNS server.
-
-##### If using conditional forwarders
-
-If you're using conditional forwarders, add forwarders for blob and queue FQDNs as follows:
-
-|FQDN |IP |
-|||
-|`privatelink.blob.core.windows.net` | 168.63.129.16 |
-|`privatelink.queue.core.windows.net` | 168.63.129.16 |
-
-##### If using private DNS zones
-
-If you're using DNS zones for blobs and queues, you'll need to first create these DNS zones and later add the required A records.
-
-|Zone |Service |
-|||
-|`privatelink.blob.core.windows.net` | Blob |
-|`privatelink.queue.core.windows.net` | Queue |
-
-At this moment, we'll only create the zones for blobs and queues when using custom DNS servers. Adding DNS records will be done later in two steps:
-
-1. When you register the first backup instance, that is, when you configure backup for the first time
-1. When you run the first backup
-
-We'll perform these steps in the following sections.
- ## When using custom DNS server or host files - If you're using a custom DNS server, you can use conditional forwarder for backup service, blob, and queue FQDNs to redirect the DNS requests to Azure DNS (168.63.129.16). Azure DNS redirects it to Azure Private DNS zone. In such setup, ensure that a virtual network link for Azure Private DNS zone exists as mentioned in [this section](#when-using-custom-dns-server-or-host-files).
bastion Bastion Connect Vm Rdp Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-connect-vm-rdp-linux.md
description: Learn how to use Azure Bastion to connect to Linux VM using RDP.
Previously updated : 10/18/2022 Last updated : 04/26/2023
Before you begin, verify that you've met the following criteria:
* To use RDP with a Linux virtual machine, you must also ensure that you have xrdp installed and configured on the Linux VM. To learn how to do this, see [Use xrdp with Linux](../virtual-machines/linux/use-remote-desktop.md).
-* Bastion must be configured with the [Standard SKU](configuration-settings.md#skus).
+* This configuration isn't available for the **Basic** SKU. To use this feature, [upgrade the SKU](upgrade-sku.md) to the Standard SKU tier.
* You must use username/password authentication.
In order to make a connection, the following roles are required:
* Reader role on the Azure Bastion resource * Reader role on the virtual network of the target virtual machine (if the Bastion deployment is in a peered virtual network). - ### Ports To connect to the Linux VM via RDP, you must have the following ports open on your VM:
bastion Bastion Connect Vm Ssh Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-connect-vm-ssh-linux.md
description: Learn how to use Azure Bastion to connect to Linux VM using SSH.
Previously updated : 10/18/2022 Last updated : 04/25/2023 - # Create an SSH connection to a Linux VM using Azure Bastion
This article shows you how to securely and seamlessly create an SSH connection t
Azure Bastion provides secure connectivity to all of the VMs in the virtual network in which it's provisioned. Using Azure Bastion protects your virtual machines from exposing RDP/SSH ports to the outside world, while still providing secure access using RDP/SSH. For more information, see the [What is Azure Bastion?](bastion-overview.md) overview article.
-When connecting to a Linux virtual machine using SSH, you can use both username/password and SSH keys for authentication.
-
-The SSH private key must be in a format that begins with `"--BEGIN RSA PRIVATE KEY--"` and ends with `"--END RSA PRIVATE KEY--"`.
+When connecting to a Linux virtual machine using SSH, you can use both username/password and SSH keys for authentication. The SSH private key must be in a format that begins with `"--BEGIN RSA PRIVATE KEY--"` and ends with `"--END RSA PRIVATE KEY--"`.
## Prerequisites
-Make sure that you have set up an Azure Bastion host for the virtual network in which the VM resides. For more information, see [Create an Azure Bastion host](./tutorial-create-host-portal.md). Once the Bastion service is provisioned and deployed in your virtual network, you can use it to connect to any VM in this virtual network.
+Make sure that you have set up an Azure Bastion host for the virtual network in which the VM resides. For more information, see [Create an Azure Bastion host](./tutorial-create-host-portal.md). Once the Bastion service is provisioned and deployed in your virtual network, you can use it to connect to any VM in this virtual network.
+
+The connection settings and features that are available depend on the Bastion SKU you're using.
+
+* To see the available features and settings per SKU tier, see the [SKUs and features](bastion-overview.md#sku) section of the Bastion overview article.
+* To check the SKU tier of your Bastion deployment and upgrade if necessary, see [Upgrade a Bastion SKU](upgrade-sku.md).
### Required roles In order to make a connection, the following roles are required:
-* Reader role on the virtual machine
-* Reader role on the NIC with private IP of the virtual machine
-* Reader role on the Azure Bastion resource
-* Reader role on the virtual network of the target virtual machine (if the Bastion deployment is in a peered virtual network)
+* Reader role on the virtual machine.
+* Reader role on the NIC with private IP of the virtual machine.
+* Reader role on the Azure Bastion resource.
+* Reader role on the virtual network of the target virtual machine (if the Bastion deployment is in a peered virtual network).
### Ports In order to connect to the Linux VM via SSH, you must have the following ports open on your VM: * Inbound port: SSH (22) ***or***
-* Inbound port: Custom value (you'll then need to specify this custom port when you connect to the VM via Azure Bastion)
-
- > [!NOTE]
- > If you want to specify a custom port value, Azure Bastion must be configured using the Standard SKU. The Basic SKU does not allow you to specify custom ports.
- >
+* Inbound port: Custom value (you'll then need to specify this custom port when you connect to the VM via Azure Bastion). This setting requires the **Standard** SKU tier.
## Bastion connection page
-1. In the [Azure portal](https://portal.azure.com), go to the virtual machine that you want to connect to. On the **Overview** page, select **Connect**, then select **Bastion** from the dropdown to open the Bastion connection page. You can also select **Bastion** from the left pane.
+1. In the [Azure portal](https://portal.azure.com), go to the virtual machine to which you want to connect. On the **Overview** page for the virtual machine, select **Connect**, then select **Bastion** from the dropdown to open the Bastion page.
- :::image type="content" source="./media/bastion-connect-vm-ssh-linux/connect.png" alt-text="Screenshot shows the overview for a virtual machine in Azure portal with Connect selected" lightbox="./media/bastion-connect-vm-ssh-linux/connect.png":::
+ :::image type="content" source="./media/bastion-connect-vm-ssh-linux/bastion.png" alt-text="Screenshot shows the Overview page for a virtual machine." lightbox="./media/bastion-connect-vm-ssh-linux/bastion.png":::
-1. On the **Bastion** connection page, click the **Connection Settings** arrow to expand all the available settings. If you are using a Bastion **Standard** SKU, you have more available settings than a Basic SKU.
+1. On the **Bastion** page, the settings that you can configure depend on the Bastion [SKU](bastion-overview.md#sku) tier that your bastion host has been configured to use.
- :::image type="content" source="./media/bastion-connect-vm-ssh-linux/connection-settings.png" alt-text="Screenshot shows connection settings.":::
+ * If you're using the **Standard** SKU, **Connection Settings** values (ports and protocols) are visible and can be configured.
-1. Authenticate and connect using one of the methods in the following sections.
+ :::image type="content" source="./media/bastion-connect-vm-ssh-linux/bastion-connect-full.png" alt-text="Screenshot shows connection settings for the Standard SKU." lightbox="./media/bastion-connect-vm-ssh-linux/bastion-connect-full.png":::
- * [Username and password](#username-and-password)
- * [Private key from local file](#private-key-from-local-file)
- * [Password - Azure Key Vault](#passwordazure-key-vault)
- * [Private key - Azure Key Vault](#private-keyazure-key-vault)
+ * If you're using the **Basic** SKU, you can't configure **Connection Settings** values. Instead, your connection uses the following default settings: SSH and port 22.
-## Username and password
+ :::image type="content" source="./media/bastion-connect-vm-ssh-linux/basic.png" alt-text="Screenshot shows connection settings for the Basic SKU." lightbox="./media/bastion-connect-vm-ssh-linux/basic.png":::
-Use the following steps to authenticate using username and password.
+ * To view and select an available **Authentication Type**, use the dropdown.
+ :::image type="content" source="./media/bastion-connect-vm-ssh-linux/authentication-type.png" alt-text="Screenshot shows authentication type settings." lightbox="./media/bastion-connect-vm-ssh-linux/authentication-type.png":::
-1. To authenticate using a username and password, configure the following settings:
+1. Use the following sections in this article to configure authentication settings and connect to your VM.
- * **Protocol**: Select SSH.
- * **Port**: Input the port number. Custom port connections are available for the Standard SKU only.
- * **Authentication type**: Select **Password** from the dropdown.
- * **Username**: Enter the username.
- * **Password**: Enter the **Password**.
+ * [Username and password](#password-authentication)
+ * [Password - Azure Key Vault](#password-authenticationazure-key-vault)
+ * [SSH private key from local file](#ssh-private-key-authenticationlocal-file)
+ * [SSH private key - Azure Key Vault](#ssh-private-key-authenticationazure-key-vault)
-1. To work with the VM in a new browser tab, select **Open in new browser tab**.
+## Password authentication
-1. Click **Connect** to connect to the VM.
+Use the following steps to authenticate using username and password.
-## Private key from local file
-Use the following steps to authenticate using an SSH private key from a local file.
+1. To authenticate using a username and password, configure the following settings.
+ * **Connection Settings** (Standard SKU only)
-1. To authenticate using a private key from a local file, configure the following settings:
+ * **Protocol**: Select SSH.
+ * **Port**: Specify the port number.
- * **Protocol**: Select SSH.
- * **Port**: Input the port number. Custom port connections are available for the Standard SKU only.
- * **Authentication type**: Select **SSH Private Key from Local File** from the dropdown.
- * **Local File**: Select the local file.
- * **SSH Passphrase**: Enter the SSH passphrase if necessary.
+ * **Authentication type**: Select **Password** from the dropdown.
+ * **Username**: Enter the username.
+ * **Password**: Enter the **Password**.
1. To work with the VM in a new browser tab, select **Open in new browser tab**. 1. Click **Connect** to connect to the VM.
-## Password - Azure Key Vault
+## Password authentication - Azure Key Vault
Use the following steps to authenticate using a password from Azure Key Vault.
-1. To authenticate using a password from Azure Key Vault, configure the following settings:
+1. To authenticate using a password from Azure Key Vault, configure the following settings.
- * **Protocol**: Select SSH.
- * **Port**: Input the port number. Custom port connections are available for the Standard SKU only.
+ * **Connection Settings** (Standard SKU only)
+
+ * **Protocol**: Select SSH.
+ * **Port**: Specify the port number.
* **Authentication type**: Select **Password from Azure Key Vault** from the dropdown. * **Username**: Enter the username. * **Subscription**: Select the subscription.
Use the following steps to authenticate using a password from Azure Key Vault.
1. Click **Connect** to connect to the VM.
-## Private key - Azure Key Vault
+## SSH private key authentication - local file
+
+Use the following steps to authenticate using an SSH private key from a local file.
++
+1. To authenticate using a private key from a local file, configure the following settings.
+
+ * **Connection Settings** (Standard SKU only)
+
+ * **Protocol**: Select SSH.
+ * **Port**: Specify the port number.
+ * **Authentication type**: Select **SSH Private Key from Local File** from the dropdown.
+ * **Username**: Enter the username.
+ * **Local File**: Select the local file.
+ * **SSH Passphrase**: Enter the SSH passphrase if necessary.
+
+1. To work with the VM in a new browser tab, select **Open in new browser tab**.
+
+1. Click **Connect** to connect to the VM.
+
+## SSH private key authentication - Azure Key Vault
Use the following steps to authenticate using a private key stored in Azure Key Vault. +
+1. To authenticate using a private key stored in Azure Key Vault, configure the following settings. For the Basic SKU, connection settings can't be configured; the connection instead uses the defaults: SSH and port 22.
-1. To authenticate using a private key stored in Azure Key Vault, configure the following settings:
+ * **Connection Settings** (Standard SKU only)
- * **Protocol**: Select SSH.
- * **Port**: Input the port number. Custom port connections are available for the Standard SKU only.
+ * **Protocol**: Select SSH.
+ * **Port**: Specify the port number.
* **Authentication type**: Select **SSH Private Key from Azure Key Vault** from the dropdown. * **Username**: Enter the username. * **Subscription**: Select the subscription.
bastion Bastion Connect Vm Ssh Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-connect-vm-ssh-windows.md
Currently, Azure Bastion only supports connecting to Windows VMs via SSH using *
1. In the [Azure portal](https://portal.azure.com), go to the virtual machine that you want to connect to. On the **Overview** page, select **Connect**, then select **Bastion** from the dropdown to open the Bastion connection page. You can also select **Bastion** from the left pane.
- :::image type="content" source="./media/bastion-connect-vm-ssh-windows/connect.png" alt-text="Screenshot shows the overview for a virtual machine in Azure portal with Connect selected." lightbox="./media/bastion-connect-vm-ssh-linux/connect.png":::
+ :::image type="content" source="./media/bastion-connect-vm-ssh-windows/connect.png" alt-text="Screenshot shows the overview for a virtual machine in Azure portal with Connect selected." lightbox="./media/bastion-connect-vm-ssh-windows/connect.png":::
1. On the **Bastion** connection page, click the **Connection Settings** arrow to expand all the available settings. If you are using a Bastion **Standard** SKU, you have more available settings than a Basic SKU.
bastion Shareable Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/shareable-link.md
By default, users in your org will have only read access to shared links. If a u
## Considerations
-* Shareable Links isn't currently supported for peered VNEts across tenants.
+* Shareable Links isn't currently supported for peered VNETs across tenants.
* Shareable Links isn't supported for national clouds during preview. * The Standard SKU is required for this feature.
batch Tutorial Parallel Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/tutorial-parallel-python.md
Sign in to the [Azure portal](https://portal.azure.com).
[Download or clone the sample app](https://github.com/Azure-Samples/batch-python-ffmpeg-tutorial) from GitHub. To clone the sample app repo with a Git client, use the following command:
-```
+```bash
git clone https://github.com/Azure-Samples/batch-python-ffmpeg-tutorial.git ```
_STORAGE_ACCOUNT_KEY = 'xxxxxxxxxxxxxxxxy4/xxxxxxxxxxxxxxxxfwpbIC5aAWA8wDu+AFXZB
To run the script:
-```
+```bash
python batch_python_tutorial_ffmpeg.py ```
batch Tutorial Run Python Batch Azure Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/tutorial-run-python-batch-azure-data-factory.md
Title: Tutorial - Run Python scripts through Data Factory
-description: Learn how to run Python scripts as part of a pipeline through Azure Data Factory using Azure Batch.
+ Title: 'Tutorial: Run a Batch job through Azure Data Factory'
+description: Learn how to use Batch Explorer, Azure Storage Explorer, and a Python script to run a Batch workload through an Azure Data Factory pipeline.
ms.devlang: python Previously updated : 03/12/2021 Last updated : 04/20/2023
-# Tutorial: Run Python scripts through Azure Data Factory using Azure Batch
+# Tutorial: Run a Batch job through Data Factory with Batch Explorer, Storage Explorer, and Python
+
+This tutorial walks you through creating and running an Azure Data Factory pipeline that runs an Azure Batch workload. A Python script runs on the Batch nodes to get comma-separated value (CSV) input from an Azure Blob Storage container, manipulate the data, and write the output to a different storage container. You use Batch Explorer to create a Batch pool and nodes, and Azure Storage Explorer to work with storage containers and files.
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Authenticate with Batch and Storage accounts
-> * Develop and run a script in Python
-> * Create a pool of compute nodes to run an application
-> * Schedule your Python workloads
-> * Monitor your analytics pipeline
-> * Access your logfiles
+> - Use Batch Explorer to create a Batch pool and nodes.
+> - Use Storage Explorer to create storage containers and upload input files.
+> - Develop a Python script to manipulate input data and produce output.
+> - Create a Data Factory pipeline that runs the Batch workload.
+> - Use Batch Explorer to look at the output log files.
-The example below runs a Python script that receives CSV input from a blob storage container, performs a data manipulation process, and writes the output to a separate blob storage container.
+## Prerequisites
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- An Azure account with an active subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free).
+- A Batch account with a linked Azure Storage account. You can create the accounts by using any of the following methods: [Azure portal](quick-create-portal.md) | [Azure CLI](quick-create-cli.md) | [Bicep](quick-create-bicep.md) | [ARM template](quick-create-template.md) | [Terraform](quick-create-terraform.md).
+- A Data Factory instance. To create the data factory, follow the instructions in [Create a data factory](/azure/data-factory/quickstart-create-data-factory-portal#create-a-data-factory).
+- [Batch Explorer](https://azure.github.io/BatchExplorer) downloaded and installed.
+- [Storage Explorer](https://azure.microsoft.com/products/storage/storage-explorer) downloaded and installed.
+- [Python 3.7 or above](https://www.python.org/downloads), with the [azure-storage-blob](https://pypi.org/project/azure-storage-blob) package installed by using `pip`.
+- The [iris.csv input dataset](https://github.com/Azure-Samples/batch-adf-pipeline-tutorial/blob/master/iris.csv) downloaded from GitHub.
-## Prerequisites
+## Use Batch Explorer to create a Batch pool and nodes
+
+Use Batch Explorer to create a pool of compute nodes to run your workload.
+
+1. Sign in to Batch Explorer with your Azure credentials.
+1. Select your Batch account.
+1. Select **Pools** on the left sidebar, and then select the **+** icon to add a pool.
+
+ [ ![Screenshot of creating a pool in Batch Explorer.](media/run-python-batch-azure-data-factory/batch-explorer-add-pool.png)](media/run-python-batch-azure-data-factory/batch-explorer-add-pool.png#lightbox)
-* An installed [Python](https://www.python.org/downloads/) distribution, for local testing.
-* The [azure-storage-blob](https://pypi.org/project/azure-storage-blob/) `pip` package.
-* The [iris.csv dataset](https://github.com/Azure-Samples/batch-adf-pipeline-tutorial/blob/master/iris.csv)
-* An Azure Batch account and a linked Azure Storage account. See [Create a Batch account](quick-create-portal.md#create-a-batch-account) for more information on how to create and link Batch accounts to storage accounts.
-* An Azure Data Factory account. See [Create a data factory](../data-factory/quickstart-create-data-factory-portal.md#create-a-data-factory) for more information on how to create a data factory through the Azure portal.
-* [Batch Explorer](https://azure.github.io/BatchExplorer/).
-* [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/).
+1. Complete the **Add a pool to the account** form as follows:
-## Sign in to Azure
+ - Under **ID**, enter *custom-activity-pool*.
+ - Under **Dedicated nodes**, enter *2*.
+ - For **Select an operating system configuration**, select the **Data science** tab, and then select **Dsvm Win 2019**.
+ - For **Choose a virtual machine size**, select **Standard_F2s_v2**.
+ - For **Start Task**, select **Add a start task**.
   On the start task screen, under **Command line**, enter `cmd /c "pip install azure-storage-blob pandas"`, and then select **Select**. This command installs the `azure-storage-blob` and `pandas` packages on each node as it starts up.
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+1. Select **Save and close**.
+## Use Storage Explorer to create blob containers
-## Create a Batch pool using Batch Explorer
+Use Storage Explorer to create blob containers to store input and output files, and then upload your input files.
-In this section, you'll use Batch Explorer to create the Batch pool that your Azure Data factory pipeline will use.
+1. Sign in to Storage Explorer with your Azure credentials.
+1. In the left sidebar, locate and expand the storage account that's linked to your Batch account.
+1. Right-click **Blob Containers**, and select **Create Blob Container**, or select **Create Blob Container** from **Actions** at the bottom of the sidebar.
+1. Enter *input* in the entry field.
+1. Create another blob container named *output*.
+1. Select the **input** container, and then select **Upload** > **Upload files** in the right pane.
+1. On the **Upload files** screen, under **Selected files**, select the ellipsis **...** next to the entry field.
+1. Browse to the location of your downloaded *iris.csv* file, select **Open**, and then select **Upload**.
-1. Sign in to Batch Explorer using your Azure credentials.
-1. Select your Batch account
-1. Create a pool by selecting **Pools** on the left side bar, then the **Add** button above the search form.
- 1. Choose an ID and display name. We'll use `custom-activity-pool` for this example.
- 1. Set the scale type to **Fixed size**, and set the dedicated node count to 2.
- 1. Under **Image Type**, select **Marketplace** as the operating system and **Publisher** as **microsoft-dsvm**
- 1. Choose `Standard_f2s_v2` as the virtual machine size.
- 1. Enable the start task and add the command `cmd /c "pip install azure-storage-blob pandas"`. The user identity can remain as the default **Pool user**.
- 1. Select **OK**.
+[ ![Screenshot of Storage Explorer with containers and blobs created in the storage account.](media/run-python-batch-azure-data-factory/storage-explorer.png)](media/run-python-batch-azure-data-factory/storage-explorer.png#lightbox)
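If you'd rather script these steps than click through Storage Explorer, a rough equivalent using the `azure-storage-blob` package (already listed in the prerequisites) might look like the following sketch; the connection string placeholder is the one for the storage account linked to your Batch account.

```python
from azure.core.exceptions import ResourceExistsError
from azure.storage.blob import BlobServiceClient

# Placeholder: connection string of the storage account linked to your Batch account.
service = BlobServiceClient.from_connection_string("<storage-account-connection-string>")

# Create the input and output containers used by this tutorial.
for name in ("input", "output"):
    try:
        service.create_container(name)
    except ResourceExistsError:
        pass  # the container already exists

# Upload the downloaded iris.csv file to the input container.
with open("iris.csv", "rb") as data:
    service.get_blob_client(container="input", blob="iris.csv").upload_blob(data, overwrite=True)
```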
+## Develop a Python script
-## Create blob containers
+The following Python script loads the *iris.csv* dataset file from your Storage Explorer **input** container, manipulates the data, and saves the results to the **output** container.
-Here you'll create blob containers that will store your input and output files for the OCR Batch job.
+The script needs to use the connection string for the Azure Storage account that's linked to your Batch account. To get the connection string:
-1. Sign in to Storage Explorer using your Azure credentials.
-1. Using the storage account linked to your Batch account, create two blob containers (one for input files, one for output files) by following the steps at [Create a blob container](../vs-azure-tools-storage-explorer-blobs.md#create-a-blob-container).
- * In this example, we'll call our input container `input`, and our output container `output`.
-1. Upload [`iris.csv`](https://github.com/Azure-Samples/batch-adf-pipeline-tutorial/blob/master/iris.csv) to your input container `input` using Storage Explorer by following the steps at [Managing blobs in a blob container](../vs-azure-tools-storage-explorer-blobs.md#managing-blobs-in-a-blob-container)
+1. In the [Azure portal](https://portal.azure.com), search for and select the name of the storage account that's linked to your Batch account.
+1. On the page for the storage account, select **Access keys** from the left navigation under **Security + networking**.
+1. Under **key1**, select **Show** next to **Connection string**, and then select the **Copy** icon to copy the connection string.
-## Develop a script in Python
+Paste the connection string into the following script, replacing the `<storage-account-connection-string>` placeholder. Save the script as a file named *main.py*.
-The following Python script loads the `iris.csv` dataset from your `input` container, performs a data manipulation process, and saves the results back to the `output` container.
+>[!IMPORTANT]
+>Exposing account keys in the app source isn't recommended for production use. You should restrict access to credentials and refer to them in your code by using variables or a configuration file. It's best to store Batch and Storage account keys in Azure Key Vault.
``` python # Load libraries
df = pd.read_csv("iris.csv")
# Take a subset of the records df = df[df['Species'] == "setosa"]
-# Save the subset of the iris dataframe locally in task node
+# Save the subset of the iris dataframe locally in the task node
df.to_csv(outputBlobName, index = False) with open(outputBlobName, "rb") as data:
- blob.upload_blob(data)
+ blob.upload_blob(data, overwrite=True)
```
-Save the script as `main.py` and upload it to the **Azure Storage** `input` container. Be sure to test and validate its functionality locally before uploading it to your blob container:
+Run the script locally to test and validate functionality.
``` bash python main.py ```
-## Set up an Azure Data Factory pipeline
+The script should produce an output file named *iris_setosa.csv* that contains only the data records that have Species = setosa. After you verify that it works correctly, upload the *main.py* script file to your Storage Explorer **input** container.
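In line with the earlier note about keeping account keys out of source code, one possible variation (not from the tutorial) is to have *main.py* read the connection string from an environment variable instead of a pasted literal; the variable name below is just an example you define yourself.

```python
import os

from azure.storage.blob import BlobServiceClient

# AZURE_STORAGE_CONNECTION_STRING is an example name; set it in your local shell or in the
# pool/task environment settings rather than hard-coding the secret in main.py.
connection_string = os.environ["AZURE_STORAGE_CONNECTION_STRING"]
blob_service_client = BlobServiceClient.from_connection_string(connection_string)
```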
-In this section, you'll create and validate a pipeline using your Python script.
+## Set up a Data Factory pipeline
-1. Follow the steps to create a data factory under the "Create a data factory" section of [this article](../data-factory/quickstart-create-data-factory-portal.md#create-a-data-factory).
-1. In the **Factory Resources** box, select the + (plus) button and then select **Pipeline**
-1. In the **General** tab, set the name of the pipeline as "Run Python"
+Create and validate a Data Factory pipeline that uses your Python script.
- ![In the General tab, set the name of the pipeline as "Run Python"](./media/run-python-batch-azure-data-factory/create-pipeline.png)
+### Get account information
-1. In the **Activities** box, expand **Batch Service**. Drag the custom activity from the **Activities** toolbox to the pipeline designer surface. Fill out the following tabs for the custom activity:
- 1. In the **General** tab, specify **testPipeline** for Name
- ![In the General tab, specify testPipeline for Name](./media/run-python-batch-azure-data-factory/create-custom-task.png)
- 1. In the **Azure Batch** tab, add the **Batch Account** that was created in the previous steps and **Test connection** to ensure that it is successful.
- ![In the Azure Batch tab, add the Batch Account that was created in the previous steps, then test connection](./media/run-python-batch-azure-data-factory/integrate-pipeline-with-azure-batch.png)
- 1. In the **Settings** tab:
- 1. Set the **Command** as `cmd /C python main.py`.
- 1. For the **Resource Linked Service**, add the storage account that was created in the previous steps. Test the connection to ensure it is successful.
- 1. In the **Folder Path**, select the name of the **Azure Blob Storage** container that contains the Python script and the associated inputs. This will download the selected files from the container to the pool node instances before the execution of the Python script.
+The Data Factory pipeline uses your Batch and Storage account names, account key values, and Batch account endpoint. To get this information from the [Azure portal](https://portal.azure.com):
- ![In the Folder Path, select the name of the Azure Blob Storage container](./media/run-python-batch-azure-data-factory/create-custom-task-py-script-command.png)
+1. From the Azure Search bar, search for and select your Batch account name.
+1. On your Batch account page, select **Keys** from the left navigation.
+1. On the **Keys** page, copy the following values:
-1. Click **Validate** on the pipeline toolbar above the canvas to validate the pipeline settings. Confirm that the pipeline has been successfully validated. To close the validation output, select the &gt;&gt; (right arrow) button.
-1. Click **Debug** to test the pipeline and ensure it works accurately.
-1. Click **Publish** to publish the pipeline.
-1. Click **Trigger** to run the Python script as part of a batch process.
+ - **Batch account**
+ - **Account endpoint**
+ - **Primary access key**
+ - **Storage account name**
+ - **Key1**
- ![Click Trigger to run the Python script as part of a batch process](./media/run-python-batch-azure-data-factory/create-custom-task-py-success-run.png)
+### Create and run the pipeline
-### Monitor the log files
+1. If Azure Data Factory Studio isn't already running, select **Launch studio** on your Data Factory page in the Azure portal.
+1. In Data Factory Studio, select the **Author** pencil icon in the left navigation.
+1. Under **Factory Resources**, select the **+** icon, and then select **Pipeline**.
+1. In the **Properties** pane on the right, change the name of the pipeline to *Run Python*.
-In case warnings or errors are produced by the execution of your script, you can check out `stdout.txt` or `stderr.txt` for more information on output that was logged.
+ [ ![Screenshot of Data Factory Studio after you select Add pipeline.](media/run-python-batch-azure-data-factory/create-pipeline.png)](media/run-python-batch-azure-data-factory/create-pipeline.png#lightbox)
-1. Select **Jobs** from the left-hand side of Batch Explorer.
-1. Choose the job created by your data factory. Assuming you named your pool `custom-activity-pool`, select `adfv2-custom-activity-pool`.
-1. Click on the task that had a failure exit code.
-1. View `stdout.txt` and `stderr.txt` to investigate and diagnose your problem.
+1. In the **Activities** pane, expand **Batch Service**, and drag the **Custom** activity to the pipeline designer surface.
+1. Below the designer canvas, on the **General** tab, enter *testPipeline* under **Name**.
-## Clean up resources
+ ![Screenshot of the General tab for creating a pipeline task.](media/run-python-batch-azure-data-factory/create-custom-task.png)
-Although you're not charged for jobs and tasks themselves, you are charged for compute nodes. Thus, we recommend that you allocate pools only as needed. When you delete the pool, all task output on the nodes is deleted. However, the input and output files remain in the storage account. When no longer needed, you can also delete the Batch account and the storage account.
+1. Select the **Azure Batch** tab, and then select **New**.
+1. Complete the **New linked service** form as follows:
-## Next steps
+ - **Name**: Enter a name for the linked service, such as **AzureBatch1**.
+ - **Access key**: Enter the primary access key you copied from your Batch account.
+ - **Account name**: Enter your Batch account name.
+ - **Batch URL**: Enter the account endpoint you copied from your Batch account, such as `https://batchdotnet.eastus.batch.azure.com`.
+ - **Pool name**: Enter *custom-activity-pool*, the pool you created in Batch Explorer.
+ - **Storage account linked service name**: Select **New**. On the next screen, enter a **Name** for the linked storage service, such as *AzureBlobStorage1*, select your Azure subscription and linked storage account, and then select **Create**.
-In this tutorial, you learned how to:
+1. At the bottom of the Batch **New linked service** screen, select **Test connection**. When the connection is successful, select **Create**.
-> [!div class="checklist"]
-> * Authenticate with Batch and Storage accounts
-> * Develop and run a script in Python
-> * Create a pool of compute nodes to run an application
-> * Schedule your Python workloads
-> * Monitor your analytics pipeline
-> * Access your logfiles
+ ![Screenshot of the New linked service screen for the Batch job.](media/run-python-batch-azure-data-factory/integrate-pipeline-with-azure-batch.png)
+
+1. Select the **Settings** tab, and enter or select the following settings:
+
+ - **Command**: Enter `cmd /C python main.py`.
+ - **Resource linked service**: Select the linked storage service you created, such as **AzureBlobStorage1**, and test the connection to make sure it's successful.
+ - **Folder path**: Select the folder icon, and then select the **input** container and select **OK**. The files from this folder download from the container to the pool nodes before the Python script runs.
-To learn more about Azure Data Factory, see:
+ ![Screenshot of the Settings tab for the Batch job.](./media/run-python-batch-azure-data-factory/create-custom-task-py-script-command.png)
+
+1. Select **Validate** on the pipeline toolbar to validate the pipeline.
+1. Select **Debug** to test the pipeline and ensure it works correctly.
+1. Select **Publish all** to publish the pipeline.
+1. Select **Add trigger**, and then select **Trigger now** to run the pipeline, or **New/Edit** to schedule it.
+
+ ![Screenshot of Validate, Debug, Publish all, and Add trigger selections in Data Factory.](./media/run-python-batch-azure-data-factory/create-custom-task-py-success-run.png)
+
+## Use Batch Explorer to view log files
+
+If running your pipeline produces warnings or errors, you can use Batch Explorer to look at the *stdout.txt* and *stderr.txt* output files for more information.
+
+1. In Batch Explorer, select **Jobs** from the left sidebar.
+1. Select the **adfv2-custom-activity-pool** job.
+1. Select a task that had a failure exit code.
+1. View the *stdout.txt* and *stderr.txt* files to investigate and diagnose your problem.
+
+## Clean up resources
+
+Batch accounts, jobs, and tasks are free, but compute nodes incur charges even when they're not running jobs. It's best to allocate node pools only as needed, and delete the pools when you're done with them. Deleting pools deletes all task output on the nodes, and the nodes themselves.
+
+Input and output files remain in the storage account and can incur charges. When you no longer need the files, you can delete the files or containers. When you no longer need your Batch account or linked storage account, you can delete them.
+
+## Next steps
-> [!div class="nextstepaction"]
-> [Azure Data Factory overview](../data-factory/introduction.md)
+In this tutorial, you learned how to use a Python script with Batch Explorer, Storage Explorer, and Data Factory to run a Batch workload. For more information about Data Factory, see [What is Azure Data Factory?](/azure/data-factory/introduction)
batch Virtual File Mount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/virtual-file-mount.md
You can mount an Azure file share on a Batch pool using [Azure PowerShell](/powe
1. Sign in to your Azure subscription.
- ```powershell
+ ```powershell-interactive
Connect-AzAccount -Subscription "<subscription-ID>" ``` 1. Get the context for your Batch account.
- ```powershell
+ ```powershell-interactive
$context = Get-AzBatchAccount -AccountName <batch-account-name> ``` 1. Create a Batch pool with the following settings. Replace the sample values with your own information as needed.
- ```powershell
+ ```powershell-interactive
$fileShareConfig = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSAzureFileShareConfiguration" -ArgumentList @("<Storage-Account-name>", "https://<Storage-Account-name>.file.core.windows.net/batchfileshare1", "S", "Storage-Account-key") $mountConfig = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSMountConfiguration" -ArgumentList @($fileShareConfig)
You can mount an Azure file share on a Batch pool using [Azure PowerShell](/powe
1. Access the mount files using your drive's direct path. For example:
- ```powershell
+ ```powershell-interactive
cmd /c "more S:\folder1\out.txt & timeout /t 90 > NULL" ```
You can mount an Azure file share on a Batch pool using [Azure PowerShell](/powe
Use `cmdkey` to add your credentials. Replace the sample values with your own information.
- ```powershell
+ ```powershell-interactive
cmdkey /add:"<storage-account-name>.file.core.windows.net" /user:"Azure\<storage-account-name>" /pass:"<storage-account-key>" ```
You can mount an Azure file share on a Batch pool using [Azure PowerShell](/powe
1. Sign in to your Azure subscription.
- ```powershell
+ ```powershell-interactive
Connect-AzAccount -Subscription "<subscription-ID>" ``` 1. Get the context for your Batch account.
- ```powershell
+ ```powershell-interactive
$context = Get-AzBatchAccount -AccountName <batch-account-name> ``` 1. Create a Batch pool with the following settings. Replace the sample values with your own information as needed.
- ```powershell
+ ```powershell-interactive
$fileShareConfig = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSAzureFileShareConfiguration" -ArgumentList @("<Storage-Account-name>", https://<Storage-Account-name>.file.core.windows.net/batchfileshare1, "S", "<Storage-Account-key>", "-o vers=3.0,dir_mode=0777,file_mode=0777,sec=ntlmssp") $mountConfig = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSMountConfiguration" -ArgumentList @($fileShareConfig)
You can mount an Azure file share on a Batch pool using [Azure PowerShell](/powe
When you mount an Azure file share to a Batch pool with PowerShell or Cloud Shell, you might receive the following error:
-```text
+```output
Mount Configuration Error | An error was encountered while configuring specified mount(s) Message: System error (out of memory, cannot fork, no more loop devices) MountConfigurationPath: S
MountConfigurationPath: S
If you receive this error, RDP or SSH to the node to check the related log files. The Batch agent implements mounting differently on Windows and Linux. On Linux, Batch installs the package `cifs-utils`. Then, Batch issues the mount command. On Windows, Batch uses `cmdkey` to add your Batch account credentials. Then, Batch issues the mount command through `net use`. For example:
-```powershell
+```powershell-interactive
net use S: \\<storage-account-name>.file.core.windows.net\<fileshare> /u:AZURE\<storage-account-name> <storage-account-key> ```
net use S: \\<storage-account-name>.file.core.windows.net\<fileshare> /u:AZURE\<
1. Review the error messages. For example:
- ```text
+ ```output
CMDKEY: Credential added successfully. System error 86 has occurred.
If you can't use RDP or SSH to check the log files on the node, check the Batch
1. Review the error messages. For example:
- ```text
+ ```output
..20210322T113107.448Z.00000000-0000-0000-0000-000000000000.ERROR.agent.mount.filesystems.basefilesystem.basefilesystem.py.run_cmd_persist_output_async.59.2912.MainThread.3580.Mount command failed with exit code: 2, output: CMDKEY: Credential added successfully.
If you're unable to diagnose or fix mounting errors with PowerShell, you can mou
1. Create a pool without a mounting configuration. For example:
- ```powershell
+ ```powershell-interactive
$imageReference = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSImageReference" -ArgumentList @("WindowsServer", "MicrosoftWindowsServer", "2016-Datacenter", "latest") $configuration = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSVirtualMachineConfiguration" -ArgumentList @($imageReference, "batch.node.windows amd64")
If you're unable to diagnose or fix mounting errors with PowerShell, you can mou
1. Create a pool without a mounting configuration. For example:
- ```bash
+ ```powershell-interactive
$imageReference = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSImageReference" -ArgumentList @("ubuntuserver", "canonical", "20.04-lts", "latest") $configuration = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSVirtualMachineConfiguration" -ArgumentList @($imageReference, "batch.node.ubuntu 20.04")
cognitive-services Advanced Prompt Engineering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/advanced-prompt-engineering.md
Title: Prompt engineering techniques with Azure OpenAI description: Learn about the options for how to use prompt engineering with GPT-3, ChatGPT, and GPT-4 models-+
cognitive-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md
description: Learn about the different model capabilities that are available with Azure OpenAI. Previously updated : 04/19/2023 Last updated : 04/26/2023
keywords:
# Azure OpenAI Service models
-Azure OpenAI provides access to many different models, grouped by family and capability. A model family typically associates models by their intended task. The following table describes model families currently available in Azure OpenAI. Not all models are available in all regions currently. Refer to the [model capability table](#model-capabilities) in this article for a full breakdown.
+Azure OpenAI provides access to many different models, grouped by family and capability. A model family typically associates models by their intended task. The following table describes model families currently available in Azure OpenAI. Not all models are available in all regions currently. Refer to the [model capability table](#model-capabilities) in this article for a full breakdown.
| Model family | Description | |--|--|
These models can be used with Completion API requests. `gpt-35-turbo` is the onl
| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) | | | | - | -- | - |
-| ada | N/A | South Central US, West Europe <sup>2</sup> | 2,049 | Oct 2019|
+| ada | N/A | N/A | 2,049 | Oct 2019|
| text-ada-001 | East US, South Central US, West Europe | N/A | 2,049 | Oct 2019|
-| babbage | N/A | South Central US, West Europe<sup>2</sup> | 2,049 | Oct 2019 |
+| babbage | N/A | N/A | 2,049 | Oct 2019 |
| text-babbage-001 | East US, South Central US, West Europe | N/A | 2,049 | Oct 2019 |
-| curie | N/A | South Central US, West Europe<sup>2</sup> | 2,049 | Oct 2019 |
+| curie | N/A | N/A | 2,049 | Oct 2019 |
| text-curie-001 | East US, South Central US, West Europe | N/A | 2,049 | Oct 2019 |
-| davinci<sup>1</sup> | N/A | Currently unavailable | 2,049 | Oct 2019|
+| davinci | N/A | N/A | 2,049 | Oct 2019|
| text-davinci-001 | South Central US, West Europe | N/A | | | | text-davinci-002 | East US, South Central US, West Europe | N/A | 4,097 | Jun 2021 | | text-davinci-003 | East US, West Europe | N/A | 4,097 | Jun 2021 |
-| text-davinci-fine-tune-002<sup>1</sup> | N/A | Currently unavailable | | |
-| gpt-35-turbo<sup>3</sup> (ChatGPT) (preview) | East US, South Central US, West Europe | N/A | 4,096 | Sep 2021 |
+| text-davinci-fine-tune-002 | N/A | N/A | | |
+| gpt-35-turbo<sup>1</sup> (ChatGPT) (preview) | East US, South Central US, West Europe | N/A | 4,096 | Sep 2021 |
-<sup>1</sup> The model is available by request only. Currently we aren't accepting new requests to use the model.
-<br><sup>2</sup> East US was previously available, but due to high demand this region is currently unavailable for new customers to use for fine-tuning. Please use the South Central US, and West Europe regions for fine-tuning.
-<br><sup>3</sup> Currently, only version `0301` of this model is available. This version of the model will be deprecated on 8/1/2023 in favor of newer version of the gpt-35-model. See [ChatGPT model versioning](../how-to/chatgpt.md#model-versioning) for more details.
+<br><sup>1</sup> Currently, only version `0301` of this model is available. This version of the model will be deprecated on 8/1/2023 in favor of a newer version of the gpt-35-turbo model. See [ChatGPT model versioning](../how-to/chatgpt.md#model-versioning) for more details.
### GPT-4 Models
cognitive-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/tutorials/embeddings.md
Previously updated : 03/31/2023 Last updated : 04/25/2023 recommendations: false
res = search_docs(df_bills, "Can I get information on cable company tax revenue?
:::image type="content" source="../media/tutorials/query-result.png" alt-text="Screenshot of the formatted results of res once the search query has been run." lightbox="../media/tutorials/query-result.png":::
-Finally, we'll show the top result from document search based on user query against the entire knowledge base. This returns the top result of the "Taxpayer's Right to View Act of 1993". This document has a cosine similarity score of 0.36 between the query and the document:
+Finally, we'll show the top result from document search based on user query against the entire knowledge base. This returns the top result of the "Taxpayer's Right to View Act of 1993". This document has a cosine similarity score of 0.76 between the query and the document:
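For context, the cosine similarity score quoted here is the usual normalized dot product between the query embedding and the document embedding; a minimal NumPy sketch (not part of the tutorial code) is:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity = a.b / (|a| * |b|); values near 1 mean the vectors point the same way.
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```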
```python res["summary"][9]
communication-services Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/capabilities.md
In this article, you will learn which capabilities are supported for Teams exter
| | Prevent joining locked meeting | ✔️ | | | Honor assigned Teams meeting role | ✔️ | | Chat | Send and receive chat messages | ✔️ |
+| | [Receive inline images](../../../tutorials/chat-interop/meeting-interop-features-inline-image.md) | ✔️** |
| | Send and receive Giphy | ❌ | | | Send messages with high priority | ❌ | | | Receive messages with high priority | ✔️ |
In this article, you will learn which capabilities are supported for Teams exter
| | React to chat message | ❌ | | | [Data Loss Prevention (DLP)](/microsoft-365/compliance/dlp-microsoft-teams) | ✔️*| | | [Customer Managed Keys (CMK)](/microsoft-365/compliance/customer-key-overview) | ✔️ |
-| Chat with Teams Interoperability | Send and receive text messages | ✔️ |
-| | Send and receive rich text messages | ✔️ |
-| | Send and receive typing indicators | ✔️ |
-| | [Receive inline images](../../../tutorials/chat-interop/meeting-interop-features-inline-image.md) | ✔️** |
-| | Receive read receipts | ❌ |
-| | Receive shared files | ❌ |
| Mid call control | Turn your video on/off | ✔️ | | | Mute/Unmute mic | ✔️ | | | Switch between cameras | ✔️ |
When Teams external users leave the meeting, or the meeting ends, they can no lo
*Azure Communication Services provides developers tools to integrate Microsoft Teams Data Loss Prevention that is compatible with Microsoft Teams. For more information, go to [how to implement Data Loss Prevention (DLP)](../../../how-tos/chat-sdk/data-loss-prevention.md)
-**Inline images are images that are copied and pasted directly into the send box of Teams client. For images that were uploaded via "Upload from this device" menu or via drag-and-drop (such as dragging images directly to the send box) in the Teams, they are not supported at this moment. To copy an image, the Teams user can either use their operating system's context menu to copy the image file then paste it into the send box of their Teams client, or use keyboard shortcuts instead.
+**Microsoft Teams allows users to share images in three ways:
+- Copying and pasting an image into the box at the bottom of the chat (inline images).
+- Dragging and dropping an image into the chat area.
+- Uploading an image as a file via the "Upload from this device" button.
+
+Azure Communication Services currently supports only the first option, copy and paste. Users can copy and paste by using keyboard shortcuts or the operating system's context menu options.
**Inline image support is currently in public preview and is available in the Chat SDK for JavaScript only. Preview APIs and SDKs are provided without a service-level agreement. We recommend that you don't use them for production workloads. Some features might not be supported, or they might have constrained capabilities. For more information, review [Supplemental Terms of Use for Microsoft Azure Previews.](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)
container-apps Dapr Keda Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-keda-scaling.md
Title: Scale Dapr applications with KEDA scalers using Bicep+ description: Learn how to use KEDA scalers to scale an Azure Container App and its Dapr sidecar.
container-apps Microservices Dapr Bindings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr-bindings.md
Title: "Event-driven work using Dapr Bindings"+ description: Deploy a sample Dapr Bindings application to Azure Container Apps.
container-apps Microservices Dapr Pubsub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr-pubsub.md
Title: "Microservices communication using Dapr Publish and Subscribe"+ description: Enable two sample Dapr applications to send and receive messages and leverage Azure Container Apps.
container-apps Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md
Important notes for configuring UDR with Azure Firewall:
- You need to allow the `MicrosoftContainerRegistry` and its dependency `AzureFrontDoor.FirstParty` service tags to your Azure Firewall. Alternatively, you can add the following FQDNs: *mcr.microsoft.com* and **.data.mcr.microsoft.com*. - If you're using Azure Container Registry (ACR), you need to add the `AzureContainerRegistry` service tag and the **.blob.core.windows.net* FQDN in the Azure Firewall. - If you're using [Docker Hub registry](https://docs.docker.com/desktop/allow-list/) and want to access it through the firewall, you need to add the following FQDNs to your firewall: *hub.docker.com*, *registry-1.docker.io*, and *production.cloudflare.docker.com*.
+- If you're using [Azure Key Vault references](./manage-secrets.md#reference-secret-from-key-vault), you need to add the `AzureKeyVault` service tag and the *login.microsoft.com* FQDN to the allow list for your firewall.
- External environments aren't supported. Azure creates a default route table for your virtual networks upon create. By implementing a user-defined route table, you can control how traffic is routed within your virtual network. For example, you can create a UDR that routes all traffic to the firewall. For a guide on how to setup UDR with Container Apps to restrict outbound traffic with Azure Firewall, visit the [how to for Container Apps and Azure Firewall](./user-defined-routes.md).
container-instances Container Instances Container Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-container-groups.md
Learn how to deploy a multi-container container group with an Azure Resource Man
[resource-manager template]: container-instances-multi-container-group.md [yaml-file]: container-instances-multi-container-yaml.md [region-availability]: container-instances-region-availability.md
-[resource-requests]: /rest/api/container-instances/containergroups/createorupdate#resourcerequests
-[resource-limits]: /rest/api/container-instances/containergroups/createorupdate#resourcelimits
-[resource-requirements]: /rest/api/container-instances/containergroups/createorupdate#resourcerequirements
+[resource-requests]: /rest/api/container-instances/2022-09-01/container-groups/create-or-update#resourcerequests
+[resource-limits]: /rest/api/container-instances/2022-09-01/container-groups/create-or-update#resourcelimits
+[resource-requirements]: /rest/api/container-instances/2022-09-01/container-groups/create-or-update#resourcerequirements
[azure-files]: container-instances-volume-azure-files.md [virtual-network]: container-instances-virtual-network-concepts.md [secret]: container-instances-volume-secret.md
container-instances Container Instances Custom Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-custom-dns.md
Once you've completed the steps above, you should see an output with a final key
## Deploy your container group > [!NOTE]
-> Custom DNS settings are not currently available in the Azure portal for container group deployments. They must be provided with YAML file, Resource Manager template, [REST API](/rest/api/container-instances/containergroups/createorupdate), or an [Azure SDK](https://azure.microsoft.com/downloads/).
+> Custom DNS settings are not currently available in the Azure portal for container group deployments. They must be provided with a YAML file, a Resource Manager template, the [REST API](/rest/api/container-instances/2022-09-01/container-groups/create-or-update), or an [Azure SDK](https://azure.microsoft.com/downloads/).
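As a rough illustration of the Azure SDK route (not taken from this article), the Python `azure-mgmt-containerinstance` package exposes a `DnsConfiguration` model you can attach to a container group; the names, region, image, and the `10.0.0.10` name server below are placeholders, and a real deployment would also include the virtual network settings shown in the YAML example that follows.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerinstance import ContainerInstanceManagementClient
from azure.mgmt.containerinstance.models import (
    Container, ContainerGroup, DnsConfiguration, ResourceRequests, ResourceRequirements,
)

# Placeholder subscription ID, resource group, and names.
client = ContainerInstanceManagementClient(DefaultAzureCredential(), "<subscription-id>")

container = Container(
    name="aci-example",
    image="mcr.microsoft.com/azuredocs/aci-helloworld",
    resources=ResourceRequirements(requests=ResourceRequests(cpu=1.0, memory_in_gb=1.5)),
)

group = ContainerGroup(
    location="eastus",
    os_type="Linux",
    containers=[container],
    # Custom DNS servers and search domain for the container group; a deployment into a
    # virtual network would also set subnet_ids, as in the YAML example below.
    dns_config=DnsConfiguration(name_servers=["10.0.0.10"], search_domains="contoso.com"),
)

client.container_groups.begin_create_or_update("<resource-group>", "aci-custom-dns", group).result()
```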
Copy the following YAML into a new file named *custom-dns-deploy-aci.yaml*. Edit the following configurations with your values:
container-instances Container Instances Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-managed-identity.md
Use a managed identity in a running container to authenticate to any [service th
### Enable a managed identity
- When you create a container group, enable one or more managed identities by setting a [ContainerGroupIdentity](/rest/api/container-instances/containergroups/createorupdate#containergroupidentity) property. You can also enable or update managed identities after a container group is running - either action causes the container group to restart. To set the identities on a new or existing container group, use the Azure CLI, a Resource Manager template, a YAML file, or another Azure tool.
+ When you create a container group, enable one or more managed identities by setting a [ContainerGroupIdentity](/rest/api/container-instances/2022-09-01/container-groups/create-or-update#containergroupidentity) property. You can also enable or update managed identities after a container group is running - either action causes the container group to restart. To set the identities on a new or existing container group, use the Azure CLI, a Resource Manager template, a YAML file, or another Azure tool.
Azure Container Instances supports both types of managed Azure identities: user-assigned and system-assigned. On a container group, you can enable a system-assigned identity, one or more user-assigned identities, or both types of identities. If you're unfamiliar with managed identities for Azure resources, see the [overview](../active-directory/managed-identities-azure-resources/overview.md).
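As an illustration only (the article doesn't include this code), once an identity is enabled on the container group, code running inside a container can request tokens with the `azure-identity` package; the Key Vault URL and secret name below are placeholders, and the identity must already have access to that vault.

```python
from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient

# Inside the running container, the managed identity endpoint is injected automatically.
# Pass client_id=... if the container group uses a user-assigned identity.
credential = ManagedIdentityCredential()

# Placeholder vault URL and secret name; grant the identity access to the vault first.
client = SecretClient(vault_url="https://<your-key-vault>.vault.azure.net", credential=credential)
print(client.get_secret("examplesecret").value)
```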
container-instances Container Instances Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-quotas.md
All Azure services include certain default limits and quotas for resources and f
Availability of compute, memory, and storage resources for Azure Container Instances varies by region and operating system. For details, see [Resource availability for Azure Container Instances](container-instances-region-availability.md).
-Use the [List Usage](/rest/api/container-instances/location/listusage) API to review current quota usage in a region for a subscription.
+Use the [List Usage](/rest/api/container-instances/2022-09-01/location/list-usage) API to review current quota usage in a region for a subscription.
## Service quotas and limits
container-instances Container Instances Resource And Quota Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-resource-and-quota-limits.md
Values presented are the maximum resources available per deployment of a [contai
All Azure services include certain default limits and quotas for resources and features. This section details the default quotas and limits for Azure Container Instances.
-Use the [List Usage](/rest/api/container-instances/location/listusage) API to review current quota usage in a region for a subscription.
+Use the [List Usage](/rest/api/container-instances/2022-09-01/location/list-usage) API to review current quota usage in a region for a subscription.
Certain default limits and quotas can be increased. To request an increase of one or more resources that support such an increase, please submit an [Azure support request][azure-support] (select "Quota" for **Issue type**).
container-instances Container Instances Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-troubleshooting.md
Learn how to [retrieve container logs and events](container-instances-get-logs.m
<!-- LINKS - Internal --> [az-container-show]: /cli/azure/container#az_container_show
-[list-cached-images]: /rest/api/container-instances/location/listcachedimages
+[list-cached-images]: /rest/api/container-instances/2022-09-01/location/list-cached-images
cosmos-db Dedicated Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/dedicated-gateway.md
The dedicated gateway is available in the following sizes. The integrated cache
There are many different ways to provision a dedicated gateway: - [Provision a dedicated gateway using the Azure portal](how-to-configure-integrated-cache.md#provision-the-dedicated-gateway)-- [Use Azure Cosmos DB's REST API](/rest/api/cosmos-db-resource-provider/2022-05-15/service/create#sqldedicatedgatewayservicecreate)
+- [Use Azure Cosmos DB's REST API](/rest/api/cosmos-db-resource-provider/2022-11-15/service/create#sqldedicatedgatewayservicecreate)
- [Azure CLI](/cli/azure/cosmosdb/service?view=azure-cli-latest&preserve-view=true#az-cosmosdb-service-create) - [ARM template](/azure/templates/microsoft.documentdb/databaseaccounts/services?tabs=bicep) - Note: You cannot deprovision a dedicated gateway using ARM templates
cosmos-db How To Container Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-container-copy.md
[Container copy jobs](intra-account-container-copy.md) help create offline copies of containers within an Azure Cosmos DB account.
-This article describes how to create, monitor, and manage intra-account container copy jobs using Azure PowerShell or CLI commands.
+This article describes how to create, monitor, and manage intra-account container copy jobs using Azure CLI commands.
## Prerequisites
-* You may use the portal [Cloud Shell](/azure/cloud-shell/quickstart?tabs=powershell) to run container copy commands. Alternately, you may run the commands locally; make sure you have [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps-msi) downloaded and installed on your machine.
+* You can use the portal [Cloud Shell](/azure/cloud-shell/quickstart?tabs=powershell) to run container copy commands. Alternatively, you can run the commands locally; make sure you have [Azure CLI](/cli/azure/install-azure-cli) downloaded and installed on your machine.
* Currently, container copy is only supported in [these regions](intra-account-container-copy.md#supported-regions). Make sure your account's write region belongs to this list.
This article describes how to create, monitor, and manage intra-account containe
This extension contains the container copy commands.
-```azurepowershell-interactive
+```azurecli-interactive
az extension add --name cosmosdb-preview ```
az extension add --name cosmosdb-preview
First, set all of the variables that each individual script uses.
-```azurepowershell-interactive
+```azurecli-interactive
$resourceGroup = "<resource-group-name>" $accountName = "<cosmos-account-name>" $jobName = ""
$destinationContainer = ""
Create a job to copy a container within an Azure Cosmos DB API for NoSQL account:
-```azurepowershell-interactive
+```azurecli-interactive
az cosmosdb dts copy ` --resource-group $resourceGroup ` --account-name $accountName `
az cosmosdb dts copy `
Create a job to copy a container within an Azure Cosmos DB API for Cassandra account:
-```azurepowershell-interactive
+```azurecli-interactive
az cosmosdb dts copy ` --resource-group $resourceGroup ` --account-name $accountName `
az cosmosdb dts copy `
View the progress and status of a copy job:
-```azurepowershell-interactive
+```azurecli-interactive
az cosmosdb dts show ` --resource-group $resourceGroup ` --account-name $accountName `
az cosmosdb dts show `
To list all the container copy jobs created in an account:
-```azurepowershell-interactive
+```azurecli-interactive
az cosmosdb dts list ` --resource-group $resourceGroup ` --account-name $accountName
az cosmosdb dts list `
In order to pause an ongoing container copy job, you may use the command:
-```azurepowershell-interactive
+```azurecli-interactive
az cosmosdb dts pause ` --resource-group $resourceGroup ` --account-name $accountName `
az cosmosdb dts pause `
In order to resume an ongoing container copy job, you may use the command:
-```azurepowershell-interactive
+```azurecli-interactive
az cosmosdb dts resume ` --resource-group $resourceGroup ` --account-name $accountName `
cosmos-db Intra Account Container Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/intra-account-container-copy.md
The rate of container copy job progress is determined by these factors:
> [!IMPORTANT] > The default SKU offers two 4-vCPU 16-GB server-side instances per account.
+## Limitations
+
+### Preview eligibility criteria
+
+Container copy jobs don't work with accounts that have the following capabilities enabled. You need to disable these features before you run a container copy job.
+
+- [Disable local auth](https://learn.microsoft.com/azure/cosmos-db/how-to-setup-rbac#use-azure-resource-manager-templates)
+- [Private endpoint / IP Firewall enabled](https://learn.microsoft.com/azure/cosmos-db/how-to-configure-firewall#allow-requests-from-global-azure-datacenters-or-other-sources-within-azure). To run container copy jobs, you need to allow access to connections from within public Azure datacenters.
+- [Merge partition](https://learn.microsoft.com/azure/cosmos-db/merge).
++
+### Account configurations
+
+- The time-to-live (TTL) setting isn't adjusted in the destination container. As a result, if a document hasn't expired in the source container, it starts its countdown anew in the destination container. One way to inspect and carry the setting over is sketched after this list.
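A minimal sketch of that idea, assuming the `azure-cosmos` Python SDK, placeholder account, database, and container names, and that both containers share the same partition key path:

```python
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("<account-endpoint>", credential="<account-key>")
database = client.get_database_client("<database-name>")

source = database.get_container_client("<source-container>")
destination = database.get_container_client("<destination-container>")

# Read the source container's default TTL (in seconds), if one is set.
source_props = source.read()
default_ttl = source_props.get("defaultTtl")

if default_ttl is not None:
    # Re-apply the same default TTL on the destination container after the copy.
    database.replace_container(
        destination,
        partition_key=PartitionKey(path=source_props["partitionKey"]["paths"][0]),
        default_ttl=default_ttl,
    )
```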
## FAQs

### Is there an SLA for the container copy jobs?
The container copy job runs in the write region. If there are accounts configure
The account's write region may change in the rare scenario of a region outage or due to manual failover. In such a scenario, incomplete container copy jobs created within the account would fail. You would need to recreate these failed jobs. Recreated jobs would then run in the new (current) write region.
-### Why is a new database *__datatransferstate* created in the account when I run container copy jobs? Am I being charged for this database?
-
-* *__datatransferstate* is a database that is created while running container copy jobs. This database is used by the platform to store the state and progress of the copy job.
-* The database uses manual provisioned throughput of 800 RUs. You are charged for this database.
-* Deleting this database removes the container copy job history from the account. It can be safely deleted once all the jobs in the account have completed, if you no longer need the job history. The platform doesn't clean up the *__datatransferstate* database automatically.
## Supported regions
Currently, container copy is supported in the following regions:
Make sure the target container is created before running the job, as specified in the [overview section](#overview-of-steps-needed-to-do-container-copy).

```output
- "code": "500",
+ "code": "404",
"message": "Response status code does not indicate success: NotFound (404); Substatus: 1003; ActivityId: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx; Reason: (Message: {\"Errors\":[\"Owner resource does not exist\"] ```
-* Error - Shared throughput database creation isn't supported for serverless accounts
-
- Job creation on serverless accounts may fail with the error *"Shared throughput database creation isn't supported for serverless accounts"*.
- As a work-around, create a database called *__datatransferstate* manually within the account and try creating the container copy job again.
-
- ```output
- ERROR: (BadRequest) Response status code does not indicate success: BadRequest (400); Substatus: 0; ActivityId: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx; Reason: (Shared throughput database creation is not supported for serverless accounts.
- ```
- * Error - (Request) is blocked by your Cosmos DB account firewall settings. The job creation request could be blocked if the client IP isn't allowed as per the VNet and Firewall IPs configured on the account. In order to get past this issue, you need to [allow access to the IP through the Firewall setting](how-to-configure-firewall.md). Alternately, you may set **Accept connections from within public Azure datacenters** in your firewall settings and run the container copy commands through the portal [Cloud Shell](/azure/cloud-shell/quickstart?tabs=powershell).
Currently, container copy is supported in the following regions:
InternalServerError Request originated from IP xxx.xxx.xxx.xxx through public internet. This is blocked by your Cosmos DB account firewall settings. More info: https://aka.ms/cosmosdb-tsg-forbidden ActivityId: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx ```
+* Error - Error while getting resources for job.
+
+   This error can occur due to internal server issues. To resolve it, contact Microsoft support by raising a **New Support Request** from the Azure portal. Set the problem type to **Data Migration** and the problem subtype to **Intra-account container copy**.
+
+ ```output
+    "code": "500",
+    "message": "Error while getting resources for job, StatusCode: 500, SubStatusCode: 0, OperationId: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx, ActivityId: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
+ ```
+
++ ## Next steps
cosmos-db Monitor Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-resource-logs.md
Title: Monitor Azure Cosmos DB data by using Azure Diagnostic settings
+ Title: Monitor data by using Azure Diagnostic settings
+ description: Learn how to use Azure diagnostic settings to monitor the performance and availability of data stored in Azure Cosmos DB Previously updated : 04/23/2023 Last updated : 04/26/2023
Platform metrics and the Activity logs are collected automatically, whereas you
> [!NOTE] > We recommend creating the diagnostic setting in resource-specific mode (for all APIs except API for Table) [following our instructions for creating diagnostics setting via REST API](monitor-resource-logs.md). This option provides additional cost-optimizations with an improved view for handling data.
+## Prerequisites
+
+- An existing Azure Cosmos DB account.
+ - If you have an Azure subscription, [create a new account](nosql/how-to-create-account.md?tabs=azure-portal).
+ - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+ - Alternatively, you can [try Azure Cosmos DB free](try-free.md) before you commit.
+ ## Create diagnostic settings
+Here, we walk through the process of creating diagnostic settings for your account.
+ ### [Azure portal](#tab/azure-portal) 1. Sign into the [Azure portal](https://portal.azure.com).
Use the [Azure Monitor REST API](/rest/api/monitor/diagnosticsettings/createorup
> [!NOTE] > The URI for the Microsoft Insights subresource is in this format: `subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.DocumentDb/databaseAccounts/{ACCOUNT_NAME}/providers/microsoft.insights/diagnosticSettings/{DIAGNOSTIC_SETTING_NAME}`. For more information about Azure Cosmos DB resource URIs, see [resource URI syntax for Azure Cosmos DB REST API](/rest/api/cosmos-db/cosmosdb-resource-uri-syntax-for-rest). - 1. Set the body of the request to this JSON payload. ```json
Use the [Azure Monitor REST API](/rest/api/monitor/diagnosticsettings/createorup
} ```
+### [ARM Template](#tab/azure-resource-manager-template)
+
+Here, use an [Azure Resource Manager (ARM) template](../azure-resource-manager/templates/index.yml) to create a diagnostic setting.
+
+> [!NOTE]
+> Set the **logAnalyticsDestinationType** property to **Dedicated** to enable resource-specific tables.
+
+1. Create the following JSON template file to deploy diagnostic settings for your Azure Cosmos DB resource.
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "settingName": {
+ "type": "string",
+ "metadata": {
+ "description": "The name of the diagnostic setting."
+ }
+ },
+ "dbName": {
+ "type": "string",
+ "metadata": {
+ "description": "The name of the database."
+ }
+ },
+ "workspaceId": {
+ "type": "string",
+ "metadata": {
+ "description": "The resource Id of the workspace."
+ }
+ },
+ "storageAccountId": {
+ "type": "string",
+ "metadata": {
+ "description": "The resource Id of the storage account."
+ }
+ },
+ "eventHubAuthorizationRuleId": {
+ "type": "string",
+ "metadata": {
+ "description": "The resource Id of the event hub authorization rule."
+ }
+ },
+ "eventHubName": {
+ "type": "string",
+ "metadata": {
+ "description": "The name of the event hub."
+ }
+ },
+ "logAnalyticsDestinationType": {
+ "type": "string",
+ "defaultValue": "Dedicated",
+ "metadata": {
+ "description": "The destination type for Log Analytics. Set to Dedicated to use resource-specific tables."
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Insights/diagnosticSettings",
+ "apiVersion": "2021-05-01-preview",
+ "scope": "[format('Microsoft.DocumentDB/databaseAccounts/{0}', parameters('dbName'))]",
+ "name": "[parameters('settingName')]",
+ "properties": {
+ "workspaceId": "[parameters('workspaceId')]",
+ "storageAccountId": "[parameters('storageAccountId')]",
+ "eventHubAuthorizationRuleId": "[parameters('eventHubAuthorizationRuleId')]",
+ "eventHubName": "[parameters('eventHubName')]",
+ "logAnalyticsDestinationType": "[parameters('logAnalyticsDestinationType')]",
+ "logs": [
+ {
+ "category": "DataPlaneRequests",
+ "categoryGroup": null,
+ "enabled": true,
+ "retentionPolicy": {
+ "days": 0,
+ "enabled": false
+ }
+ },
+ {
+ "category": "MongoRequests",
+ "categoryGroup": null,
+ "enabled": false,
+ "retentionPolicy": {
+ "days": 0,
+ "enabled": false
+ }
+ },
+ {
+ "category": "QueryRuntimeStatistics",
+ "categoryGroup": null,
+ "enabled": true,
+ "retentionPolicy": {
+ "days": 0,
+ "enabled": false
+ }
+ },
+ {
+ "category": "PartitionKeyStatistics",
+ "categoryGroup": null,
+ "enabled": true,
+ "retentionPolicy": {
+ "days": 0,
+ "enabled": false
+ }
+ },
+ {
+ "category": "PartitionKeyRUConsumption",
+ "categoryGroup": null,
+ "enabled": true,
+ "retentionPolicy": {
+ "days": 0,
+ "enabled": false
+ }
+ },
+ {
+ "category": "ControlPlaneRequests",
+ "categoryGroup": null,
+ "enabled": true,
+ "retentionPolicy": {
+ "days": 0,
+ "enabled": false
+ }
+ },
+ {
+ "category": "CassandraRequests",
+ "categoryGroup": null,
+ "enabled": false,
+ "retentionPolicy": {
+ "days": 0,
+ "enabled": false
+ }
+ },
+ {
+ "category": "GremlinRequests",
+ "categoryGroup": null,
+ "enabled": false,
+ "retentionPolicy": {
+ "days": 0,
+ "enabled": false
+ }
+ },
+ {
+ "category": "TableApiRequests",
+ "categoryGroup": null,
+ "enabled": false,
+ "retentionPolicy": {
+ "days": 0,
+ "enabled": false
+ }
+ }
+ ],
+ "metrics": [
+ {
+ "timeGrain": null,
+ "enabled": false,
+ "retentionPolicy": {
+ "days": 0,
+ "enabled": false
+ },
+ "category": "Requests"
+ }
+ ]
+ }
+ }
+ ]
+ }
+ ```
+
+1. Create the following JSON parameter file with settings appropriate for your Azure Cosmos DB resource.
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "settingName": {
+ "value": "{DIAGNOSTIC_SETTING_NAME}"
+ },
+ "dbName": {
+ "value": "{ACCOUNT_NAME}"
+ },
+ "workspaceId": {
+ "value": "/subscriptions/{SUBSCRIPTION_ID}/resourcegroups/{RESOURCE_GROUP}/providers/microsoft.operationalinsights/workspaces/{WORKSPACE_NAME}"
+ },
+ "storageAccountId": {
+ "value": "/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.Storage/storageAccounts/{STORAGE_ACCOUNT_NAME}"
+ },
+ "eventHubAuthorizationRuleId": {
+ "value": "/subscriptions/{SUBSCRIPTION_ID}/resourcegroups/{RESOURCE_GROUP}/providers/Microsoft.EventHub/namespaces/{EVENTHUB_NAMESPACE}/authorizationrules/{EVENTHUB_POLICY_NAME}"
+ },
+ "eventHubName": {
+ "value": "{EVENTHUB_NAME}"
+ },
+ "logAnalyticsDestinationType": {
+ "value": "Dedicated"
+ }
+ }
+ }
+ ```
+
+1. Deploy the template using [`az deployment group create`](/cli/azure/deployment/group#az-deployment-group-create).
+
+ ```azurecli
+ az deployment group create \
+ --resource-group <resource-group-name> \
+ --template-file <path-to-template>.json \
+ --parameters @<parameters-file-name>.json
+ ```
+ ## Enable full-text query for logging query text
Azure Cosmos DB provides advanced logging for detailed troubleshooting. By enabl
1. To enable this feature, navigate to the `Features` page in your Azure Cosmos DB account.
- :::image type="content" source="media/monitor/full-text-query-features.png" lightbox="media/monitor/full-text-query-features.png" alt-text="Screenshot of navigation to the Features page.":::
+ :::image type="content" source="media/monitor/full-text-query-features.png" lightbox="media/monitor/full-text-query-features.png" alt-text="Screenshot of the navigation process to the Features page.":::
2. Select `Enable`. This setting is applied within a few minutes. All newly ingested logs have the full-text or PIICommand text for each request.
- :::image type="content" source="media/monitor/select-enable-full-text.png" alt-text="Screenshot of full-text being enabled.":::
+ :::image type="content" source="media/monitor/select-enable-full-text.png" alt-text="Screenshot of the full-text feature being enabled.":::
-### [Azure CLI / REST API](#tab/azure-cli+rest-api)
+### [Azure CLI / REST API / ARM template](#tab/azure-cli+rest-api+azure-resource-manager-template)
1. Ensure you're logged in to the Azure CLI. For more information, see [sign in with Azure CLI](/cli/azure/authenticate-azure-cli). Optionally, ensure that you've configured the active subscription for your CLI. For more information, see [change the active Azure CLI subscription](/cli/azure/manage-azure-subscriptions-azure-cli#change-the-active-subscription).
To learn how to query using these newly enabled features, see:
## Next steps -- For a reference of the log and metric data, see [monitoring Azure Cosmos DB data reference](monitor-reference.md#resource-logs).-- For more information on how to query resource-specific tables, see [troubleshooting using resource-specific tables](monitor-logs-basic-queries.md#resource-specific-queries).-- For more information on how to query AzureDiagnostics tables, see [troubleshooting using AzureDiagnostics tables](monitor-logs-basic-queries.md#azure-diagnostics-queries).-- For detailed information about how to create a diagnostic setting by using the Azure portal, CLI, or PowerShell, see [create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) article.
+> [!div class="nextstepaction"]
+> [Monitoring Azure Cosmos DB data reference](monitor-reference.md#resource-logs)
cosmos-db Tutorial Import Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-import-notebooks.md
This tutorial walks through how to import Jupyter notebooks from a GitHub reposi
## Create a copy of a GitHub repository
-1. Navigate to the [azure-samples/cosmos-db-nosql-notebooks](https://github.com/azure-samples/cosmos-db-nosql-notebooks/generate) template repository.
+1. Navigate to the [azure-samples/cosmos-db-nosql-notebooks](https://github.com/azure-samples/cosmos-db-nosql-notebooks) template repository.
1. Create a new copy of the template repository in your own GitHub account or organization.
cosmos-db Tutorial Design Database Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/tutorial-design-database-multi-tenant.md
WHERE company_id = 5;
``` More generally, we can create a [GIN
-indices](https://www.postgresql.org/docs/current/static/gin-intro.html) on
+indices](https://www.postgresql.org/docs/current/gin-intro.html) on
every key and value within the column. ```sql
cosmos-db Provision Throughput Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/provision-throughput-autoscale.md
Use the [Azure portal](how-to-provision-autoscale-throughput.md#enable-autoscale
## <a id="autoscale-limits"></a> Throughput and storage limits for autoscale
-For any value of `Tmax`, the database or container can store a total of `0.01 * Tmax GB`. After this amount of storage is reached, the maximum RU/s will be automatically increased based on the new storage value, with no impact to your application.
+For any value of `Tmax`, the database or container can store a total of `0.1 * Tmax GB`. After this amount of storage is reached, the maximum RU/s will be automatically increased based on the new storage value, with no impact to your application.
-For example, if you start with a maximum RU/s of 50,000 RU/s (scales between 5000 - 50,000 RU/s), you can store up to 500 GB of data. If you exceed 500 GB - e.g. storage is now 600 GB, the new maximum RU/s will be 60,000 RU/s (scales between 6000 - 60,000 RU/s).
+For example, if you start with a maximum RU/s of 50,000 RU/s (scales between 5000 - 50,000 RU/s), you can store up to 5000 GB of data. If you exceed 5000 GB - for example, storage is now 6000 GB - the new maximum RU/s will be 60,000 RU/s (scales between 6000 - 60,000 RU/s).
-When you use database level throughput with autoscale, you can have the first 25 containers share an autoscale maximum RU/s of 1000 (scales between 100 - 1000 RU/s), as long as you don't exceed 10 GB of storage. See this [documentation](autoscale-faq.yml#can-i-change-the-max-ru-s-on-the-database-or-container--) for more information.
+When you use database level throughput with autoscale, you can have the first 25 containers share an autoscale maximum RU/s of 1000 (scales between 100 - 1000 RU/s), as long as you don't exceed 100 GB of storage. See this [documentation](autoscale-faq.yml#can-i-change-the-max-ru-s-on-the-database-or-container--) for more information.
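As a rough illustration of this arithmetic (a minimal sketch based on the `0.1 * Tmax GB` rule above; the numbers are examples only, and any rounding the service applies isn't modeled):

```python
def effective_max_rus(tmax_rus: int, storage_gb: float) -> int:
    """Autoscale max RU/s after accounting for stored data (10 RU/s per GB)."""
    return max(tmax_rus, int(storage_gb * 10))

# Example from the text: Tmax of 50,000 RU/s with 6,000 GB stored -> 60,000 RU/s
print(effective_max_rus(50_000, 6_000))  # 60000
```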
## Comparison – containers configured with manual vs autoscale throughput

For more detail, see this [documentation](how-to-choose-offer.md) on how to choose between standard (manual) and autoscale throughput.
cosmos-db Store Credentials Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/store-credentials-key-vault.md
Now, store your Azure Cosmos DB credentials as secrets in the key vault.
In this section, create a new Azure Web App, deploy a sample application, and then register the Web App's managed identity with Azure Key Vault.
-1. Create a new GitHub repository using the [cosmos-db-nosql-dotnet-sample-web-environment-variables template](https://github.com/azure-samples/cosmos-db-nosql-dotnet-sample-web-environment-variables/generate).
+1. Create a new GitHub repository using the [cosmos-db-nosql-dotnet-sample-web-environment-variables template](https://github.com/azure-samples/cosmos-db-nosql-dotnet-sample-web-environment-variables).
1. In the Azure portal, select **Create a resource > Web > Web App**.
cost-management-billing Account Admin Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/account-admin-tasks.md
tags: billing
Previously updated : 04/05/2023 Last updated : 04/26/2023
If your credit card is the active payment method for any of your Microsoft subsc
### Switch to invoice payment
-If you are eligible to pay by invoice (check/wire transfer), you can switch your subscription to invoice payment (check/wire transfer) in the Azure portal.
+If you are eligible to pay by invoice (wire transfer), you can switch your subscription to invoice payment (wire transfer) in the Azure portal.
1. Select **Pay by invoice** in the command bar.
cost-management-billing Billing Troubleshoot Azure Payment Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/billing-troubleshoot-azure-payment-issues.md
tags: billing
Previously updated : 12/02/2022 Last updated : 04/26/2023
To add card details, sign-in to the Azure Account portal by using the account ad
Best practices: -- Submit one check/wire transfer payment per invoice.
+- Submit one wire transfer payment per invoice.
- Specify the invoice number on the remittance. - Send proof of payment, identification, and remittance details.
cost-management-billing Subscription Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/subscription-transfer.md
tags: billing
Previously updated : 04/12/2023 Last updated : 04/26/2023
Dev/Test products aren't shown in the following table. Transfers for Dev/Test pr
| EA | MPA | • Transfer is only allowed for direct EA to MPA. A direct EA is signed between Microsoft and an EA customer.<br><br>• Only CSP direct bill partners certified as an [Azure Expert Managed Services Provider (MSP)](https://partner.microsoft.com/membership/azure-expert-msp) can request to transfer Azure products for their customers that have a Direct Enterprise Agreement (EA). For more information, see [Get billing ownership of Azure subscriptions to your MPA account](mpa-request-ownership.md). Product transfers are allowed only for customers who have accepted a Microsoft Customer Agreement (MCA) and purchased an Azure plan with the CSP Program.<br><br>• Transfer from EA Government to MPA isn't supported.<br><br>• There are limitations and restrictions. For more information, see [Transfer EA subscriptions to a CSP partner](transfer-subscriptions-subscribers-csp.md#transfer-ea-or-mca-enterprise-subscriptions-to-a-csp-partner). |
| MCA - individual | MOSP (PAYG) | • For details, see [Transfer billing ownership of an Azure subscription to another account](billing-subscription-transfer.md).<br><br>• Reservations and savings plans don't automatically transfer and transferring them isn't supported. |
| MCA - individual | MCA - individual | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br>• Self-service reservation and savings plan transfers are supported. |
-| MCA - individual | EA | ΓÇó For details, see [Transfer a subscription to an EA](mosp-ea-transfer.md#transfer-the-subscription-to-the-ea).<br><br> ΓÇó Reservations and savings plans don't automatically transfer and transferring them isn't supported. |
+| MCA - individual | EA | • The transfer isn't supported by Microsoft, so you must move resources yourself. For more information, see [Move resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md).<br><br>• Reservations and savings plans don't automatically transfer and transferring them isn't supported. |
| MCA - individual | MCA - Enterprise | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br>• Self-service reservation and savings plan transfers are supported. |
| MCA - Enterprise | MOSP | • Requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/).<br><br>• Reservations and savings plans don't automatically transfer and transferring them isn't supported. |
| MCA - Enterprise | MCA - individual | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br>• Self-service reservation and savings plan transfers are supported. |
cost-management-billing Troubleshoot Declined Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/troubleshoot-declined-card.md
Previously updated : 12/09/2022 Last updated : 04/26/2023
For more information about how to troubleshoot Azure sign-up issues, see the fol
## You represent a business that doesn't want to pay by card
-If you represent a business, you can use invoice payment methods such as checks, overnight checks, or wire transfers to pay for your Azure subscription. After you set up the account to pay by invoice, you can't change to another payment option, unless you have a Microsoft Customer Agreement and signed up for Azure through the Azure website.
+If you represent a business, you can use the invoice payment method (wire transfer) to pay for your Azure subscription. After you set up the account to pay by invoice, you can't change to another payment option, unless you have a Microsoft Customer Agreement and signed up for Azure through the Azure website.
For more information about how to pay by invoice, see [Submit a request to pay Azure subscription by invoice](pay-by-invoice.md).
cost-management-billing Create Sql License Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/scope-level/create-sql-license-assignments.md
Title: Create SQL Server license assignments for Azure Hybrid Benefit
description: This article explains how to create SQL Server license assignments for Azure Hybrid Benefit. Previously updated : 04/20/2023 Last updated : 04/23/2023
The centralized Azure Hybrid Benefit experience in the Azure portal supports SQL
For each license assignment, a scope is selected and then licenses are assigned to the scope. Each scope can have multiple license entries.
-Here's a video demonstrating how [centralized Azure Hybrid Benefit works](https://www.youtube.com/watch?v=LvtUXO4wcjs).
+Here's a video demonstrating how [centralized Azure Hybrid Benefit works](https://aka.ms/azure/pricing/CM_AHB_SQL/DevVideo).
+
+>[!VIDEO https://www.youtube.com/embed/ReoLB9N76Lo]
## Prerequisites
cost-management-billing Mca Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/mca-overview.md
Previously updated : 04/05/2023 Last updated : 04/26/2023
Each billing profile has its own payment methods that are used to pay its invoic
| Type | Definition | ||-| |Azure credits | Credits are automatically applied to the eligible charges on your invoice, reducing the amount that you need to pay. For more information, see [track Azure credit balance for your billing profile](../manage/mca-check-azure-credits-balance.md). |
-|Check/wire transfer | If your account is approved for payment through check/wire transfer. You can pay the amount due for your invoice through check/wire transfer. The instructions for payment are given on the invoice |
+|Wire transfer | If your account is approved for payment through wire transfer, you can pay the amount due for your invoice with a wire transfer. The instructions for payment are given on the invoice. |
|Credit card | Customers who sign up for Azure through the Azure website can pay through a credit card. | ### Apply policies to control purchases
cost-management-billing Mca Understand Your Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/mca-understand-your-invoice.md
tags: billing
Previously updated : 01/24/2023 Last updated : 04/26/2023
The total amount due for each service family is calculated by subtracting *Azure
### How to pay
-At the bottom of the invoice, there are instructions for paying your bill. You can pay by check, wire, or online. If you pay online, you can use a credit card or Azure credits, if applicable.
+At the bottom of the invoice, there are instructions for paying your bill. You can pay by wire transfer or online. If you pay online, you can use a credit card or Azure credits, if applicable.
### Publisher information
cost-management-billing Review Partner Agreement Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/review-partner-agreement-bill.md
tags: billing
Previously updated : 04/05/2023 Last updated : 04/26/2023
You can also filter the **customerName** column in the Azure usage and charges C
## Pay your bill
-Instructions for paying your bill are shown at the bottom of the invoice. You can pay by wire or by check within 60 days of your invoice date.
+Instructions for paying your bill are shown at the bottom of the invoice. You can pay by wire transfer within 60 days of your invoice date.
If you've already paid your bill, you can check the status of the payment on the Invoices page in the Azure portal.
dns Dns Import Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-import-export.md
Previously updated : 09/27/2022 Last updated : 04/25/2023
Before you import a DNS zone file into Azure DNS, you need to obtain a copy of t
* If your DNS zone is hosted on Windows DNS, the default folder for the zone files is **%systemroot%\system32\dns**. The full path to each zone file is also shown on the **General** tab of the DNS console. * If your DNS zone is hosted using BIND, the location of the zone file for each zone gets specified in the BIND configuration file **named.conf**.
+> [!IMPORTANT]
+> If the zone file that you import contains CNAME entries that point to names in another private zone, Azure DNS resolution of the CNAME will fail unless the other zone is also imported, or the CNAME entries are modified.
+ ## Import a DNS zone file into Azure DNS
-Importing a zone file creates a new zone in Azure DNS if the zone doesn't already exist. If the zone exist, then the record sets in the zone file will be merged with the existing record sets.
+Importing a zone file creates a new zone in Azure DNS if the zone doesn't already exist. If the zone exists, then the record sets in the zone file will be merged with the existing record sets.
### Merge behavior
dns Private Dns Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-scenarios.md
Previously updated : 09/27/2022 Last updated : 04/25/2023
You can also do a reverse DNS query (PTR) for the private IP of VNETA-VM1 (10.0.
![Single Virtual network resolution](./media/private-dns-scenarios/single-vnet-resolution.png)
+> [!NOTE]
+> The IP addresses 10.0.0.1 and 10.0.0.2 are examples only. Since Azure reserves the first four addresses in a subnet, the .1 and .2 addresses are not normally assigned to a VM.
+ ## Scenario: Name Resolution across virtual networks In this scenario, you need to associate a private zone with multiple virtual networks. You can implement this solution in various network architectures such as the Hub-and-Spoke model. This configuration is when a central hub virtual network is used to connect multiple spoke virtual networks together. The central hub virtual network can be linked as the registration virtual network and the spoke virtual networks can be linked as resolution virtual networks.
event-grid Delivery And Retry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/delivery-and-retry.md
Batched delivery is configured on a per-event subscription basis via the portal,
* All or none
- Event Grid operates with all-or-none semantics. It doesn't support partial success of a batch delivery. Subscribers should be careful to only ask for as many events per batch as they can reasonably handle in 60 seconds.
+ Event Grid operates with all-or-none semantics. It doesn't support partial success of a batch delivery. Subscribers should be careful to only ask for as many events per batch as they can reasonably handle in 30 seconds.
* Optimistic batching
event-hubs Event Hubs Python Get Started Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-python-get-started-send.md
In this section, create a Python script to send events to the event hub that you
from azure.eventhub import EventData
from azure.eventhub.aio import EventHubProducerClient
- from azure.identity import DefaultAzureCredential
+ from azure.identity.aio import DefaultAzureCredential
EVENT_HUB_FULLY_QUALIFIED_NAMESPACE = "EVENT_HUB_FULLY_QUALIFIED_NAMESPACE"
EVENT_HUB_NAME = "EVENT_HUB_NAME"
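For context, a minimal sketch of how the async credential and async producer fit together (placeholder namespace and event hub names; this isn't the full quickstart script):

```python
import asyncio

from azure.eventhub import EventData
from azure.eventhub.aio import EventHubProducerClient
from azure.identity.aio import DefaultAzureCredential

async def run():
    # The async producer requires the async credential from azure.identity.aio.
    credential = DefaultAzureCredential()
    producer = EventHubProducerClient(
        fully_qualified_namespace="<namespace>.servicebus.windows.net",
        eventhub_name="<event-hub-name>",
        credential=credential,
    )
    async with producer:
        batch = await producer.create_batch()
        batch.add(EventData("First event"))
        await producer.send_batch(batch)
    await credential.close()

asyncio.run(run())
```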
firewall Firewall Structured Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-structured-logs.md
By default, the new resource specific tables are disabled.
Run the following Azure PowerShell commands to enable Azure Firewall Structured logs:
+> [!NOTE]
+> It can take several minutes for this to take effect. Consider performing an update on Azure Firewall for the change to take effect immediately.
+ ```azurepowershell Connect-AzAccount Select-AzSubscription -Subscription "subscription_id or subscription_name"
frontdoor Front Door Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-custom-domain.md
After you create a Front Door profile, the default frontend host is a subdomain
Before you can use a custom domain with your Front Door, you must first create a canonical name (CNAME) record with your domain provider to point to the Front Door default frontend host. A CNAME record is a type of DNS record that maps a source domain name to a destination domain name. In Azure Front Door, the source domain name is your custom domain name and the destination domain name is your Front Door default hostname. Once Front Door verifies the CNAME record gets created, traffic to the source custom domain gets routed to the specified destination Front Door default frontend host.
-A custom domain and its subdomain can be associated with only a single Front Door at a time. However, you can use different subdomains from the same custom domain for different Front Doors by using multiple CNAME records. You can also map a custom domain with different subdomains to the same Front Door.
-
+A custom domain can only be associated with one Front Door profile at a time. However, you can have different subdomains of an apex domain in the same or a different Front Door profile.
## Map the temporary afdverify subdomain
hdinsight Apache Hadoop On Premises Migration Best Practices Security Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-on-premises-migration-best-practices-security-devops.md
description: Learn security and DevOps best practices for migrating on-premises
Previously updated : 12/19/2019 Last updated : 04/26/2023 # Migrate on-premises Apache Hadoop clusters to Azure HDInsight - security and DevOps best practices
For more information, see the article: [OS patching for HDInsight](../hdinsight-
## Next steps
-Read more about [HDInsight 4.0](./apache-hadoop-introduction.md).
+Read more about [HDInsight 4.0](./apache-hadoop-introduction.md).
hdinsight Apache Hadoop Use Mapreduce Dotnet Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-mapreduce-dotnet-sdk.md
description: Learn how to submit MapReduce jobs to Azure HDInsight Apache Hadoop
Previously updated : 01/15/2020 Last updated : 04/24/2023 # Run MapReduce jobs using HDInsight .NET SDK
hdinsight Troubleshoot Yarn Log Invalid Bcfile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/troubleshoot-yarn-log-invalid-bcfile.md
Title: Unable to read Apache Yarn log in Azure HDInsight
description: Troubleshooting steps and possible resolutions for issues when interacting with Azure HDInsight clusters. Previously updated : 01/23/2020 Last updated : 04/26/2023 # Scenario: Unable to read Apache Yarn log in Azure HDInsight
If you didn't see your problem or are unable to solve your issue, visit one of t
* Connect with [@AzureSupport](https://twitter.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts.
-* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
+* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
hdinsight Apache Hbase Tutorial Get Started Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-tutorial-get-started-linux.md
description: Follow this Apache HBase tutorial to start using hadoop on HDInsigh
Previously updated : 03/31/2022 Last updated : 04/26/2023 # Tutorial: Use Apache HBase in Azure HDInsight
hdinsight Troubleshoot Hbase Performance Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/troubleshoot-hbase-performance-issues.md
Title: Troubleshoot Apache HBase performance issues on Azure HDInsight
description: Various Apache HBase performance tuning guidelines and tips for getting optimal performance on Azure HDInsight. Previously updated : 09/24/2019 Last updated : 04/26/2023 # Troubleshoot Apache HBase performance issues on Azure HDInsight
If your problem remains unresolved, visit one of the following channels for more
- Connect with [@AzureSupport](https://twitter.com/azuresupport). This is the official Microsoft Azure account for improving customer experience. It connects the Azure community to the right resources: answers, support, and experts. -- If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Your Microsoft Azure subscription includes access to subscription management and billing support, and technical support is provided through one of the [Azure support plans](https://azure.microsoft.com/support/plans/).
+- If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Your Microsoft Azure subscription includes access to subscription management and billing support, and technical support is provided through one of the [Azure support plans](https://azure.microsoft.com/support/plans/).
hdinsight Hdinsight Hadoop Script Actions Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-script-actions-linux.md
Title: Develop script actions to customize Azure HDInsight clusters
description: Learn how to use Bash scripts to customize HDInsight clusters. Script actions allow you to run scripts during or after cluster creation to change cluster configuration settings or install additional software. Previously updated : 07/19/2022 Last updated : 04/26/2023 # Script action development with HDInsight
The best practice is to download and archive everything in an Azure Storage acco
> [!IMPORTANT] > The storage account used must be the default storage account for the cluster or a public, read-only container on any other storage account.
-For example, the samples provided by Microsoft are stored in the [https://hdiconfigactions.blob.core.windows.net/](https://hdiconfigactions.blob.core.windows.net/) storage account. This location is a public, read-only container maintained by the HDInsight team.
+For example, the samples provided by Microsoft are stored in the `https://hdiconfigactions.blob.core.windows.net/` storage account. This location is a public, read-only container maintained by the HDInsight team.
### <a name="bPS4"></a>Use pre-compiled resources
hdinsight Hdinsight Phoenix In Hdinsight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-phoenix-in-hdinsight.md
description: Overview of Apache Phoenix
Previously updated : 04/08/2022 Last updated : 04/26/2023 # Apache Phoenix in Azure HDInsight
A skip scan uses the `SEEK_NEXT_USING_HINT` enumeration of the HBase filter. Usi
### Transactions
-While HBase provides row-level transactions, Phoenix integrates with [Tephra](https://tephra.io/) to add cross-row and cross-table transaction support with full [ACID](https://en.wikipedia.org/wiki/ACID) semantics.
+While HBase provides row-level transactions, Phoenix integrates with [Tephra](https://tephra.apache.org/) to add cross-row and cross-table transaction support with full [ACID](https://en.wikipedia.org/wiki/ACID) semantics.
As with traditional SQL transactions, transactions provided through the Phoenix transaction manager allow you to ensure an atomic unit of data is successfully upserted, rolling back the transaction if the upsert operation fails on any transaction-enabled table.
hdinsight Hdinsight Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes.md
End of support for Azure HDInsight clusters on Spark 2.4 February 10, 2024. For
* Apache Spark 3.3.0 and Hadoop 3.3.4 are under development on HDInsight 5.1 and will include several significant new features, performance and other improvements. > [!NOTE]
- > We advise customers to use to latest versions of HDInsight [Images](./view-hindsight-cluster-image-version.md) as they bring in the best of open source updates, Azure updates and security fixes.
+    > We advise customers to use the latest versions of HDInsight [Images](./view-hindsight-cluster-image-version.md) as they bring in the best of open source updates, Azure updates, and security fixes. For more information, see [Best practices](./hdinsight-overview-before-you-start.md).
### Next steps * [Azure HDInsight: Frequently asked questions](./hdinsight-faq.yml)
hdinsight Hdinsight Rotate Storage Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-rotate-storage-keys.md
Title: Update Azure Storage account access key in Azure HDInsight
description: Learn how to update Azure Storage account access key in Azure HDInsight cluster. Previously updated : 06/29/2021 Last updated : 04/26/2023 # Update Azure storage account access keys in HDInsight cluster
-In this article, you will learn how to rotate Azure Storage account access keys for the primary or secondary storage accounts in Azure HDInsight.
+In this article, you learn how to rotate Azure Storage account access keys for the primary or secondary storage accounts in Azure HDInsight.
>[!CAUTION] > Directly rotating the access key on the storage side will make the HDInsight cluster inaccessible. ## Prerequisites
-* We are going to use an approach to rotate the primary and secondary access keys of the storage account in a staggered, alternating fashion to ensure HDInsight cluster is accessible throughout the process.
+* We're going to use an approach that rotates the primary and secondary access keys of the storage account in a staggered, alternating fashion to ensure the HDInsight cluster is accessible throughout the process.
- Here is an example on how to use primary and secondary storage access keys and set up rotation policies on them:
+ Here's an example of how to use primary and secondary storage access keys and set up rotation policies on them:
1. Use access key1 on the storage account when creating HDInsight cluster.
- 1. Set up rotation policy for access key2 every N days. As part of this rotation update HDInsight to use access key1 and then rotate access key2 on storage account.
- 1. Set up rotation policy for access key1 every N/2 days. As part of this rotation update HDInsight to use access key2 and then rotate access key1 on storage account.
- 1. With above approach access key1 will be rotated N/2, 3N/2 etc. days and access key2 will be rotated N, 2N, 3N etc. days.
+    1. Set up a rotation policy for access key2 every N days. As part of this rotation, update HDInsight to use access key1 and then rotate access key2 on the storage account.
+    1. Set up a rotation policy for access key1 every N/2 days. As part of this rotation, update HDInsight to use access key2 and then rotate access key1 on the storage account.
+    1. With this approach, access key1 is rotated on days N/2, 3N/2, and so on, and access key2 is rotated on days N, 2N, 3N, and so on (the sketch after this list illustrates the schedule).
-* To set up periodic rotation of storage account keys see [Automate the rotation of a secret](../key-vault/secrets/tutorial-rotation-dual.md).
+* To set up periodic rotation of storage account keys, see [Automate the rotation of a secret](../key-vault/secrets/tutorial-rotation-dual.md).
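To make the staggered schedule concrete, here's a small sketch (N is only an example value; substitute the rotation period your policy requires):

```python
N = 90  # example rotation period in days

# Access key2 is rotated every N days: N, 2N, 3N, ...
key2_rotation_days = [k * N for k in range(1, 4)]

# Access key1 is rotated offset by half a period: N/2, 3N/2, 5N/2, ...
key1_rotation_days = [N // 2 + k * N for k in range(3)]

print(key1_rotation_days)  # [45, 135, 225]
print(key2_rotation_days)  # [90, 180, 270]
```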
## Update storage account access keys
Use [Script Action](hdinsight-hadoop-customize-cluster-linux.md#script-action-to
## Known issues
-The preceding script directly updates the access key on the cluster side only and does not renew a copy on the HDInsight Resource provider side. Therefore, the script action hosted in the storage account will fail after the access key is rotated.
+The preceding script directly updates the access key on the cluster side only and doesn't renew a copy on the HDInsight Resource provider side. Therefore, the script action hosted in the storage account will fail after the access key is rotated.
Workaround: Use [SAS URIs](hdinsight-storage-sharedaccesssignature-permissions.md) for script actions or make the scripts publicly accessible. ## Next steps
-* [Add additional storage accounts](hdinsight-hadoop-add-storage.md)
+* [Add more storage accounts](hdinsight-hadoop-add-storage.md)
hdinsight Hdinsight Troubleshoot Hdfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-troubleshoot-hdfs.md
Title: Troubleshoot HDFS in Azure HDInsight
description: Get answers to common questions about working with HDFS and Azure HDInsight. Previously updated : 04/27/2020 Last updated : 04/26/2023
hdfs dfs -rm hdfs://mycluster/tmp/testfile
## Next steps
hdinsight Interactive Query Troubleshoot Migrate 36 To 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/interactive-query-troubleshoot-migrate-36-to-40.md
Title: Troubleshoot migration of Hive from 3.6 to 4.0 - Azure HDInsight
description: Troubleshooting guide for migration of Hive workloads from HDInsight 3.6 to 4.0 Previously updated : 07/12/2021 Last updated : 04/24/2023 # Troubleshooting guide for migration of Hive workloads from HDInsight 3.6 to HDInsight 4.0
hdinsight Interactive Query Tutorial Analyze Flight Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/interactive-query-tutorial-analyze-flight-data.md
description: Tutorial - Learn how to extract data from a raw CSV dataset. Transf
Previously updated : 08/28/2022 Last updated : 04/26/2023 #Customer intent: As a data analyst, I need to load some data using Interactive Query, transform, and then export it to an Azure SQL database
This tutorial covers the following tasks:
## Download the flight data
-1. Browse to [Research and Innovative Technology Administration, Bureau of Transportation Statistics](https://www.transtats.bts.gov/DL_SelectFields.asp?gnoyr_VQ=FGJ).
+1. Browse to [Research and Innovative Technology Administration, Bureau of Transportation Statistics](https://www.transtats.bts.gov/Homepage.asp).
2. On the page, clear all fields, and then select the following values:
hdinsight Share Hive Metastore With Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/share-hive-metastore-with-synapse.md
description: Learn how to share existing Azure HDInsight external Hive Metastore
keywords: external Hive metastore,share,Synapse Previously updated : 09/09/2021 Last updated : 04/26/2023 # Share Hive Metastore with Synapse Spark Pool (Preview)
The feature works with both Spark 2.4 and Spark 3.0. The following table shows t
> [!NOTE] > You can use the existing external Hive metastore from HDInsight clusters, both 3.6 and 4.0 clusters. See [use external metadata stores in Azure HDInsight](./hdinsight-use-external-metadata-stores.md).
-Follow below steps to set up a linked service to the external Hive metastore and underlying catalog storage in Synapse workspace, and configure Spark pool to use the linked external Hive metastore.
+Follow these steps to set up a linked service to the external Hive metastore and underlying catalog storage in the Synapse workspace, and configure the Spark pool to use the linked external Hive metastore.
## Set up Hive metastore linked service > [!NOTE] > Only Azure SQL Database is supported as an external Hive metastore.
-Follow below steps to set up a linked service to the external Hive metastore in Synapse workspace.
+Follow these steps to set up a linked service to the external Hive metastore in the Synapse workspace.
1. Open Synapse Studio, go to **Manage > Linked services** at left, click **New** to create a new linked service. :::image type="content" source="./media/share-hive-metastore-with-synapse/set-up-hive-metastore-linked-service.png" alt-text="Set up Hive Metastore linked service" border="true"::: 2. Choose **Azure SQL Database**, click **Continue**.
-3. Provide **Name** of the linked service. Record the name of the linked service, this info will be used to configure Spark shortly.
+3. Provide the **Name** of the linked service. Record the name of the linked service; this info is used to configure Spark shortly.
4. You can either select the Azure SQL Database for the external Hive metastore from Azure subscription list, or enter the info manually.
Follow below steps to set up a linked service to the external Hive metastore in
7. Click **Create** to create the linked service. ### Test connection and get the metastore version in notebook
-Some network security rule settings may block access from Spark pool to the external Hive metastore DB. Before you configure the Spark pool, run below code in any Spark pool notebook to test connection to the external Hive metastore DB.
+Some network security rule settings may block access from the Spark pool to the external Hive metastore DB. Before you configure the Spark pool, run the following code in any Spark pool notebook to test the connection to the external Hive metastore DB.
-You can also get your Hive metastore version from the output results. The Hive metastore version will be used in the Spark configuration.
+You can also get your Hive metastore version from the output results. The Hive metastore version is used in the Spark configuration.
``` %%spark
try {
``` ## Configure Spark to use the external Hive metastore
-After creating the linked service to the external Hive metastore successfully, you need to setup a few configurations in the Spark to use the external Hive metastore. You can both set up the configuration at Spark pool level, or at Spark session level.
+After creating the linked service to the external Hive metastore successfully, you need to set up a few configurations in Spark to use the external Hive metastore. You can set up the configuration at either the Spark pool level or the Spark session level.
Here are the configurations and descriptions:
spark.hadoop.hive.synapse.externalmetastore.linkedservice.name <your linked serv
spark.sql.hive.metastore.jars /opt/hive-metastore/lib-<your hms version, 2 parts>/*:/usr/hdp/current/hadoop-client/lib/* ```
-Here is an example for metastore version 2.1 with linked service named as HiveCatalog21:
+Here's an example for metastore version 2.1 with a linked service named HiveCatalog21:
``` spark.sql.hive.metastore.version 2.1
spark.sql.hive.metastore.jars /opt/hive-metastore/lib-2.1/*:/usr/hdp/current/had
``` ### Configure a Spark session
-If you donΓÇÖt want to configure your Spark pool, you can also configure the Spark session in notebook using %%configure magic command. Here is the code. Same configuration can also be applied to a Spark batch job.
+If you don't want to configure your Spark pool, you can also configure the Spark session in a notebook using the %%configure magic command. Here's the code. The same configuration can also be applied to a Spark batch job.
``` %%configure -f
The linked service to Hive metastore database just provides access to Hive catal
### Set up connection to ADLS Gen 2 #### Workspace primary storage account
-If the underlying data of your Hive tables is stored in the workspace primary storage account, you donΓÇÖt need to do extra settings. It will just work as long as you followed storage setting up instructions during workspace creation.
+If the underlying data of your Hive tables is stored in the workspace primary storage account, you don't need to do extra settings. It works as long as you followed the storage setup instructions during workspace creation.
#### Other ADLS Gen 2 account If the underlying data of your Hive catalogs is stored in another ADLS Gen 2 account, you need to make sure the users who run Spark queries have **Storage Blob Data Contributor** role on the ADLS Gen2 storage account.
After setting up storage connections, you can query the existing tables in the H
## Known limitations -- Synapse Studio object explorer will continue to show objects in managed Synapse metastore instead of the external HMS, we are improving the experience of this.
+- Synapse Studio object explorer continues to show objects in the managed Synapse metastore instead of the external HMS. We're improving this experience.
- [SQL <-> spark synchronization](../synapse-analytics/sql/develop-storage-files-spark-tables.md) doesn't work when using external HMS. - Only Azure SQL Database is supported as external Hive Metastore database. Only SQL authorization is supported. - Currently Spark only works with external Hive tables and non-transactional/non-ACID managed Hive tables. It doesn't support Hive ACID/transactional tables currently.-- Apache Ranger integration is not supported as of now.
+- Apache Ranger integration isn't supported as of now.
## Troubleshooting ### See below error when querying a Hive table with data stored in Blob Storage ```
-Py4JJavaError : An error occurred while calling o241.load. : org.apache.hadoop.fs.azure.AzureException: org.apache.hadoop.fs.azure.AzureException: No credentials found for account demohdicatalohdistorage.blob.core.windows.net in the configuration, and its container demohdicatalog-2021-07-15t23-42-51-077z is not accessible using anonymous credentials. Please check if the container exists first. If it is not publicly available, you have to provide account credentials.
+Py4JJavaError : An error occurred while calling o241.load. : org.apache.hadoop.fs.azure.AzureException: org.apache.hadoop.fs.azure.AzureException: No credentials found for account demohdicatalohdistorage.blob.core.windows.net in the configuration, and its container demohdicatalog-2021-07-15t23-42-51-077z isn't accessible using anonymous credentials. Please check if the container exists first. If it isn't publicly available, you have to provide account credentials.
``` When use key authentication to your storage account via linked service, you need to take an extra step to get the token for Spark session. Run below code to configure your Spark session before running the query. Learn more about why you need to do this here.
spark.conf.set('fs.azure.sas.%s.%s.blob.core.windows.net' % (blob_container_name
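# For illustration, a minimal completed sketch of this configuration call.
# Placeholder values; how you obtain the SAS token depends on your setup,
# and `spark` is the SparkSession that Synapse notebooks provide by default.
blob_account_name = "<storage-account-name>"
blob_container_name = "<container-name>"
blob_sas_token = "<sas-token-for-the-container>"

spark.conf.set(
    "fs.azure.sas.%s.%s.blob.core.windows.net" % (blob_container_name, blob_account_name),
    blob_sas_token,
)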
### See below error when query a table stored in ADLS Gen2 account ```
-Py4JJavaError : An error occurred while calling o305.load. : Operation failed: "This request is not authorized to perform this operation using this permission.", 403, HEAD
+Py4JJavaError : An error occurred while calling o305.load. : Operation failed: "This request isn't authorized to perform this operation using this permission.", 403, HEAD
This could happen because the user who runs the Spark query doesn't have enough access to the underlying storage account. Make sure the users who run Spark queries have the **Storage Blob Data Contributor** role on the ADLS Gen2 storage account. This step can be done later after creating the linked service.

### HMS schema related settings
-To avoid changing HMS backend schema/version, following hive configs are set by system by default:
+To avoid changing the HMS backend schema/version, the following Hive configs are set by the system by default:
``` spark.hadoop.hive.metastore.schema.verification true spark.hadoop.hive.metastore.schema.verification.record.version false
spark.hadoop.hive.synapse.externalmetastore.schema.usedefault false
If you need to migrate your HMS version, we recommend using [hive schema tool](https://cwiki.apache.org/confluence/display/Hive/Hive+Schema+Tool). And if the HMS has been used by HDInsight clusters, we suggest using [HDI provided version](./interactive-query/apache-hive-migrate-workloads.md).
-### When sharing the metastore with HDInsight 4.0 Spark clusters, I cannot see the tables
-If you want to share the Hive catalog with a spark cluster in HDInsight 4.0, please ensure your property `spark.hadoop.metastore.catalog.default` in Synapse spark aligns with the value in HDInsight spark. The default value is `Spark`.
+### When sharing the metastore with HDInsight 4.0 Spark clusters, I can't see the tables
+If you want to share the Hive catalog with a Spark cluster in HDInsight 4.0, ensure your property `spark.hadoop.metastore.catalog.default` in Synapse Spark aligns with the value in HDInsight Spark. The default value is `Spark`.
### When sharing the Hive metastore with HDInsight 4.0 Hive clusters, I can list the tables successfully, but only get empty result when I query the table As mentioned in the limitations, Synapse Spark pool only supports external hive tables and non-transactional/ACID managed tables, it doesnΓÇÖt support Hive ACID/transactional tables currently. By default in HDInsight 4.0 Hive clusters, all managed tables are created as ACID/transactional tables by default, thatΓÇÖs why you get empty results when querying those tables.
hdinsight Troubleshoot Sqoop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/troubleshoot-sqoop.md
Title: Sqoop import/export command fails for some users in ESP clusters - Azure
description: 'Apache Sqoop import/export command fails with "Import Failed: java.io.IOException: The ownership on the staging directory /user/yourusername/.staging is not as expected" error for some users in Azure HDInsight ESP cluster' Previously updated : 04/01/2021 Last updated : 04/26/2023 # Scenario: Sqoop import/export command fails for usernames greater than 20 characters in Azure HDInsight ESP clusters
This article describes a known issue and workaround when using Azure HDInsight E
## Issue
-When running sqoop import/export command, it fails with the error below for some users:
+When you run the Sqoop import/export command, it fails with the following error for some users:
``` ERROR tool.ImportTool: Import failed: java.io.IOException:
-The ownership on the staging directory /user/yourlongdomainuserna/.staging is not as expected.
+The ownership on the staging directory /user/yourlongdomainuserna/.staging isn't as expected.
It is owned by yourlongdomainusername. The directory must be owned by the submitter yourlongdomainuserna or yourlongdomainuserna@AADDS.CONTOSO.COM ```
-In the example above, `/user/yourlongdomainuserna/.staging` displays the truncated 20 character username for the username `yourlongdomainusername`.
+In the example, `/user/yourlongdomainuserna/.staging` displays the truncated 20-character username for the username `yourlongdomainusername`.
## Cause
healthcare-apis Api Versioning Dicom Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/api-versioning-dicom-service.md
This reference guide provides you with an overview of the API version policies for the DICOM service.
-All versions of the DICOM APIs will always conform to the DICOMweb™ Standard specifications, but versions may expose different APIs based on the [DICOM Conformance Statement](dicom-services-conformance-statement.md).
- ## Specifying version of REST API in requests The version of the REST API must be explicitly specified in the request URL as in the following example:
The version of the REST API must be explicitly specified in the request URL as i
`<service_url>/v<version>/studies` > [!NOTE]
-> Routes without a version are no longer supported.
+> Routes without a version are not supported.
## Supported versions
Currently the supported versions are:
* v1.0-prerelease
* v1
+* v2
The OpenAPI Doc for the supported versions can be found at the following URL: `<service_url>/v<version>/api.yaml`
+## DICOM Conformance Statement
+All versions of the DICOM APIs will always conform to the DICOMweb™ Standard specifications, but different versions may expose different APIs. See the specific version of the conformance statement for details:
+
+* [DICOM Conformance Statement v1](dicom-services-conformance-statement.md)
+* [DICOM Conformance Statement v2](dicom-services-conformance-statement-v2.md)
++ ## Prerelease versions An API version with the label "prerelease" indicates that the version isn't ready for production, and it should only be used in testing environments. These endpoints may experience breaking changes without notice.
We currently only increment the major version whenever there's a breaking change
Below are some examples of breaking changes (Major version is incremented):
-1. Renaming or removing endpoints.
-2. Removing parameters or adding mandatory parameters.
-3. Changing status code.
-4. Deleting a property in a response, or altering a response type at all, but it's okay to add properties to the response.
-5. Changing the type of a property.
-6. Behavior when an API changes such as changes in business logic used to do foo, but it now does bar.
+* Renaming or removing endpoints.
+* Removing parameters or adding mandatory parameters.
+* Changing status code.
+* Deleting a property in a response, or altering a response type at all, but it's okay to add properties to the response.
+* Changing the type of a property.
+* Behavior changes in an API, such as business logic that used to do foo but now does bar.
Non-breaking changes (Version isn't incremented):
-1. Addition of properties that are nullable or have a default value.
-2. Addition of properties to a response model.
-3. Changing the order of properties.
+* Addition of properties that are nullable or have a default value.
+* Addition of properties to a response model.
+* Changing the order of properties.
## Header in response
healthcare-apis Dicom Services Conformance Statement V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-conformance-statement-v2.md
+
+ Title: DICOM Conformance Statement version 2 for Azure Health Data Services
+description: This document provides details about the DICOM Conformance Statement v2 for Azure Health Data Services.
+++++ Last updated : 4/20/2023+++
+# DICOM Conformance Statement v2
+
+> [!NOTE]
+> API version 2 is in **Preview** and should be used only for testing.
+
+The Medical Imaging Server for DICOM supports a subset of the DICOMweb™ Standard. Support includes:
+
+* [Studies Service](#studies-service)
+ * [Store (STOW-RS)](#store-stow-rs)
+ * [Retrieve (WADO-RS)](#retrieve-wado-rs)
+ * [Search (QIDO-RS)](#search-qido-rs)
+ * [Delete](#delete)
+* [Worklist Service (UPS Push and Pull SOPs)](#worklist-service-ups-rs)
+ * [Create Workitem](#create-workitem)
+ * [Retrieve Workitem](#retrieve-workitem)
+ * [Update Workitem](#update-workitem)
+ * [Change Workitem State](#change-workitem-state)
+ * [Request Cancellation](#request-cancellation)
+ * [Search Workitems](#search-workitems)
+
+Additionally, the following nonstandard API(s) are supported:
+
+* [Change Feed](dicom-change-feed-overview.md)
+* [Extended Query Tags](dicom-extended-query-tags-overview.md)
+
+The service uses REST API versioning. The version of the REST API must be explicitly specified as part of the base URL, as in the following example:
+
+`https://<service_url>/v<version>/studies`
+
+This version of the conformance statement corresponds to the `v2` version of the REST APIs.
+
+For more information on how to specify the version when making requests, see the [API Versioning Documentation](api-versioning-dicom-service.md).
+
+You can find example requests for supported transactions in the [Postman collection](https://github.com/microsoft/dicom-server/blob/main/docs/resources/Conformance-as-Postman.postman_collection.json).
+
+## Preamble Sanitization
+
+The service ignores the 128-byte File Preamble, and replaces its contents with null characters. This behavior ensures that no files passed through the service are vulnerable to the [malicious preamble vulnerability](https://dicom.nema.org/medical/dicom/current/output/chtml/part10/sect_7.5.html). However, this preamble sanitization also means that [preambles used to encode dual format content](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6489422/) such as TIFF can't be used with the service.
+
+## Studies Service
+
+The [Studies Service](https://dicom.nema.org/medical/dicom/current/output/html/part18.html#chapter_10) allows users to store, retrieve, and search for DICOM Studies, Series, and Instances. We've added the nonstandard Delete transaction to enable a full resource lifecycle.
+
+### Store (STOW-RS)
+
+This transaction uses the POST method to store representations of studies, series, and instances contained in the request payload.
+
+| Method | Path | Description |
+| :-- | :-- | :- |
+| POST | ../studies | Store instances. |
+| POST | ../studies/{study} | Store instances for a specific study. |
+
+Parameter `study` corresponds to the DICOM attribute StudyInstanceUID. If it's specified, any instance that doesn't belong to the provided study is rejected with a `43265` warning code.
+
+The following `Accept` header(s) for the response are supported:
+
+* `application/dicom+json`
+
+The following `Content-Type` header(s) are supported:
+
+* `multipart/related; type="application/dicom"`
+* `application/dicom`
+
+> [!NOTE]
+> The Server **will not** coerce or replace attributes that conflict with existing data. All data will be stored as provided.
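+
+The following sketch shows one way to issue a Store request over HTTP. It isn't part of the conformance statement; the service URL, token acquisition, and file name are placeholder assumptions, and the Python `requests` library is used purely for illustration:
+
+```python
+import json
+import requests
+
+base_url = "https://<your-dicom-service-url>/v2"   # placeholder; use your service's URL
+token = "<access-token>"                           # Azure AD token acquisition not shown
+
+with open("instance.dcm", "rb") as f:              # placeholder DICOM Part 10 file
+    dicom_bytes = f.read()
+
+headers = {
+    "Accept": "application/dicom+json",
+    "Content-Type": "application/dicom",           # single-part store
+    "Authorization": f"Bearer {token}",
+}
+
+# POST ../studies stores the instance; the status codes map to the tables below.
+response = requests.post(f"{base_url}/studies", data=dicom_bytes, headers=headers)
+print(response.status_code)
+print(json.dumps(response.json(), indent=2))       # DICOM JSON dataset with Failed/Referenced SOP sequences
+```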
+
+#### Store required attributes
+The following DICOM elements are required to be present in every DICOM file attempting to be stored:
+
+* `StudyInstanceUID`
+* `SeriesInstanceUID`
+* `SOPInstanceUID`
+* `SOPClassUID`
+* `PatientID`
+
+> [!NOTE]
+> All identifiers must be between 1 and 64 characters long, and only contain alphanumeric characters or the following special characters: `.`, `-`.
+
+Each file stored must have a unique combination of `StudyInstanceUID`, `SeriesInstanceUID`, and `SopInstanceUID`. The warning code `45070` is returned if a file with the same identifiers already exists.
+
+Only transfer syntaxes with explicit Value Representations are accepted.
+
+> [!NOTE]
+> Requests are limited to 2GB. No single DICOM file or combination of files may exceed this limit.
+
+#### Store changes from v1
+In previous versions, a Store request would fail if any of the [required](#store-required-attributes) or [searchable attributes](#searchable-attributes) failed validation. Beginning with V2, the request fails only if **required attributes** fail validation.
+
+Failed validation of attributes not required by the API results in the file being stored with a warning. A warning is given about each failing attribute per instance.
+When a sequence contains an attribute that fails validation, or when there are multiple issues with a single attribute, only the first failing attribute reason is noted.
+
+#### Store response status codes
+
+| Code | Description |
+| : |:|
+| `200 (OK)` | All the SOP instances in the request have been stored. |
+| `202 (Accepted)` | The origin server stored some of the Instances and others have failed or returned warnings. Additional information regarding this error may be found in the response message body. |
+| `204 (No Content)` | No content was provided in the store transaction request. |
+| `400 (Bad Request)` | The request was badly formatted. For example, the provided study instance identifier didn't conform to the expected UID format. |
+| `401 (Unauthorized)` | The client isn't authenticated. |
+| `406 (Not Acceptable)` | The specified `Accept` header isn't supported. |
+| `409 (Conflict)` | None of the instances in the store transaction request have been stored. |
+| `415 (Unsupported Media Type)` | The provided `Content-Type` isn't supported. |
+| `503 (Service Unavailable)` | The service is unavailable or busy. Try again later. |
+
+### Store response payload
+
+The response payload populates a DICOM dataset with the following elements:
+
+| Tag | Name | Description |
+| :-- | :-- | :- |
+| (0008, 1190) | `RetrieveURL` | The Retrieve URL of the study if the StudyInstanceUID was provided in the store request and at least one instance is successfully stored. |
+| (0008, 1198) | `FailedSOPSequence` | The sequence of instances that failed to store. |
+| (0008, 1199) | `ReferencedSOPSequence` | The sequence of stored instances. |
+
+Each dataset in the `FailedSOPSequence` has the following elements (if the DICOM file attempting to be stored could be read):
+
+| Tag | Name | Description |
+|: |: |:--|
+| (0008, 1150) | `ReferencedSOPClassUID` | The SOP class unique identifier of the instance that failed to store. |
+| (0008, 1155) | `ReferencedSOPInstanceUID` | The SOP instance unique identifier of the instance that failed to store. |
+| (0008, 1197) | `FailureReason` | The reason code why this instance failed to store. |
+| (0008, 1196) | `WarningReason` | A `WarningReason` indicates validation issues that were detected but weren't severe enough to fail the store operation. |
+| (0074, 1048) | `FailedAttributesSequence` | The sequence of `ErrorComment` that includes the reason for each failed attribute. |
+
+Each dataset in the `ReferencedSOPSequence` has the following elements:
+
+| Tag | Name | Description |
+| :-- | :-- | :- |
+| (0008, 1150) | `ReferencedSOPClassUID` | The SOP class unique identifier of the instance that was stored. |
+| (0008, 1155) | `ReferencedSOPInstanceUID` | The SOP instance unique identifier of the instance that was stored. |
+| (0008, 1190) | `RetrieveURL` | The retrieve URL of this instance on the DICOM server. |
+
+An example response with `Accept` header `application/dicom+json` without a FailedAttributesSequence in a ReferencedSOPSequence:
+
+```json
+{
+ "00081190":
+ {
+ "vr":"UR",
+ "Value":["http://localhost/studies/d09e8215-e1e1-4c7a-8496-b4f6641ed232"]
+ },
+ "00081198":
+ {
+ "vr":"SQ",
+ "Value":
+ [{
+ "00081150":
+ {
+ "vr":"UI","Value":["cd70f89a-05bc-4dab-b6b8-1f3d2fcafeec"]
+ },
+ "00081155":
+ {
+ "vr":"UI",
+ "Value":["22c35d16-11ce-43fa-8f86-90ceed6cf4e7"]
+ },
+ "00081197":
+ {
+ "vr":"US",
+ "Value":[43265]
+ }
+ }]
+ },
+ "00081199":
+ {
+ "vr":"SQ",
+ "Value":
+ [{
+ "00081150":
+ {
+ "vr":"UI",
+ "Value":["d246deb5-18c8-4336-a591-aeb6f8596664"]
+ },
+ "00081155":
+ {
+ "vr":"UI",
+ "Value":["4a858cbb-a71f-4c01-b9b5-85f88b031365"]
+ },
+ "00081190":
+ {
+ "vr":"UR",
+ "Value":["http://localhost/studies/d09e8215-e1e1-4c7a-8496-b4f6641ed232/series/8c4915f5-cc54-4e50-aa1f-9b06f6e58485/instances/4a858cbb-a71f-4c01-b9b5-85f88b031365"]
+ }
+ }]
+ }
+}
+```
+
+An example response with `Accept` header `application/dicom+json` with a FailedAttributesSequence in a ReferencedSOPSequence:
+
+```json
+{
+ "00081190":
+ {
+ "vr":"UR",
+ "Value":["http://localhost/studies/d09e8215-e1e1-4c7a-8496-b4f6641ed232"]
+ },
+ "00081199":
+ {
+ "vr":"SQ",
+ "Value":
+ [{
+ "00081150":
+ {
+ "vr":"UI",
+ "Value":["d246deb5-18c8-4336-a591-aeb6f8596664"]
+ },
+ "00081155":
+ {
+ "vr":"UI",
+ "Value":["4a858cbb-a71f-4c01-b9b5-85f88b031365"]
+ },
+ "00081190":
+ {
+ "vr":"UR",
+ "Value":["http://localhost/studies/d09e8215-e1e1-4c7a-8496-b4f6641ed232/series/8c4915f5-cc54-4e50-aa1f-9b06f6e58485/instances/4a858cbb-a71f-4c01-b9b5-85f88b031365"]
+ },
+ "00081196": {
+ "vr": "US",
+ "Value": [
+ 1
+ ]
+ },
+ "00741048": {
+ "vr": "SQ",
+ "Value": [
+ {
+ "00000902": {
+ "vr": "LO",
+ "Value": [
+ "DICOM100: (0008,0020) - Content \"NotAValidDate\" does not validate VR DA: one of the date values does not match the pattern YYYYMMDD"
+ ]
+ }
+ },
+ {
+ "00000902": {
+ "vr": "LO",
+ "Value": [
+ "DICOM100: (0008,002a) - Content \"NotAValidDate\" does not validate VR DT: value does not mach pattern YYYY[MM[DD[HH[MM[SS[.F{1-6}]]]]]]"
+ ]
+ }
+ }
+ ]
+ }
+ }]
+ }
+}
+```
+
+#### Store failure reason codes
+
+| Code | Description |
+| :- | :- |
+| `272` | The store transaction didn't store the instance because of a general failure in processing the operation. |
+| `43264` | The DICOM instance failed the validation. |
+| `43265` | The provided instance `StudyInstanceUID` didn't match the specified `StudyInstanceUID` in the store request. |
+| `45070` | A DICOM instance with the same `StudyInstanceUID`, `SeriesInstanceUID`, and `SopInstanceUID` has already been stored. If you wish to update the contents, delete this instance first. |
+| `45071` | A DICOM instance is being created by another process, or the previous attempt to create has failed and the cleanup process hasn't had a chance to clean up yet. Delete the instance first before attempting to create again. |
+
+#### Store warning reason codes
+
+| Code | Description |
+|:|:-|
+| `45063` | A DICOM instance Data Set doesn't match SOP Class. The Studies Store Transaction (Section 10.5) observed that the Data Set didn't match the constraints of the SOP Class during storage of the instance. |
+| `1` | The Studies Store Transaction (Section 10.5) observed that the Data Set has validation warnings. |
+
+#### Store Error Codes
+
+| Code | Description |
+| :- | :- |
+| `100` | The provided instance attributes didn't meet the validation criteria. |
+
+### Retrieve (WADO-RS)
+
+This Retrieve Transaction offers support for retrieving stored studies, series, instances and frames by reference.
+
+| Method | Path | Description |
+| :-- | :- | :- |
+| GET | ../studies/{study} | Retrieves all instances within a study. |
+| GET | ../studies/{study}/metadata | Retrieves the metadata for all instances within a study. |
+| GET | ../studies/{study}/series/{series} | Retrieves all instances within a series. |
+| GET | ../studies/{study}/series/{series}/metadata | Retrieves the metadata for all instances within a series. |
+| GET | ../studies/{study}/series/{series}/instances/{instance} | Retrieves a single instance. |
+| GET | ../studies/{study}/series/{series}/instances/{instance}/metadata | Retrieves the metadata for a single instance. |
+| GET | ../studies/{study}/series/{series}/instances/{instance}/frames/{frames} | Retrieves one or many frames from a single instance. To specify more than one frame, separate each frame to return with a comma. For example, /studies/1/series/2/instances/3/frames/4,5,6 |
+
+#### Retrieve instances within study or series
+
+The following `Accept` header(s) are supported for retrieving instances within a study or a series:
++
+* `multipart/related; type="application/dicom"; transfer-syntax=*`
+* `multipart/related; type="application/dicom";` (when transfer-syntax isn't specified, 1.2.840.10008.1.2.1 is used as default)
+* `multipart/related; type="application/dicom"; transfer-syntax=1.2.840.10008.1.2.1`
+* `multipart/related; type="application/dicom"; transfer-syntax=1.2.840.10008.1.2.4.90`
+
+#### Retrieve an Instance
+
+The following `Accept` header(s) are supported for retrieving a specific instance:
+
+* `application/dicom; transfer-syntax=*`
+* `multipart/related; type="application/dicom"; transfer-syntax=*`
+* `application/dicom;` (when transfer-syntax isn't specified, `1.2.840.10008.1.2.1` is used as default)
+* `multipart/related; type="application/dicom"` (when transfer-syntax isn't specified, `1.2.840.10008.1.2.1` is used as default)
+* `application/dicom; transfer-syntax=1.2.840.10008.1.2.1`
+* `multipart/related; type="application/dicom"; transfer-syntax=1.2.840.10008.1.2.1`
+* `application/dicom; transfer-syntax=1.2.840.10008.1.2.4.90`
+* `multipart/related; type="application/dicom"; transfer-syntax=1.2.840.10008.1.2.4.90`
+
+#### Retrieve Frames
+
+The following `Accept` headers are supported for retrieving frames:
+* `multipart/related; type="application/octet-stream"; transfer-syntax=*`
+* `multipart/related; type="application/octet-stream";` (when transfer-syntax isn't specified, `1.2.840.10008.1.2.1` is used as default)
+* `multipart/related; type="application/octet-stream"; transfer-syntax=1.2.840.10008.1.2.1`
+* `multipart/related; type="image/jp2";` (when transfer-syntax isn't specified, `1.2.840.10008.1.2.4.90` is used as default)
+* `multipart/related; type="image/jp2";transfer-syntax=1.2.840.10008.1.2.4.90`
+* `application/octet-stream; transfer-syntax=*` for single frame retrieval
+
+#### Retrieve transfer syntax
+
+When the requested transfer syntax differs from that of the original file, the original file is transcoded to the requested transfer syntax. For transcoding to succeed, the original file needs to use one of the following formats; otherwise, transcoding may fail:
+* 1.2.840.10008.1.2 (Little Endian Implicit)
+* 1.2.840.10008.1.2.1 (Little Endian Explicit)
+* 1.2.840.10008.1.2.2 (Explicit VR Big Endian)
+* 1.2.840.10008.1.2.4.50 (JPEG Baseline Process 1)
+* 1.2.840.10008.1.2.4.57 (JPEG Lossless)
+* 1.2.840.10008.1.2.4.70 (JPEG Lossless Selection Value 1)
+* 1.2.840.10008.1.2.4.90 (JPEG 2000 Lossless Only)
+* 1.2.840.10008.1.2.4.91 (JPEG 2000)
+* 1.2.840.10008.1.2.5 (RLE Lossless)
+
+An unsupported `transfer-syntax` results in `406 Not Acceptable`.
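+
+As an illustration only (placeholder service URL and UIDs; token acquisition not shown), a single instance can be retrieved with one of the `Accept` headers listed above, for example requesting transcoding to Explicit VR Little Endian:
+
+```python
+import requests
+
+base_url = "https://<your-dicom-service-url>/v2"   # placeholder
+headers = {
+    "Accept": "application/dicom; transfer-syntax=1.2.840.10008.1.2.1",
+    "Authorization": "Bearer <access-token>",       # token acquisition not shown
+}
+
+study, series, instance = "<study-uid>", "<series-uid>", "<sop-instance-uid>"
+resp = requests.get(
+    f"{base_url}/studies/{study}/series/{series}/instances/{instance}",
+    headers=headers,
+)
+if resp.status_code == 200:
+    with open("retrieved.dcm", "wb") as f:
+        f.write(resp.content)                       # the instance, transcoded if needed
+elif resp.status_code == 406:
+    print("Requested transfer syntax not supported")
+```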
+
+### Retrieve metadata (for study, series, or instance)
+
+The following `Accept` header is supported for retrieving metadata for a study, a series, or an instance:
+
+* `application/dicom+json`
+
+Retrieving metadata won't return attributes with the following value representations:
+
+| VR Name | Description |
+| : | : |
+| OB | Other Byte |
+| OD | Other Double |
+| OF | Other Float |
+| OL | Other Long |
+| OV | Other 64-Bit Very Long |
+| OW | Other Word |
+| UN | Unknown |
+
+### Retrieve metadata cache validation for (study, series, or instance)
+
+Cache validation is supported using the `ETag` mechanism. In the response to a metadata request, ETag is returned as one of the headers. This ETag can be cached and added as `If-None-Match` header in the later requests for the same metadata. Two types of responses are possible if the data exists:
+
+* Data hasn't changed since the last request: `HTTP 304 (Not Modified)` response is sent with no response body.
+* Data has changed since the last request: `HTTP 200 (OK)` response is sent with updated ETag. Required data is returned as part of the body.
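+
+A short sketch of this flow (placeholder service URL and study UID; token acquisition not shown):
+
+```python
+import requests
+
+base_url = "https://<your-dicom-service-url>/v2"    # placeholder
+auth = {"Authorization": "Bearer <access-token>"}
+meta_url = f"{base_url}/studies/<study-uid>/metadata"
+
+first = requests.get(meta_url, headers={"Accept": "application/dicom+json", **auth})
+etag = first.headers.get("ETag")
+cached = first.json()
+
+# Later request: send the cached ETag; 304 means the cached copy is still valid.
+second = requests.get(
+    meta_url,
+    headers={"Accept": "application/dicom+json", "If-None-Match": etag, **auth},
+)
+metadata = cached if second.status_code == 304 else second.json()
+```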
+
+### Retrieve response status codes
+
+| Code | Description |
+| : | :- |
+| `200 (OK)` | All requested data has been retrieved. |
+| `304 (Not Modified)` | The requested data hasn't been modified since the last request. Content isn't added to the response body in this case. For more information, see the above section **Retrieve Metadata Cache Validation (for Study, Series, or Instance)**. |
+| `400 (Bad Request)` | The request was badly formatted. For example, the provided study instance identifier didn't conform to the expected UID format, or the requested transfer-syntax encoding isn't supported. |
+| `401 (Unauthorized)` | The client isn't authenticated. |
+| `403 (Forbidden)` | The user isn't authorized. |
+| `404 (Not Found)` | The specified DICOM resource couldn't be found. |
+| `406 (Not Acceptable)` | The specified `Accept` header isn't supported. |
+| `503 (Service Unavailable)` | The service is unavailable or busy. Try again later. |
+
+### Search (QIDO-RS)
+
+Query based on ID for DICOM Objects (QIDO) enables you to search for studies, series, and instances by attributes.
+
+| Method | Path | Description |
+| :-- | :- | :-- |
+| *Search for Studies* |
+| GET | ../studies?... | Search for studies |
+| *Search for Series* |
+| GET | ../series?... | Search for series |
+| GET |../studies/{study}/series?... | Search for series in a study |
+| *Search for Instances* |
+| GET |../instances?... | Search for instances |
+| GET |../studies/{study}/instances?... | Search for instances in a study |
+| GET |../studies/{study}/series/{series}/instances?... | Search for instances in a series |
+
+The following `Accept` header(s) are supported for searching:
+
+* `application/dicom+json`
+
+### Search changes from v1
+As in the v1 API, if an [extended query tag](dicom-extended-query-tags-overview.md) has any errors because one or more of the existing instances had a tag value that couldn't be indexed, subsequent search queries containing the extended query tag return `erroneous-dicom-attributes`, as detailed in the [documentation](dicom-extended-query-tags-overview.md#tag-query-status). However, tags (also known as attributes) with validation warnings from STOW-RS are **not** included in this header. If a store request results in validation warnings on [searchable tags](#searchable-attributes), subsequent searches containing these tags won't consider any DICOM SOP instance that produced a warning. This behavior may result in incomplete search results.
+To correct an attribute, delete the stored instance and upload the corrected data.
+
+### Supported search parameters
+
+The following parameters for each query are supported:
+
+| Key | Support Value(s) | Allowed Count | Description |
+| : | :- | : | :- |
+| `{attributeID}=` | `{value}` | 0...N | Search for attribute/ value matching in query. |
+| `includefield=` | `{attributeID}`<br/>`all` | 0...N | The additional attributes to return in the response. Both public and private tags are supported.<br/>When `all` is provided, refer to [Search Response](#search-response) for more information about which attributes are returned for each query type.<br/>If a mixture of `{attributeID}` and `all` is provided, the server defaults to using `all`. |
+| `limit=` | `{value}` | 0..1 | Integer value to limit the number of values returned in the response.<br/>Value can be in the range 1 <= x <= 200. Defaults to 100. |
+| `offset=` | `{value}` | 0..1 | Skip `{value}` results.<br/>If an offset is provided larger than the number of search query results, a 204 (no content) response is returned. |
+| `fuzzymatching=` | `true` / `false` | 0..1 | If true, fuzzy matching is applied to the PatientName attribute. It does a prefix word match of any name part inside the PatientName value. For example, if PatientName is "John^Doe", then "joh", "do", "jo do", "Doe" and "John Doe" all match. However, "ohn" doesn't match. |
+
+#### Searchable attributes
+
+We support searching the following attributes and search types.
+
+| Attribute Keyword | All Studies | All Series | All Instances | Study's Series | Study's Instances | Study Series' Instances |
+| :- | :: | :-: | :: | :: | :-: | :: |
+| `StudyInstanceUID` | X | X | X | | | |
+| `PatientName` | X | X | X | | | |
+| `PatientID` | X | X | X | | | |
+| `PatientBirthDate` | X | X | X | | | |
+| `AccessionNumber` | X | X | X | | | |
+| `ReferringPhysicianName` | X | X | X | | | |
+| `StudyDate` | X | X | X | | | |
+| `StudyDescription` | X | X | X | | | |
+| `ModalitiesInStudy` | X | X | X | | | |
+| `SeriesInstanceUID` | | X | X | X | X | |
+| `Modality` | | X | X | X | X | |
+| `PerformedProcedureStepStartDate` | | X | X | X | X | |
+| `ManufacturerModelName` | | X | X | X | X | |
+| `SOPInstanceUID` | | | X | | X | X |
+
+#### Search matching
+
+We support the following matching types.
+
+| Search Type | Supported Attribute | Example |
+| :- | : | : |
+| Range Query | `StudyDate`/`PatientBirthDate` | `{attributeID}={value1}-{value2}`. For date/ time values, we support an inclusive range on the tag. This is mapped to `attributeID >= {value1} AND attributeID <= {value2}`. If `{value1}` isn't specified, all occurrences of dates/times prior to and including `{value2}` are matched. Likewise, if `{value2}` isn't specified, all occurrences of `{value1}` and subsequent dates/times are matched. However, one of these values has to be present. `{attributeID}={value1}-` and `{attributeID}=-{value2}` are valid, however, `{attributeID}=-` is invalid. |
+| Exact Match | All supported attributes | `{attributeID}={value1}` |
+| Fuzzy Match | `PatientName`, `ReferringPhysicianName` | Matches any component of the name that starts with the value. |
+
+#### Attribute ID
+
+Tags can be encoded in several ways for the query parameter. We have partially implemented the standard as defined in [PS3.18 6.7.1.1.1](http://dicom.nema.org/medical/dicom/2019a/output/chtml/part18/sect_6.7.html#sect_6.7.1.1.1). The following encodings for a tag are supported:
+
+| Value | Example |
+| : | : |
+| `{group}{element}` | `0020000D` |
+| `{dicomKeyword}` | `StudyInstanceUID` |
+
+Example query searching for instances:
+
+`../instances?Modality=CT&00280011=512&includefield=00280010&limit=5&offset=0`
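+
+The same query issued over HTTP might look like the following sketch (placeholder service URL; token acquisition not shown; the `requests` library is used for illustration):
+
+```python
+import requests
+
+base_url = "https://<your-dicom-service-url>/v2"    # placeholder
+headers = {
+    "Accept": "application/dicom+json",
+    "Authorization": "Bearer <access-token>",
+}
+
+# CT instances with 512 columns (0028,0011), also returning Rows (0028,0010), first 5 matches.
+params = {"Modality": "CT", "00280011": "512", "includefield": "00280010", "limit": "5", "offset": "0"}
+resp = requests.get(f"{base_url}/instances", params=params, headers=headers)
+
+if resp.status_code == 200:
+    for dataset in resp.json():
+        print(dataset["0020000D"]["Value"][0])      # StudyInstanceUID of each matching instance
+elif resp.status_code == 204:
+    print("No matching instances")
+```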
+
+### Search response
+
+The response is an array of DICOM datasets. Depending on the resource, by *default* the following attributes are returned:
+
+#### Default Study tags
+
+| Tag | Attribute Name |
+| :-- | :- |
+| (0008, 0020) | `StudyDate` |
+| (0008, 0050) | `AccessionNumber` |
+| (0008, 1030) | `StudyDescription` |
+| (0008, 0090) | `ReferringPhysicianName` |
+| (0010, 0010) | `PatientName` |
+| (0010, 0020) | `PatientID` |
+| (0010, 0030) | `PatientBirthDate` |
+| (0020, 000D) | `StudyInstanceUID` |
+
+#### Default Series tags
+
+| Tag | Attribute Name |
+| :-- | :- |
+| (0008, 0060) | `Modality` |
+| (0008, 1090) | `ManufacturerModelName` |
+| (0020, 000E) | `SeriesInstanceUID` |
+| (0040, 0244) | `PerformedProcedureStepStartDate` |
+
+#### Default Instance tags
+
+| Tag | Attribute Name |
+| :-- | :- |
+| (0008, 0018) | `SOPInstanceUID` |
+
+If `includefield=all`, the following attributes are included in addition to the default attributes. Together with the default attributes, this is the full list of attributes supported at each resource level.
+
+#### Additional Study tags
+
+| Tag | Attribute Name |
+| :-- | :- |
+| (0008, 0005) | `SpecificCharacterSet` |
+| (0008, 0030) | `StudyTime` |
+| (0008, 0056) | `InstanceAvailability` |
+| (0008, 0201) | `TimezoneOffsetFromUTC` |
+| (0008, 0063) | `AnatomicRegionsInStudyCodeSequence` |
+| (0008, 1032) | `ProcedureCodeSequence` |
+| (0008, 1060) | `NameOfPhysiciansReadingStudy` |
+| (0008, 1080) | `AdmittingDiagnosesDescription` |
+| (0008, 1110) | `ReferencedStudySequence` |
+| (0010, 1010) | `PatientAge` |
+| (0010, 1020) | `PatientSize` |
+| (0010, 1030) | `PatientWeight` |
+| (0010, 2180) | `Occupation` |
+| (0010, 21B0) | `AdditionalPatientHistory` |
+| (0010, 0040) | `PatientSex` |
+| (0020, 0010) | `StudyID` |
+
+#### Additional Series tags
+
+| Tag | Attribute Name |
+| :-- | :- |
+| (0008, 0005) | SpecificCharacterSet |
+| (0008, 0201) | TimezoneOffsetFromUTC |
+| (0020, 0011) | SeriesNumber |
+| (0020, 0060) | Laterality |
+| (0008, 0021) | SeriesDate |
+| (0008, 0031) | SeriesTime |
+| (0008, 103E) | SeriesDescription |
+| (0040, 0245) | PerformedProcedureStepStartTime |
+| (0040, 0275) | RequestAttributesSequence |
+
+#### Additional Instance tags
+
+| Tag | Attribute Name |
+| :-- | :- |
+| (0008, 0005) | SpecificCharacterSet |
+| (0008, 0016) | SOPClassUID |
+| (0008, 0056) | InstanceAvailability |
+| (0008, 0201) | TimezoneOffsetFromUTC |
+| (0020, 0013) | InstanceNumber |
+| (0028, 0010) | Rows |
+| (0028, 0011) | Columns |
+| (0028, 0100) | BitsAllocated |
+| (0028, 0008) | NumberOfFrames |
+
+The following attributes are returned:
+
+* All the match query parameters and UIDs in the resource url.
+* `IncludeField` attributes supported at that resource level.
+* If the target resource is `All Series`, then `Study` level attributes are also returned.
+* If the target resource is `All Instances`, then `Study` and `Series` level attributes are also returned.
+* If the target resource is `Study's Instances`, then `Series` level attributes are also returned.
+* `NumberOfStudyRelatedInstances` aggregated attribute is supported in `Study` level `includeField`.
+* `NumberOfSeriesRelatedInstances` aggregated attribute is supported in `Series` level `includeField`.
+
+### Search response codes
+
+The query API returns one of the following status codes in the response:
+
+| Code | Description |
+| : | :- |
+| `200 (OK)` | The response payload contains all the matching resources. |
+| `204 (No Content)` | The search completed successfully but returned no results. |
+| `400 (Bad Request)` | The server was unable to perform the query because the query component was invalid. Response body contains details of the failure. |
+| `401 (Unauthorized)` | The client isn't authenticated. |
+| `403 (Forbidden)` | The user isn't authorized. |
+| `503 (Service Unavailable)` | The service is unavailable or busy. Try again later. |
+
+### Additional notes
+
+* Querying using the `TimezoneOffsetFromUTC (00080201)` isn't supported.
+* The query API doesn't return `413 (request entity too large)`. If the requested query response limit is outside of the acceptable range, a bad request is returned. Anything requested within the acceptable range will be resolved.
+* When target resource is Study/Series, there's a potential for inconsistent study/series level metadata across multiple instances. For example, two instances could have different patientName. In this case, the latest wins and you can search only on the latest data.
+* Paged results are optimized to return matched _newest_ instance first, which may result in duplicate records in subsequent pages if newer data matching the query was added.
+* Matching is case insensitive and accent insensitive for PN VR types.
+* Matching is case insensitive and accent sensitive for other string VR types.
+* If a single-valued data element incorrectly has multiple values, only the first value is indexed.
+* Using the default attributes or limiting the number of results requested maximizes performance.
+
+### Delete
+
+This transaction isn't part of the official DICOMweb&trade; Standard. It uses the DELETE method to remove representations of Studies, Series, and Instances from the store.
+
+| Method | Path | Description |
+| :-- | : | :- |
+| DELETE | ../studies/{study} | Delete all instances for a specific study. |
+| DELETE | ../studies/{study}/series/{series} | Delete all instances for a specific series within a study. |
+| DELETE | ../studies/{study}/series/{series}/instances/{instance} | Delete a specific instance within a series. |
+
+Parameters `study`, `series`, and `instance` correspond to the DICOM attributes `StudyInstanceUID`, `SeriesInstanceUID`, and `SopInstanceUID` respectively.
+
+There are no restrictions on the request's `Accept` header, `Content-Type` header or body content.
+
+> [!NOTE]
+> After a Delete transaction, the deleted instances will not be recoverable.
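+
+A minimal sketch of a study-level delete (placeholder service URL and UID; token acquisition not shown):
+
+```python
+import requests
+
+base_url = "https://<your-dicom-service-url>/v2"    # placeholder
+headers = {"Authorization": "Bearer <access-token>"}
+
+resp = requests.delete(f"{base_url}/studies/<study-uid>", headers=headers)
+# 204 indicates every SOP instance in the study was deleted; 404 means the study wasn't found.
+print(resp.status_code)
+```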
+
+### Response status codes
+
+| Code | Description |
+| : | :- |
+| `204 (No Content)` | When all the SOP instances have been deleted. |
+| `400 (Bad Request)` | The request was badly formatted. |
+| `401 (Unauthorized)` | The client isn't authenticated. |
+| `403 (Forbidden)` | The user isn't authorized. |
+| `404 (Not Found)` | When the specified series wasn't found within a study or the specified instance wasn't found within the series. |
+| `503 (Service Unavailable)` | The service is unavailable or busy. Try again later. |
+
+### Delete response payload
+
+The response body is empty. The status code is the only useful information returned.
+
+## Worklist Service (UPS-RS)
+
+The DICOM service supports the Push and Pull SOPs of the [Worklist Service (UPS-RS)](https://dicom.nema.org/medical/dicom/current/output/html/part18.html#chapter_11). This service provides access to one Worklist containing Workitems, each of which represents a Unified Procedure Step (UPS).
+
+Throughout, the variable `{workitem}` in a URI template stands for a Workitem UID.
+
+Available UPS-RS endpoints include:
+
+|Verb| Path | Description |
+|: |: |: |
+|POST| {s}/workitems{?AffectedSOPInstanceUID}| Create a work item|
+|POST| {s}/workitems/{instance}{?transaction}| Update a work item
+|GET| {s}/workitems{?query*} | Search for work items
+|GET| {s}/workitems/{instance}| Retrieve a work item
+|PUT| {s}/workitems/{instance}/state| Change work item state
+|POST| {s}/workitems/{instance}/cancelrequest | Cancel work item|
+|POST |{s}/workitems/{instance}/subscribers/{AETitle}{?deletionlock} | Create subscription|
+|POST| {s}/workitems/1.2.840.10008.5.1.4.34.5/ | Suspend subscription|
+|DELETE | {s}/workitems/{instance}/subscribers/{AETitle} | Delete subscription
+|GET | {s}/subscribers/{AETitle}| Open subscription channel |
+
+### Create Workitem
+
+This transaction uses the POST method to create a new Workitem.
+
+| Method | Path | Description |
+| :-- | :-- | :- |
+| POST | ../workitems | Create a Workitem. |
+| POST | ../workitems?{workitem} | Creates a Workitem with the specified UID. |
+
+If not specified in the URI, the payload dataset must contain the Workitem in the `SOPInstanceUID` attribute.
+
+The `Accept` and `Content-Type` headers are required in the request, and must both have the value `application/dicom+json`.
+
+There are several requirements related to DICOM data attributes in the context of a specific transaction. Attributes may be
+required to be present, required to not be present, required to be empty, or required to not be empty. These requirements can be
+found [in this table](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.2.5-3).
+
+> [!NOTE]
+> Although the reference table above says that SOP Instance UID shouldn't be present, this guidance is specific to the DIMSE protocol and is handled differently in DICOMweb™. SOP Instance UID should be present in the dataset if not in the URI.
+
+> [!NOTE]
+> All the conditional requirement codes including 1C and 2C are treated as optional.
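+
+The following skeleton shows only the request mechanics; it isn't a complete, conformant dataset. A real request must include every attribute required by the table referenced above, and the service URL, UIDs, and values shown are placeholder assumptions:
+
+```python
+import json
+import requests
+
+base_url = "https://<your-dicom-service-url>/v2"    # placeholder
+headers = {
+    "Accept": "application/dicom+json",
+    "Content-Type": "application/dicom+json",
+    "Authorization": "Bearer <access-token>",
+}
+
+# Illustrative subset of a UPS dataset in DICOM JSON; extend with all required attributes.
+workitem = {
+    "00080018": {"vr": "UI", "Value": ["<new-workitem-uid>"]},   # SOP Instance UID (since it's not in the URI)
+    "00741000": {"vr": "CS", "Value": ["SCHEDULED"]},            # Procedure Step State
+    "00404005": {"vr": "DT", "Value": ["20230426103000"]},       # Scheduled Procedure Step Start DateTime
+}
+
+resp = requests.post(f"{base_url}/workitems", data=json.dumps(workitem), headers=headers)
+print(resp.status_code, resp.headers.get("Location"))            # 201 plus the created Workitem's URI
+```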
+
+#### Create response status codes
+
+| Code | Description |
+| :-- | :- |
+| `201 (Created)` | The target Workitem was successfully created. |
+| `400 (Bad Request)` | There was a problem with the request. For example, the request payload didn't satisfy the requirements above. |
+| `401 (Unauthorized)` | The client isn't authenticated. |
+| `403 (Forbidden)` | The user isn't authorized. |
+| `409 (Conflict)` | The Workitem already exists. |
+| `415 (Unsupported Media Type)` | The provided `Content-Type` isn't supported. |
+| `503 (Service Unavailable)` | The service is unavailable or busy. Try again later. |
+
+#### Create response payload
+
+A success response has no payload. The `Location` and `Content-Location` response headers contain a URI reference to the created Workitem.
+
+A failure response payload contains a message describing the failure.
+
+### Request cancellation
+
+This transaction enables the user to request cancellation of a non-owned Workitem.
+
+There are [four valid Workitem states](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.1.1-1):
+
+* `SCHEDULED`
+* `IN PROGRESS`
+* `CANCELED`
+* `COMPLETED`
+
+This transaction will only succeed against Workitems in the `SCHEDULED` state. Any user can claim ownership of a Workitem by setting its Transaction UID and changing its state to `IN PROGRESS`. From then on, a user can only modify the Workitem by providing the correct Transaction UID. While UPS defines Watch and Event SOP classes that allow cancellation requests and other events to be forwarded, this DICOM service doesn't implement these classes, and so cancellation requests on workitems that are `IN PROGRESS` will return failure. An owned Workitem can be canceled via the [Change Workitem State](#change-workitem-state) transaction.
+
+| Method | Path | Description |
+| : | :- | :-- |
+| POST | ../workitems/{workitem}/cancelrequest | Request the cancellation of a scheduled Workitem |
+
+The `Content-Type` header is required, and must have the value `application/dicom+json`.
+
+The request payload may include Action Information as [defined in the DICOM Standard](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.2.2-1).
+
+#### Request cancellation response status codes
+
+| Code | Description |
+| : | :- |
+| `202 (Accepted)` | The request was accepted by the server, but the Target Workitem state hasn't necessarily changed yet. |
+| `400 (Bad Request)` | There was a problem with the syntax of the request. |
+| `401 (Unauthorized)` | The client isn't authenticated. |
+| `403 (Forbidden)` | The user isn't authorized. |
+| `404 (Not Found)` | The Target Workitem wasn't found. |
+| `409 (Conflict)` | The request is inconsistent with the current state of the Target Workitem. For example, the Target Workitem is in the `SCHEDULED` or `COMPLETED` state. |
+| `415 (Unsupported Media Type)` | The provided `Content-Type` isn't supported. |
+| `503 (Service Unavailable)` | The service is unavailable or busy. Try again later. |
+
+#### Request cancellation response payload
+
+A success response has no payload, and a failure response payload contains a message describing the failure.
+If the Workitem Instance is already in a canceled state, the response includes the following HTTP Warning header:
+`299: The UPS is already in the requested state of CANCELED.`
+
+### Retrieve Workitem
+
+This transaction retrieves a Workitem. It corresponds to the UPS DIMSE N-GET operation.
+
+Refer to: https://dicom.nema.org/medical/dicom/current/output/html/part18.html#sect_11.5
+
+If the Workitem exists on the origin server, the Workitem shall be returned in an Acceptable Media Type. The returned Workitem shall not contain the Transaction UID (0008,1195) Attribute. This is necessary to preserve this Attribute's role as an access lock.
+
+| Method | Path | Description |
+| : | :- | : |
+| GET | ../workitems/{workitem} | Request to retrieve a Workitem |
+
+The `Accept` header is required and must have the value `application/dicom+json`.
+
+#### Retrieve Workitem response status codes
+
+| Code | Description |
+| :- | :- |
+| 200 (OK) | Workitem Instance was successfully retrieved. |
+| 400 (Bad Request) | There was a problem with the request. |
+| 401 (Unauthorized) | The client isn't authenticated. |
+| 403 (Forbidden) | The user isn't authorized. |
+| 404 (Not Found) | The Target Workitem wasn't found. |
+| 503 (Service Unavailable) | The service is unavailable or busy. Try again later. |
+
+#### Retrieve Workitem response payload
+
+* A success response has a single part payload containing the requested Workitem in the Selected Media Type.
+* The returned Workitem shall not contain the Transaction UID (0008, 1195) attribute of the Workitem, since that should only be known to the Owner.
+
+### Update Workitem
+
+This transaction modifies attributes of an existing Workitem. It corresponds to the UPS DIMSE N-SET operation.
+
+Refer to: https://dicom.nema.org/medical/dicom/current/output/html/part18.html#sect_11.6
+
+To update a Workitem currently in the `SCHEDULED` state, the `Transaction UID` attribute shall not be present. For a Workitem in the `IN PROGRESS` state, the request must include the current Transaction UID as a query parameter. If the Workitem is already in the `COMPLETED` or `CANCELED` states, the response is `400 (Bad Request)`.
+
+| Method | Path | Description |
+| : | : | :-- |
+| POST | ../workitems/{workitem}?{transaction-uid} | Update Workitem Transaction |
+
+The `Content-Type` header is required, and must have the value `application/dicom+json`.
+
+The request payload contains a dataset with the changes to be applied to the target Workitem. When modifying a sequence, the request must include all Items in the sequence, not just the Items to be modified.
+When multiple Attributes need to be updated as a group, update them as multiple Attributes in a single request, not as multiple requests.
+
+There are many requirements related to DICOM data attributes in the context of a specific transaction. Attributes may be
+required to be present, required to not be present, required to be empty, or required to not be empty. These requirements can be
+found in [this table](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.2.5-3).
+
+> [!NOTE]
+> All the conditional requirement codes including 1C and 2C are treated as optional.
+
+> [!NOTE]
+> The request can't set the value of the Procedure Step State (0074,1000) attribute. Procedure Step State is managed using the Change State transaction, or the Request Cancellation transaction.
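+
+As a sketch only (placeholder URL, UIDs, and attribute values; token acquisition not shown), an update to an `IN PROGRESS` Workitem passes the Transaction UID as the query string and the changed attributes as a DICOM JSON dataset:
+
+```python
+import json
+import requests
+
+base_url = "https://<your-dicom-service-url>/v2"    # placeholder
+headers = {
+    "Content-Type": "application/dicom+json",
+    "Authorization": "Bearer <access-token>",
+}
+
+workitem_uid = "<workitem-uid>"
+transaction_uid = "<transaction-uid>"               # obtained when the Workitem moved to IN PROGRESS
+
+# Only the attributes being changed are sent; sequences must be sent in full if modified.
+changes = {
+    "00404005": {"vr": "DT", "Value": ["20230427090000"]},   # new Scheduled Procedure Step Start DateTime
+}
+
+resp = requests.post(
+    f"{base_url}/workitems/{workitem_uid}?{transaction_uid}",
+    data=json.dumps(changes),
+    headers=headers,
+)
+print(resp.status_code)                              # 200 on success
+```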
+
+#### Update Workitem transaction response status codes
+
+| Code | Description |
+| :- | :- |
+| `200 (OK)` | The Target Workitem was updated. |
+| `400 (Bad Request)` | There was a problem with the request. For example: (1) the Target Workitem was in the `COMPLETED` or `CANCELED` state. (2) the Transaction UID is missing. (3) the Transaction UID is incorrect. (4) the dataset didn't conform to the requirements.
+| `401 (Unauthorized)` | The client isn't authenticated. |
+| `403 (Forbidden)` | The user isn't authorized. |
+| `404 (Not Found)` | The Target Workitem wasn't found. |
+| `409 (Conflict)` | The request is inconsistent with the current state of the Target Workitem. |
+| `415 (Unsupported Media Type)` | The provided `Content-Type` isn't supported. |
+| `503 (Service Unavailable)` | The service is unavailable or busy. Try again later. |
+
+#### Update Workitem transaction response payload
+
+The origin server shall support header fields as required in [Table 11.6.3-2](https://dicom.nema.org/medical/dicom/current/output/html/part18.html#table_11.6.3-2).
+
+A success response shall have either no payload or a payload containing a Status Report document.
+
+A failure response payload may contain a Status Report describing any failures, warnings, or other useful information.
+
+### Change Workitem state
+
+This transaction is used to change the state of a Workitem. It corresponds to the UPS DIMSE N-ACTION operation "Change UPS State". State changes are used to claim ownership, complete, or cancel a Workitem.
+
+Refer to: https://dicom.nema.org/medical/dicom/current/output/html/part18.html#sect_11.7
+
+If the Workitem exists on the origin server, the Workitem shall be returned in an Acceptable Media Type. The returned Workitem shall not contain the Transaction UID (0008,1195) attribute. This is necessary to preserve this Attribute's role as an access lock as described [here.](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#sect_CC.1.1)
+
+| Method | Path | Description |
+| : | : | :-- |
+| PUT | ../workitems/{workitem}/state | Change Workitem State |
+
+The `Accept` header is required, and must have the value `application/dicom+json`.
+
+The request payload shall contain the Change UPS State Data Elements. These data elements are:
+
+* **Transaction UID (0008, 1195)**. The request payload shall include a Transaction UID. The user agent creates the Transaction UID when requesting a transition to the `IN PROGRESS` state for a given Workitem. The user agent provides that Transaction UID in subsequent transactions with that Workitem.
+* **Procedure Step State (0074, 1000)**. The legal values correspond to the requested state transition. They are: `IN PROGRESS`, `COMPLETED`, or `CANCELED`.
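+
+A sketch of such a payload and request (placeholder service URL and UIDs; token acquisition not shown):
+
+```python
+import json
+import requests
+
+base_url = "https://<your-dicom-service-url>/v2"    # placeholder
+headers = {
+    "Accept": "application/dicom+json",
+    "Content-Type": "application/dicom+json",
+    "Authorization": "Bearer <access-token>",
+}
+
+# Claim ownership of a SCHEDULED Workitem by moving it to IN PROGRESS with a new Transaction UID.
+state_change = {
+    "00081195": {"vr": "UI", "Value": ["<transaction-uid>"]},   # Transaction UID
+    "00741000": {"vr": "CS", "Value": ["IN PROGRESS"]},         # Procedure Step State
+}
+
+resp = requests.put(
+    f"{base_url}/workitems/<workitem-uid>/state",
+    data=json.dumps(state_change),
+    headers=headers,
+)
+print(resp.status_code)
+```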
+
+#### Change Workitem state response status codes
+
+| Code | Description |
+| :- | :- |
+| `200 (OK)` | The Target Workitem state was changed successfully. |
+| `400 (Bad Request)` | The request can't be performed for one of the following reasons: (1) the request is invalid given the current state of the Target Workitem. (2) the Transaction UID is missing. (3) the Transaction UID is incorrect |
+| `401 (Unauthorized)` | The client isn't authenticated. |
+| `403 (Forbidden)` | The user isn't authorized. |
+| `404 (Not Found)` | The Target Workitem wasn't found. |
+| `409 (Conflict)` | The request is inconsistent with the current state of the Target Workitem. |
+| `503 (Service Unavailable)` | The service is unavailable or busy. Try again later. |
+
+#### Change Workitem state response payload
+
+* Responses include the header fields specified in [section 11.7.3.2](https://dicom.nema.org/medical/dicom/current/output/html/part18.html#sect_11.7.3.2).
+* A success response shall have no payload.
+* A failure response payload may contain a Status Report describing any failures, warnings, or other useful information.
+
+### Search Workitems
+
+This transaction enables you to search for Workitems by attributes.
+
+| Method | Path | Description |
+| :-- | :- | :-- |
+| GET | ../workitems? | Search for Workitems |
+
+The following `Accept` header(s) are supported for searching:
+
+* `application/dicom+json`
+
+#### Supported Search Parameters
+
+The following parameters for each query are supported:
+
+| Key | Support Value(s) | Allowed Count | Description |
+| : | :- | : | :- |
+| `{attributeID}=` | `{value}` | 0...N | Search for attribute/ value matching in query. |
+| `includefield=` | `{attributeID}`<br/>`all` | 0...N | The additional attributes to return in the response. Only top-level attributes can be specified to be included, not attributes that are part of sequences. Both public and private tags are supported. When `all` is provided, see [Search Response](#search-response) for more information about which attributes will be returned for each query type. If a mixture of `{attributeID}` and `all` is provided, the server defaults to using `all`. |
+| `limit=` | `{value}` | 0...1 | Integer value to limit the number of values returned in the response. Value can be in the range `1 <= x <= 200`. Defaults to `100`. |
+| `offset=` | `{value}` | 0...1 | Skip {value} results. If an offset is provided larger than the number of search query results, a `204 (no content)` response is returned. |
+| `fuzzymatching=` | `true` \| `false` | 0...1 | If true, fuzzy matching is applied to any attributes with the Person Name (PN) Value Representation (VR). It does a prefix word match of any name part inside these attributes. For example, if `PatientName` is `John^Doe`, then `joh`, `do`, `jo do`, `Doe` and `John Doe` all match. However, `ohn` will **not** match. |
+
+##### Searchable Attributes
+
+We support searching on these attributes:
+
+| Attribute Keyword |
+| :- |
+|`PatientName`|
+|`PatientID`|
+|`ReferencedRequestSequence.AccessionNumber`|
+|`ReferencedRequestSequence.RequestedProcedureID`|
+|`ScheduledProcedureStepStartDateTime`|
+|`ScheduledStationNameCodeSequence.CodeValue`|
+|`ScheduledStationClassCodeSequence.CodeValue`|
+|`ScheduledStationGeographicLocationCodeSequence.CodeValue`|
+|`ProcedureStepState`|
+|`StudyInstanceUID`|
+
+##### Search Matching
+
+We support these matching types:
+
+| Search Type | Supported Attribute | Example |
+| :- | : | : |
+| Range Query | `ScheduledProcedureStepStartDateTime` | `{attributeID}={value1}-{value2}`. For date/time values, we support an inclusive range on the tag. This will be mapped to `attributeID >= {value1} AND attributeID <= {value2}`. If `{value1}` isn't specified, all occurrences of dates/times prior to and including `{value2}` will be matched. Likewise, if `{value2}` isn't specified, all occurrences of `{value1}` and subsequent dates/times will be matched. However, one of these values has to be present. `{attributeID}={value1}-` and `{attributeID}=-{value2}` are valid, however, `{attributeID}=-` is invalid. |
+| Exact Match | All supported attributes | `{attributeID}={value1}` |
+| Fuzzy Match | `PatientName` | Matches any component of the name that starts with the value. |
+
+> [!NOTE]
+> While we don't support full sequence matching, we do support exact match on the attributes listed above that are contained in a sequence.
+
+##### Attribute ID
+
+Tags can be encoded in many ways for the query parameter. We have partially implemented the standard as defined in [PS3.18 6.7.1.1.1](http://dicom.nema.org/medical/dicom/2019a/output/chtml/part18/sect_6.7.html#sect_6.7.1.1.1). The following encodings for a tag are supported:
+
+| Value | Example |
+| :-- | : |
+| `{group}{element}` | `00100010` |
+| `{dicomKeyword}` | `PatientName` |
+
+Example query:
+
+`../workitems?PatientID=K123&0040A370.00080050=1423JS&includefield=00404005&limit=5&offset=0`
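+
+Expressed as an HTTP call, the same query might look like this sketch (placeholder service URL; token acquisition not shown):
+
+```python
+import requests
+
+base_url = "https://<your-dicom-service-url>/v2"    # placeholder
+headers = {
+    "Accept": "application/dicom+json",
+    "Authorization": "Bearer <access-token>",
+}
+
+params = {
+    "PatientID": "K123",
+    "0040A370.00080050": "1423JS",   # AccessionNumber within ReferencedRequestSequence
+    "includefield": "00404005",      # also return ScheduledProcedureStepStartDateTime
+    "limit": "5",
+    "offset": "0",
+}
+resp = requests.get(f"{base_url}/workitems", params=params, headers=headers)
+print(resp.status_code, resp.json() if resp.status_code == 200 else "")
+```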
+
+#### Search Response
+
+The response is an array of `0...N` DICOM datasets with the following attributes returned:
+
+* All attributes in [DICOM PS3.4 Table CC.2.5-3](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.2.5-3) with a Return Key Type of 1 or 2
+* All attributes in [DICOM PS3.4 Table CC.2.5-3](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.2.5-3) with a Return Key Type of 1C for which the conditional requirements are met
+* All other Workitem attributes passed as match parameters
+* All other Workitem attributes passed as `includefield` parameter values
+
+#### Search Response Codes
+
+The query API returns one of the following status codes in the response:
+
+| Code | Description |
+| :-- | :- |
+| `200 (OK)` | The response payload contains all the matching resources. |
+| `206 (Partial Content)` | The response payload contains only some of the search results, and the rest can be requested through the appropriate request. |
+| `204 (No Content)` | The search completed successfully but returned no results. |
+| `400 (Bad Request)` | There was a problem with the request. For example, invalid Query Parameter syntax. The response body contains details of the failure. |
+| `401 (Unauthorized)` | The client isn't authenticated. |
+| `403 (Forbidden)` | The user isn't authorized. |
+| `503 (Service Unavailable)` | The service is unavailable or busy. Try again later. |
+
+#### Additional Notes
+
+The query API will not return `413 (request entity too large)`. If the requested query response limit is outside of the acceptable range, a bad request is returned. Anything requested within the acceptable range will be resolved.
+
+* Paged results are optimized to return matched newest instance first, which may result in duplicate records in subsequent pages if newer data matching the query was added.
+* Matching is case insensitive and accent insensitive for PN VR types.
+* Matching is case insensitive and accent sensitive for other string VR types.
+* If a Workitem is canceled at the same time it's queried, the query will most likely exclude the Workitem that's being updated, and the response code will be `206 (Partial Content)`.
+
+### Next Steps
+
+For more information about the DICOM service, see
+
+>[!div class="nextstepaction"]
+>[Overview of the DICOM service](dicom-services-overview.md)
healthcare-apis Dicom Services Conformance Statement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-conformance-statement.md
Title: DICOM Conformance Statement for Azure Health Data Services
-description: This document provides details about the DICOM Conformance Statement for Azure Health Data Services.
+ Title: DICOM Conformance Statement version 1 for Azure Health Data Services
+description: This document provides details about the DICOM Conformance Statement v1 for Azure Health Data Services.
Last updated 10/13/2022
-# DICOM Conformance Statement
+# DICOM Conformance Statement v1
The Medical Imaging Server for DICOM supports a subset of the DICOMweb™ Standard. Support includes:
The Medical Imaging Server for DICOM supports a subset of the DICOMweb™ Standa
* [Request Cancellation](#request-cancellation) * [Search Workitems](#search-workitems)
-Additionally, the following non-standard API(s) are supported:
+Additionally, the following nonstandard API(s) are supported:
* [Change Feed](dicom-change-feed-overview.md) * [Extended Query Tags](dicom-extended-query-tags-overview.md)
The service uses REST API versioning. The version of the REST API must be explic
`https://<service_url>/v<version>/studies`
+This version of the conformance statement corresponds to the `v1` version of the REST APIs.
+ For more information on how to specify the version when making requests, see the [API Versioning Documentation](api-versioning-dicom-service.md).
-You'll find example requests for supported transactions in the [Postman collection](https://github.com/microsoft/dicom-server/blob/main/docs/resources/Conformance-as-Postman.postman_collection.json).
+You can find example requests for supported transactions in the [Postman collection](https://github.com/microsoft/dicom-server/blob/main/docs/resources/Conformance-as-Postman.postman_collection.json).
## Preamble Sanitization
-The service ignores the 128-byte File Preamble, and replaces its contents with null characters. This behavior ensures that no files passed through the service are vulnerable to the [malicious preamble vulnerability](https://dicom.nema.org/medical/dicom/current/output/chtml/part10/sect_7.5.html). However, this also means that [preambles used to encode dual format content](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6489422/) such as TIFF can't be used with the service.
+The service ignores the 128-byte File Preamble, and replaces its contents with null characters. This behavior ensures that no files passed through the service are vulnerable to the [malicious preamble vulnerability](https://dicom.nema.org/medical/dicom/current/output/chtml/part10/sect_7.5.html). However, this preamble sanitization also means that [preambles used to encode dual format content](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6489422/) such as TIFF can't be used with the service.
## Studies Service
-The [Studies Service](https://dicom.nema.org/medical/dicom/current/output/html/part18.html#chapter_10) allows users to store, retrieve, and search for DICOM Studies, Series, and Instances. We've added the non-standard Delete transaction to enable a full resource lifecycle.
+The [Studies Service](https://dicom.nema.org/medical/dicom/current/output/html/part18.html#chapter_10) allows users to store, retrieve, and search for DICOM Studies, Series, and Instances. We've added the nonstandard Delete transaction to enable a full resource lifecycle.
### Store (STOW-RS)
This transaction uses the POST method to store representations of studies, serie
| POST | ../studies | Store instances. | | POST | ../studies/{study} | Store instances for a specific study. |
-Parameter `study` corresponds to the DICOM attribute StudyInstanceUID. If it's specified, any instance that doesn't belong to the provided study will be rejected with a `43265` warning code.
+Parameter `study` corresponds to the DICOM attribute StudyInstanceUID. If it's specified, any instance that doesn't belong to the provided study is rejected with a `43265` warning code.
The following `Accept` header(s) for the response are supported:
The following `Content-Type` header(s) are supported:
> [!NOTE] > The Server **will not** coerce or replace attributes that conflict with existing data. All data will be stored as provided.
+#### Store required attributes
The following DICOM elements are required to be present in every DICOM file attempting to be stored: * `StudyInstanceUID`
The following DICOM elements are required to be present in every DICOM file atte
> [!NOTE] > All identifiers must be between 1 and 64 characters long, and only contain alphanumeric characters or the following special characters: `.`, `-`.
-Each file stored must have a unique combination of `StudyInstanceUID`, `SeriesInstanceUID`, and `SopInstanceUID`. The warning code `45070` will be returned if a file with the same identifiers already exists.
+Each file stored must have a unique combination of `StudyInstanceUID`, `SeriesInstanceUID`, and `SopInstanceUID`. The warning code `45070` is returned if a file with the same identifiers already exists.
Only transfer syntaxes with explicit Value Representations are accepted.
Only transfer syntaxes with explicit Value Representations are accepted.
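As an illustration of a Store request, the following minimal sketch posts a single DICOM file to the `../studies` endpoint using Python's `requests` library. The service URL, bearer token, file path, and multipart boundary are placeholder assumptions; substitute values for your own environment.

```python
import requests

SERVICE_URL = "https://<your-dicom-service>.dicom.azurehealthcareapis.com/v1"  # placeholder
TOKEN = "<access-token>"        # placeholder Azure AD bearer token
BOUNDARY = "DICOMDataBoundary"  # arbitrary multipart boundary

with open("instance.dcm", "rb") as f:  # placeholder file path
    dicom_bytes = f.read()

# Build a multipart/related body with one application/dicom part.
body = (
    f"--{BOUNDARY}\r\nContent-Type: application/dicom\r\n\r\n".encode()
    + dicom_bytes
    + f"\r\n--{BOUNDARY}--".encode()
)

response = requests.post(
    f"{SERVICE_URL}/studies",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/dicom+json",
        "Content-Type": f'multipart/related; type="application/dicom"; boundary={BOUNDARY}',
    },
    data=body,
)

print(response.status_code)  # a success response carries a DICOM JSON dataset describing stored instances
print(response.json())
```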
### Store response payload
-The response payload will populate a DICOM dataset with the following elements:
+The response payload populates a DICOM dataset with the following elements:
| Tag | Name | Description | | :-- | :-- | :- |
The response payload will populate a DICOM dataset with the following elements:
| (0008, 1198) | `FailedSOPSequence` | The sequence of instances that failed to store. | | (0008, 1199) | `ReferencedSOPSequence` | The sequence of stored instances. |
-Each dataset in the `FailedSOPSequence` will have the following elements (if the DICOM file attempting to be stored could be read):
+Each dataset in the `FailedSOPSequence` has the following elements (if the DICOM file attempting to be stored could be read):
| Tag | Name | Description | | :-- | :-- | :- |
Each dataset in the `FailedSOPSequence` will have the following elements (if the
| (0008, 1197) | `FailureReason` | The reason code why this instance failed to store. | | (0074, 1048) | `FailedAttributesSequence` | The sequence of `ErrorComment` that includes the reason for each failed attribute. |
-Each dataset in the `ReferencedSOPSequence` will have the following elements:
+Each dataset in the `ReferencedSOPSequence` has the following elements:
| Tag | Name | Description | | :-- | :-- | :- |
The following `Accept` headers are supported for retrieving frames:
#### Retrieve transfer syntax
-When the requested transfer syntax is different from original file, the original file is transcoded to requested transfer syntax. The original file needs to be one of the formats below for transcoding to succeed; otherwise, transcoding may fail:
+When the requested transfer syntax is different from the original file, the original file is transcoded to the requested transfer syntax. The original file needs to be in one of the following formats for transcoding to succeed; otherwise, transcoding may fail:
* 1.2.840.10008.1.2 (Little Endian Implicit) * 1.2.840.10008.1.2.1 (Little Endian Explicit)
When the requested transfer syntax is different from original file, the original
* 1.2.840.10008.1.2.4.91 (JPEG 2000) * 1.2.840.10008.1.2.5 (RLE Lossless)
-An unsupported `transfer-syntax` will result in `406 Not Acceptable`.
+An unsupported `transfer-syntax` results in `406 Not Acceptable`.
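For example, a client can request transcoding by naming the desired transfer syntax in the `Accept` header. The sketch below asks for Little Endian Explicit; the service URL, token, and UIDs are placeholders.

```python
import requests

SERVICE_URL = "https://<your-dicom-service>.dicom.azurehealthcareapis.com/v1"  # placeholder
TOKEN = "<access-token>"  # placeholder

url = f"{SERVICE_URL}/studies/<study-uid>/series/<series-uid>/instances/<instance-uid>"  # placeholder UIDs
response = requests.get(
    url,
    headers={
        "Authorization": f"Bearer {TOKEN}",
        # Ask for Little Endian Explicit; the service transcodes when the original differs.
        "Accept": 'multipart/related; type="application/dicom"; transfer-syntax=1.2.840.10008.1.2.1',
    },
)

print(response.status_code)  # 406 Not Acceptable if the requested transfer syntax isn't supported
```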
### Retrieve metadata (for study, series, or instance)
The following `Accept` header is supported for retrieving metadata for a study,
* `application/dicom+json`
-Retrieving metadata will not return attributes with the following value representations:
+Retrieving metadata doesn't return attributes with the following value representations:
| VR Name | Description | | : | : |
Retrieving metadata will not return attributes with the following value represen
Cache validation is supported using the `ETag` mechanism. In the response to a metadata request, ETag is returned as one of the headers. This ETag can be cached and added as `If-None-Match` header in the later requests for the same metadata. Two types of responses are possible if the data exists:
-* Data hasn't changed since the last request: `HTTP 304 (Not Modified)` response will be sent with no response body.
-* Data has changed since the last request: `HTTP 200 (OK)` response will be sent with updated ETag. Required data will also be returned as part of the body.
+* Data hasn't changed since the last request: `HTTP 304 (Not Modified)` response is sent with no response body.
+* Data has changed since the last request: `HTTP 200 (OK)` response is sent with updated ETag. Required data is also returned as part of the body.
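A client-side sketch of this cache validation flow follows; the service URL, study UID, and token are placeholders, and the ETag is simply reused from the first response.

```python
import requests

SERVICE_URL = "https://<your-dicom-service>.dicom.azurehealthcareapis.com/v1"  # placeholder
TOKEN = "<access-token>"  # placeholder
headers = {"Authorization": f"Bearer {TOKEN}", "Accept": "application/dicom+json"}

metadata_url = f"{SERVICE_URL}/studies/<study-uid>/metadata"  # placeholder study UID

# The first response carries an ETag header that can be cached.
first = requests.get(metadata_url, headers=headers)
etag = first.headers.get("ETag")

# A later request sends the cached ETag; 304 means the cached metadata is still valid.
second = requests.get(metadata_url, headers={**headers, "If-None-Match": etag})
if second.status_code == 304:
    print("Metadata unchanged; reuse the cached copy.")
else:
    print("Metadata changed; refresh the cache.", second.status_code)
```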
### Retrieve response status codes
Cache validation is supported using the `ETag` mechanism. In the response to a m
| : | :- | | `200 (OK)` | All requested data has been retrieved. | | `304 (Not Modified)` | The requested data hasn't been modified since the last request. Content isn't added to the response body in such case. For more information, see the above section **Retrieve Metadata Cache Validation (for Study, Series, or Instance)**. |
-| `400 (Bad Request)` | The request was badly formatted. For example, the provided study instance identifier didn't conform to the expected UID format, or the requested transfer-syntax encoding isn't supported. |
+| `400 (Bad Request)` | The request was badly formatted. For example, the provided study instance identifier didn't conform to the expected UID format, or the requested transfer-syntax encoding isn't supported. |
| `401 (Unauthorized)` | The client isn't authenticated. | | `403 (Forbidden)` | The user isn't authorized. | | `404 (Not Found)` | The specified DICOM resource couldn't be found. |
The following parameters for each query are supported:
| Key | Support Value(s) | Allowed Count | Description | | :-- | :-- | : | :- | | `{attributeID}=` | `{value}` | 0...N | Search for attribute/ value matching in query. |
-| `includefield=` | `{attributeID}`<br/>`all` | 0...N | The additional attributes to return in the response. Both, public and private tags are supported.<br/>When `all` is provided, refer to [Search Response](#search-response) for more information about which attributes will be returned for each query type.<br/>If a mixture of `{attributeID}` and `all` is provided, the server will default to using `all`. |
+| `includefield=` | `{attributeID}`<br/>`all` | 0...N | The additional attributes to return in the response. Both, public and private tags are supported.<br/>When `all` is provided, refer to [Search Response](#search-response) for more information about which attributes are returned for each query type.<br/>If a mixture of `{attributeID}` and `all` is provided, the server defaults to using `all`. |
| `limit=` | `{value}` | 0..1 | Integer value to limit the number of values returned in the response.<br/>Value can be between the range 1 >= x <= 200. Defaulted to 100. |
-| `offset=` | `{value}` | 0..1 | Skip `{value}` results.<br/>If an offset is provided larger than the number of search query results, a 204 (no content) response will be returned. |
-| `fuzzymatching=` | `true` / `false` | 0..1 | If true fuzzy matching is applied to PatientName attribute. It will do a prefix word match of any name part inside PatientName value. For example, if PatientName is "John^Doe", then "joh", "do", "jo do", "Doe" and "John Doe" will all match. However, "ohn" won't match. |
+| `offset=` | `{value}` | 0..1 | Skip `{value}` results.<br/>If an offset is provided larger than the number of search query results, a 204 (no content) response is returned. |
+| `fuzzymatching=` | `true` / `false` | 0..1 | If true, fuzzy matching is applied to the PatientName attribute. It does a prefix word match of any name part inside the PatientName value. For example, if PatientName is "John^Doe", then "joh", "do", "jo do", "Doe" and "John Doe" all match. However, "ohn" doesn't match. |
#### Searchable attributes
We support the following matching types.
| Search Type | Supported Attribute | Example | | :- | : | : |
-| Range Query | `StudyDate`/`PatientBirthDate` | `{attributeID}={value1}-{value2}`. For date/ time values, we support an inclusive range on the tag. This will be mapped to `attributeID >= {value1} AND attributeID <= {value2}`. If `{value1}` isn't specified, all occurrences of dates/times prior to and including `{value2}` will be matched. Likewise, if `{value2}` isn't specified, all occurrences of `{value1}` and subsequent dates/times will be matched. However, one of these values has to be present. `{attributeID}={value1}-` and `{attributeID}=-{value2}` are valid, however, `{attributeID}=-` is invalid. |
+| Range Query | `StudyDate`/`PatientBirthDate` | `{attributeID}={value1}-{value2}`. For date/ time values, we support an inclusive range on the tag. This is mapped to `attributeID >= {value1} AND attributeID <= {value2}`. If `{value1}` isn't specified, all occurrences of dates/times prior to and including `{value2}` are matched. Likewise, if `{value2}` isn't specified, all occurrences of `{value1}` and subsequent dates/times are matched. However, one of these values has to be present. `{attributeID}={value1}-` and `{attributeID}=-{value2}` are valid, however, `{attributeID}=-` is invalid. |
| Exact Match | All supported attributes | `{attributeID}={value1}` |
-| Fuzzy Match | `PatientName`, `ReferringPhysicianName` | Matches any component of the name which starts with the value. |
+| Fuzzy Match | `PatientName`, `ReferringPhysicianName` | Matches any component of the name that starts with the value. |
#### Attribute ID
-Tags can be encoded in several ways for the query parameter. We've partially implemented the standard as defined in [PS3.18 6.7.1.1.1](http://dicom.nema.org/medical/dicom/2019a/output/chtml/part18/sect_6.7.html#sect_6.7.1.1.1). The following encodings for a tag are supported:
+Tags can be encoded in several ways for the query parameter. We have partially implemented the standard as defined in [PS3.18 6.7.1.1.1](http://dicom.nema.org/medical/dicom/2019a/output/chtml/part18/sect_6.7.html#sect_6.7.1.1.1). The following encodings for a tag are supported:
| Value | Example | | : | : |
Tags can be encoded in several ways for the query parameter. We've partially imp
Example query searching for instances:
- `../instances?Modality=CT&00280011=512&includefield=00280010&limit=5&offset=0`
+`../instances?Modality=CT&00280011=512&includefield=00280010&limit=5&offset=0`
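The same search can be issued from code. This sketch passes the query parameters shown above; the service URL and token are placeholders.

```python
import requests

SERVICE_URL = "https://<your-dicom-service>.dicom.azurehealthcareapis.com/v1"  # placeholder
TOKEN = "<access-token>"  # placeholder

response = requests.get(
    f"{SERVICE_URL}/instances",
    headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/dicom+json"},
    params={
        "Modality": "CT",
        "00280011": "512",           # Columns
        "includefield": "00280010",  # Rows
        "limit": "5",
        "offset": "0",
    },
)

print(response.status_code)  # 200 with matches, 204 when nothing matched
```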
### Search response
-The response will be an array of DICOM datasets. Depending on the resource, by *default* the following attributes are returned
+The response is an array of DICOM datasets. Depending on the resource, by *default* the following attributes are returned:
-#### Default study tags
+#### Default Study tags
| Tag | Attribute Name | | :-- | :- |
The response will be an array of DICOM datasets. Depending on the resource, by *
| (0020, 0010) | `StudyID` | | (0020, 000D) | `StudyInstanceUID` |
-#### Default series tags
+#### Default Series tags
| Tag | Attribute Name | | :-- | :- |
The response will be an array of DICOM datasets. Depending on the resource, by *
| (0040, 0245) | `PerformedProcedureStepStartTime` | | (0040, 0275) | `RequestAttributesSequence` |
-#### Default instance tags
+#### Default Instance tags
| Tag | Attribute Name | | :-- | :- |
The response will be an array of DICOM datasets. Depending on the resource, by *
| (0028, 0100) | `BitsAllocated` | | (0028, 0008) | `NumberOfFrames` |
-If `includefield=all`, the below attributes are included along with default attributes. Along with the default attributes, this is the full list of attributes supported at each resource level.
+If `includefield=all`, the following attributes are included along with default attributes. Along with the default attributes, this is the full list of attributes supported at each resource level.
#### Additional Study tags
If `includefield=all`, the below attributes are included along with default attr
| (0008, 0021) | `SeriesDate` | | (0008, 0031) | `SeriesTime` |
-Along with those below attributes are returned:
+The following attributes are returned:
* All the match query parameters and UIDs in the resource url.
-* `IncludeField` attributes supported at that resource level.
+* `IncludeField` attributes supported at that resource level.
* If the target resource is `All Series`, then `Study` level attributes are also returned. * If the target resource is `All Instances`, then `Study` and `Series` level attributes are also returned. * If the target resource is `Study's Instances`, then `Series` level attributes are also returned.
The query API returns one of the following status codes in the response:
### Additional notes * Querying using the `TimezoneOffsetFromUTC (00080201)` isn't supported.
-* The query API won't return `413 (request entity too large)`. If the requested query response limit is outside of the acceptable range, a bad request will be returned. Anything requested within the acceptable range, will be resolved.
-* When target resource is Study/Series there's a potential for inconsistent study/series level metadata across multiple instances. For example, two instances could have different patientName. In this case, the latest will win and you can search only on the latest data.
-* Paged results are optimized to return matched newest instance first, this may result in duplicate records in subsequent pages if newer data matching the query was added.
+* The query API doesn't return `413 (request entity too large)`. If the requested query response limit is outside of the acceptable range, a bad request is returned. Anything requested within the acceptable range will be resolved.
+* When target resource is Study/Series, there's a potential for inconsistent study/series level metadata across multiple instances. For example, two instances could have different patientName. In this case, the latest wins and you can search only on the latest data.
+* Paged results are optimized to return the matched _newest_ instance first; this may result in duplicate records in subsequent pages if newer data matching the query was added.
* Matching is case insensitive and accent insensitive for PN VR types. * Matching is case insensitive and accent sensitive for other string VR types.
-* Only the first value will be indexed of a single valued data element that incorrectly has multiple values.
+* Only the first value is indexed of a single valued data element that incorrectly has multiple values.
### Delete
-This transaction isn't part of the official DICOMweb&trade; Standard. It uses the DELETE method to remove representations of studies, series, and instances from the store.
+This transaction isn't part of the official DICOMweb&trade; Standard. It uses the DELETE method to remove representations of Studies, Series, and Instances from the store.
-| Method | Path | Description |
-| :-- | : | :- |
-| DELETE | ../studies/{study} | Delete all instances for a specific study. |
-| DELETE | ../studies/{study}/series/{series} | Delete all instances for a specific series within a study. |
+| Method | Path | Description |
+| :-- | : | :- |
+| DELETE | ../studies/{study} | Delete all instances for a specific study. |
+| DELETE | ../studies/{study}/series/{series} | Delete all instances for a specific series within a study. |
| DELETE | ../studies/{study}/series/{series}/instances/{instance} | Delete a specific instance within a series. | Parameters `study`, `series`, and `instance` correspond to the DICOM attributes `StudyInstanceUID`, `SeriesInstanceUID`, and `SopInstanceUID` respectively.
There are no restrictions on the request's `Accept` header, `Content-Type` heade
> [!NOTE] > After a Delete transaction, the deleted instances will not be recoverable.
-### Response Status Codes
+### Response status codes
| Code | Description | | : | :- |
There are no restrictions on the request's `Accept` header, `Content-Type` heade
### Delete response payload
-The response body will be empty. The status code is the only useful information returned.
+The response body is empty. The status code is the only useful information returned.
## Worklist Service (UPS-RS)
Available UPS-RS endpoints include:
This transaction uses the POST method to create a new Workitem.
-|Method| Path |Description|
-|:|:|:|
-| POST |../workitems| Create a Workitem.|
-| POST |../workitems?{workitem}| Creates a Workitem with the specified UID.|
+| Method | Path | Description |
+| :-- | :-- | :- |
+| POST | ../workitems | Create a Workitem. |
+| POST | ../workitems?{workitem} | Creates a Workitem with the specified UID. |
If not specified in the URI, the payload dataset must contain the Workitem in the `SOPInstanceUID` attribute. The `Accept` and `Content-Type` headers are required in the request, and must both have the value `application/dicom+json`.
-There are several requirements related to DICOM data attributes in the context of a specific transaction. Attributes may be required to be present, required to not be present, required to be empty, or required to not be empty. These requirements can be found [in this table](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.2.5-3).
+There are several requirements related to DICOM data attributes in the context of a specific transaction. Attributes may be
+required to be present, required to not be present, required to be empty, or required to not be empty. These requirements can be
+found [in this table](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.2.5-3).
-Notes on dataset attributes:
+> [!NOTE]
+> Although the reference table above says that SOP Instance UID shouldn't be present, this guidance is specific to the DIMSE protocol and is handled differently in DICOMWeb™. SOP Instance UID should be present in the dataset if not in the URI.
-* **SOP Instance UID:** Although the reference table above says that SOP Instance UID shouldn't be present, this guidance is specific to the DIMSE protocol and is handled differently in DICOMWeb™. SOP Instance UID should be present in the dataset if not in the URI.
-* **Conditional requirement codes:** All the conditional requirement codes including 1C and 2C are treated as optional.
+> [!NOTE]
+> All the conditional requirement codes including 1C and 2C are treated as optional.
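A rough sketch of a create request follows. The service URL, token, and Workitem UID are placeholders, the dataset is a deliberately minimal illustrative fragment rather than a complete set of required UPS attributes, and the array wrapping of the DICOM JSON payload is an assumption; check the Postman collection linked earlier for a complete, verified example.

```python
import requests

SERVICE_URL = "https://<your-dicom-service>.dicom.azurehealthcareapis.com/v1"  # placeholder
TOKEN = "<access-token>"            # placeholder
WORKITEM_UID = "1.2.3.4.5.6.7.8.9"  # placeholder UID

# Minimal illustrative DICOM JSON fragment; a real Workitem needs every attribute
# required by the referenced requirements table.
workitem = [{
    "00741000": {"vr": "CS", "Value": ["SCHEDULED"]},       # Procedure Step State
    "00404005": {"vr": "DT", "Value": ["20230101120000"]},  # Scheduled Procedure Step Start DateTime
}]

response = requests.post(
    f"{SERVICE_URL}/workitems?{WORKITEM_UID}",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/dicom+json",
        "Content-Type": "application/dicom+json",
    },
    json=workitem,
)

print(response.status_code, response.headers.get("Location"))  # 201 Created on success
```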
-#### Create Response Status Codes
+#### Create response status codes
-|Code |Description|
-|:|:|
-|`201 (Created)`| The target Workitem was successfully created.|
-|`400 (Bad Request)`| There was a problem with the request. For example, the request payload didn't satisfy the requirements above.|
-|`401 (Unauthorized)`| The client isn't authenticated.
-|`403 (Forbidden)` | The user isn't authorized. |
-|`409 (Conflict)` |The Workitem already exists.
-|`415 (Unsupported Media Type)`| The provided `Content-Type` isn't supported.
-|`503 (Service Unavailable)`| The service is unavailable or busy. Try again later.|
+| Code | Description |
+| :-- | :- |
+| `201 (Created)` | The target Workitem was successfully created. |
+| `400 (Bad Request)` | There was a problem with the request. For example, the request payload didn't satisfy the requirements above. |
+| `401 (Unauthorized)` | The client isn't authenticated. |
+| `403 (Forbidden)` | The user isn't authorized. |
+| `409 (Conflict)` | The Workitem already exists. |
+| `415 (Unsupported Media Type)` | The provided `Content-Type` isn't supported. |
+| `503 (Service Unavailable)` | The service is unavailable or busy. Try again later. |
-#### Create Response Payload
+#### Create response payload
-A success response will have no payload. The `Location` and `Content-Location` response headers will contain a URI reference to the created Workitem.
+A success response has no payload. The `Location` and `Content-Location` response headers contain a URI reference to the created Workitem.
-A failure response payload will contain a message describing the failure.
+A failure response payload contains a message describing the failure.
-### Request Cancellation
+### Request cancellation
This transaction enables the user to request cancellation of a non-owned Workitem.
There are [four valid Workitem states](https://dicom.nema.org/medical/dicom/curr
* `CANCELED` * `COMPLETED`
-This transaction will only succeed against Workitems in the `SCHEDULED` state. Any user can claim ownership of a Workitem by setting its Transaction UID and changing its state to `IN PROGRESS`. From then on, a user can only modify the Workitem by providing the correct Transaction UID. While UPS defines Watch and Event SOP classes that allow cancellation requests and other events to be forwarded, this DICOM service doesn't implement these classes, and so cancellation requests on workitems that are `IN PROGRESS` will return failure. An owned Workitem can be canceled via the [Change Workitem State](#change-workitem-state) transaction.
+This transaction only succeeds against Workitems in the `SCHEDULED` state. Any user can claim ownership of a Workitem by setting its Transaction UID and changing its state to `IN PROGRESS`. From then on, a user can only modify the Workitem by providing the correct Transaction UID. While UPS defines Watch and Event SOP classes that allow cancellation requests and other events to be forwarded, this DICOM service doesn't implement these classes, and so cancellation requests on workitems that are `IN PROGRESS` return failure. An owned Workitem can be canceled via the [Change Workitem State](#change-workitem-state) transaction.
-|Method |Path| Description|
-|:|:|:|
-|POST |../workitems/{workitem}/cancelrequest| Request the cancellation of a scheduled Workitem|
+| Method | Path | Description |
+| : | :- | :-- |
+| POST | ../workitems/{workitem}/cancelrequest | Request the cancellation of a scheduled Workitem |
The `Content-Type` header is required, and must have the value `application/dicom+json`. The request payload may include Action Information as [defined in the DICOM Standard](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.2.2-1).
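A minimal sketch of a cancellation request follows; the service URL, token, Workitem UID, and the optional Reason For Cancellation value are placeholders, and the array wrapping of the DICOM JSON payload is an assumption.

```python
import requests

SERVICE_URL = "https://<your-dicom-service>.dicom.azurehealthcareapis.com/v1"  # placeholder
TOKEN = "<access-token>"            # placeholder
WORKITEM_UID = "1.2.3.4.5.6.7.8.9"  # placeholder UID

# Optional Action Information; here only a Reason For Cancellation (0074,1238) is supplied.
action_info = [{
    "00741238": {"vr": "LT", "Value": ["Scheduled by mistake"]},
}]

response = requests.post(
    f"{SERVICE_URL}/workitems/{WORKITEM_UID}/cancelrequest",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/dicom+json",
    },
    json=action_info,
)

print(response.status_code)  # 202 Accepted; the Workitem state may not have changed yet
```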
-#### Request Cancellation Response Status Codes
+#### Request cancellation response status codes
-|Code |Description|
-|:|:|
-|`202 (Accepted)`| The request was accepted by the server, but the Target Workitem state hasn't necessarily changed yet.|
-|`400 (Bad Request)`| There was a problem with the syntax of the request.|
-|`401 (Unauthorized)`| The client isn't authenticated.
-|`403 (Forbidden)` | The user isn't authorized. |
-|`404 (Not Found)`| The Target Workitem wasn't found.
-|`409 (Conflict)`| The request is inconsistent with the current state of the Target Workitem. For example, the Target Workitem is in the **SCHEDULED** or **COMPLETED** state.
-|`415 (Unsupported Media Type)` |The provided `Content-Type` isn't supported.|
+| Code | Description |
+| : | :- |
+| `202 (Accepted)` | The request was accepted by the server, but the Target Workitem state hasn't necessarily changed yet. |
+| `400 (Bad Request)` | There was a problem with the syntax of the request. |
+| `401 (Unauthorized)` | The client isn't authenticated. |
+| `403 (Forbidden)` | The user isn't authorized. |
+| `404 (Not Found)` | The Target Workitem wasn't found. |
+| `409 (Conflict)` | The request is inconsistent with the current state of the Target Workitem. For example, the Target Workitem is in the `SCHEDULED` or `COMPLETED` state. |
+| `415 (Unsupported Media Type)` | The provided `Content-Type` isn't supported. |
-#### Request Cancellation Response Payload
+#### Request cancellation response payload
-A success response will have no payload, and a failure response payload will contain a message describing the failure. If the Workitem Instance is already in a canceled state, the response will include the following HTTP Warning header: `299: The UPS is already in the requested state of CANCELED.`
+A success response has no payload, and a failure response payload contains a message describing the failure.
+If the Workitem Instance is already in a canceled state, the response includes the following HTTP Warning header:
+`299: The UPS is already in the requested state of CANCELED.`
### Retrieve Workitem
Refer to: https://dicom.nema.org/medical/dicom/current/output/html/part18.html#s
If the Workitem exists on the origin server, the Workitem shall be returned in an Acceptable Media Type. The returned Workitem shall not contain the Transaction UID (0008,1195) Attribute. This is necessary to preserve this Attribute's role as an access lock.
-|Method |Path |Description
-|:|:|:|
-|GET| ../workitems/{workitem}| Request to retrieve a Workitem
+| Method | Path | Description |
+| : | :- | : |
+| GET | ../workitems/{workitem} | Request to retrieve a Workitem |
The `Accept` header is required and must have the value `application/dicom+json`.
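A retrieval sketch, with placeholder service URL, token, and Workitem UID:

```python
import requests

SERVICE_URL = "https://<your-dicom-service>.dicom.azurehealthcareapis.com/v1"  # placeholder
TOKEN = "<access-token>"            # placeholder
WORKITEM_UID = "1.2.3.4.5.6.7.8.9"  # placeholder UID

response = requests.get(
    f"{SERVICE_URL}/workitems/{WORKITEM_UID}",
    headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/dicom+json"},
)

print(response.status_code)
print(response.json())  # the returned dataset omits Transaction UID (0008,1195)
```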
-#### Retrieve Workitem Response Status Codes
+#### Retrieve Workitem response status codes
-|Code |Description|
-|: |:
-|`200 (OK)`| Workitem Instance was successfully retrieved.|
-|`400 (Bad Request)`| There was a problem with the request.|
-|`401 (Unauthorized)`| The client isn't authenticated.|
-|`403 (Forbidden)` | The user isn't authorized. |
-|`404 (Not Found)`| The Target Workitem wasn't found.|
+| Code | Description |
+| :- | :- |
+| `200 (OK)` | Workitem Instance was successfully retrieved. |
+| `400 (Bad Request)` | There was a problem with the request. |
+| `401 (Unauthorized)` | The client isn't authenticated. |
+| `403 (Forbidden)` | The user isn't authorized. |
+| `404 (Not Found)` | The Target Workitem wasn't found. |
-#### Retrieve Workitem Response Payload
+#### Retrieve Workitem response payload
* A success response has a single part payload containing the requested Workitem in the Selected Media Type. * The returned Workitem shall not contain the Transaction UID (0008, 1195) attribute of the Workitem, since that should only be known to the Owner.
This transaction modifies attributes of an existing Workitem. It corresponds to
Refer to: https://dicom.nema.org/medical/dicom/current/output/html/part18.html#sect_11.6
-To update a Workitem currently in the **SCHEDULED** state, the `Transaction UID` attribute shall not be present. For a Workitem in the **IN PROGRESS** state, the request must include the current Transaction UID as a query parameter. If the Workitem is already in the **COMPLETED** or **CANCELED** states, the response will be `400 (Bad Request)`.
+To update a Workitem currently in the `SCHEDULED` state, the `Transaction UID` attribute shall not be present. For a Workitem in the `IN PROGRESS` state, the request must include the current Transaction UID as a query parameter. If the Workitem is already in the `COMPLETED` or `CANCELED` states, the response is `400 (Bad Request)`.
-|Method |Path |Description
-|:|:|:|
-|POST| ../workitems/{workitem}?{transaction-uid}| Update Workitem Transaction|
+| Method | Path | Description |
+| : | : | :-- |
+| POST | ../workitems/{workitem}?{transaction-uid} | Update Workitem Transaction |
The `Content-Type` header is required, and must have the value `application/dicom+json`.
-The request payload contains a dataset with the changes to be applied to the target Workitem. When modifying a sequence, the request must include all Items in the sequence, not just the Items to be modified. When multiple Attributes need updating as a group, do this as multiple Attributes in a single request, not as multiple requests.
+The request payload contains a dataset with the changes to be applied to the target Workitem. When modifying a sequence, the request must include all Items in the sequence, not just the Items to be modified.
+When multiple Attributes need to be updated as a group, do this as multiple Attributes in a single request, not as multiple requests.
-There are a number of requirements related to DICOM data attributes in the context of a specific transaction. Attributes may be required to be present, required to not be present, required to be empty, or required to not be empty. These requirements can be found in this table.
+There are many requirements related to DICOM data attributes in the context of a specific transaction. Attributes may be
+required to be present, required to not be present, required to be empty, or required to not be empty. These requirements can be
+found in [this table](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.2.5-3).
-Notes on dataset attributes:
-
-* **Conditional requirement codes:** All the conditional requirement codes including 1C and 2C are treated as optional.
-
-* The request can't set the value of the Procedure Step State (0074,1000) attribute. Procedure Step State is managed using the Change State transaction, or the Request Cancellation transaction.
-
-#### Update Workitem Transaction Response Status Codes
+> [!NOTE]
+> All the conditional requirement codes including 1C and 2C are treated as optional.
-|Code |Description|
-|:|:|
-|`200 (OK)`| The Target Workitem was updated.|
-|`400 (Bad Request)`| There was a problem with the request. For example: (1) the Target Workitem was in the `COMPLETED` or `CANCELED` state. (2) the Transaction UID is missing. (3) the Transaction UID is incorrect. (4) the dataset didn't conform to the requirements.|
-|`401 (Unauthorized)`| The client isn't authenticated.|
-| `403 (Forbidden)` | The user isn't authorized. |
-|`404 (Not Found)`| The Target Workitem wasn't found.|
-|`409 (Conflict)` |The request is inconsistent with the current state of the Target Workitem.|
-|`415 (Unsupported Media Type)`| The provided `Content-Type` isn't supported.|
+> [!NOTE]
+> The request can't set the value of the Procedure Step State (0074,1000) attribute. Procedure Step State is managed using the Change State transaction, or the Request Cancellation transaction.
+
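As a rough sketch, the request below updates a single attribute on an `IN PROGRESS` Workitem. The service URL, token, UIDs, and the Procedure Step Label value are placeholders, and the array wrapping of the DICOM JSON payload is an assumption.

```python
import requests

SERVICE_URL = "https://<your-dicom-service>.dicom.azurehealthcareapis.com/v1"  # placeholder
TOKEN = "<access-token>"                  # placeholder
WORKITEM_UID = "1.2.3.4.5.6.7.8.9"        # placeholder UID
TRANSACTION_UID = "1.2.3.4.5.6.7.8.9.10"  # required only while the Workitem is IN PROGRESS

# Illustrative change: set the Procedure Step Label (0074,1204).
changes = [{
    "00741204": {"vr": "LO", "Value": ["Updated step label"]},
}]

response = requests.post(
    f"{SERVICE_URL}/workitems/{WORKITEM_UID}?{TRANSACTION_UID}",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/dicom+json",
    },
    json=changes,
)

print(response.status_code)  # 200 when the target Workitem was updated
```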
+#### Update Workitem transaction response status codes
+
+| Code | Description |
+| :- | :- |
+| `200 (OK)` | The Target Workitem was updated. |
+| `400 (Bad Request)` | There was a problem with the request. For example: (1) the Target Workitem was in the `COMPLETED` or `CANCELED` state. (2) the Transaction UID is missing. (3) the Transaction UID is incorrect. (4) the dataset didn't conform to the requirements. |
+| `401 (Unauthorized)` | The client isn't authenticated. |
+| `403 (Forbidden)` | The user isn't authorized. |
+| `404 (Not Found)` | The Target Workitem wasn't found. |
+| `409 (Conflict)` | The request is inconsistent with the current state of the Target Workitem. |
+| `415 (Unsupported Media Type)` | The provided `Content-Type` isn't supported. |
-#### Update Workitem Transaction Response Payload
+#### Update Workitem transaction response payload
The origin server shall support header fields as required in [Table 11.6.3-2](https://dicom.nema.org/medical/dicom/current/output/html/part18.html#table_11.6.3-2).
A success response shall have either no payload or a payload containing a Status
A failure response payload may contain a Status Report describing any failures, warnings, or other useful information.
-### Change Workitem State
+### Change Workitem state
This transaction is used to change the state of a Workitem. It corresponds to the UPS DIMSE N-ACTION operation "Change UPS State". State changes are used to claim ownership, complete, or cancel a Workitem. Refer to: https://dicom.nema.org/medical/dicom/current/output/html/part18.html#sect_11.7
-If the Workitem exists on the origin server, the Workitem shall be returned in an Acceptable Media Type. The returned Workitem shall not contain the Transaction UID (0008,1195) attribute. This is necessary to preserve this attribute's role as an access lock, as described here.
+If the Workitem exists on the origin server, the Workitem shall be returned in an Acceptable Media Type. The returned Workitem shall not contain the Transaction UID (0008,1195) attribute. This is necessary to preserve this Attribute's role as an access lock as described [here](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#sect_CC.1.1).
-|Method| Path| Description|
-|:|:|:|
-|PUT| ../workitems/{workitem}/state|Change Workitem State |
+| Method | Path | Description |
+| : | : | :-- |
+| PUT | ../workitems/{workitem}/state | Change Workitem State |
The `Accept` header is required, and must have the value `application/dicom+json`. The request payload shall contain the Change UPS State Data Elements. These data elements are:
-* **Transaction UID (0008, 1195)** The request payload shall include a Transaction UID. The user agent creates the Transaction UID when requesting a transition to the `IN PROGRESS` state for a given Workitem. The user agent provides that Transaction UID in subsequent transactions with that Workitem.
-* **Procedure Step State (0074, 1000)** The legal values correspond to the requested state transition. They are: `IN PROGRESS`, `COMPLETED`, or `CANCELED`.
+* **Transaction UID (0008, 1195)**. The request payload shall include a Transaction UID. The user agent creates the Transaction UID when requesting a transition to the `IN PROGRESS` state for a given Workitem. The user agent provides that Transaction UID in subsequent transactions with that Workitem.
+* **Procedure Step State (0074, 1000)**. The legal values correspond to the requested state transition. They are: `IN PROGRESS`, `COMPLETED`, or `CANCELED`.
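A sketch of claiming ownership by moving a Workitem to `IN PROGRESS` follows; the service URL, token, and UIDs are placeholders, and the array wrapping of the DICOM JSON payload is an assumption.

```python
import requests

SERVICE_URL = "https://<your-dicom-service>.dicom.azurehealthcareapis.com/v1"  # placeholder
TOKEN = "<access-token>"                  # placeholder
WORKITEM_UID = "1.2.3.4.5.6.7.8.9"        # placeholder UID
TRANSACTION_UID = "1.2.3.4.5.6.7.8.9.10"  # generated by the user agent for this claim

change_state = [{
    "00081195": {"vr": "UI", "Value": [TRANSACTION_UID]},  # Transaction UID
    "00741000": {"vr": "CS", "Value": ["IN PROGRESS"]},    # Procedure Step State
}]

response = requests.put(
    f"{SERVICE_URL}/workitems/{WORKITEM_UID}/state",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/dicom+json",
        "Content-Type": "application/dicom+json",
    },
    json=change_state,
)

print(response.status_code)  # 200 on success
```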
-#### Change Workitem State Response Status Codes
+#### Change Workitem state response status codes
-|Code| Description|
-|:|:|
-|`200 (OK)`| Workitem Instance was successfully retrieved.|
-|`400 (Bad Request)` |The request can't be performed for one of the following reasons: (1) the request is invalid given the current state of the Target Workitem. (2) the Transaction UID is missing. (3) the Transaction UID is incorrect|
-|`401 (Unauthorized)` |The client isn't authenticated.|
-|`403 (Forbidden)` | The user isn't authorized. |
-|`404 (Not Found)`| The Target Workitem wasn't found.|
-|`409 (Conflict)`| The request is inconsistent with the current state of the Target Workitem.|
+| Code | Description |
+| :- | :- |
+| `200 (OK)` | Workitem Instance was successfully retrieved. |
+| `400 (Bad Request)` | The request can't be performed for one of the following reasons: (1) the request is invalid given the current state of the Target Workitem. (2) the Transaction UID is missing. (3) the Transaction UID is incorrect |
+| `401 (Unauthorized)` | The client isn't authenticated. |
+| `403 (Forbidden)` | The user isn't authorized. |
+| `404 (Not Found)` | The Target Workitem wasn't found. |
+| `409 (Conflict)` | The request is inconsistent with the current state of the Target Workitem. |
-#### Change Workitem State Response Payload
+#### Change Workitem state response payload
-* Responses will include the header fields specified in [section 11.7.3.2](https://dicom.nema.org/medical/dicom/current/output/html/part18.html#sect_11.7.3.2).
+* Responses include the header fields specified in [section 11.7.3.2](https://dicom.nema.org/medical/dicom/current/output/html/part18.html#sect_11.7.3.2).
* A success response shall have no payload. * A failure response payload may contain a Status Report describing any failures, warnings, or other useful information.
The request payload shall contain the Change UPS State Data Elements. These data
This transaction enables you to search for Workitems by attributes.
-|Method |Path| Description|
-|:|:|:|
-|GET| ../workitems?| Search for Workitems|
+| Method | Path | Description |
+| :-- | :- | :-- |
+| GET | ../workitems? | Search for Workitems |
The following `Accept` header(s) are supported for searching:
-`application/dicom+json`
+* `application/dicom+json`
#### Supported Search Parameters The following parameters for each query are supported:
-|Key |Support| Values| Allowed| Count |Description|
-|: |: |: |: |: |:|
-|`{attributeID}=`| `{value}` |0...N |Search for attribute/ value matching in query.
-|`includefield=` |`{attributeID} all`| 0...N |The additional attributes to return in the response. Only top-level attributes can be specified to be included - not attributes that are part of sequences. Both public and private tags are supported. When `all` is provided, see [Search Response](#search-response) for more information about which attributes will be returned for each query type. If a mixture of `{attributeID}` and `all` is provided, the server will default to using 'all'.
-|`limit=`| `{value}`| 0...1| Integer value to limit the number of values returned in the response. Value can be between the range `1 >= x <= 200`. Defaulted to `100`.|
-|`offset=`| `{value}`| 0...1| Skip {value} results. If an offset is provided larger than the number of search query results, a `204 (no content)` response will be returned.
-|`fuzzymatching=` |`true/false`| 0...1 |If true fuzzy matching is applied to any attributes with the Person Name (PN) Value Representation (VR). It will do a prefix word match of any name part inside these attributes. For example, if `PatientName` is `John^Doe`, then `joh`, `do`, `jo do`, `Doe` and `John Doe` will all match. However `ohn` will **not** match.|
+| Key | Support Value(s) | Allowed Count | Description |
+| : | :- | : | :- |
+| `{attributeID}=` | `{value}` | 0...N | Search for attribute/ value matching in query. |
+| `includefield=` | `{attributeID}`<br/>`all` | 0...N | The additional attributes to return in the response. Only top-level attributes can be specified to be included - not attributes that are part of sequences. Both public and private tags are supported. When `all` is provided, see [Search Response](#search-response) for more information about which attributes are returned for each query type. If a mixture of `{attributeID}` and `all` is provided, the server defaults to using 'all'. |
+| `limit=` | `{value}` | 0...1 | Integer value to limit the number of values returned in the response. Value can be between the range `1 >= x <= 200`. Defaulted to `100`. |
+| `offset=` | `{value}` | 0...1 | Skip {value} results. If an offset is provided larger than the number of search query results, a `204 (no content)` response is returned. |
+| `fuzzymatching=` | `true` \| `false` | 0...1 | If true, fuzzy matching is applied to any attributes with the Person Name (PN) Value Representation (VR). It does a prefix word match of any name part inside these attributes. For example, if `PatientName` is `John^Doe`, then `joh`, `do`, `jo do`, `Doe` and `John Doe` all match. However, `ohn` does **not** match. |
##### Searchable Attributes
We support searching on these attributes:
We support these matching types:
-|Search Type |Supported Attribute| Example|
-|:|:|:|
-|Range Query| `ScheduledΓÇïProcedureΓÇïStepΓÇïStartΓÇïDateΓÇïTime`| `{attributeID}={value1}-{value2}`. For date/time values, we support an inclusive range on the tag. This will be mapped to `attributeID >= {value1}` AND `attributeID <= {value2}`. If `{value1}` isn't specified, all occurrences of dates/times prior to and including `{value2}` will be matched. Likewise, if `{value2}` isn't specified, all occurrences of `{value1}` and subsequent dates/times will be matched. However, one of these values has to be present. `{attributeID}={value1}-` and `{attributeID}=-{value2}` are valid, however, `{attributeID}=-` is invalid.
-|Exact Match |All supported attributes| `{attributeID}={value1}`
-|Fuzzy Match| `PatientName` |Matches any component of the name that starts with the value.
+| Search Type | Supported Attribute | Example |
+| :- | : | : |
+| Range Query | `ScheduledProcedureStepStartDateTime` | `{attributeID}={value1}-{value2}`. For date/time values, we support an inclusive range on the tag. This will be mapped to `attributeID >= {value1} AND attributeID <= {value2}`. If `{value1}` isn't specified, all occurrences of dates/times prior to and including `{value2}` will be matched. Likewise, if `{value2}` isn't specified, all occurrences of `{value1}` and subsequent dates/times will be matched. However, one of these values has to be present. `{attributeID}={value1}-` and `{attributeID}=-{value2}` are valid, however, `{attributeID}=-` is invalid. |
+| Exact Match | All supported attributes | `{attributeID}={value1}` |
+| Fuzzy Match | `PatientName` | Matches any component of the name that starts with the value. |
> [!NOTE] > While we don't support full sequence matching, we do support exact match on the attributes listed above that are contained in a sequence. ##### Attribute ID
-Tags can be encoded in a number of ways for the query parameter. We've partially implemented the standard as defined in [PS3.18 6.7.1.1.1](http://dicom.nema.org/medical/dicom/2019a/output/chtml/part18/sect_6.7.html#sect_6.7.1.1.1). The following encodings for a tag are supported:
+Tags can be encoded in many ways for the query parameter. We have partially implemented the standard as defined in [PS3.18 6.7.1.1.1](http://dicom.nema.org/medical/dicom/2019a/output/chtml/part18/sect_6.7.html#sect_6.7.1.1.1). The following encodings for a tag are supported:
-|Value |Example|
-|:|:|
-|`{group}{element}` |`00100010`|
-|`{dicomKeyword}` |`PatientName`|
+| Value | Example |
+| :-- | : |
+| `{group}{element}` | `00100010` |
+| `{dicomKeyword}` | `PatientName` |
Example query:
Example query:
#### Search Response
-The response will be an array of `0...N` DICOM datasets with the following attributes returned:
+The response is an array of `0...N` DICOM datasets with the following attributes returned:
* All attributes in [DICOM PS3.4 Table CC.2.5-3](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.2.5-3) with a Return Key Type of 1 or 2 * All attributes in [DICOM PS3.4 Table CC.2.5-3](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.2.5-3) with a Return Key Type of 1C for which the conditional requirements are met
The response will be an array of `0...N` DICOM datasets with the following attri
#### Search Response Codes
-The query API will return one of the following status codes in the response:
+The query API returns one of the following status codes in the response:
-|Code |Description|
-|:|:|
-|`200 (OK)`| The response payload contains all the matching resource.|
-|`206 (Partial Content)` | The response payload contains only some of the search results, and the rest can be requested through the appropriate request.|
-|`204 (No Content)`| The search completed successfully, but returned no results.|
-|`400 (Bad Request)`| The was a problem with the request. For example, invalid Query Parameter syntax. The Response body contains details of the failure.|
-|`401 (Unauthorized)`| The client isn't authenticated.|
-|`403 (Forbidden)` | The user isn't authorized. |
-|`503 (Service Unavailable)` | The service is unavailable or busy. Try again later.|
+| Code | Description |
+| :-- | :- |
+| `200 (OK)` | The response payload contains all the matching resources. |
+| `206 (Partial Content)` | The response payload contains only some of the search results, and the rest can be requested through the appropriate request. |
+| `204 (No Content)` | The search completed successfully but returned no results. |
+| `400 (Bad Request)` | There was a problem with the request. For example, invalid Query Parameter syntax. The response body contains details of the failure. |
+| `401 (Unauthorized)` | The client isn't authenticated. |
+| `403 (Forbidden)` | The user isn't authorized. |
+| `503 (Service Unavailable)` | The service is unavailable or busy. Try again later. |
#### Additional Notes
-The query API will not return `413 (request entity too large)`. If the requested query response limit is outside of the acceptable range, a bad request will be returned. Anything requested within the acceptable range, will be resolved.
+The query API doesn't return `413 (request entity too large)`. If the requested query response limit is outside of the acceptable range, a bad request is returned. Anything requested within the acceptable range will be resolved.
* Paged results are optimized to return the matched newest instance first; this may result in duplicate records in subsequent pages if newer data matching the query was added. * Matching is case insensitive and accent insensitive for PN VR types.
The query API will not return `413 (request entity too large)`. If the requested
### Next Steps
-For more information, see
+For more information about the DICOM service, see
>[!div class="nextstepaction"] >[Overview of the DICOM service](dicom-services-overview.md)
healthcare-apis Enable Diagnostic Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/enable-diagnostic-logging.md
Last updated 03/02/2022
-# Enable Diagnostic Logging in the DICOM service
+# Enable audit and diagnostic logging in the DICOM service
In this article, you'll learn how to enable diagnostic logging in the DICOM service and review some sample queries for these logs. Access to diagnostic logs is essential for any healthcare service where compliance with regulatory requirements is a must. The feature that enables diagnostic logs in the DICOM service is [Diagnostic settings](../../azure-monitor/essentials/diagnostic-settings.md) in the Azure portal.
-## Enable audit logs
+## Enable logs
-1. To enable diagnostic logging DICOM service, select your DICOM service in the Azure portal.
+1. To enable logging for the DICOM service, select your DICOM service in the Azure portal.
2. Select the **Activity log** blade, and then select **Diagnostic settings**. [ ![Screenshot of Azure activity log.](media/dicom-activity-log.png) ](media/dicom-activity-log.png#lightbox)
In this article, you'll learn how to enable diagnostic logging in DICOM service
For information on how to work with diagnostic logs, see [Azure Resource Log documentation](../../azure-monitor/essentials/platform-logs-overview.md)
-## Audit log details
+## Log details
+The log schema differs based on the destination. Log Analytics uses a schema that differs from other destinations, and each log type also has its own schema.
-The DICOM service returns the following fields in the audit log:
+### Audit log details
+
+#### Raw logs
+
+The DICOM service returns the following fields in the audit log as seen when streamed outside of Log Analytics:
|Field Name |Type |Notes | |||| |correlationId|String|Correlation ID
-|category|String|Log Category (We currently have 'AuditLogs')
|operationName|String|Describes the type of operation (for example, Retrieve, Store, Query, etc.) |time|DateTime|Date and time of the event. |resourceId|String| Azure path to the resource. |identity|Dynamic|A generic property bag containing identity information (currently doesn't apply to DICOM).
-|callerIpAddress|String|The caller's IP address.
-|Location|String|The location of the server that processed the request.
+|location|String|The location of the server that processed the request.
|uri|String|The request URI. |resultType|String| The available values currently are Started, Succeeded, or Failed. |resultSignature|Int|The HTTP Status Code (for example, 200)
-|properties|String|Describes the properties including resource type, resource name, subscription ID, audit action, etc.
|type|String|Type of log (it's always MicrosoftHealthcareApisAuditLog in this case).
+|level|String|Log level (Informational, Error).
++
+#### Log Analytics logs
+
+The DICOM service returns the following fields in the audit log in Log Analytics:
+
+|Field Name |Type |Notes |
+||||
+|CorrelationId|String|Correlation ID
+|OperationName|String|Describes the type of operation (for example, Retrieve, Store, Query, etc.)
+|TimeGenerated [UTC]|DateTime|Date and time of the event.
+|_ResourceId|String| Azure path to the resource.
+|Identity|Dynamic|A generic property bag containing identity information (currently doesn't apply to DICOM).
+|Uri|String|The request URI.
+|ResultType|String| The available values currently are Started, Succeeded, or Failed.
+|StatusCode|Int|The HTTP Status Code (for example, 200)
+|Type|String|Type of log (it's always AHDSDicomAuditLogs in this case).
|Level|String|Log level (Informational, Error).
-|operationVersion|String| Currently empty. Will be utilized to show api version.
+|TenantId|String| Tenant ID.
++
+### Diagnostic log details
+
+#### Raw logs
+
+The DICOM service returns the following fields in the diagnostic log as seen when streamed outside of Log Analytics:
+|Field Name |Type |Notes |
+||||
+|correlationId|String|Correlation ID
+|operationName|String|Describes the type of operation (for example, Retrieve, Store, Query, etc.)
+|time|DateTime|Date and time of the event.
+|resultDescription|String|Description of the log entry. An example here is a diagnostic log with a validation warning message when storing a file.
+|resourceId|String| Azure path to the resource.
+|identity|Dynamic|A generic property bag containing identity information (currently doesn't apply to DICOM).
+|location|String|The location of the server that processed the request.
+|properties|String|Additional information about the event in JSON array format. Examples include DICOM identifiers present in the request.
+|level|String|Log level (Informational, Error).
+
+#### Log Analytics logs
+
+The DICOM service returns the following fields in the diagnostic log in Log Analytics:
+
+|Field Name |Type |Notes |
+||||
+|CorrelationId|String|Correlation ID
+|OperationName|String|Describes the type of operation (for example, Retrieve, Store, Query, etc.)
+|TimeGenerated|DateTime|Date and time of the event.
+|Message|String|Description of the log entry. An example here is a diagnostic log with a validation warning message when storing a file.
+|Location|String|The location of the server that processed the request.
+|Properties|String|Additional information about the event in JSON array format. Examples include DICOM identifiers present in the request.
+|LogLevel|String|Log level (Informational, Error).
-## Sample queries
+## Sample Log Analytics queries
Below are a few basic Log Analytics queries you can use to explore your log data.
healthcare-apis Configure Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-import-data.md
Last updated 06/06/2022-+ # Configure bulk-import settings
healthcare-apis Deploy Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-arm-template.md
Title: Deploy the MedTech service using an Azure Resource Manager template - Azure Health Data Services
-description: In this article, you'll learn how to deploy the MedTech service using an Azure Resource Manager template.
+description: Learn how to deploy the MedTech service using an Azure Resource Manager template.
Previously updated : 04/14/2023 Last updated : 04/25/2023
To implement infrastructure as code for your Azure solutions, use Azure Resource Manager templates (ARM templates). The template is a [JavaScript Object Notation (JSON)](https://www.json.org/) file that defines the infrastructure and configuration for your project. The template uses declarative syntax, which lets you state what you intend to deploy without having to write the sequence of programming commands to create it. In the template, you specify the resources to deploy and the properties for those resources.
-In this quickstart, you'll learn how to:
+In this quickstart, learn how to:
- Open an ARM template in the Azure portal. - Configure the ARM template for your deployment.
When you have these prerequisites, you're ready to configure the ARM template by
## Review the ARM template - Optional
-The ARM template used to deploy the resources in this quickstart is available at [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/iotconnectors-with-iothub/) by using the *azuredeploy.json* file on [GitHub](https://github.com/azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors-with-iothub).
+The ARM template used to deploy the resources in this quickstart is available at [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/iotconnectors/) by using the *azuredeploy.json* file on [GitHub](https://github.com/azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/).
## Use the Deploy to Azure button
To begin deployment in the Azure portal, select the **Deploy to Azure** button:
- **Destination Mapping** - Don't change the default values for this quickstart.
- :::image type="content" source="media\deploy-new-arm\iot-deploy-quickstart-options.png" alt-text="Screenshot of Azure portal page displaying deployment options for the Azure Health Data Service MedTech service." lightbox="media\deploy-new-arm\iot-deploy-quickstart-options.png":::
+ :::image type="content" source="media\deploy-arm-template\iot-deploy-quickstart-options.png" alt-text="Screenshot of Azure portal page displaying deployment options for the Azure Health Data Service MedTech service." lightbox="media\deploy-arm-template\iot-deploy-quickstart-options.png":::
2. To validate your configuration, select **Review + create**.
- :::image type="content" source="media\deploy-new-arm\iot-review-and-create-button.png" alt-text="Screenshot that shows the Review + create button selected in the Azure portal.":::
+ :::image type="content" source="media\deploy-arm-template\iot-review-and-create-button.png" alt-text="Screenshot that shows the Review + create button selected in the Azure portal.":::
3. In **Review + create**, check the template validation status. If validation is successful, the template displays **Validation Passed**. If validation fails, fix the detail that's indicated in the error message, and then select **Review + create** again.
- :::image type="content" source="media\deploy-new-arm\iot-validation-completed.png" alt-text="Screenshot that shows the Review + create pane displaying the Validation Passed message.":::
+ :::image type="content" source="media\deploy-arm-template\iot-validation-completed.png" alt-text="Screenshot that shows the Review + create pane displaying the Validation Passed message.":::
4. After a successful validation, to begin the deployment, select **Create**.
- :::image type="content" source="media\deploy-new-arm\iot-create-button.png" alt-text="Screenshot that shows the highlighted Create button.":::
+ :::image type="content" source="media\deploy-arm-template\iot-create-button.png" alt-text="Screenshot that shows the highlighted Create button.":::
5. In a few minutes, the Azure portal displays the message that your deployment is completed.
- :::image type="content" source="media\deploy-new-arm\iot-deployment-complete-banner.png" alt-text="Screenshot that shows a green checkmark and the message Your deployment is complete.":::
+ :::image type="content" source="media\deploy-arm-template\iot-deployment-complete-banner.png" alt-text="Screenshot that shows a green checkmark and the message Your deployment is complete.":::
> [!IMPORTANT] > If you're going to allow access from multiple services to the device message event hub, it's required that each service has its own event hub consumer group.
To begin deployment in the Azure portal, select the **Deploy to Azure** button:
> > Examples: >
- > - Two MedTech services accessing the same device message event hub.
+ > * Two MedTech services accessing the same device message event hub.
>
- > - A MedTech service and a storage writer application accessing the same device message event hub.
+ > * A MedTech service and a storage writer application accessing the same device message event hub.
## Review deployed resources and access permissions When deployment is completed, the following resources and access roles are created in the ARM template deployment: -- Azure Event Hubs namespace and device message event hub. In this deployment, the device message event hub is named *devicedata*.
+* Event Hubs namespace and event hub. In this deployment, the event hub is named *devicedata*.
- - An event hub consumer group. In this deployment, the consumer group is named *$Default*.
+ * An event hub consumer group. In this deployment, the consumer group is named *$Default*.
- - An Azure Event Hubs Data Sender role. In this deployment, the sender role is named *devicedatasender* and can be used to provide access to the device event hub using a shared access signature (SAS). To learn more about authorizing access using a SAS, see [Authorizing access to Event Hubs resources using Shared Access Signatures](../../event-hubs/authorize-access-shared-access-signature.md).
+ * An Azure Event Hubs Data Sender role. In this deployment, the sender role is named *devicedatasender* and can be used to provide access to the device event hub using a shared access signature (SAS). To learn more about authorizing access using a SAS, see [Authorizing access to Event Hubs resources using Shared Access Signatures](../../event-hubs/authorize-access-shared-access-signature.md).
-- A Health Data Services workspace.
+* A Health Data Services workspace.
-- A Health Data Services Fast Healthcare Interoperability Resources FHIR service.
+* A Health Data Services Fast Healthcare Interoperability Resources FHIR service.
-- A Health Data Services MedTech service with the required [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) roles:
+* A Health Data Services MedTech service with the required [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) roles:
- - For the device message event hub, the Azure Events Hubs Data Receiver role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the device message event hub.
+ * For the event hub, the Azure Event Hubs Data Receiver role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the device message event hub.
- - For the FHIR service, the FHIR Data Writer role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the FHIR service.
+ * For the FHIR service, the FHIR Data Writer role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the FHIR service.
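The *devicedatasender* authorization rule listed above can be used to generate a SAS for sending messages to the device message event hub. As a sketch, assuming placeholder resource group and namespace names, you can retrieve its keys and connection string with the Azure CLI:

```azurecli-interactive
# Sketch: list the keys of the devicedatasender authorization rule (placeholder names).
az eventhubs eventhub authorization-rule keys list \
  --resource-group <resource-group> \
  --namespace-name <event-hubs-namespace> \
  --eventhub-name devicedata \
  --name devicedatasender
```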
> [!IMPORTANT] > In this quickstart, the ARM template configures the MedTech service to operate in **Create** mode. A patient resource and a device resource are created for each device that sends data to your FHIR service.
When deployment is completed, the following resources and access roles are creat
## Post-deployment mappings
-After you've successfully deployed an instance of the MedTech service, you'll still need to provide conforming and valid device and FHIR destination mappings.
+After you successfully deploy an instance of the MedTech service, you still need to provide conforming and valid device and FHIR destination mappings.
+ * To learn about the device mapping, see [Overview of the MedTech service device mapping](overview-of-device-mapping.md).
+ * To learn about the FHIR destination mapping, see [Overview of the MedTech service FHIR destination mapping](overview-of-fhir-destination-mapping.md).
## Next steps
-In this quickstart, you learned how to deploy an instance of the MedTech service in the Azure portal using an ARM template with a **Deploy to Azure** button.
+In this quickstart, you learned how to deploy the MedTech service in the Azure portal using an ARM template with the **Deploy to Azure** button.
To learn about other methods for deploying the MedTech service, see
healthcare-apis Deploy Manual Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-manual-config.md
Follow these six steps to fill in the Basics tab configuration:
The Basics tab should now look like this after you've filled it out:
- :::image type="content" source="media\deploy-new-config\select-device-mapping-button.png" alt-text="Screenshot of Basics tab filled out correctly." lightbox="media\deploy-new-config\select-device-mapping-button.png":::
+ :::image type="content" source="media\deploy-manual-config\select-device-mapping-button.png" alt-text="Screenshot of Basics tab filled out correctly." lightbox="media\deploy-manual-config\select-device-mapping-button.png":::
You're now ready to select the Device mapping tab and begin setting up the device mappings for your MedTech service.
To begin the validation process of your MedTech service deployment, select the *
Your validation screen should look something like this:
- :::image type="content" source="media\deploy-new-config\validate-and-review-medtech-service.png" alt-text="Screenshot of validation success with details displayed." lightbox="media\deploy-new-config\validate-and-review-medtech-service.png":::
+ :::image type="content" source="media\deploy-manual-config\validate-and-review-medtech-service.png" alt-text="Screenshot of validation success with details displayed." lightbox="media\deploy-manual-config\validate-and-review-medtech-service.png":::
If your MedTech service didn't validate, review the validation failure message, and troubleshoot the issue. Check all properties under each MedTech service tab that you've configured. Go back and try again.
healthcare-apis Deploy Manual Post https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-manual-post.md
Previously updated : 03/10/2023 Last updated : 04/25/2023
When you're satisfied with your configuration and it has been successfully valid
Your screen should look something like this:
- :::image type="content" source="media\deploy-new-deploy\created-medtech-service.png" alt-text="Screenshot of the MedTech service deployment completion." lightbox="media\deploy-new-deploy\created-medtech-service.png":::
+ :::image type="content" source="media\deploy-manual-post\created-medtech-service.png" alt-text="Screenshot of the MedTech service deployment completion." lightbox="media\deploy-manual-post\created-medtech-service.png":::
## Manual post-deployment requirements
Follow these steps to grant access to the device message event hub:
13. After the role assignment has been successfully added to the event hub, a notification will display on your screen with a green check mark. This notification indicates that your MedTech service can now read from your device message event hub. It should look like this:
- :::image type="content" source="media\deploy-new-deploy\validate-medtech-service-managed-identity-added-to-event-hub.png" alt-text="Screenshot of the MedTech service system-assigned managed identity being successfully granted access to the event hub with a red box around the message." lightbox="media\deploy-new-deploy\validate-medtech-service-managed-identity-added-to-event-hub.png":::
+ :::image type="content" source="media\deploy-manual-post\validate-medtech-service-managed-identity-added-to-event-hub.png" alt-text="Screenshot of the MedTech service system-assigned managed identity being successfully granted access to the event hub with a red box around the message." lightbox="media\deploy-manual-post\validate-medtech-service-managed-identity-added-to-event-hub.png":::
For more information about authorizing access to Event Hubs resources, see [Authorize access with Azure Active Directory](../../event-hubs/authorize-access-azure-active-directory.md).
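As an alternative to the portal steps, the same role assignment can be created with the Azure CLI. This is a sketch only; the principal ID of your MedTech service's system-assigned managed identity and the event hub resource ID are placeholders that you replace with your own values.

```azurecli-interactive
# Sketch: grant the MedTech service's system-assigned managed identity read access
# to the device message event hub (all values are placeholders).
az role assignment create \
  --assignee-object-id "<medtech-service-principal-id>" \
  --assignee-principal-type ServicePrincipal \
  --role "Azure Event Hubs Data Receiver" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventHub/namespaces/<namespace>/eventhubs/<event-hub>"
```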
healthcare-apis Device Messages Through Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/device-messages-through-iot-hub.md
Previously updated : 04/14/2023 Last updated : 04/25/2023
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-For enhanced workflows and ease of use, you can use the MedTech service to receive messages from devices you create and manage through an IoT hub in [Azure IoT Hub](../../iot-hub/iot-concepts-and-iot-hub.md). This tutorial uses an Azure Resource Manager template (ARM template) and a **Deploy to Azure** button to deploy a MedTech service. The template deploys an IoT hub to create and manage devices, and then routes device messages to an event hub in Azure Event Hubs for the MedTech service to pick up and process.
+For enhanced workflows and ease of use, you can use the MedTech service to receive messages from devices you create and manage through an IoT hub in [Azure IoT Hub](../../iot-hub/iot-concepts-and-iot-hub.md). This tutorial uses an Azure Resource Manager template (ARM template) and a **Deploy to Azure** button to deploy a MedTech service. The template deploys an IoT hub to create and manage devices, and then routes the device messages to an event hub for the MedTech service to read and process.
:::image type="content" source="media\device-messages-through-iot-hub\data-flow-diagram.png" border="false" alt-text="Diagram of the IoT device message flow through an IoT hub and event hub, and then into the MedTech service." lightbox="media\device-messages-through-iot-hub\data-flow-diagram.png"::: > [!TIP]
-> To learn how the MedTech service transforms and persists device message data into the FHIR service as FHIR Observations, see [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md).
+> To learn how the MedTech service transforms and persists device data into the FHIR service as FHIR Observations, see [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md).
In this tutorial, you learn how to: > [!div class="checklist"]
-> - Open an ARM template in the Azure portal.
-> - Configure the template for your deployment.
-> - Create a device.
-> - Send a test message.
-> - Review metrics for the test message.
+> * Open an ARM template in the Azure portal.
+> * Configure the template for your deployment.
+> * Create a device.
+> * Send a test message.
+> * Review metrics for the test message.
> [!TIP] > To learn about ARM templates, see [What are ARM templates?](./../../azure-resource-manager/templates/overview.md)
When you have these prerequisites, you're ready to configure the ARM template by
## Review the ARM template - Optional
-The ARM template used to deploy the resources in this tutorial is available at [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/iotconnectors-with-iothub/) by using the _azuredeploy.json_ file on [GitHub](https://github.com/azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors-with-iothub).
+The ARM template used to deploy the resources in this tutorial is available at [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/iotconnectors-with-iothub/) by using the *azuredeploy.json* file on [GitHub](https://github.com/azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors-with-iothub).
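If you prefer the command line over the **Deploy to Azure** button, the same template can be deployed with the Azure CLI. The following sketch assumes a raw template URI derived from the GitHub path above, and the parameter names are assumed to match the **Basename** and **Location** fields described in the next section; all other values are placeholders.

```azurecli-interactive
# Sketch: deploy the iotconnectors-with-iothub quickstart template with the Azure CLI.
# The template URI is derived from the GitHub path referenced above; other values are placeholders.
az group create --name <resource-group> --location <region>

az deployment group create \
  --resource-group <resource-group> \
  --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors-with-iothub/azuredeploy.json" \
  --parameters basename=<basename> location=<location>
```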
## Use the Deploy to Azure button
To begin deployment in the Azure portal, select the **Deploy to Azure** button:
- **Region**: The Azure region of the resource group that's used for the deployment. **Region** autofills by using the resource group region.
- - **Basename**: A value that's appended to the name of the Azure resources and services that are deployed. The examples in this tutorial use the basename _azuredocsdemo_. You can choose your own basename value.
+ - **Basename**: A value that's appended to the name of the Azure resources and services that are deployed. The examples in this tutorial use the basename *azuredocsdemo*. You can choose your own basename value.
- **Location**: A supported Azure region for Azure Health Data Services (the value can be the same as or different from the region your resource group is in). For a list of Azure regions where Health Data Services is available, see [Products available by regions](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=health-data-services).
To begin deployment in the Azure portal, select the **Deploy to Azure** button:
> > Examples: >
- > - Two MedTech services accessing the same device message event hub.
+ > * Two MedTech services accessing the same device message event hub.
>
- > - A MedTech service and a storage writer application accessing the same device message event hub.
+ > * A MedTech service and a storage writer application accessing the same device message event hub.
## Review deployed resources and access permissions When deployment is completed, the following resources and access roles are created in the template deployment: -- An Azure Event Hubs namespace and a device message event hub. In this deployment, the event hub is named _devicedata_.
+* An Event Hubs namespace and event hub. In this deployment, the event hub is named *devicedata*.
- - An event hub consumer group. In this deployment, the consumer group is named _$Default_.
+ * An event hub consumer group. In this deployment, the consumer group is named *$Default*.
- - An Azure Event Hubs Data Sender role. In this deployment, the sender role is named _devicedatasender_ and can be used to provide access to the device event hub using a shared access signature (SAS). To learn more about authorizing access using a SAS, see [Authorizing access to Event Hubs resources using Shared Access Signatures](../../event-hubs/authorize-access-shared-access-signature.md). The Azure Event Hubs Data Sender role isn't used in this tutorial.
+ * An Azure Event Hubs Data Sender role. In this deployment, the sender role is named *devicedatasender* and can be used to provide access to the device event hub using a shared access signature (SAS). To learn more about authorizing access using a SAS, see [Authorizing access to Event Hubs resources using Shared Access Signatures](../../event-hubs/authorize-access-shared-access-signature.md). The Azure Event Hubs Data Sender role isn't used in this tutorial.
-- An Azure IoT Hub with [message routing](../../iot-hub/iot-hub-devguide-messages-d2c.md) configured to send device messages to the device message event hub.
+* An IoT hub with [message routing](../../iot-hub/iot-hub-devguide-messages-d2c.md) configured to send device messages to the event hub.
-- A [user-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) that provides send access from the IoT hub to the device message event hub. The managed identity has the Azure Event Hubs Data Sender role in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the device message event hub.
+* A [user-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md), which provides send access from the IoT hub to the event hub. The managed identity has the Azure Event Hubs Data Sender role in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the event hub.
- A Health Data Services workspace.
When deployment is completed, the following resources and access roles are creat
- A Health Data Services MedTech service with the required [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) roles:
- - For the device message event hub, the Azure Events Hubs Data Receiver role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the device message event hub.
+ - For the event hub, the Azure Event Hubs Data Receiver role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the event hub.
- For the FHIR service, the FHIR Data Writer role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the FHIR service. -- Conforming and valid MedTech service [device](overview-of-device-mapping.md) and [FHIR destination mappings](how-to-configure-fhir-mappings.md). **Resolution type** is set to **Create**.
+- Conforming and valid MedTech service [device](overview-of-device-mapping.md) and [FHIR destination mappings](overview-of-fhir-destination-mapping.md). **Resolution type** is set to **Create**.
> [!IMPORTANT] > In this tutorial, the ARM template configures the MedTech service to operate in **Create** mode. A Patient resource and a Device resource are created for each device that sends data to your FHIR service. >
-> To learn about the MedTech service resolution types **Create** and **Lookup**, see [Destination properties](deploy-new-config.md#destination-properties).
+> To learn about the MedTech service resolution types **Create** and **Lookup**, see [Destination properties](deploy-manual-config.md#destination-properties).
## Create a device and send a test message With your resources successfully deployed, you next connect to your IoT hub, create a device, and send a test message to the IoT hub. After you complete these steps, your MedTech service can: -- Pick up the IoT hub-routed test message from the device message event hub.-- Transform the test message into five FHIR observations.-- Persist the FHIR observations to your FHIR service.
+* Read the IoT hub-routed test message from the event hub.
+* Transform the test message into five FHIR Observations.
+* Persist the FHIR Observations to your FHIR service.
You complete the steps by using Visual Studio Code with the Azure IoT Hub extension:
You complete the steps by using Visual Studio Code with the Azure IoT Hub extens
3. Select the Azure subscription where your IoT hub was provisioned.
-4. Select your IoT hub. The name of your IoT hub is the _basename_ you provided when you provisioned the resources prefixed with **ih-**. An example hub name is _ih-azuredocsdemo_.
+4. Select your IoT hub. The name of your IoT hub is the *basename* you provided when you provisioned the resources prefixed with **ih-**. An example hub name is *ih-azuredocsdemo*.
-5. In Explorer, in **Azure IoT Hub**, select **…** and choose **Create Device**. An example device name is _iot-001_.
+5. In Explorer, in **Azure IoT Hub**, select **…** and choose **Create Device**. An example device name is *iot-001*.
:::image type="content" source="media\device-messages-through-iot-hub\create-device.png" alt-text="Screenshot that shows Visual Studio Code with the Azure IoT Hub extension with Create device selected." lightbox="media\device-messages-through-iot-hub\create-device.png"::: 6. To send a test message from the device to your IoT hub, right-click the device and select **Send D2C Message to IoT Hub**. > [!NOTE]
- > In this device-to-cloud (D2C) example, _cloud_ is the IoT hub in the Azure IoT Hub that receives the device message. Azure IoT Hub supports two-way communications. To set up a cloud-to-device (C2D) scenario, select **Send C2D Message to Device Cloud**.
+ > In this device-to-cloud (D2C) example, *cloud* is the IoT hub in Azure IoT Hub that receives the device message. Azure IoT Hub supports two-way communications. To set up a cloud-to-device (C2D) scenario, select **Send C2D Message to Device Cloud**.
:::image type="content" source="media\device-messages-through-iot-hub\select-device-to-cloud-message.png" alt-text="Screenshot that shows Visual Studio Code with the Azure IoT Hub extension and the Send D2C Message to IoT Hub option selected." lightbox="media\device-messages-through-iot-hub\select-device-to-cloud-message.png"::: 7. In **Send D2C Messages**, select or enter the following values:
- - **Device(s) to send messages from**: The name of the device you created.
+ * **Device(s) to send messages from**: The name of the device you created.
- - **Message(s) per device**: **1**.
+ * **Message(s) per device**: **1**.
- - **Interval between two messages**: **1 second(s)**.
+ * **Interval between two messages**: **1 second(s)**.
- - **Message**: **Plain Text**.
+ * **Message**: **Plain Text**.
- - **Edit**: Clear any existing text, and then paste the following JSON.
+ * **Edit**: Clear any existing text, and then paste the following JSON.
> [!TIP] > You can use the **Copy** option in the right corner of the following test message, and then paste it into the **Edit** option.
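If you'd rather use the Azure CLI than Visual Studio Code for these steps, a rough equivalent looks like the following sketch. It assumes the azure-iot CLI extension, the example names used above (*ih-azuredocsdemo* and *iot-001*), and that you've saved the tutorial's test message JSON to a local file such as *test-message.json*.

```azurecli-interactive
# Sketch: create a device and send a single device-to-cloud test message with the Azure CLI.
# The hub and device names match the examples above; test-message.json is a placeholder file.
az extension add --name azure-iot

az iot hub device-identity create --hub-name ih-azuredocsdemo --device-id iot-001

az iot device send-d2c-message \
  --hub-name ih-azuredocsdemo \
  --device-id iot-001 \
  --data "$(cat test-message.json)"
```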
Now that you have successfully sent a test message to your IoT hub, review your
For your MedTech service metrics, you can see that your MedTech service completed the following steps for the test message: -- **Number of Incoming Messages**: Received the incoming test message from the device message event hub.-- **Number of Normalized Messages**: Created five normalized messages.-- **Number of Measurements**: Created five measurements.-- **Number of FHIR resources**: Created five FHIR resources that are persisted in your FHIR service.
+* **Number of Incoming Messages**: Received the incoming test message from the device message event hub.
+* **Number of Normalized Messages**: Created five normalized messages.
+* **Number of Measurements**: Created five measurements.
+* **Number of FHIR resources**: Created five FHIR resources that are persisted in your FHIR service.
:::image type="content" source="media\device-messages-through-iot-hub\metrics-tile-one.png" alt-text="Screenshot that shows a MedTech service metrics tile and test data metrics." lightbox="media\device-messages-through-iot-hub\metrics-tile-one.png":::
iot-central Tutorial Use Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-use-rest-api.md
description: In this tutorial you use the REST API to create and manage an IoT Central application, add a device, and configure data export. Previously updated : 12/07/2022 Last updated : 04/26/2023
iot-edge Iot Edge For Linux On Windows Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows-security.md
The EFLOW virtual machine is made up of two main partitions *rootfs*, and *data*
Because you may need write access to `/etc`, `/home`, `/root`, `/var` for specific use cases, write access for these directories is done by overlaying them onto our data partition specifically to the directory `/var/.eflow/overlays`. The end result of this is that users can write anything to the previous mentioned directories. For more information about overlays, see [*overlayfs*](https://docs.kernel.org/filesystems/overlayfs.html).
-![EFLOW CR partition layout](./media/iot-edge-for-linux-on-windows-security/eflow-cr-partition-layout.png)
+[ ![EFLOW CR partition layout](./media/iot-edge-for-linux-on-windows-security/eflow-cr-partition-layout.png) ](./media/iot-edge-for-linux-on-windows-security/eflow-cr-partition-layout.png#lightbox)
| Partition | Size | Description |
| -- | -- | -- |
Because you may need write access to `/etc`, `/home`, `/root`, `/var` for specif
| BootEFIB | 8 MB | Firmware partition B for future GRUBless boot |
| BootB | 192 MB | Contains the bootloader for B partition |
| RootFS B | 4 GB | One of two active/passive partitions holding the root file system |
-| Unused | 4 GB | This partition is reserved for future use |
| Log | 1 GB or 6 GB | Logs specific partition mounted under /logs |
| Data | 2 GB to 2 TB | Stateful partition for storing persistent data across updates. Expandable according to the deployment configuration |
In the EFLOW Continuous Release (CR) version, we introduced a change in the tran
Read more about [Windows IoT security premises](/windows/iot/iot-enterprise/os-features/security)
-Stay up-to-date with the latest [IoT Edge for Linux on Windows updates](./iot-edge-for-linux-on-windows-updates.md).
+Stay up-to-date with the latest [IoT Edge for Linux on Windows updates](./iot-edge-for-linux-on-windows-updates.md).
iot-hub Iot Hub Live Data Visualization In Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-live-data-visualization-in-power-bi.md
Title: Real-time data visualization of data from Azure IoT Hub – Power BI
-description: Use Power BI to visualize temperature and humidity data that is collected from the sensor and sent to your Azure IoT hub.
+ Title: Tutorial - IoT data visualization with Power BI
+
+description: This tutorial uses Power BI to visualize temperature and humidity data that is collected from the sensor and sent to your Azure IoT hub.
arduino Previously updated : 11/21/2022 Last updated : 04/14/2023 # Tutorial: Visualize real-time sensor data from Azure IoT Hub using Power BI
-You can use Microsoft Power BI to visualize real-time sensor data that your Azure IoT hub receives. To do so, you configure an Azure Stream Analytics job to consume the data from IoT Hub and route it to a dataset in Power BI.
+You can use Microsoft Power BI to visualize real-time sensor data that your Azure IoT hub receives. To do so, configure an Azure Stream Analytics job to consume the data from IoT Hub and route it to a dataset in Power BI.
- [Microsoft Power BI](https://powerbi.microsoft.com/) is a data visualization tool that you can use to perform self-service and enterprise business intelligence (BI) over large data sets. [Azure Stream Analytics](https://azure.microsoft.com/services/stream-analytics/#overview) is a fully managed, real-time analytics service designed to help you analyze and process fast moving streams of data that can be used to get insights, build reports or trigger alerts and actions.
+[Microsoft Power BI](https://powerbi.microsoft.com/) is a data visualization tool that you can use to perform self-service and enterprise business intelligence (BI) over large data sets. [Azure Stream Analytics](https://azure.microsoft.com/services/stream-analytics/#overview) is a fully managed, real-time analytics service designed to help you analyze and process fast moving streams of data that can be used to get insights, build reports or trigger alerts and actions.
In this tutorial, you perform the following tasks:
In this tutorial, you perform the following tasks:
> * Create and configure an Azure Stream Analytics job to read temperature telemetry from your consumer group and send it to Power BI. > * Create a report of the temperature data in Power BI and share it to the web.
+If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+ ## Prerequisites
-* Complete the one of the [Send telemetry](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp) quickstarts in the development language of your choice. Alternatively, you can use any device app that sends temperature telemetry; for example, the [Raspberry Pi online simulator](iot-hub-raspberry-pi-web-simulator-get-started.md) or one of the [Embedded device](../iot-develop/quickstart-devkit-mxchip-az3166.md) quickstarts. These articles cover the following requirements:
-
+Before you begin this tutorial, have the following prerequisites in place:
+
+* Complete one of the [Send telemetry](../iot-develop/quickstart-send-telemetry-iot-hub.md) quickstarts in the development language of your choice. Alternatively, you can use any device app that sends temperature telemetry; for example, the [Raspberry Pi online simulator](iot-hub-raspberry-pi-web-simulator-get-started.md) or one of the [Embedded device](../iot-develop/quickstart-devkit-mxchip-az3166.md) quickstarts. These articles cover the following requirements:
+ * An active Azure subscription. * An Azure IoT hub in your subscription. * A client app that sends messages to your Azure IoT hub.
-* A Power BI account. ([Try Power BI for free](https://powerbi.microsoft.com/))
+* A Power BI account. [Try Power BI for free](https://powerbi.microsoft.com/).
[!INCLUDE [iot-hub-get-started-create-consumer-group](../../includes/iot-hub-get-started-create-consumer-group.md)] ## Create, configure, and run a Stream Analytics job
-Let's start by creating a Stream Analytics job. After you create the job, you define the inputs, outputs, and the query used to retrieve the data.
+Create a Stream Analytics job. After you create the job, you define the inputs, outputs, and the query used to retrieve the data.
### Create a Stream Analytics job
-1. In the [Azure portal](https://portal.azure.com), select **Create a resource**. Type *Stream Analytics Job* in the search box and select it from the drop-down list. On the **Stream Analytics job** overview page, select **Create**
-
-2. In the **Basics** tab of the working pane, enter the following information.
-
- **Subscription**: Select the subscription for your IoT hub.
+Create a Stream Analytics job that you'll use to route data from IoT Hub to Power BI.
- **Resource group**: Select the resource group for your IoT hub.
+1. In the [Azure portal](https://portal.azure.com), select **Create a resource**. Type *Stream Analytics Job* in the search box and select it from the drop-down list. On the **Stream Analytics job** overview page, select **Create**.
- **Name**: Enter the name of the job. The name must be globally unique.
+2. In the **Basics** tab of the **New Stream Analytics job** page, enter the following information:
- **Region**: Select the region for your IoT hub.
+ | Parameter | Value |
+ | | -- |
+ | **Subscription** | Select the subscription that contains your IoT hub. |
+ | **Resource group** | Select the resource group that contains your IoT hub. |
+ | **Name** | Enter the name of the job. The name must be globally unique. |
+ | **Region** | Select the region where your IoT hub is located. |
- Leave all other fields at their defaults, as shown in the following picture.
+ Leave all other fields at their defaults.
- :::image type="content" source="./media/iot-hub-live-data-visualization-in-power-bi/create-stream-analytics-job.png" alt-text="Create a Stream Analytics job in Azure":::
+ :::image type="content" source="./media/iot-hub-live-data-visualization-in-power-bi/create-stream-analytics-job.png" alt-text="Screenshot that shows creating a Stream Analytics job.":::
3. Select **Review + create**, then select **Create** to create the Stream Analytics job.
-### Add an input to the Stream Analytics job
-
-1. Open the Stream Analytics job.
-
-2. Under **Job topology**, select **Inputs**.
-
-3. In the **Inputs** pane, select **Add stream input**, then select **IoT Hub** from the drop-down list. On the new input pane, enter the following information:
+4. Once the job is created, select **Go to resource**.
- **Input alias**: Enter a unique alias for the input.
+### Add an input to the Stream Analytics job
- **Select IoT Hub from your subscription**: Select this radio button.
+Configure the Stream Analytics job to collect data from your IoT hub.
- **Subscription**: Select the Azure subscription you're using for this tutorial.
+1. Open the Stream Analytics job.
- **IoT Hub**: Select the IoT hub you're using for this tutorial.
+2. Select **Inputs** from the **Job simulation** section of the navigation menu.
- **Consumer group**: Select the consumer group you created previously.
+3. Select **Add input**, then select **IoT Hub** from the drop-down list.
- **Shared access policy name**: Select the name of the shared access policy you want the Stream Analytics job to use for your IoT hub. For this tutorial, you can select *service*. The *service* policy is created by default on new IoT hubs and grants permission to send and receive on cloud-side endpoints exposed by the IoT hub. To learn more, see [Access control and permissions](iot-hub-dev-guide-sas.md#access-control-and-permissions).
+ :::image type="content" source="./media/iot-hub-live-data-visualization-in-power-bi/add-input-iot-hub.png" alt-text="Screenshot that shows selecting IoT Hub from the add input menu.":::
- **Shared access policy key**: This field is automatically filled, based on your selection for the shared access policy name.
+4. On the new input pane, enter the following information:
- **Endpoint**: Select **Messaging**.
-
- Leave all other fields at their defaults, as shown in the following picture.
+ | Parameter | Value |
+ | | -- |
+ | **Input alias** | Enter a unique alias for the input. For example, `PowerBIVisualizationInput`. |
+ | **Subscription** | Select the Azure subscription you're using for this tutorial. |
+ | **IoT Hub** | Select the IoT hub you're using for this tutorial. |
+ | **Consumer group** | Select the consumer group you created previously. |
+ | **Shared access policy name** | Select the name of the shared access policy you want the Stream Analytics job to use for your IoT hub. For this tutorial, you can select *service*. The *service* policy is created by default on new IoT hubs and grants permission to send and receive on cloud-side endpoints exposed by the IoT hub. To learn more, see [Access control and permissions](iot-hub-dev-guide-sas.md#access-control-and-permissions). |
+ | **Shared access policy key** | This field is automatically filled, based on your selection for the shared access policy name. |
+ | **Endpoint** | Select **Messaging**. |
- :::image type="content" source="./media/iot-hub-live-data-visualization-in-power-bi/add-input-to-stream-analytics-job.png" alt-text="Add an input to a Stream Analytics job in Azure":::
+ Leave all other fields at their defaults.
-4. Select **Save**.
+5. Select **Save**.
### Add an output to the Stream Analytics job
-1. Under **Job topology**, select **Outputs**.
-
-2. In the **Outputs** pane, select **Add**, and then select **Power BI** from the drop-down list.
+1. Select **Outputs** from the **Job simulation** section of the navigation menu.
-3. On the **Power BI - New output** pane, select **Authorize** and follow the prompts to sign in to your Power BI account.
+2. Select **Add output**, and then select **Power BI** from the drop-down list.
-4. After you've signed in to Power BI, enter the following information:
+ :::image type="content" source="./media/iot-hub-live-data-visualization-in-power-bi/add-output-power-bi.png" alt-text="Screenshot that shows selecting Power BI from the add output menu.":::
- **Output alias**: A unique alias for the output.
+3. After you've signed in to Power BI, enter the following information to create a Power BI output:
- **Group workspace**: Select your target group workspace.
+ | Parameter | Value |
+ | | -- |
+ | **Output alias** | A unique alias for the output. For example, `PowerBIVisualizationOutput`. |
+ | **Group workspace** | Select your target group workspace. |
+ | **Authentication mode** | The portal warns you if you don't have the correct permissions to use managed identities for authentication. If that's the case, select **User token** instead. |
+ | **Dataset name** | Enter a dataset name. |
+ | **Table name** | Enter a table name. |
- **Dataset name**: Enter a dataset name.
-
- **Table name**: Enter a table name.
-
- **Authentication mode**: Leave at the default.
-
- :::image type="content" source="./media/iot-hub-live-data-visualization-in-power-bi/add-output-to-stream-analytics-job.png" alt-text="Add an output to a Stream Analytics job in Azure":::
+4. Select **Authorize** and sign in to your Power BI account.
5. Select **Save**. ### Configure the query of the Stream Analytics job
-1. Under **Job topology**, select **Query**.
+1. Select **Query** from the **Job simulation** section of the navigation menu.
-2. Replace `[YourInputAlias]` with the input alias of the job.
+2. In the query editor, replace `[YourOutputAlias]` with the output alias of the job.
-3. Replace `[YourOutputAlias]` with the output alias of the job.
+3. Replace `[YourInputAlias]` with the input alias of the job.
-1. Add the following `WHERE` clause as the last line of the query. This line ensures that only messages with a **temperature** property will be forwarded to Power BI.
+4. Add the following `WHERE` clause as the last line of the query. This line ensures that only messages with a **temperature** property will be forwarded to Power BI.
- ```sql
- WHERE temperature IS NOT NULL
- ```
-1. Your query should look similar to the following screenshot. Select **Save query**.
+ ```sql
+ WHERE temperature IS NOT NULL
+ ```
- :::image type="content" source="./media/iot-hub-live-data-visualization-in-power-bi/add-query-to-stream-analytics-job.png" alt-text="Add a query to a Stream Analytics job":::
+5. Your query should look similar to the following screenshot. Select **Save query**.
-### Run the Stream Analytics job
+ :::image type="content" source="./media/iot-hub-live-data-visualization-in-power-bi/add-query-to-stream-analytics-job.png" alt-text="Screenshot that shows adding a query to a Stream Analytics job.":::
-In the Stream Analytics job, select **Overview**, then select **Start** > **Now** > **Start**. Once the job successfully starts, the job status changes from **Stopped** to **Running**.
+### Run the Stream Analytics job
+1. In the Stream Analytics job, select **Overview**.
+1. Select **Start** > **Now** > **Start**. Once the job successfully starts, the job status changes from **Stopped** to **Running**.
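If you prefer to start the job from the command line, a sketch using the Azure CLI *stream-analytics* extension looks like the following; the job and resource group names are placeholders for your own values.

```azurecli-interactive
# Sketch: start the Stream Analytics job from the CLI (requires the stream-analytics extension).
az extension add --name stream-analytics

az stream-analytics job start --job-name <job-name> --resource-group <resource-group>
```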
## Create and publish a Power BI report to visualize the data
-The following steps show you how to create and publish a report using the Power BI service. You can follow these steps, with some modification, if you want to use the "new look" in Power BI. To understand the differences and how to navigate in the "new look", see [The 'new look' of the Power BI service](/power-bi/fundamentals/desktop-latest-update).
+The following steps show you how to create and publish a report using the Power BI service.
-1. Make sure the client app is running on your device.
+1. Make sure that your IoT device is running and sending temperature data to your IoT hub.
-2. Sign in to your [Power BI](https://powerbi.microsoft.com/) account and select **Power BI service** from the top menu.
+2. Sign in to your [Power BI](https://powerbi.microsoft.com/) account.
-3. Select the workspace you used from the side menu, **My Workspace**.
+3. Select **Workspaces** from the side menu, then select the group workspace you chose in the Stream Analytics job output.
-4. Under the **All** tab or the **Datasets + dataflows** tab, you should see the dataset that you specified when you created the output for the Stream Analytics job.
+4. On your workspace view, you should see the dataset that you specified when you created the output for the Stream Analytics job.
5. Hover over the dataset you created, select **More options** menu (the three dots to the right of the dataset name), and then select **Create report**.
- :::image type="content" source="./media/iot-hub-live-data-visualization-in-power-bi/power-bi-create-report.png" alt-text="Create a Microsoft Power BI report":::
+ :::image type="content" source="./media/iot-hub-live-data-visualization-in-power-bi/power-bi-create-report.png" alt-text="Screenshot that shows creating a Microsoft Power BI report.":::
6. Create a line chart to show real-time temperature over time.
The following steps show you how to create and publish a report using the Power
2. On the **Fields** pane, expand the table that you specified when you created the output for the Stream Analytics job.
- 3. Drag **EventEnqueuedUtcTime** to **Axis** on the **Visualizations** pane.
+ 3. Drag **EventEnqueuedUtcTime** to **X Axis** on the **Visualizations** pane.
- 4. Drag **temperature** to **Values**.
+ 4. Drag **temperature** to **Y Axis**.
A line chart is created. The x-axis displays date and time in the UTC time zone. The y-axis displays temperature from the sensor.
The following steps show you how to create and publish a report using the Power
> [!NOTE] > Depending on the device or simulated device that you use to send telemetry data, you may have a slightly different list of fields.
- >
-
-8. Select **File** > **Save** to save the report. When prompted, enter a name for your report. When prompted for a sensitivity label, you can select **Public** and then select **Save**.
-10. Still on the report pane, select **File** > **Embed report** > **Website or portal**.
+7. Select **File** > **Save** to save the report. When prompted, enter a name for your report.
- :::image type="content" source="./media/iot-hub-live-data-visualization-in-power-bi/power-bi-select-embed-report.png" alt-text="Select embed report website for the Microsoft Power BI report":::
+8. Still on the report pane, select **File** > **Embed report** > **Website or portal**.
> [!NOTE] > If you get a notification to contact your administrator to enable embed code creation, you may need to contact them. Embed code creation must be enabled before you can complete this step. >
- > :::image type="content" source="./media/iot-hub-live-data-visualization-in-power-bi/contact-admin.png" alt-text="Contact your administrator notification":::
--
-11. You're provided the report link that you can share with anyone for report access and a code snippet that you can use to integrate the report into a blog or website. Copy the link in the **Secure embed code** window and then close the window.
+ > :::image type="content" source="./media/iot-hub-live-data-visualization-in-power-bi/contact-admin.png" alt-text="Screenshot that shows the Contact your administrator notification.":::
- :::image type="content" source="./media/iot-hub-live-data-visualization-in-power-bi/copy-secure-embed-code.png" alt-text="Copy the embed report link":::
+9. You're provided the report link that you can share with anyone for report access and a code snippet that you can use to integrate the report into a blog or website. Copy the link in the **Secure embed code** window and then close the window.
-12. Open a web browser and paste the link into the address bar.
-
- :::image type="content" source="./media/iot-hub-live-data-visualization-in-power-bi/power-bi-web-output.png" alt-text="Publish a Microsoft Power BI report":::
+10. Open a web browser and paste the link into the address bar to view your report in the browser.
Microsoft also offers the [Power BI mobile apps](https://powerbi.microsoft.com/documentation/powerbi-power-bi-apps-for-mobile-devices/) for viewing and interacting with your Power BI dashboards and reports on your mobile device. ## Clean up resources
-In this tutorial, you've created a resource group, an IoT hub, a Stream Analytics job, and a dataset in Power BI.
+In this tutorial, you created a Stream Analytics job and a dataset in Power BI.
+
+If you plan to complete other tutorials, you may want to keep the resource group and IoT hub, so you can reuse them later.
-If you plan to complete other tutorials, you may want to keep the resource group and IoT hub, so you can reuse them later.
+### Clean up Azure resources
-If you don't need the IoT hub or the other resources you created any longer, you can delete the resource group in the Azure portal. To do so, select the resource group and then select **Delete resource group**. If you want to keep the IoT hub, you can delete other resources from the **Overview** pane of the resource group. To do so, right-click the resource, select **Delete** from the context menu, and follow the prompts.
+Your Stream Analytics job should be in the same resource group as your IoT hub. If you no longer need the IoT hub or the other resources you created, you can delete the entire resource group in the Azure portal. Or, you can delete individual resources.
-### Use the Azure CLI to clean up Azure resources
+1. In the Azure portal, navigate to your resource group.
+1. Review the resources in your group. If you want to delete them all, select **Delete resource group**. If you want to delete an individual resource, right-click the resource, select **Delete** from the context menu, and follow the prompts.
-To remove the resource group and all of its resources, use the [az group delete](/cli/azure/group#az-group-delete) command.
+To remove the resource group and all of its resources, you can also use the [az group delete](/cli/azure/group#az-group-delete) command:
```azurecli-interactive
az group delete --name {your resource group}
```
### Clean up Power BI resources
-You created a dataset, **PowerBiVisualizationDataSet**, in Power BI. To remove it, sign in to your [Power BI](https://powerbi.microsoft.com/) account. On the left-hand menu under **Workspaces**, select **My workspace**. In the list of datasets under the **DataSets + dataflows** tab, hover over the **PowerBiVisualizationDataSet** dataset. Select the three vertical dots that appear to the right of the dataset name to open the **More options** menu, then select **Delete** and follow the prompts. When you remove the dataset, the report is removed as well.
+You created a dataset, **PowerBiVisualizationDataSet**, in Power BI. You can delete your dataset and the associated report you created from the Power BI service.
-## Next steps
+1. Sign in to your [Power BI](https://powerbi.microsoft.com/) account.
+1. Select **Workspaces**, then select the name of the workspace that contains your dataset.
+1. Hover over the **PowerBiVisualizationDataSet** dataset and select the three horizontal dots that appear to open the **More options** menu.
+1. Select **Delete** and follow the prompts. When you remove the dataset, the report is removed as well.
-In this tutorial, you learned how to use Power BI to visualize real-time sensor data from your Azure IoT hub by performing the following tasks:
+## Next steps
-> [!div class="checklist"]
-> * Create a consumer group on your IoT hub.
-> * Create and configure an Azure Stream Analytics job to read temperature telemetry from your consumer group and send it to Power BI.
-> * Configure a report for the temperature data in Power BI and share it to the web.
+In this tutorial, you learned how to use Power BI to visualize real-time sensor data from your Azure IoT hub.
-For another way to visualize data from Azure IoT Hub, see the following article.
+For another way to visualize data from Azure IoT Hub, see the following tutorial:
> [!div class="nextstepaction"]
-> [Use a web app to visualize real-time sensor data from Azure IoT Hub](iot-hub-live-data-visualization-in-web-apps.md).
+> [Use a web app to visualize real-time sensor data from Azure IoT Hub](iot-hub-live-data-visualization-in-web-apps.md).
iot-hub Iot Hub Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-scaling.md
The difference in supported capabilities between the basic and standard tiers of
| [Update file upload status](/rest/api/iothub/device/updatefileuploadstatus) | Yes | Yes |
| [Bulk device operation](/rest/api/iothub/service/bulk-registry/update-registry) | Yes, except for IoT Edge capabilities | Yes |
| [Create import export job](/rest/api/iothub/service/jobs/createimportexportjob), [Get import export job](/rest/api/iothub/service/jobs/getimportexportjob), [Cancel import export job](/rest/api/iothub/service/jobs/cancelimportexportjob) | Yes | Yes |
-| [Purge command queue](/javascript/api/azure-iot-digitaltwins-service/registrymanager#azure-iot-digitaltwins-service-registrymanager-purgecommandqueue) | | Yes |
| [Get device twin](/rest/api/iothub/service/devices/get-twin), [Update device twin](/rest/api/iothub/service/devices/update-twin) | | Yes |
| [Get module twin](/rest/api/iothub/service/modules/get-twin), [Update module twin](/rest/api/iothub/service/modules/update-twin) | | Yes |
| [Invoke device method](/rest/api/iothub/service/devices/invoke-method) | | Yes |
lab-services Concept Nested Virtualization Template Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/concept-nested-virtualization-template-vm.md
Last updated 01/13/2023
# Nested virtualization on a template virtual machine in Azure Lab Services
-Azure Lab Services enables you to set up a [template virtual machine](./classroom-labs-concepts.md#template-virtual-machine) in a lab, which serves as a base image for the VMs of your students. Teaching a networking, security or IT class can require an environment with multiple VMs. The VMs also need to communicate with each other.
+Azure Lab Services enables you to set up a [template virtual machine](./classroom-labs-concepts.md#template-virtual-machine) in a lab, which serves as a base image for the VMs of your students. Teaching a networking, security or IT class can require an environment with multiple VMs. These VMs also need to communicate with each other.
-Nested virtualization enables you to create a multi-VM environment inside a lab's template virtual machine. Publishing the template will provide each lab user with a virtual machine that has multiple VMs within it. This article explains the concepts of nested virtualization on a template VM in Azure Lab Services, and how to enable it.
+Nested virtualization enables you to create a multi-VM environment inside a lab's template virtual machine. Publishing the template provides each lab user with a virtual machine that has multiple VMs within it. This article explains the concepts of nested virtualization on a template VM in Azure Lab Services, and how to enable it.
## What is nested virtualization?
Before setting up a lab with nested virtualization, here are a few things to tak
- Client VMs don't have access to Azure resources, such as DNS servers, on the Azure virtual network. -- The host VM requires additional configuration to let the client machines have internet connectivity.
+- The host VM requires extra configuration to let the client machines have internet connectivity.
- Hyper-V client VMs are licensed as independent machines. For information about licensing for Microsoft operation systems and products, see [Microsoft Licensing](https://www.microsoft.com/licensing/default). Check licensing agreements for any other software you use, before installing it on the template VM or client VMs.
+- Virtualization applications other than Hyper-V are [*not* supported for nested virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization#3rd-party-virtualization-apps). This includes any software that requires hardware virtualization extensions.
+ ## Enable nested virtualization on a template VM
-To enable nested virtualiztion on a template VM, you first connect to the template VM with a remote desktop client. Then, you make a number of configuration changes inside the VM.
+To enable nested virtualization on a template VM, you first connect to the template VM with a remote desktop client. You then make the required configuration changes inside the template VM.
1. Follow these steps to [connect to and update the template machine](./how-to-create-manage-template.md#update-a-template-vm).
To enable nested virtualiztion on a template VM, you first connect to the templa
>[!NOTE] >The NAT network created on the Lab Services VM will allow a Hyper-V VM to access the internet and other Hyper-V VMs on the same Lab Services VM. The Hyper-V VM won't be able to access Azure resources, such as DNS servers, on an Azure virtual network.
-You can accomplish the tasks listed above by using a script, or by using Windows tools. Learn how you can [enable nested virtualization on a template VM in Azure Lab Services](./how-to-enable-nested-virtualization-template-vm-using-script.md).
+You can accomplish the tasks listed previously by using a script, or by using Windows tools. Follow these steps to [enable nested virtualization on a template VM](./how-to-enable-nested-virtualization-template-vm-using-script.md).
## Processor compatibility
lab-services How To Enable Nested Virtualization Template Vm Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-enable-nested-virtualization-template-vm-ui.md
- Title: Enable nested virtualization on a template VM-
-description: Learn how to create a template VM in Azure Lab Services with multiple VMs inside. In other words, enable nested virtualization on a template VM in Azure Lab Services.
----- Previously updated : 03/03/2023--
-# Enable nested virtualization manually on a template VM in Azure Lab Services
-
-Nested virtualization enables you to create a multi-VM environment inside a lab's template VM. Publishing the template provides each user in the lab with a virtual machine that is set up with multiple VMs within it. For more information about nested virtualization and Azure Lab Services, see [Enable nested virtualization on a template virtual machine in Azure Lab Services](how-to-enable-nested-virtualization-template-vm.md).
-
-This article covers how to set up nested virtualization on a template machine in Azure Lab Services using Windows roles and tools directly. There are a few things needed to enable a class to use nested virtualization. The following steps describe how to manually set up an Azure Lab Services machine template with Hyper-V. Steps are intended for Windows Server 2016 or Windows Server 2019.
-
-> [!IMPORTANT]
-> Select **Large (nested virtualization)** or **Medium (nested virtualization)** for the virtual machine size when creating the lab. Nested virtualization will not work otherwise.
-
-## Enable Hyper-V role
-
-The following steps describe how to enable Hyper-V on Windows Server using Server Manager. After enabling Hyper-V, Hyper-V manager is available to add, modify, and delete client VMs on the template VM.
-
-1. Connect to the template virtual machine using remote desktop (RDP).
-
-1. In **Server Manager**, on the Dashboard page, select **Add Roles and Features**.
-
-1. On the **Before you begin** page, select **Next**.
-
-1. On the **Select installation type** page, keep the default selection of **Role-based or feature-based installation** and then select **Next**.
-
-1. On the **Select destination server** page, select **Select a server from the server pool**. The current server is already selected. Select **Next**.
-
-1. On the **Select server roles** page, select **Hyper-V**.
-
-1. The **Add Roles and Features Wizard** pop-up appears. Select **Include management tools (if applicable)**, and then select **Add Features**.
-
-1. On the **Select server roles** page, select **Next**.
-
-1. On the **Select features page**, select **Next**.
-
-1. On the **Hyper-V** page, select **Next**.
-
-1. On the **Create Virtual Switches** page, accept the defaults, and select **Next**.
-
-1. On the **Virtual Machine Migration** page, accept the defaults, and select **Next**.
-
-1. On the **Default Stores** page, accept the defaults, and select **Next**.
-
-1. On the **Confirm installation selections** page, select **Restart the destination server automatically if required**.
-
-1. When the **Add Roles and Features Wizard** pop-up appears, select **Yes**.
-
-1. Select **Install**.
-
-1. Wait for the **Installation progress** page to indicate that the Hyper-V role is complete. The machine may restart in the middle of the installation.
-
-1. Select **Close**.
-
-## Enable DHCP role
-
-Any Hyper-V client VM you create, needs an IP address in the NAT network. You'll create the NAT network at a later stage. One way to assign IP addresses is to set up the host, in this case the lab VM template, as a DHCP server.
-
-To enable the DHCP role on the template VM:
-
-1. In **Server Manager**, on the **Dashboard** page, select **Add Roles and Features**.
-
-1. On the **Before you begin** page, select **Next**.
-1. On the **Select installation type** page, select **Role-based or feature-based installation** and then select **Next**.
-1. On the **Select destination server** page, select the current server from the server pool and then select **Next**.
-1. On the **Select server roles** page, select **DHCP Server**.
-1. The **Add Roles and Features Wizard** pop-up appears. Select **Include management tools (if applicable)**. Select **Add Features**.
-
- >[!NOTE]
- >You may see a validation error stating that no static IP addresses were found. This warning can be ignored for our scenario.
-
-1. On the **Select server roles** page, select **Next**.
-1. On the **Select features** page, select **Next**.
-1. On the **DHCP Server** page, select **Next**.
-1. On the **Confirm installation selections** page, select **Install**.
-1. Wait for the **Installation progress page** to indicate that the DHCP role is complete.
-1. Select Close.
-
-## Enable Routing and Remote Access role
-
-To enable the Routing and Remote Access role:
-
-1. In **Server Manager**, on the **Dashboard** page, select **Add Roles and Features**.
-
-1. On the **Before you begin** page, select **Next**.
-1. On the **Select installation type** page, select **Role-based or feature-based installation** and then select **Next**.
-1. On the **Select destination server** page, select the current server from the server pool and then select **Next**.
-1. On the **Select server roles** page, select **Remote Access**, and then select **OK**.
-1. On the **Select features** page, select **Next**.
-1. On the **Remote Access** page, select **Next**.
-1. On the **Role Services** page, select **Routing**.
-1. The **Add Roles and Features Wizard** pop-up appears. Select **Include management tools (if applicable)**. Select **Add Features**.
-1. Select **Next**.
-1. On the **Web Server Role (IIS)** page, select **Next**.
-1. On the **Select role services** page, select **Next**.
-1. On the **Confirm installation selections** page, select **Install**.
-1. Wait for the **Installation progress** page to indicate that the Remote Access role is complete.
-1. Select **Close**.
-
-## Create virtual NAT network
-
-Now that you enabled the necessary server roles, you can create the NAT network. The creation process involves creating a switch and the NAT network, itself. A NAT (network address translation) network assigns a public IP address to a group of VMs on a private network to allow connectivity to the internet. In this case, the group of private VMs are the nested VMs. The NAT network allows the nested VMs to communicate with one another. A switch is a network device that handles receiving and routing of traffic in a network.
-
-### Create a new virtual switch
-
-To create a new virtual switch:
-
-1. Open **Hyper-V Manager** from Windows Administrative Tools.
-
-1. Select the current server in the left-hand navigation menu.
-1. Select **Virtual Switch Manager…** from the **Actions** menu on the right-hand side of the **Hyper-V Manager**.
-1. On the **Virtual Switch Manager** pop-up, select **Internal** for the type of switch to create. Select **Create Virtual Switch**.
-1. For the newly created virtual switch, set the name to something memorable. For this example, you use *LabServicesSwitch*.
-1. Select **OK**.
-
- Windows now creates a new network adapter. The name is similar to *vEthernet (LabServicesSwitch)*. To verify, open the **Control Panel** > **Network and Internet** > **View network status and tasks**. On the left, select **Change adapter settings** to view all network adapters.
-
-1. Before you continue to create a NAT network, restart the template virtual machine.
-
-### Create a NAT network
-
-To create a NAT network:
-
-1. Open the **Routing and Remote Access** tool from Windows Administrative Tools.
-
-1. Select the local server in the left navigation page.
-1. Choose **Action** -> **Configure and Enable Routing and Remote Access**.
-1. When **Routing and Remote Access Server Setup Wizard** appears, select **Next**.
-1. On the **Configuration** page, select **Network address translation (NAT)** configuration, and then select **Next**.
-
- >[!WARNING]
- >Don't choose the **Virtual private network (VPN) access and NAT** option.
-
-1. On **NAT Internet Connection** page, choose **Ethernet**, and then select **Next**.
-
- >[!WARNING]
- >Don't choose the **vEthernet (LabServicesSwitch)** connection we created in Hyper-V Manager.
-
- If there are no network interfaces in the list, restart the virtual machine.
-
-1. Select **Finish** on the last page of the wizard.
-
-1. On the **Start the service** dialog, select **Start Service**, and wait until the service is running.
-
-## Update network adapter settings
-
-The network adapter is associated with the IP used for the default gateway IP for the NAT network you created earlier. In this example, you create an IP address of `192.168.0.1` with a subnet mask of `255.255.255.0`. You use the virtual switch you created earlier.
-
-1. Open the **Control Panel**, select **Network and Internet**, select **View network status and tasks**.
-
-1. On the left, select **Change adapter settings**.
-1. In the **Network Connections** window, double-click on 'vEthernet (LabServicesSwitch)' to show the **vEthernet (LabServicesSwitch) Status** details dialog.
-1. Select the **Properties** button.
-1. Select **Internet Protocol Version 4 (TCP/IPv4)** item and select the **Properties** button.
-1. In the **Internet Protocol Version 4 (TCP/IPv4) Properties** dialog, select **Use the following IP address**. For the IP address, enter 192.168.0.1. For the subnet mask, enter 255.255.255.0. Leave the default gateway and DNS servers blank.
-
- >[!NOTE]
- > The range for the NAT network is, in CIDR notation, 192.168.0.0/24. This configuration creates a range of usable IP addresses from 192.168.0.1 to 192.168.0.254. By convention, gateways have the first IP address in a subnet range.
-
-1. Select **OK**.
-
-## Create DHCP Scope
-
-The following steps are instructions to add a DHCP scope. In this article, the NAT network is 192.168.0.0/24 in CIDR notation. This creates a range of usable IP addresses from 192.168.0.1 to 192.168.0.254. The DHCP scope must be in that range of usable addresses, excluding the IP address you already created earlier.
-
-1. Open **Administrative Tools** and open the **DHCP** administrative tool.
-1. In the **DHCP** tool, expand the node for the current server and select **IPv4**.
-1. From the Action menu, choose **New Scope…**.
-1. When the **New Scope Wizard** appears, select **Next** on the **Welcome** page.
-1. On the **Scope Name** page, enter 'LabServicesDhcpScope' or something else memorable for the name. Select **Next**.
-1. On the **IP Address Range** page, enter the following values.
-
- - *192.168.0.100* for the **Start IP address**
- - *192.168.0.200* for the **End IP address**
- - *24* for the **Length**
- - *255.255.255.0* for the **Subnet mask**
-
-1. Select **Next**.
-1. On the **Add Exclusions and Delay** page, select **Next**.
-1. On the **Lease Duration** page, select **Next**.
-1. On the **Configure DHCP Options** page, select **Yes, I want to configure these options now**. Select **Next**.
-1. On the **Router (Default Gateway)** page, add 192.168.0.1, if not done already. Select **Next**.
-1. On the **Domain Name and DNS Servers** page, add 168.63.129.16 as a DNS server IP address, if not done already. 168.63.129.16 is the IP address for an Azure static DNS server. Select **Next**.
-1. On the **WINS Servers** page, select **Next**.
-1. On the **Activate Scope** page, select **Yes, I want to activate this scope now**. Select **Next**.
-1. On the **Completing the New Scope Wizard** page, select **Finish**.
-
-## Conclusion
-
-Now your template machine is ready to create Hyper-V virtual machines. See [Create a Virtual Machine in Hyper-V](/windows-server/virtualization/hyper-v/get-started/create-a-virtual-machine-in-hyper-v) for instructions about how to create Hyper-V virtual machines. Also see the [Microsoft Evaluation Center](https://www.microsoft.com/evalcenter/) to check out available operating systems and software.
-
-## Next steps
-
-Next steps are common to setting up any lab.
--
-- [As an educator, add students to a lab](tutorial-setup-lab.md#add-users-to-the-lab)
-- [As an educator, set quota for students](how-to-configure-student-usage.md#set-quotas-for-users)
-- [As an educator, set a schedule for the lab](tutorial-setup-lab.md#add-a-lab-schedule)
-- [As an educator, publish a lab](tutorial-setup-lab.md#publish-lab)
lab-services How To Enable Nested Virtualization Template Vm Using Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-enable-nested-virtualization-template-vm-using-script.md
To enable nested virtualization on the template VM, you first connect to the VM
- [Enable nested virtualization by using a script](#enable-nested-virtualization-by-using-a-script). - [Enable nested virtualization by using Windows tools](#enable-nested-virtualization-by-using-windows-tools).
+> [!NOTE]
+> Virtualization applications other than Hyper-V are [*not* supported for nested virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization#3rd-party-virtualization-apps). This includes any software that requires hardware virtualization extensions.
+ >[!IMPORTANT] >Select **Large (nested virtualization)** or **Medium (nested virtualization)** for the virtual machine size when creating the lab. Nested virtualization will not work otherwise.
logic-apps Logic Apps Enterprise Integration Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-maps.md
This article shows how to add a map to your integration account. If you're worki
* If you already have an integration account with the artifacts that you need or want to use, you can link your integration account to multiple Standard logic app resources where you want to use the artifacts. That way, you don't have to upload maps to each individual logic app. For more information, review [Link your logic app resource to your integration account](logic-apps-enterprise-integration-create-integration-account.md?tabs=standard#link-account).
- * The **Liquid** built-in connector lets you select a map that you previously uploaded to your logic app resource or to a linked integration account, but not both. You can then use this artifact across all child workflows within the same logic app resource.
+ * The **Liquid** built-in connector lets you select a map that you previously uploaded to your logic app resource or to a linked integration account, but not both.
So, if you don't have or need an integration account, you can use the upload option. Otherwise, you can use the linking option. Either way, you can use these artifacts across all child workflows within the same logic app resource.
This article shows how to add a map to your integration account. If you're worki
* Supports references to external assemblies from maps, which enable direct calls from XSLT maps to custom .NET code. To configure support for external assemblies, see [.NET Framework assembly support for XSLT transformations added to Azure Logic Apps (Standard)](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/net-framework-assembly-support-added-to-azure-logic-apps/ba-p/3669120).
+ * Supports XSLT 1.0, 2.0, and 3.0.
+ * No limits apply to map file sizes. * Consumption workflows
+ * Azure Logic Apps allocates finite memory for processing XML transformations. If you create Consumption workflows, and your map or payload transformations have high memory consumption, such transformations might fail, resulting in out of memory errors. To avoid this scenario, consider these options:
+
+ * Edit your maps or payloads to reduce memory consumption.
+
+ * Create [Standard logic app workflows](logic-apps-overview.md#resource-environment-differences), which run in single-tenant Azure Logic Apps and offer dedicated and flexible options for compute and memory resources.
+ * Supports references to external assemblies from maps, which enable direct calls from XSLT maps to custom .NET code with the following requirements: * You need a 64-bit assembly. The transform service runs a 64-bit process, so 32-bit assemblies aren't supported. If you have the source code for a 32-bit assembly, recompile the code into a 64-bit assembly. If you don't have the source code, but you obtained the binary from a third-party provider, get the 64-bit version from that provider. For example, some vendors provide assemblies in packages that have both 32-bit and 64-bit versions. If you have the option, use the 64-bit version instead.
This article shows how to add a map to your integration account. If you're worki
To add larger maps, you can use the [Azure Logic Apps REST API - Maps](/rest/api/logic/maps/createorupdate). For Standard workflows, the Azure Logic Apps REST API is currently unavailable.
- * Azure Logic Apps allocates finite memory for processing XML transformations. If you create Consumption workflows, and your map or payload transformations have high memory consumption, such transformations might fail, resulting in out of memory errors. To avoid this scenario, consider these options:
-
- * Edit your maps or payloads to reduce memory consumption.
-
- * Create [Standard logic app workflows](logic-apps-overview.md#resource-environment-differences) instead.
-
- These workflows run in single-tenant Azure Logic Apps, which offers dedicated and flexible options for compute and memory resources. However, Standard workflows support only XSLT 1.0 and don't support referencing external assemblies from maps.
- <a name="create-maps"></a> ## Create maps
logic-apps Logic Apps Enterprise Integration Transform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-transform.md
If you're new to logic apps, review [What is Azure Logic Apps](logic-apps-overvi
* A [link to your logic app resource](logic-apps-enterprise-integration-create-integration-account.md#link-account).
- * If you're working on a [Standard logic app resource and workflow](logic-apps-overview.md#resource-environment-differences), you don't store maps in your integration account. Instead, you can [directly add maps to your logic app resource](logic-apps-enterprise-integration-maps.md) using either the Azure portal or Visual Studio Code. Only XSLT 1.0 is currently supported. You can then use these maps across multiple workflows within the *same logic app resource*.
+ * If you're working on a [Standard logic app resource and workflow](logic-apps-overview.md#resource-environment-differences), you can link your integration account to your logic app resource, upload maps directly to your logic app resource, or both, based on the following scenarios:
- You still need an integration account to store other artifacts, such as partners, agreements, and certificates, along with using the [AS2](logic-apps-enterprise-integration-as2.md), [X12](logic-apps-enterprise-integration-x12.md), and [EDIFACT](logic-apps-enterprise-integration-edifact.md) operations. However, you don't need to link your logic app resource to your integration account, so the linking capability doesn't exist. Your integration account still has to meet other requirements, such as using the same Azure subscription and existing in the same location as your logic app resource.
+ * If you already have an integration account with the artifacts that you need or want to use, you can link your integration account to multiple Standard logic app resources where you want to use the artifacts. That way, you don't have to upload maps to each individual logic app. For more information, review [Link your logic app resource to your integration account](logic-apps-enterprise-integration-create-integration-account.md?tabs=standard#link-account).
+
+ * If you don't have an integration account or only plan to use your artifacts across multiple workflows within the *same logic app resource*, you can [directly add maps to your logic app resource](logic-apps-enterprise-integration-maps.md) using either the Azure portal or Visual Studio Code.
+
+ > [!NOTE]
+ >
+ > The Liquid built-in connector lets you select a map that you previously uploaded to your logic app resource or to a linked integration account, but not both.
+
+ So, if you don't have or need an integration account, you can use the upload option. Otherwise, you can use the linking option. Either way, you can use these artifacts across all child workflows within the same logic app resource.
+
+ You still need an integration account to store other artifacts, such as partners, agreements, and certificates, along with using the [AS2](logic-apps-enterprise-integration-as2.md), [X12](logic-apps-enterprise-integration-x12.md), and [EDIFACT](logic-apps-enterprise-integration-edifact.md) operations.
## Add Transform XML action
machine-learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/release-notes.md
Azure portal users will always find the latest image available for provisioning
See the [list of known issues](reference-known-issues.md) to learn about known bugs and workarounds.
+## April 26, 2023
+[Data Science VM – Ubuntu 20.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-2004?tab=Overview)
+
+Version `23.04.24`
+
+Main changes:
+
+- SDK `1.50.0`
+- Dotnet upgraded to `6.0` SDK
+- PyTorch GPU functionality fixed in the `azureml_py38_PT_and_TF` environment.
+- Blobfuse upgraded to blobfuse2
+ ## April 4, 2023 [Data Science Virtual Machine - Windows 2019](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2019?tab=Overview)
machine-learning How To Troubleshoot Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md
We also recommend [deploying locally](#deploy-locally) to test and debug your mo
This is a list of common resources that might run out of quota when using Azure * [CPU](#cpu-quota)
+* [Cluster](#cluster-quota)
* [Disk](#disk-quota) * [Memory](#memory-quota) * [Role assignments](#role-assignment-quota)
Before deploying a model, you need to have enough compute quota. This quota defi
A possible mitigation is to check if there are unused deployments that you can delete. Or you can submit a [request for a quota increase](how-to-manage-quotas.md#request-quota-increases).
+#### Cluster quota
+
+This issue occurs when you don't have enough Azure Machine Learning compute cluster quota. This quota defines the total number of clusters that can be in use at one time per subscription to deploy CPU or GPU nodes in the Azure cloud.
+
+A possible mitigation is to check if there are unused deployments that you can delete. Or you can submit a [request for a quota increase](how-to-manage-quotas.md#request-quota-increases). Make sure to select `Machine Learning Service: Cluster Quota` as the quota type for this quota increase request.
+ #### Disk quota
-This issue happens when the size of the model is larger than the available disk space and the model is not able to be downloaded. Try a [SKU](reference-managed-online-endpoints-vm-sku-list.md) with more disk space or reducing the image and model size.
This issue happens when the size of the model is larger than the available disk space and the model is not able to be downloaded. Try a [SKU](reference-managed-online-endpoints-vm-sku-list.md) with more disk space or reducing the image and model size. #### Memory quota
mysql Concepts Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-server-parameters.md
Creating new client connections to MySQL takes time and once established, these
### innodb_strict_mode
-If you receive an error similar to "Row size too large (> 8126)", you may want to turn OFF the parameter **innodb_strict_mode**. The server parameter **innodb_strict_mode** isn't allowed to be modified globally at the server level because if row data size is larger than 8k, the data is truncated without an error, which can lead to potential data loss. We recommend modifying the schema to fit the page size limit.
+If you receive an error similar to "Row size too large (> 8126)", you may want to turn OFF the parameter **innodb_strict_mode**. The server parameter **innodb_strict_mode** can't be modified globally at the server level because if row data size is larger than 8k, the data is truncated without an error, which can lead to potential data loss. We recommend modifying the schema to fit the page size limit.
This parameter can be set at a session level using `init_connect`. To set **innodb_strict_mode** at session level, refer to [setting parameter not listed](./how-to-configure-server-parameters-portal.md#setting-non-modifiable-server-parameters).
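
If you want to confirm the current setting before you change it, the following is a minimal sketch that uses standard MySQL statements; the per-session change itself is typically applied through the `init_connect` server parameter described in the linked article, because changing the variable directly in a session may require elevated privileges:

```sql
-- Check the current global value of innodb_strict_mode
SHOW GLOBAL VARIABLES LIKE 'innodb_strict_mode';

-- Statement that init_connect can run for each new session to relax strict mode
SET SESSION innodb_strict_mode = OFF;
```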
Upon initial deployment, an Azure for MySQL Flexible Server includes system tabl
In Azure Database for MySQL this parameter specifies the number of seconds the service waits before purging the binary log file.
-The binary log contains "events" that describe database changes such as table creation operations or changes to table data. It also contains events for statements that potentially could have made changes. The binary log is used mainly for two purposes, replication and data recovery operations. Usually, the binary logs are purged as soon as the handle is free from service, backup or the replica set. If there are multiple replicas, the binary logs wait for the slowest replica to read the changes before it's been purged. If you want to persist binary logs for a more duration of time, you can configure the parameter binlog_expire_logs_seconds. If the binlog_expire_logs_seconds is set to 0, which is the default value, it purges as soon as the handle to the binary log is freed. If binlog_expire_logs_seconds > 0, then it would wait until the seconds configured before it purges. For Azure database for MySQL, managed features like backup and read replica purging of binary files are handled internally. When you replicate the data-out from the Azure Database for MySQL service, this parameter needs to be set in primary to avoid purging of binary logs before the replica reads from the changes from the primary. If you set the binlog_expire_logs_seconds to a higher value, then the binary logs won't get purged soon enough and can lead to increase in the storage billing.
+The binary log contains "events" that describe database changes, such as table creation operations or changes to table data. It also contains events for statements that potentially could have made changes. The binary log is used mainly for two purposes: replication and data recovery operations. Usually, the binary logs are purged as soon as the handle is free from service, backup, or the replica set. If there are multiple replicas, the binary logs wait for the slowest replica to read the changes before they're purged. If you want to persist binary logs for a longer duration of time, you can configure the parameter binlog_expire_logs_seconds. If binlog_expire_logs_seconds is set to 0, which is the default value, the binary log is purged as soon as the handle to it is freed. If binlog_expire_logs_seconds > 0, the binary log is purged after the configured number of seconds. For Azure Database for MySQL, managed features like backup and read replica purging of binary files are handled internally. When you replicate data out from the Azure Database for MySQL service, this parameter needs to be set on the primary to avoid purging of binary logs before the replica reads the changes from the primary. If you set binlog_expire_logs_seconds to a higher value, the binary logs won't be purged soon enough and can lead to an increase in the storage billing.
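+
+For example, a quick way to confirm the retention value currently in effect on your server is to query it from any MySQL client; this is a minimal sketch that uses standard MySQL syntax:
+
+```sql
+-- Check how long binary logs are retained before purging (0 = purge as soon as the handle is freed)
+SHOW VARIABLES LIKE 'binlog_expire_logs_seconds';
+```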
### event_scheduler
To configure the `event_scheduler` server parameter in Azure Database for MySQL,
4. To view the Event Scheduler Details, run the following SQL statement:
- ```slq
+ ```sql
SHOW EVENTS; ```
To configure the `event_scheduler` server parameter in Azure Database for MySQL,
```azurecli mysql> show events;
- +--++-+--+--++-+-+--
+ +-----+---------------+-------------+-----------+-----------+------------+----------------+----------------+---------------------+---------------------+---------+------------+----------------------+----------------------+--------------------+
+ | Db  | Name          | Definer     | Time zone | Type      | Execute at | Interval value | Interval field | Starts              | Ends                | Status  | Originator | character_set_client | collation_connection | Database Collation |
+ +-----+---------------+-------------+-----------+-----------+------------+----------------+----------------+---------------------+---------------------+---------+------------+----------------------+----------------------+--------------------+
+ | db1 | test_event_01 | azureuser@% | SYSTEM    | RECURRING | NULL       |              1 | MINUTE         | 2023-04-05 14:47:04 | 2023-04-05 15:47:04 | ENABLED | 3221153808 | latin1               | latin1_swedish_ci    | latin1_swedish_ci  |
+ +-----+---------------+-------------+-----------+-----------+------------+----------------+----------------+---------------------+---------------------+---------+------------+----------------------+----------------------+--------------------+
+ 1 row in set (0.23 sec)
+ ```
+
+5. After a few minutes, query the rows from the table to begin viewing the rows inserted every minute, based on the `event_scheduler` parameter you configured:
+
+ ```azurecli
+ mysql> select * from tab1;
+ +----+---------------------+-------------+
+ | id | CreatedAt           | CreatedBy   |
+ +----+---------------------+-------------+
+ |  1 | 2023-04-05 14:47:04 | azureuser@% |
+ |  2 | 2023-04-05 14:48:04 | azureuser@% |
+ |  3 | 2023-04-05 14:49:04 | azureuser@% |
+ |  4 | 2023-04-05 14:50:04 | azureuser@% |
+ +----+---------------------+-------------+
+ 4 rows in set (0.23 sec)
+ ```
+
+6. After an hour, run a SELECT statement on the table to view the complete result of the values inserted into the table every minute for an hour, based on how the `event_scheduler` is configured in this example.
+
+ ```azurecli
+ mysql> select * from tab1;
+ +----+---------------------+-------------+
+ | id | CreatedAt           | CreatedBy   |
+ +----+---------------------+-------------+
+ |  1 | 2023-04-05 14:47:04 | azureuser@% |
+ |  2 | 2023-04-05 14:48:04 | azureuser@% |
+ |  3 | 2023-04-05 14:49:04 | azureuser@% |
+ |  4 | 2023-04-05 14:50:04 | azureuser@% |
+ |  5 | 2023-04-05 14:51:04 | azureuser@% |
+ |  6 | 2023-04-05 14:52:04 | azureuser@% |
+ ..< 50 lines trimmed to compact output >..
+ | 56 | 2023-04-05 15:42:04 | azureuser@% |
+ | 57 | 2023-04-05 15:43:04 | azureuser@% |
+ | 58 | 2023-04-05 15:44:04 | azureuser@% |
+ | 59 | 2023-04-05 15:45:04 | azureuser@% |
+ | 60 | 2023-04-05 15:46:04 | azureuser@% |
+ | 61 | 2023-04-05 15:47:04 | azureuser@% |
+ +----+---------------------+-------------+
+ 61 rows in set (0.23 sec)
+ ```
+
+#### Other scenarios
+
+You can set up an event based on the requirements of your specific scenario. A few similar examples of scheduling SQL statements to run at different time intervals follow.
+
+**Run a SQL statement now and repeat one time per day with no end**
+
+```sql
+CREATE EVENT <event name>
+ON SCHEDULE
+EVERY 1 DAY
+STARTS (TIMESTAMP(CURRENT_DATE) + INTERVAL 1 DAY + INTERVAL 1 HOUR)
+COMMENT 'Comment'
+DO
+<your statement>;
+```
+
+**Run a SQL statement every hour with no end**
+
+```sql
+CREATE EVENT <event name>
+ON SCHEDULE
+EVERY 1 HOUR
+COMMENT 'Comment'
+DO
+<your statement>;
+```
+
+**Run a SQL statement every day with no end**
+
+```sql
+CREATE EVENT <event name>
+ON SCHEDULE
+EVERY 1 DAY
+STARTS str_to_date( date_format(now(), '%Y%m%d 0200'), '%Y%m%d %H%i' ) + INTERVAL 1 DAY
+COMMENT 'Comment'
+DO
+<your statement>;
+```
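+
+When you're finished with a scheduled event, you can disable or remove it. The following is a minimal sketch that uses standard MySQL statements; replace `<event name>` with the name of your event:
+
+```sql
+-- List the events defined in the current database
+SHOW EVENTS;
+
+-- Temporarily stop an event without deleting it
+ALTER EVENT <event name> DISABLE;
+
+-- Remove the event when it's no longer needed
+DROP EVENT IF EXISTS <event name>;
+```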
#### Limitations
mysql Concepts Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-server-parameters.md
To configure the `event_scheduler` server parameter in Azure Database for MySQL,
4. To view the Event Scheduler Details, run the following SQL statement:
- ```slq
+ ```sql
SHOW EVENTS; ```
operator-nexus Howto Platform Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-platform-prerequisites.md
Terminal Server has been deployed and configured as follows:
ipv4_static_settings.address="$TS_NET2_IP" ipv4_static_settings.netmask="$TS_NET2_NETMASK" ipv4_static_settings.gateway="$TS_NET2_GW"
- physif="net1"
+ physif="net2"
END ```
Terminal Server has been deployed and configured as follows:
| TS_NET1_IP | The terminal server PE1 to TS NET1 IP | | TS_NET1_NETMASK | The terminal server PE1 to TS NET1 netmask | | TS_NET1_GW | The terminal server PE1 to TS NET1 gateway |
- | TS_NET2_IP | The terminal server PE1 to TS NET2 IP |
- | TS_NET2_NETMASK | The terminal server PE1 to TS NET2 netmask |
- | TS_NET2_GW | The terminal server PE1 to TS NET2 gateway |
+ | TS_NET2_IP | The terminal server PE2 to TS NET2 IP |
+ | TS_NET2_NETMASK | The terminal server PE2 to TS NET2 netmask |
+ | TS_NET2_GW | The terminal server PE2 to TS NET2 gateway |
3. Setup support admin user:
orbital Partner Network Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/partner-network-integration.md
# Integrate partner network ground stations into your Azure Orbital Ground Station solution
-This article describes how to integrate partner network ground stations for customers with partner network contracts.
+This article describes how to integrate partner network ground stations for customers with partner network contracts. In order to use Azure Orbital Ground Station to make contacts with partner network ground station sites, your spacecraft must be authorized in the portal.
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- An active contract with the partner network(s) you wish to integrate with Azure Orbital:
- - [KSAT Lite](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/kongsbergsatelliteservicesas1657024593438.ksatlite?exp=ubp8&tab=Overview)
+- [Contributor permissions](https://learn.microsoft.com/azure/role-based-access-control/rbac-and-directory-admin-roles#azure-roles) at the subscription level.
+- A [Basic Support Plan](https://azure.microsoft.com/support/plans/) or higher is required for a spacecraft authorization request.
+- A spacecraft license is required for private spacecraft.
+- An active contract with the partner network(s) you wish to integrate with Azure Orbital Ground Station:
+ - [KSAT Lite](https://azuremarketplace.microsoft.com/marketplace/apps/kongsbergsatelliteservicesas1657024593438.ksatlite?exp=ubp8&tab=Overview)
- [Viasat RTE](https://azuremarketplace.microsoft.com/marketplace/apps/viasatinc1628707641775.viasat-real-time-earth?tab=overview)
+- A ground station license for each of the partner network sites you wish to contact is required for private spacecraft.
+- A registered spacecraft object. Learn more on how to [register a spacecraft](register-spacecraft.md).
+
+## Obtain licenses
+
+Obtain the proper **spacecraft license(s)** for a private spacecraft. Additionally, work with the partner network to obtain a **ground station license** for each partner network site you intend to use with your spacecraft.
+
+ > [!NOTE]
+ > Public spacecraft do not require licensing for authorization. The Azure Orbital Ground Station service supports several public satellites including Aqua, Suomi NPP, JPSS-1/NOAA-20, and Terra.
+
+## Create spacecraft resource
+
+Create a registered spacecraft object on the Orbital portal by following the [spacecraft registration](register-spacecraft.md) instructions.
## Request authorization of the new spacecraft resource 1. Navigate to the newly created spacecraft resource's overview page.
-1. Select **New support request** in the Support + troubleshooting section of the left-hand blade.
-1. In the **New support request** page, enter or select this information in the Basics tab:
+2. Select **New support request** in the Support + troubleshooting section of the left-hand blade.
+3. In the **New support request** page, enter or select this information in the Basics tab:
| **Field** | **Value** | | | |
This article describes how to integrate partner network ground stations for cust
| Problem type | Select **Spacecraft Management and Setup** | | Problem subtype | Select **Spacecraft Registration** |
-1. Select the Details tab at the top of the page
-1. In the Details tab, enter this information in the Problem details section:
+4. Select the Details tab at the top of the page
+5. In the Details tab, enter this information in the Problem details section:
| **Field** | **Value** | | | | | When did the problem start? | Select the current date & time |
-| Description | List your spacecraft's frequency bands and desired ground stations |
-| File upload | Upload any pertinent licensing material, contract details, or partner POCs, if applicable |
+| Description | List your spacecraft's **frequency bands** and **desired partner network ground stations**. |
+| File upload | Upload all pertinent **spacecraft licensing material**, **ground station licensing material**, **partner network contract details**, or **partner POCs**, if applicable. |
-1. Complete the **Advanced diagnostic information** and **Support method** sections of the **Details** tab.
-1. Select the **Review + create** tab, or select the **Review + create** button.
-1. Select **Create**.
+6. Complete the **Advanced diagnostic information** and **Support method** sections of the **Details** tab.
+7. Select the **Review + create** tab, or select the **Review + create** button.
+8. Select **Create**.
> [!NOTE] > A [Basic Support Plan](https://azure.microsoft.com/support/plans/) or higher is required for a spacecraft authorization request.
-
+
+After the authorization request is generated, our regulatory team will investigate the request and validate the material. The partner network must inform Microsoft of the ground station license approval(s) to complete the spacecraft authorization. Once verified, we will enable your spacecraft to communicate with the partner network ground stations outlined in the request.
+
+## Confirm spacecraft is authorized
+
+1. In the Azure portal search box, enter **Spacecraft**. Select **Spacecraft** in the search results.
+2. In the Spacecraft page, select the **newly registered spacecraft**.
+3. In the new spacecraft's overview page, check that the **Authorization status** shows **Allowed**.
+ ## Next steps - [Configure a contact profile](./contact-profile.md)
postgresql Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-supported-versions.md
The PostgreSQL project regularly issues minor releases to fix reported bugs. Azu
Automation for major version upgrade isn't yet supported. For example, there's currently no automatic upgrade from PostgreSQL 11 to PostgreSQL 12.<!-- To upgrade to the next major version, create a [database dump and restore](howto-migrate-using-dump-and-restore.md) to a server that was created with the new engine version.-->
+## Supportability and retirement policy of the underlying operating system
+
+Azure Database for PostgreSQL - Flexible Server is a fully managed open-source database. The underlying operating system is an integral part of the service. Microsoft continually works to ensure ongoing security updates and maintenance for security compliance and vulnerability mitigation, regardless of whether the operating system is provided by a third party or an internal vendor. Automatic upgrades during scheduled maintenance keep your managed database secure, stable, and up-to-date.
++ ## Managing PostgreSQL engine defects Microsoft has a team of committers and contributors who work full time on the open source Postgres project and are long term members of the community. Our contributions include but aren't limited to features, performance enhancements, bug fixes, security patches among other things. Our open source team also incorporates feedback from our Azure fleet (and customers) when prioritizing work, however please keep in mind that Postgres project has its own independent contribution guidelines, review process and release schedule.
postgresql Concept Reserved Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concept-reserved-pricing.md
Last updated 06/24/2022
# Prepay for Azure Database for PostgreSQL compute resources with reserved capacity [!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)] [!INCLUDE [azure-database-for-postgresql-single-server-deprecation](../includes/azure-database-for-postgresql-single-server-deprecation.md)]
purview Create A Custom Classification And Classification Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-a-custom-classification-and-classification-rule.md
Previously updated : 12/29/2022 Last updated : 04/26/2023 # Custom classifications in Microsoft Purview This article describes how you can create custom classifications to define data types in your data estate that are unique to your organization. It also describes the creation of custom classification rules that let you find specified data throughout your data estate.
->[IMPORTANT]
+>[!IMPORTANT]
>To create a custom classification you need either **data curator** or **data source administrator** permission on a collection. Permissions at any collection level are sufficient. >For more information about permissions, see: [Microsoft Purview permissions](catalog-permissions.md).
To create a custom classification rule:
:::image type="content" source="media/create-a-custom-classification-and-classification-rule/dictionary-generated.png" alt-text="Create dictionary rule, with Dictionary-Generated checkmark." border="true":::
+## Edit or delete a custom classification
+
+To update or edit a custom classification, follow these steps:
+
+1. In your Microsoft Purview account, select the **Data map**, and then **Classifications**.
+1. Select the **Custom** tab.
+1. Select the classification you want to edit, then select the **Edit** button.
+
+ :::image type="content" source="media/create-a-custom-classification-and-classification-rule/select-edit.png" alt-text="Screenshot of the custom classification page, showing a classification selected and the edit button highlighted." border="true":::
+
+1. Now you can edit the description of this custom classification. Select the **Ok** button when you're finished to save your changes.
+
+To delete a custom classification:
+
+1. After opening the **Data map**, and then **Classifications**, select the **Custom** tab.
+1. Select the classification you want to delete, or multiple classifications you want to delete, and then select the **Delete** button.
+ :::image type="content" source="media/create-a-custom-classification-and-classification-rule/select-delete.png" alt-text="Screenshot of the custom classification page, showing a classification selected and the delete button highlighted." border="true":::
+
+You can also edit or delete a classification from inside the classification itself. Just select your classification, then select the **Edit** or **Delete** buttons in the top menu.
++
+## Enable or disable classification rules
+
+1. In your Microsoft Purview account, select the **Data map**, and then **Classification rules**.
+1. Select the **Custom** tab.
+1. You can check the current status of a classification rule by looking at the **Status** column in the table.
+1. Select the classification rule, or multiple classification rules, that you want to enable or disable.
+1. Select either the **Enable** or **Disable** buttons in the top menu.
+
+ :::image type="content" source="media/create-a-custom-classification-and-classification-rule/enable-or-disable.png" alt-text="Screenshot of the custom classification rule page, showing a classification rule selected and the enable and disable buttons highlighted." border="true":::
+
+You can also update the status of a rule when editing the rule.
+
+## Edit or delete a classification rule
+
+To update or edit a custom classification rule, follow these steps:
+
+1. In your Microsoft Purview account, select the **Data map**, and then **Classification rules**.
+1. Select the **Custom** tab.
+1. Select the classification rule you want to edit, then select the **Edit** button.
+
+ :::image type="content" source="media/create-a-custom-classification-and-classification-rule/select-edit-rule.png" alt-text="Screenshot of the custom classification rule page, showing a classification rule selected and the edit button highlighted." border="true":::
+
+1. Now you can edit the state, the description, and the associated classification.
+1. Select the **Continue** button.
+1. You can upload a new file for your regular expression or dictionary rule to match against, and update your match threshold and column pattern match.
+1. Select **Apply** to save your changes. Scans will need to be rerun with the new rule to apply changes across your assets.
+
+To delete a custom classification rule:
+
+1. After opening the **Data map**, and then **Classification rules**, select the **Custom** tab.
+1. Select the classification rule you want to delete, and then select the **Delete** button.
+
+ :::image type="content" source="media/create-a-custom-classification-and-classification-rule/select-delete-rule.png" alt-text="Screenshot of the custom classification rule page, showing a classification rule selected and the delete button highlighted." border="true":::
+ ## Next steps Now that you've created your classification rule, it's ready to be added to a scan rule set so that your scan uses the rule when scanning. For more information, see [Create a scan rule set](create-a-scan-rule-set.md).
purview Data Stewardship https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/data-stewardship.md
Previously updated : 05/16/2022 Last updated : 04/25/2023 # Get insights into data stewardship from Microsoft Purview
-As described in the [insights concepts](concept-insights.md), data stewardship is report that is part of the "Health" section of the Data Estate Insights App. This report offers a one-stop shop experience for data, governance, and quality focused users like chief data officers and data stewards to get actionable insights into key areas of gap in their data estate, for better governance.
+As described in the [insights concepts](concept-insights.md), the data stewardship report is part of the "Health" section of the Data Estate Insights App. This report offers a one-stop shop experience for data, governance, and quality focused users like chief data officers and data stewards to get actionable insights into key areas of gap in their data estate.
In this guide, you'll learn how to:
Before getting started with Microsoft Purview Data Estate Insights, make sure th
* Set up and completed a scan of your storage source.
+* [Enable and schedule your data estate insights reports](how-to-schedule-data-estate-insights.md).
+ For more information to create and complete a scan, see [the manage data sources in Microsoft Purview article](manage-data-sources.md). ## Understand your data estate and catalog health in Data Estate Insights In Microsoft Purview Data Estate Insights, you can get an overview of all assets inventoried in the Data Map, and any key gaps that can be closed by governance stakeholders, for better governance of the data estate.
-1. Navigate to your Microsoft Purview account in the Azure portal.
-
-1. On the **Overview** page, in the **Get Started** section, select the **Open Microsoft Purview governance portal** tile.
-
- :::image type="content" source="./media/data-stewardship/portal-access.png" alt-text="Screenshot of Microsoft Purview account in Azure portal with the Microsoft Purview governance portal button highlighted.":::
+1. Access the [Microsoft Purview Governance Portal](https://web.purview.azure.com/) and open your Microsoft Purview account.
1. On the Microsoft Purview **Home** page, select **Data Estate Insights** on the left menu.
In Microsoft Purview Data Estate Insights, you can get an overview of all assets
:::image type="content" source="./media/data-stewardship/data-stewardship-table-of-contents.png" alt-text="Screenshot of the Microsoft Purview governance portal Data Estate Insights menu with Data Stewardship highlighted under the Health section.":::
+## View data stewardship dashboard
-### View data stewardship dashboard
+The dashboard is purpose-built for the governance and quality focused users, like data stewards and chief data officers, to understand the data estate health of their organization. The dashboard shows high level KPIs that need to reduce governance risks:
-The dashboard is purpose-built for the governance and quality focused users, like data stewards and chief data officers, to understand the data estate health and catalog adoption health of their organization. The dashboard shows high level KPIs that need to reduce governance risks:
-
- * **Asset curation**: All data assets are categorized into three buckets - "Fully curated", "Partially curated" and "Not curated", based on certain attributes of assets being present. An asset is "Fully curated" if it has at least one classification tag, an assigned Data Owner and a description. If any of these attributes is missing, but not all, then the asset is categorized as "Partially curated" and if all of them are missing, then it's "Not curated".
- * **Asset data ownership**: Assets that have the owner attribute within "Contacts" tab as blank are categorized as "No owner", else it's categorized as "Owner assigned".
- * **Catalog usage and adoption**: This KPI shows a sum of monthly active users of the catalog across different pages.
+* **Asset curation**: All data assets are categorized into three buckets - "Fully curated", "Partially curated" and "Not curated", based on certain attributes of assets being present. An asset is "Fully curated" if it has at least one classification tag, an assigned Data Owner and a description. If any of these attributes is missing, but not all, then the asset is categorized as "Partially curated" and if all of them are missing, then it's "Not curated".
+* **Asset data ownership**: Assets that have the owner attribute within "Contacts" tab as blank are categorized as "No owner", else it's categorized as "Owner assigned".
:::image type="content" source="./media/data-stewardship/kpis-small.png" alt-text="Screenshot of the data stewardship insights summary graphs, showing the three main KPI charts." lightbox="media/data-stewardship/data-stewardship-kpis-large.png":::
-
-
-As users look at the main dashboard layout, it's divided into two tabs - [**Data estate**](#data-estate) and [**Catalog adoption**](#catalog-adoption).
-
-#### Data estate
-This section of **data stewardship** gives governance and quality focused users, like data stewards and chief data officers, an overview of their data estate, as well as running trends.
+### Data estate health
-
-##### Data estate health
-Data estate health is a scorecard view that helps management and governance focused users, like chief data officers, understand critical governance metrics that can be looked at by collection hierarchy.
+Data estate health is a scorecard view that helps management and governance focused users, like chief data officers, understand critical governance metrics that can be looked at by collection hierarchy.
:::image type="content" source="./media/data-stewardship/data-estate-health-small.png" alt-text="Screenshot of the data stewardship data estate health table in the middle of the dashboard." lightbox="media/data-stewardship/data-estate-health-large.png"::: You can view the following metrics:
-* **Total asset**: Count of assets by collection drill-down
+* **Assets**: Count of assets by collection drill-down
* **With sensitive classifications**: Count of assets with any system classification applied * **Fully curated assets**: Count of assets that have a data owner, at least one classification and a description.
-* **Owners assigned**: Count of assets with data owner assigned on them
+* **Owner assigned**: Count of assets with data owner assigned on them
* **No classifications**: Count of assets with no classification tag
-* **Net new assets**: Count of new assets pushed in the Data Map in the last 30 days
-* **Deleted assets**: Count of deleted assets from the Data Map in the last 30 days
+* **Out of date**: Percentage of assets that have not been updated in over 365 days.
+* **New**: Count of new assets pushed in the Data Map in the last 30 days
+* **Updated**: Count of assets updated in the Data Map in the last 30 days
+* **Deleted**: Count of deleted assets from the Data Map in the last 30 days
-You can also drill down by collection paths. As you hover on each column name, it provides description of the column and takes you to the detailed graph for further drill-down.
+You can also drill down by collection paths. As you hover over each column name, it provides a description of the column, provides recommended percentage ranges, and takes you to the detailed graph for further drill-down.
:::image type="content" source="./media/data-stewardship/hover-menu.png" alt-text="Screenshot of the data stewardship data estate health table, with the fully curated column hovered over. A summary is show, and the view more in Stewardship insights option is selected."::: :::image type="content" source="./media/data-stewardship/detailed-view.png" alt-text="Screenshot of the asset curation detailed view, as shown after selecting the view more in stewardship insights option is selected.":::
-##### Asset curation
-All data assets are categorized into three buckets - ***"Fully curated"***, ***"Partially curated"*** and ***"Not curated"***, based on whether assets have been given certain attributes.
+### Asset curation
+
+All data assets are categorized into three buckets - ***Fully curated***, ***Partially curated*** and ***Not curated***, based on whether assets have been given certain attributes.
:::image type="content" source="./media/data-stewardship/asset-curation-small.png" alt-text="Screenshot of the data stewardship insights health dashboard, with the asset curation bar chart highlighted." lightbox="media/data-stewardship/asset-curation-large.png":::
-An asset is ***"Fully curated"*** if it has at least one classification tag, an assigned data owner, and a description.
+An asset is ***Fully curated*** if it has at least one classification tag, an assigned data owner, and a description.
-If any of these attributes is missing, but not all, then the asset is categorized as ***"Partially curated"***. If all of them are missing, then it's listed as ***"Not curated"***.
+If any of these attributes is missing, but not all, then the asset is categorized as ***Partially curated***. If all of them are missing, then it's listed as ***Not curated***.
You can drill down by collection hierarchy. :::image type="content" source="./media/data-stewardship/asset-curation-collection-filter.png" alt-text="Screenshot of the data stewardship asset curation chart, with the collection filter opened to show all available collections.":::
-For further information about which assets aren't fully curated, you can select ***"View details"*** link that will take you into the deeper view.
+For further information about which assets aren't fully curated, you can select **View details** link that will take you into the deeper view.
:::image type="content" source="./media/data-stewardship/asset-curation-view-details.png" alt-text="Screenshot of the data stewardship asset curation chart, with the view details button highlighted below the chart.":::
-In the ***"View details"*** page, if you select a specific collection, it will list all assets with attribute values or blanks, that make up the ***"fully curated"*** assets.
+In the **View details** page, if you select a specific collection, it will list all assets with attribute values or blanks, that make up the ***fully curated*** assets.
:::image type="content" source="./media/data-stewardship/asset-curation-select-collection.png" alt-text="Screenshot of the asset curation detailed view, shown after selecting View Details beneath the asset curation chart.":::
First, it tells you what was the ***classification source***, if the asset is cl
Second, if an asset is unclassified, it tells us why it's not classified, in the column ***Reasons for unclassified***. Currently, Data estate insights can tell one of the following reasons:+ * No match found * Low confidence score * Not applicable
You can select any asset and add missing attributes, without leaving the **Data
:::image type="content" source="./media/data-stewardship/edit-asset.png" alt-text="Screenshot of the asset list page, with an asset selected and the edit menu open.":::
-##### Trends and gap analysis
+### Trends and gap analysis
This graph shows how the assets and key metrics have been trending over:+ * Last 30 days: The graph takes last run of the day or recording of the last run across days as a data point. * Last six weeks: The graph takes last run of the week where week ends on Sunday. If there was no run on Sunday, then it takes the last recorded run. * Last 12 months: The graph takes last run of the month.
This graph shows how the assets and key metrics have been trending over:
:::image type="content" source="./media/data-stewardship/trends-and-gap-analysis-small.png" alt-text="Screenshot of the data stewardship insights summary graphs, with data estate selected, showing the trends and gap analysis graph at the bottom of the page." lightbox="media/data-stewardship/trends-and-gap-analysis-large.png":::
-#### Catalog adoption
-
-This tab of the **data stewardship** insight gives management focused users like, chief data officers, a view of what is activity is happening in the catalog. The hypothesis is, the more activity on the catalog, the better usage, hence the better are the chances of governance program to have a high return on investment.
--
-##### Active users trend by catalog features
-
-Active users trend by area of the catalog, and the graph focuses on activities in **search and browse**, and **asset edits**.
-
-If there are active users of search and browse, meaning the user has typed a search keyword and hit enter, or selected browse by assets, we count it as an active user of "search and browse".
-
-If a user has edited an asset by selecting "save" after making changes, we consider that user as an active user of "asset edits".
--
-##### Most viewed assets in last 30 days
-
-You can see the most viewed assets in the catalog, their current curation level, and number of views. This list is currently limited to five items.
--
-##### Most searched keywords in last 30 days
-
-You can view count of top five searches with a result returned. The table also shows what key words were searched without any results in the catalog.
-- ## Next steps
-Learn more about Microsoft Purview Data estate insights through:
-* [Concepts](concept-insights.md)
+Learn more about Microsoft Purview Data Estate Insights through:
+* [Data Estate Insights Concepts](concept-insights.md)
purview How To Bulk Edit Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-bulk-edit-assets.md
This article describes how you can update assets in bulk to add glossary terms,
:::image type="content" source="media/how-to-bulk-edit-assets/close-list.png" alt-text="Screenshot of the close."::: > [!IMPORTANT]
-> The recommended number of assets for bulk edit are 25. Selecting more than 25 might cause performance issues.
+> In the UI you can currently only select up to 25 assets.
> The **View Selected** box will be visible only if there is at least one asset selected. ## Next steps
purview Register Scan Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-database.md
This article outlines the process to register an Azure SQL database source in Mi
| [Yes](#register-the-data-source) | [Yes](#scope-and-run-the-scan)|[Yes](#scope-and-run-the-scan) | [Yes](#scope-and-run-the-scan)|[Yes](#scope-and-run-the-scan)| [Yes](create-sensitivity-label.md)| [Yes](#set-up-access-policies) | [Yes (preview)](#extract-lineage-preview) | No | > [!NOTE]
-> Data lineage extraction is currently supported only for stored procedure runs. Lineage is also supported if Azure SQL tables or views are used as a source/sink in [Azure Data Factory Copy and Data Flow activities](how-to-link-azure-data-factory.md).
+> [Data lineage extraction is currently supported only for stored procedure runs.](#troubleshoot-lineage-extraction) Lineage is also supported if Azure SQL tables or views are used as a source/sink in [Azure Data Factory Copy and Data Flow activities](how-to-link-azure-data-factory.md).
When you're scanning Azure SQL Database, Microsoft Purview supports extracting technical metadata from these sources:
reliability Reliability Guidance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-guidance-overview.md
Azure reliability guidance contains the following:
[Azure Firewall](../firewall/deploy-availability-zone-powershell.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Firewall Manager](../firewall-manager/quick-firewall-policy.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Functions](reliability-functions.md)|
-[Azure HDInsight](../hdinsight/hdinsight-use-availability-zones.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
+[Azure HDInsight](reliability-hdinsight.md)|
[Azure IoT Hub](../iot-hub/iot-hub-ha-dr.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Kubernetes Service (AKS)](../aks/availability-zones.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Logic Apps](../logic-apps/set-up-zone-redundancy-availability-zones.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
reliability Reliability Hdinsight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-hdinsight.md
+
+ Title: Reliability in Azure HDInsight
+description: Find out about reliability in Azure HDInsight
+++ Last updated : 02/27/2023++
+CustomerIntent: As a cloud architect/engineer, I need general guidance on migrating HDInsight to using availability zones.
++
+# Reliability in Azure HDInsight
+
+This article describes reliability support in Azure HDInsight, and covers [availability zones](#availability-zone-support). For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
++
+## Availability zone support
+
+Azure availability zones are at least three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. In the case of a local zone failure, availability zones are designed so that if one zone is affected, regional services, capacity, and high availability are supported by the remaining two zones. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services. For more detailed information on availability zones in Azure, see [Availability zone service and regional support](availability-zones-service-support.md).
+
+Azure HDInsight supports a [zonal deployment configuration](availability-zones-service-support.md#azure-services-with-availability-zone-support). Azure HDInsight cluster nodes are placed in a single zone that you select in the selected region. A zonal HDInsight cluster is isolated from any outages that occur in other zones. However, if an outage impacts the specific zone chosen for the HDInsight cluster, the cluster won't be available. This deployment model provides inexpensive, low latency network connectivity within the cluster. Replicating this deployment model into multiple availability zones can provide a higher level of availability to protect against hardware failure.
+
+>[!IMPORTANT]
+>For deployments where users don't specify a specific zone, node types are not zone resilient and can experience downtime during an outage in any zone in that region.
+
+## Prerequisites
+
+- Availability zones are only supported for clusters created after June 15, 2023. Availability zone settings can't be updated after the cluster is created. You also can't update an existing, non-availability zone cluster to use availability zones.
+
+- Clusters must be created under a custom VNet.
+
+- You need to bring your own SQL database for the Ambari DB and external metastores, such as the Hive metastore, so that you can configure these databases in the same availability zone.
+
+- Your HDInsight clusters must be created with the availability zone option in one of the following regions:
+
+ - Australia East
+ - Brazil South
+ - Canada Central
+ - Central US
+ - East US
+ - East US 2
+ - France Central
+ - Germany West Central
+ - Japan East
+ - Korea Central
+ - North Europe
+ - Qatar Central
+ - Southeast Asia
+ - South Central US
+ - UK South
+ - US Gov Virginia
+ - West Europe
+ - West US 2
++
+## Create an HDInsight cluster using availability zone
+
+You can use an Azure Resource Manager (ARM) template to launch an HDInsight cluster into a specified availability zone.
+
+In the resources section, add a `zones` property and specify which availability zone you want the cluster to be deployed into.
+
+```json
+ "resources": [
+ {
+ "type": "Microsoft.HDInsight/clusters",
+ "apiVersion": "2021-06-01",
+ "name": "[parameters('cluster name')]",
+            "location": "East US 2",
+            "zones": [
+                "1"
+            ]
+        }
+ ]
+```
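+
+For example, a template that contains this resource definition could be deployed with the Azure CLI. This is a minimal sketch; the resource group name and template file name are placeholders.
+
+```bash
+# Deploy the ARM template that defines the HDInsight cluster with the "zones" property.
+# "myResourceGroup" and "azuredeploy.json" are placeholder names.
+az deployment group create \
+    --resource-group myResourceGroup \
+    --template-file azuredeploy.json
+```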
+
+### Verify the availability zone of the cluster nodes
+
+When the HDInsight cluster is ready, you can check the location to see which availability zone the cluster nodes are deployed in.
++
+**Get API response**:
+
+```json
+ [
+ {
+ "location": "East US 2",
+ "zones": [
+ "1"
+        ]
+ }
+ ]
+```
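+
+As an alternative to calling the REST API, you can read the same properties with the Azure CLI. This is a sketch; the cluster and resource group names are placeholders.
+
+```bash
+# Show the region and availability zone of an existing HDInsight cluster.
+# "myCluster" and "myResourceGroup" are placeholder names.
+az hdinsight show \
+    --name myCluster \
+    --resource-group myResourceGroup \
+    --query "{location:location, zones:zones}" \
+    --output json
+```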
+
+### Scale up the cluster
+
+You can scale up an HDInsight cluster with more worker nodes. The newly added worker nodes are placed in the same availability zone as the cluster.
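+
+As an example, the following Azure CLI sketch resizes a cluster to five worker nodes; the cluster and resource group names are placeholders, and the new nodes are placed in the cluster's existing availability zone.
+
+```bash
+# Resize the cluster; newly added worker nodes remain in the cluster's availability zone.
+# "myCluster" and "myResourceGroup" are placeholder names.
+az hdinsight resize \
+    --name myCluster \
+    --resource-group myResourceGroup \
+    --workernode-count 5
+```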
+
+### Availability zone redeployment
+
+Azure HDInsight currently doesn't support in-place migration of existing cluster instances to availability zone support. However, you can [recreate your cluster](#create-an-hdinsight-cluster-using-availability-zone) and choose a different availability zone or region during cluster creation. A secondary standby cluster in a different region and a different availability zone can be used in disaster recovery scenarios.
+
+### Zone down experience
+
+When an availability zone goes down:
+
+ - You can't SSH to the cluster.
+ - You can't delete, scale up, or scale down the cluster.
+ - You can't submit jobs or see job history.
+ - You can still submit a new cluster creation request in a different region.
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Reliability in Azure](availability-zones-overview.md)
sap Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/security-baseline.md
+
+ Title: Security baseline for Azure Monitor for SAP solutions
+description: Learn about security baseline for Azure Monitor for SAP solutions
+++++ Last updated : 04/24/2023+
+#Customer intent: As a SAP BASIS or cloud infrastructure team, I want to learn about security baseline provided by Azure Monitor for SAP solutions.
++
+# Security baseline for Azure Monitor for SAP solutions
+
+This security baseline applies guidance from the Microsoft cloud security benchmark version 1.0. The Microsoft cloud security benchmark provides recommendations on how you can secure your cloud solutions on Azure.
+
+You can monitor this security baseline and its recommendations using Microsoft Defender for Cloud. Azure Policy definitions are listed in the Regulatory Compliance section of the Microsoft Defender for Cloud dashboard.
+
+When a feature has relevant Azure Policy Definitions, they are listed in this baseline to help you measure compliance with the Microsoft cloud security benchmark controls and recommendations. Some recommendations may require a paid Microsoft Defender plan to enable certain security scenarios.
+
+When Azure Monitor for SAP solutions is deployed, a managed resource group is deployed with it.
+This managed resource group contains services such as Azure Log Analytics, Azure Functions, Azure Storage and Azure Key Vault.
+
+## Security baseline for relevant services
+
+- [Azure Log Analytics](/security/benchmark/azure/baselines/azure-monitor-security-baseline)
+- [Azure Functions](/security/benchmark/azure/baselines/functions-security-baseline)
+- [Azure Storage](/security/benchmark/azure/baselines/storage-security-baseline)
+- [Azure Key Vault](/security/benchmark/azure/baselines/key-vault-security-baseline)
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn about Azure Monitor for SAP solutions provider types](providers.md)
sap Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/get-started.md
ms.assetid: ad8e5c75-0cf6-4564-ae62-ea1246b4e5f2
vm-linux Previously updated : 04/06/2023 Last updated : 04/25/2023
In the SAP workload documentation space, you can find the following areas:
## Change Log
+- April 25, 2023: Adjust mount options in [HA for HANA Scale-up with ANF on SLES](sap-hana-high-availability-netapp-files-suse.md), [HANA scale-out with standby node with ANF on SLES](./sap-hana-scale-out-standby-netapp-files-suse.md), [HA for HANA Scale-out HA on SLES](sap-hana-high-availability-scale-out-hsr-suse.md), [HA for HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md), [HANA scale-out with standby node on Azure VMs with ANF on RHEL](./sap-hana-scale-out-standby-netapp-files-rhel.md), [HA for HANA scale-out on RHEL](./sap-hana-high-availability-scale-out-hsr-rhel.md), [HA for SAP NW on SLES with ANF](./high-availability-guide-suse-netapp-files.md), [HA for SAP NW on RHEL with ANF](./high-availability-guide-rhel-netapp-files.md), and [HA for SAP NW on SLES with simple mount and NFS](high-availability-guide-suse-nfs-simple-mount.md)
- April 6, 2023: Updates for RHEL 9 in [Setting up Pacemaker on RHEL in Azure](./high-availability-guide-rhel-pacemaker.md) - March 26, 2023: Adding recommended sector size in [SAP HANA Azure virtual machine Premium SSD v2 storage configurations](./hana-vm-premium-ssd-v2.md) - March 1, 2023: Change in [HA for SAP HANA on Azure VMs on RHEL](./sap-hana-high-availability-rhel.md) to add configuration for cluster default properties
sap High Availability Guide Rhel Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-netapp-files.md
Previously updated : 12/06/2022 Last updated : 04/25/2023
The following items are prefixed with either **[A]** - applicable to all nodes,
# mount temporarily the volume sudo mkdir -p /saptmp # If using NFSv3
- sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp 192.168.24.5:/sapQAS /saptmp
+ sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,nfsvers=3,tcp 192.168.24.5:/sapQAS /saptmp
# If using NFSv4.1
- sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys,tcp 192.168.24.5:/sapQAS /saptmp
+ sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,nfsvers=4.1,sec=sys,tcp 192.168.24.5:/sapQAS /saptmp
# create the SAP directories sudo cd /saptmp sudo mkdir -p sapmntQAS
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo vi /etc/fstab # Add the following lines to fstab, save and exit
- 192.168.24.5:/sapQAS/sapmntQAS /sapmnt/QAS nfs rw,hard,rsize=65536,wsize=65536,vers=3
- 192.168.24.5:/sapQAS/usrsapQASsys /usr/sap/QAS/SYS nfs rw,hard,rsize=65536,wsize=65536,vers=3
- 192.168.24.4:/transSAP /usr/sap/trans nfs rw,hard,rsize=65536,wsize=65536,vers=3
+ 192.168.24.5:/sapQAS/sapmntQAS /sapmnt/QAS nfs rw,hard,rsize=65536,wsize=65536,nfsvers=3
+ 192.168.24.5:/sapQAS/usrsapQASsys /usr/sap/QAS/SYS nfs rw,hard,rsize=65536,wsize=65536,nfsvers=3
+ 192.168.24.4:/transSAP /usr/sap/trans nfs rw,hard,rsize=65536,wsize=65536,nfsvers=3
``` If using NFSv4.1:
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo vi /etc/fstab # Add the following lines to fstab, save and exit
- 192.168.24.5:/sapQAS/sapmntQAS /sapmnt/QAS nfs rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys
- 192.168.24.5:/sapQAS/usrsapQASsys /usr/sap/QAS/SYS nfs rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys
- 192.168.24.4:/transSAP /usr/sap/trans nfs rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys
+ 192.168.24.5:/sapQAS/sapmntQAS /sapmnt/QAS nfs rw,hard,rsize=65536,wsize=65536,nfsvers=4.1,sec=sys
+ 192.168.24.5:/sapQAS/usrsapQASsys /usr/sap/QAS/SYS nfs rw,hard,rsize=65536,wsize=65536,nfsvers=4.1,sec=sys
+ 192.168.24.4:/transSAP /usr/sap/trans nfs rw,hard,rsize=65536,wsize=65536,nfsvers=4.1,sec=sys
``` > [!NOTE]
The following items are prefixed with either **[A]** - applicable to all nodes,
# If using NFSv4.1 sudo pcs resource create fs_QAS_ASCS Filesystem device='192.168.24.5:/sapQAS/usrsapQASascs' \
- directory='/usr/sap/QAS/ASCS00' fstype='nfs' force_unmount=safe options='sec=sys,vers=4.1' \
+ directory='/usr/sap/QAS/ASCS00' fstype='nfs' force_unmount=safe options='sec=sys,nfsvers=4.1' \
op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=105 \ --group g-QAS_ASCS
The following items are prefixed with either **[A]** - applicable to all nodes,
# If using NFSv4.1 sudo pcs resource create fs_QAS_AERS Filesystem device='192.168.24.5:/sapQAS/usrsapQASers' \
- directory='/usr/sap/QAS/ERS01' fstype='nfs' force_unmount=safe options='sec=sys,vers=4.1' \
+ directory='/usr/sap/QAS/ERS01' fstype='nfs' force_unmount=safe options='sec=sys,nfsvers=4.1' \
op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=105 \ --group g-QAS_AERS
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo vi /etc/fstab # Add the following lines to fstab, save and exit
- 192.168.24.5:/sapQAS/sapmntQAS /sapmnt/QAS nfs rw,hard,rsize=65536,wsize=65536,vers=3
- 192.168.24.4:/transSAP /usr/sap/trans nfs rw,hard,rsize=65536,wsize=65536,vers=3
+ 192.168.24.5:/sapQAS/sapmntQAS /sapmnt/QAS nfs rw,hard,rsize=65536,wsize=65536,nfsvers=3
+ 192.168.24.4:/transSAP /usr/sap/trans nfs rw,hard,rsize=65536,wsize=65536,nfsvers=3
``` If using NFSv4.1:
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo vi /etc/fstab # Add the following lines to fstab, save and exit
- 192.168.24.5:/sapQAS/sapmntQAS /sapmnt/QAS nfs rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys
- 192.168.24.4:/transSAP /usr/sap/trans nfs rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys
+ 192.168.24.5:/sapQAS/sapmntQAS /sapmnt/QAS nfs rw,hard,rsize=65536,wsize=65536,nfsvers=4.1,sec=sys
+ 192.168.24.4:/transSAP /usr/sap/trans nfs rw,hard,rsize=65536,wsize=65536,nfsvers=4.1,sec=sys
``` Mount the new shares
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo vi /etc/fstab # Add the following line to fstab
- 92.168.24.5:/sapQAS/usrsapQASpas /usr/sap/QAS/D02 nfs rw,hard,rsize=65536,wsize=65536,vers=3
+ 92.168.24.5:/sapQAS/usrsapQASpas /usr/sap/QAS/D02 nfs rw,hard,rsize=65536,wsize=65536,nfsvers=3
# Mount sudo mount -a
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo vi /etc/fstab # Add the following line to fstab
- 92.168.24.5:/sapQAS/usrsapQASpas /usr/sap/QAS/D02 nfs rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys
+ 92.168.24.5:/sapQAS/usrsapQASpas /usr/sap/QAS/D02 nfs rw,hard,rsize=65536,wsize=65536,nfsvers=4.1,sec=sys
# Mount sudo mount -a
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo vi /etc/fstab # Add the following line to fstab
- 92.168.24.5:/sapQAS/usrsapQASaas /usr/sap/QAS/D03 nfs rw,hard,rsize=65536,wsize=65536,vers=3
+ 92.168.24.5:/sapQAS/usrsapQASaas /usr/sap/QAS/D03 nfs rw,hard,rsize=65536,wsize=65536,nfsvers=3
# Mount sudo mount -a
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo vi /etc/fstab # Add the following line to fstab
- 92.168.24.5:/sapQAS/usrsapQASaas /usr/sap/QAS/D03 nfs rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys
+ 92.168.24.5:/sapQAS/usrsapQASaas /usr/sap/QAS/D03 nfs rw,hard,rsize=65536,wsize=65536,nfsvers=4.1,sec=sys
# Mount sudo mount -a
sap High Availability Guide Suse Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-netapp-files.md
vm-windows Previously updated : 12/06/2022 Last updated : 04/25/2023
The following items are prefixed with either **[A]** - applicable to all nodes,
# mount temporarily the volume sudo mkdir -p /saptmp # If using NFSv3
- sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp 10.1.0.4:/sapQAS /saptmp
+ sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,nfsvers=3,tcp 10.1.0.4:/sapQAS /saptmp
# If using NFSv4.1
- sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys,tcp 10.1.0.4:/sapQAS /saptmp
+ sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,nfsvers=4.1,sec=sys,tcp 10.1.0.4:/sapQAS /saptmp
# create the SAP directories sudo cd /saptmp sudo mkdir -p sapmntQAS
The following items are prefixed with either **[A]** - applicable to all nodes,
op monitor interval=20s timeout=40s # If using NFSv4.1
- sudo crm configure primitive fs_<b>QAS</b>_ASCS Filesystem device='<b>10.1.0.4</b>:/usrsap<b>qas</b>/usrsap<b>QAS</b>ascs' directory='/usr/sap/<b>QAS</b>/ASCS<b>00</b>' fstype='nfs' options='sec=sys,vers=4.1' \
+ sudo crm configure primitive fs_<b>QAS</b>_ASCS Filesystem device='<b>10.1.0.4</b>:/usrsap<b>qas</b>/usrsap<b>QAS</b>ascs' directory='/usr/sap/<b>QAS</b>/ASCS<b>00</b>' fstype='nfs' options='sec=sys,nfsvers=4.1' \
op start timeout=60s interval=0 \ op stop timeout=60s interval=0 \ op monitor interval=20s timeout=105s
The following items are prefixed with either **[A]** - applicable to all nodes,
op monitor interval=20s timeout=40s # If using NFSv4.1
- sudo crm configure primitive fs_<b>QAS</b>_ERS Filesystem device='<b>10.1.0.4</b>:/usrsap<b>qas</b>/usrsap<b>QAS</b>ers' directory='/usr/sap/<b>QAS</b>/ERS<b>01</b>' fstype='nfs' options='sec=sys,vers=4.1' \
+ sudo crm configure primitive fs_<b>QAS</b>_ERS Filesystem device='<b>10.1.0.4</b>:/usrsap<b>qas</b>/usrsap<b>QAS</b>ers' directory='/usr/sap/<b>QAS</b>/ERS<b>01</b>' fstype='nfs' options='sec=sys,nfsvers=4.1' \
op start timeout=60s interval=0 \ op stop timeout=60s interval=0 \ op monitor interval=20s timeout=105s
sap High Availability Guide Suse Nfs Simple Mount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-nfs-simple-mount.md
vm-windows Previously updated : 12/06/2022 Last updated : 04/25/2023
The following items are prefixed with:
```bash # Temporarily mount the volume. sudo mkdir -p /saptmp
- sudo mount -t nfs sapnfsafs.file.core.windows.net:/sapnfsafs/sapnw1 /saptmp -o vers=4,minorversion=1,sec=sys
+ sudo mount -t nfs sapnfsafs.file.core.windows.net:/sapnfsafs/sapnw1 /saptmp -o vers=4.1,sec=sys
# Create the SAP directories. sudo cd /saptmp sudo mkdir -p sapmntNW1
The following items are prefixed with:
With the simple mount configuration, the Pacemaker cluster doesn't control the file systems. ```bash
- echo "sapnfsafs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1 /sapmnt/NW1 nfs vers=4,minorversion=1,sec=sys 0 0" >> /etc/fstab
- echo "sapnfsafs.file.core.windows.net:/sapnfsafs/sapnw1/usrsapNW1/ /usr/sap/NW1 nfs vers=4,minorversion=1,sec=sys 0 0" >> /etc/fstab
- echo "sapnfsafs.file.core.windows.net:/sapnfsafs/saptrans /usr/sap/trans nfs vers=4,minorversion=1,sec=sys 0 0" >> /etc/fstab
+ echo "sapnfsafs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1 /sapmnt/NW1 nfs vers=4.1,sec=sys 0 0" >> /etc/fstab
+ echo "sapnfsafs.file.core.windows.net:/sapnfsafs/sapnw1/usrsapNW1/ /usr/sap/NW1 nfs vers=4.1,sec=sys 0 0" >> /etc/fstab
+ echo "sapnfsafs.file.core.windows.net:/sapnfsafs/saptrans /usr/sap/trans nfs vers=4.1,sec=sys 0 0" >> /etc/fstab
# Mount the file systems. mount -a ```
The instructions in this section are applicable only if you're using Azure NetAp
# Temporarily mount the volume. sudo mkdir -p /saptmp # If you're using NFSv3:
- sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp 10.27.1.5:/sapnw1 /saptmp
+ sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,nfsvers=3,tcp 10.27.1.5:/sapnw1 /saptmp
# If you're using NFSv4.1:
- sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys,tcp 10.27.1.5:/sapnw1 /saptmp
+ sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,nfsvers=4.1,sec=sys,tcp 10.27.1.5:/sapnw1 /saptmp
# Create the SAP directories. sudo cd /saptmp sudo mkdir -p sapmntNW1
The instructions in this section are applicable only if you're using Azure NetAp
```bash # If you're using NFSv3:
- echo "10.27.1.5:/sapnw1/sapmntNW1 /sapmnt/NW1 nfs vers=3,hard 0 0" >> /etc/fstab
- echo "10.27.1.5:/sapnw1/usrsapNW1 /usr/sap/NW1 nfs vers=3,hard 0 0" >> /etc/fstab
- echo "10.27.1.5:/saptrans /usr/sap/trans nfs vers=3,hard 0 0" >> /etc/fstab
+ echo "10.27.1.5:/sapnw1/sapmntNW1 /sapmnt/NW1 nfs nfsvers=3,hard 0 0" >> /etc/fstab
+ echo "10.27.1.5:/sapnw1/usrsapNW1 /usr/sap/NW1 nfs nfsvers=3,hard 0 0" >> /etc/fstab
+ echo "10.27.1.5:/saptrans /usr/sap/trans nfs nfsvers=3,hard 0 0" >> /etc/fstab
# If you're using NFSv4.1:
- echo "10.27.1.5:/sapnw1/sapmntNW1 /sapmnt/NW1 nfs vers=4,minorversion=1,sec=sys,hard 0 0" >> /etc/fstab
- echo "10.27.1.5:/sapnw1/usrsapNW1 /usr/sap/NW1 nfs vers=4,minorversion=1,sec=sys,hard 0 0" >> /etc/fstab
- echo "10.27.1.5:/saptrans /usr/sap/trans nfs vers=4,minorversion=1,sec=sys,hard 0 0" >> /etc/fstab
+ echo "10.27.1.5:/sapnw1/sapmntNW1 /sapmnt/NW1 nfs nfsvers=4.1,sec=sys,hard 0 0" >> /etc/fstab
+ echo "10.27.1.5:/sapnw1/usrsapNW1 /usr/sap/NW1 nfs nfsvers=4.1,sec=sys,hard 0 0" >> /etc/fstab
+ echo "10.27.1.5:/saptrans /usr/sap/trans nfs nfsvers=4.1,sec=sys,hard 0 0" >> /etc/fstab
# Mount the file systems. mount -a ```
If you're using NFS on Azure Files, use the following instructions to prepare th
2. Mount the file systems. ```bash
- echo "sapnfsafs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1 /sapmnt/NW1 nfs vers=4,minorversion=1,sec=sys 0 0" >> /etc/fstab
- echo "sapnfsafs.file.core.windows.net:/sapnfsafs/saptrans /usr/sap/trans nfs vers=4,minorversion=1,sec=sys 0 0" >> /etc/fstab
+ echo "sapnfsafs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1 /sapmnt/NW1 nfs vers=4.1,sec=sys 0 0" >> /etc/fstab
+ echo "sapnfsafs.file.core.windows.net:/sapnfsafs/saptrans /usr/sap/trans nfs vers=4.1,sec=sys 0 0" >> /etc/fstab
# Mount the file systems. mount -a ```
If you're using NFS on Azure NetApp Files, use the following instructions to pre
```bash # If you're using NFSv3:
- echo "10.27.1.5:/sapnw1/sapmntNW1 /sapmnt/NW1 nfs vers=3,hard 0 0" >> /etc/fstab
- echo "10.27.1.5:/saptrans /usr/sap/trans nfs vers=3, hard 0 0" >> /etc/fstab
+ echo "10.27.1.5:/sapnw1/sapmntNW1 /sapmnt/NW1 nfs nfsvers=3,hard 0 0" >> /etc/fstab
+    echo "10.27.1.5:/saptrans /usr/sap/trans nfs nfsvers=3,hard 0 0" >> /etc/fstab
# If you're using NFSv4.1:
- echo "10.27.1.5:/sapnw1/sapmntNW1 /sapmnt/NW1 nfs vers=4,minorversion=1,sec=sys,hard 0 0" >> /etc/fstab
- echo "10.27.1.5:/saptrans /usr/sap/trans nfs vers=4,minorversion=1,sec=sys,hard 0 0" >> /etc/fstab
+ echo "10.27.1.5:/sapnw1/sapmntNW1 /sapmnt/NW1 nfs nfsvers=4.1,sec=sys,hard 0 0" >> /etc/fstab
+ echo "10.27.1.5:/saptrans /usr/sap/trans nfs nfsvers=4.1,sec=sys,hard 0 0" >> /etc/fstab
# Mount the file systems. mount -a ```
sap Planning Guide Storage Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/planning-guide-storage-azure-files.md
+
+ Title: 'Azure Premium Files NFS and SMB for SAP'
+description: Using Azure Premium Files NFS and SMB for SAP workload
++
+tags: azure-resource-manager
++++ Last updated : 04/26/2023+++
+# Using Azure Premium Files NFS and SMB for SAP workload
+
+This document is about Azure Premium Files file shares used for SAP workload. Both NFS volumes and SMB file shares are covered. For considerations around Azure NetApp Files for SMB or NFS volumes, see the following two documents:
+
+- [Azure Storage types for SAP workload](./planning-guide-storage.md)
+- [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md)
+
+> [!IMPORTANT]
+> The suggestions for the storage configurations in this document are meant as directions to start with. As you run your workload and analyze storage utilization patterns, you might realize that you aren't utilizing all the storage bandwidth or IOPS provided, and you might consider downsizing the storage. Or, on the contrary, your workload might need more storage throughput than these configurations suggest, and you might need to deploy more capacity to increase IOPS or throughput. To balance the required storage capacity, latency, throughput, and IOPS against the least expensive configuration, Azure offers enough different storage types with different capabilities and price points for you to find and adjust to the right compromise for your SAP workload.
+
+For SAP workloads, the supported uses of Azure Files shares are:
+
+- sapmnt volume for a distributed SAP system
+- transport directory for SAP landscape
+- /hana/shared for HANA scale-out
+- file interface between your SAP landscape and other applications
+
+> [!NOTE]
+> No SAP DBMS workloads are supported on Azure Premium Files volumes, NFS or SMB. For support restrictions on Azure storage types for SAP NetWeaver/application layer of S/4HANA, read the [SAP support note 2015553](https://launchpad.support.sap.com/#/notes/2015553)
+
+## Important considerations for Azure Premium Files shares with SAP
+
+When you plan your deployment with Azure Files, consider the following important points. The term share in this section applies to both SMB share and NFS volume.
+
+- The minimum share size is 100 gibibytes (GiB). With Azure Premium Files, you pay for the [capacity of the provisioned shares](/azure/storage/files/understanding-billing#provisioned-model).
+- Size your file shares not only based on capacity requirements, but also on IOPS and throughput requirements. For details, see [Azure files share targets](/azure/storage/files/storage-files-scale-targets#azure-file-share-scale-targets).
+- Test the workload to validate your sizing and ensure that it meets your performance targets. To learn how to troubleshoot performance issues with NFS on Azure Files, consult [Troubleshoot Azure file share performance](/azure/storage/files/storage-troubleshooting-files-performance).
+- Deploy a separate `sapmnt` share for each SAP system.
+- Don't use the `sapmnt` share for any other activity, such as interfaces.
+- Don't use the `saptrans` share for any other activity, such as interfaces.
+- If your SAP system has a heavy load of batch jobs, you might have millions of job logs. If the SAP batch job logs are stored in the file system, pay special attention to the sizing of the `sapmnt` share. Reorganize the job log files regularly as per [SAP note 16083](https://launchpad.support.sap.com/#/notes/16083). As of SAP_BASIS 7.52, the default behavior for the batch job logs is to be stored in the database. For details, see [SAP note 2360818 | Job log in the database](https://launchpad.support.sap.com/#/notes/2360818).
+- Avoid consolidating the shares for too many SAP systems in a single storage account. There are also [scalability and performance targets for storage accounts](/azure/storage/files/storage-files-scale-targets#storage-account-scale-targets). Be careful to not exceed the limits for the storage account, too.
+- In general, don't consolidate the shares for more than *five* SAP systems in a single storage account. This guideline helps you avoid exceeding the storage account limits and simplifies performance analysis.
+- In general, avoid mixing shares like `sapmnt` for non-production and production SAP systems in the same storage account.
+- Use a [private endpoint](/azure/storage/files/storage-files-networking-endpoints) with Azure Files. In the unlikely event of a zonal failure, your NFS sessions automatically redirect to a healthy zone. You don't have to remount the NFS shares on your VMs. Use of private link can result in extra charges for the data processed. See details about [private link pricing](https://azure.microsoft.com/pricing/details/private-link/).
+- If you're deploying your VMs across availability zones, use a [storage account with ZRS](/azure/storage/common/storage-redundancy#zone-redundant-storage) in the Azure regions that support ZRS. A provisioning sketch follows this list.
+- Azure Premium Files doesn't currently support automatic cross-region replication for disaster recovery scenarios. See [guidelines on DR for SAP applications](disaster-recovery-overview-guide.md) for available options.
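+
+The following Azure CLI commands are a minimal sketch of provisioning a zone-redundant premium storage account and an NFS share for `sapmnt`, following the considerations above. The account, resource group, share names, region, and share size are placeholder assumptions; adjust them to your sizing and network requirements.
+
+```bash
+# Create a premium FileStorage account with zone-redundant storage (placeholder names and region).
+az storage account create \
+    --name sapnfsafs \
+    --resource-group myResourceGroup \
+    --location westeurope \
+    --kind FileStorage \
+    --sku Premium_ZRS
+
+# NFS shares require secure transfer (HTTPS enforcement) to be disabled on the account.
+# Network access also needs to be restricted, for example with a private endpoint.
+az storage account update \
+    --name sapnfsafs \
+    --resource-group myResourceGroup \
+    --https-only false
+
+# Create a 128 GiB NFS share for the sapmnt volume of one SAP system.
+az storage share-rm create \
+    --storage-account sapnfsafs \
+    --resource-group myResourceGroup \
+    --name sapmnt-nw1 \
+    --quota 128 \
+    --enabled-protocols NFS
+```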
+
+Carefully consider consolidating multiple activities into one file share, or multiple file shares into one storage account. Distributing these shares onto separate storage accounts improves throughput and resiliency, and simplifies performance analysis. If many SAP SIDs and shares are consolidated onto a single Azure Files storage account, and the storage account performance is poor because the throughput limits are reached, it can become difficult to identify which SID or volume is causing the problem.
+
+## NFS additional considerations
+
+- We recommend that you deploy on SLES 15 SP2 or higher, RHEL 8.4 or higher to benefit from [NFS client improvements](/azure/storage/files/storage-troubleshooting-files-nfs#ls-hangs-for-large-directory-enumeration-on-some-kernels).
+- Mount the NFS shares with the [documented mount options](/azure/storage/files/storage-files-how-to-mount-nfs-shares); [troubleshooting](/azure/storage/files/storage-troubleshooting-files-nfs#cannot-connect-to-or-mount-an-nfs-azure-file-share) information is available for mount or connection problems. A mount example follows this list.
+- For SAP J2EE systems, placing `/usr/sap/<SID>/J<nr>` on NFS on Azure Files isn't supported.
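+
+As an example, mounting such an NFS share with the documented options might look like the following sketch. The storage account name, share path, and mount point are placeholders.
+
+```bash
+# Mount an NFS share from Azure Premium Files (placeholder account, share, and mount point).
+sudo mkdir -p /sapmnt/NW1
+sudo mount -t nfs sapnfsafs.file.core.windows.net:/sapnfsafs/sapmnt-nw1 /sapmnt/NW1 -o vers=4.1,sec=sys
+```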
+
+## SMB additional considerations
+
+- SAP software provisioning manager (SWPM) version 1.0 SP32, SWPM 2.0 SP09 or higher is required to use Azure Files SMB. SAPInst patch must be 749.0.91 or higher. If SWPM/SAPInst doesn't accept more than 13 characters for file share server, then the SWPM version is too old.
+- During the installation of the SAP PAS instance, SWPM/SAPInst asks you to enter a transport hostname. Enter the FQDN of the storage account, `<storage_account>.file.core.windows.net`, or the IP address/hostname of the private endpoint, if one is used.
+- When you integrate the Active Directory domain with Azure Files SMB for an [SAP high availability deployment](high-availability-guide-windows-azure-files-smb.md), the SAP users and groups must be added to the `sapmnt` share. The SAP users should have the permission `Storage File Data SMB Share Elevated Contributor` set in the Azure portal, as sketched below.
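+
+A sketch of assigning that role with the Azure CLI follows. The user object ID, subscription, resource group, storage account, and share name are placeholders.
+
+```bash
+# Grant an SAP user elevated SMB access at the scope of the sapmnt file share.
+# All identifiers below are placeholders.
+az role assignment create \
+    --assignee "00000000-0000-0000-0000-000000000000" \
+    --role "Storage File Data SMB Share Elevated Contributor" \
+    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/fileServices/default/fileshares/sapmnt"
+```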
+
+## Next steps
+
+For more information, see:
+- [Azure Storage types for SAP workload](planning-guide-storage.md)
+- [SAP HANA High Availability guide for Azure virtual machines](sap-hana-availability-overview.md)
++
sap Sap Hana High Availability Netapp Files Red Hat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-netapp-files-red-hat.md
vm-linux Previously updated : 04/06/2023 Last updated : 04/25/2023
For more information about the required ports for SAP HANA, read the chapter [Co
3. **[1]** Mount the node-specific volumes on node1 (**hanadb1**) ```bash
- sudo mount -o rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 10.32.2.4:/hanadb1-shared-mnt00001 /hana/shared
- sudo mount -o rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 10.32.2.4:/hanadb1-log-mnt00001 /hana/log
- sudo mount -o rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 10.32.2.4:/hanadb1-data-mnt00001 /hana/data
+ sudo mount -o rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 10.32.2.4:/hanadb1-shared-mnt00001 /hana/shared
+ sudo mount -o rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 10.32.2.4:/hanadb1-log-mnt00001 /hana/log
+ sudo mount -o rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 10.32.2.4:/hanadb1-data-mnt00001 /hana/data
``` 4. **[2]** Mount the node-specific volumes on node2 (**hanadb2**) ```bash
- sudo mount -o rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 10.32.2.4:/hanadb2-shared-mnt00001 /hana/shared
- sudo mount -o rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 10.32.2.4:/hanadb2-log-mnt00001 /hana/log
- sudo mount -o rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 10.32.2.4:/hanadb2-data-mnt00001 /hana/data
+ sudo mount -o rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 10.32.2.4:/hanadb2-shared-mnt00001 /hana/shared
+ sudo mount -o rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 10.32.2.4:/hanadb2-log-mnt00001 /hana/log
+ sudo mount -o rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 10.32.2.4:/hanadb2-data-mnt00001 /hana/data
``` 5. **[A]** Verify that all HANA volumes are mounted with NFS protocol version NFSv4.
In this example each cluster node has its own HANA NFS filesystems /hana/shared,
2. **[1]** Create the Filesystem resources for the **hanadb1** mounts. ```bash
- sudo pcs resource create hana_data1 ocf:heartbeat:Filesystem device=10.32.2.4:/hanadb1-data-mnt00001 directory=/hana/data fstype=nfs options=rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 --group hanadb1_nfs
- sudo pcs resource create hana_log1 ocf:heartbeat:Filesystem device=10.32.2.4:/hanadb1-log-mnt00001 directory=/hana/log fstype=nfs options=rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 --group hanadb1_nfs
- sudo pcs resource create hana_shared1 ocf:heartbeat:Filesystem device=10.32.2.4:/hanadb1-shared-mnt00001 directory=/hana/shared fstype=nfs options=rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 --group hanadb1_nfs
+    sudo pcs resource create hana_data1 ocf:heartbeat:Filesystem device=10.32.2.4:/hanadb1-data-mnt00001 directory=/hana/data fstype=nfs options=rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 --group hanadb1_nfs
+ sudo pcs resource create hana_log1 ocf:heartbeat:Filesystem device=10.32.2.4:/hanadb1-log-mnt00001 directory=/hana/log fstype=nfs options=rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 --group hanadb1_nfs
+ sudo pcs resource create hana_shared1 ocf:heartbeat:Filesystem device=10.32.2.4:/hanadb1-shared-mnt00001 directory=/hana/shared fstype=nfs options=rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 --group hanadb1_nfs
``` 3. **[2]** Create the Filesystem resources for the **hanadb2** mounts. ```bash
- sudo pcs resource create hana_data2 ocf:heartbeat:Filesystem device=10.32.2.4:/hanadb2-data-mnt00001 directory=/hana/data fstype=nfs options=rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 --group hanadb2_nfs
- sudo pcs resource create hana_log2 ocf:heartbeat:Filesystem device=10.32.2.4:/hanadb2-log-mnt00001 directory=/hana/log fstype=nfs options=rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 --group hanadb2_nfs
- sudo pcs resource create hana_shared2 ocf:heartbeat:Filesystem device=10.32.2.4:/hanadb2-shared-mnt00001 directory=/hana/shared fstype=nfs options=rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 --group hanadb2_nfs
+ sudo pcs resource create hana_data2 ocf:heartbeat:Filesystem device=10.32.2.4:/hanadb2-data-mnt00001 directory=/hana/data fstype=nfs options=rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 --group hanadb2_nfs
+ sudo pcs resource create hana_log2 ocf:heartbeat:Filesystem device=10.32.2.4:/hanadb2-log-mnt00001 directory=/hana/log fstype=nfs options=rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 --group hanadb2_nfs
+ sudo pcs resource create hana_shared2 ocf:heartbeat:Filesystem device=10.32.2.4:/hanadb2-shared-mnt00001 directory=/hana/shared fstype=nfs options=rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 --group hanadb2_nfs
``` `OCF_CHECK_LEVEL=20` attribute is added to the monitor operation so that each monitor performs a read/write test on the filesystem. Without this attribute, the monitor operation only verifies that the filesystem is mounted. This can be a problem because when connectivity is lost, the filesystem may remain mounted despite being inaccessible.
sap Sap Hana High Availability Netapp Files Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-netapp-files-suse.md
vm-linux Previously updated : 12/07/2022 Last updated : 04/25/2023
For more information about the required ports for SAP HANA, read the chapter [Co
Example for hanadb1 ```output
- 10.3.1.4:/hanadb1-data-mnt00001 /hana/data/HN1/mnt00001 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
- 10.3.1.4:/hanadb1-log-mnt00001 /hana/log/HN1/mnt00001 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
- 10.3.1.4:/hanadb1-shared-mnt00001 /hana/shared/HN1 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
+ 10.3.1.4:/hanadb1-data-mnt00001 /hana/data/HN1/mnt00001 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
+ 10.3.1.4:/hanadb1-log-mnt00001 /hana/log/HN1/mnt00001 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
+ 10.3.1.4:/hanadb1-shared-mnt00001 /hana/shared/HN1 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
``` Example for hanadb2 ```output
- 10.3.1.4:/hanadb2-data-mnt00001 /hana/data/HN1/mnt00001 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
- 10.3.1.4:/hanadb2-log-mnt00001 /hana/log/HN1/mnt00001 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
- 10.3.1.4:/hanadb2-shared-mnt00001 /hana/shared/HN1 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
+ 10.3.1.4:/hanadb2-data-mnt00001 /hana/data/HN1/mnt00001 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
+ 10.3.1.4:/hanadb2-log-mnt00001 /hana/log/HN1/mnt00001 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
+ 10.3.1.4:/hanadb2-shared-mnt00001 /hana/shared/HN1 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
``` Mount all volumes
Create a dummy file system cluster resource, which will monitor and report failu
sudo crm configure primitive rsc_fs_check_HN1_HDB03 Filesystem params \ device="/hana/shared/HN1/check/" \ directory="/hana/shared/check/" fstype=nfs4 \
- options="bind,defaults,rw,hard,rsize=262144,wsize=262144,proto=tcp,intr,noatime,_netdev,vers=4,minorversion=1,lock,sec=sys" \
+ options="bind,defaults,rw,hard,rsize=262144,wsize=262144,proto=tcp,noatime,_netdev,nfsvers=4.1,lock,sec=sys" \
op monitor interval=120 timeout=120 on-fail=fence \ op_params OCF_CHECK_LEVEL=20 \ op start interval=0 timeout=120 \
sap Sap Hana High Availability Scale Out Hsr Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-scale-out-hsr-rhel.md
vm-windows Previously updated : 04/06/2022 Last updated : 04/25/2023
In this example, the shared HANA file systems are deployed on Azure NetApp Files
1. **[AH1]** Mount the shared Azure NetApp Files volumes on the SITE1 HANA DB VMs. ```bash
- sudo mount -o rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 10.23.1.7:/HN1-shared-s1 /hana/shared
+ sudo mount -o rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 10.23.1.7:/HN1-shared-s1 /hana/shared
``` 1. **[AH2]** Mount the shared Azure NetApp Files volumes on the SITE2 HANA DB VMs. ```bash
- sudo mount -o rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 10.23.1.7:/HN1-shared-s2 /hana/shared
+ sudo mount -o rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 10.23.1.7:/HN1-shared-s2 /hana/shared
```
For the next part of this process, you need to create file system resources. Her
```bash # /hana/shared file system for site 1 pcs resource create fs_hana_shared_s1 --disabled ocf:heartbeat:Filesystem device=10.23.1.7:/HN1-shared-s1 directory=/hana/shared \
- fstype=nfs options='defaults,rw,hard,timeo=600,rsize=262144,wsize=262144,proto=tcp,intr,noatime,sec=sys,vers=4.1,lock,_netdev' op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 \
+ fstype=nfs options='defaults,rw,hard,timeo=600,rsize=262144,wsize=262144,proto=tcp,noatime,sec=sys,nfsvers=4.1,lock,_netdev' op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 \
op start interval=0 timeout=120 op stop interval=0 timeout=120 # /hana/shared file system for site 2 pcs resource create fs_hana_shared_s2 --disabled ocf:heartbeat:Filesystem device=10.23.1.7:/HN1-shared-s1 directory=/hana/shared \
- fstype=nfs options='defaults,rw,hard,timeo=600,rsize=262144,wsize=262144,proto=tcp,intr,noatime,sec=sys,vers=4.1,lock,_netdev' op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 \
+ fstype=nfs options='defaults,rw,hard,timeo=600,rsize=262144,wsize=262144,proto=tcp,noatime,sec=sys,nfsvers=4.1,lock,_netdev' op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 \
op start interval=0 timeout=120 op stop interval=0 timeout=120 # clone the /hana/shared file system resources for both site1 and site2
sap Sap Hana High Availability Scale Out Hsr Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-scale-out-hsr-suse.md
vm-windows Previously updated : 01/27/2023 Last updated : 04/25/2023
In this example, the shared HANA file systems are deployed on Azure NetApp Files
```bash sudo vi /etc/fstab # Add the following entry
- 10.23.1.7:/HN1-shared-s1 /hana/shared nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
+ 10.23.1.7:/HN1-shared-s1 /hana/shared nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
# Mount all volumes sudo mount -a ```
In this example, the shared HANA file systems are deployed on Azure NetApp Files
```bash sudo vi /etc/fstab # Add the following entry
- 10.23.1.7:/HN1-shared-s2 /hana/shared nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
+ 10.23.1.7:/HN1-shared-s2 /hana/shared nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
# Mount the volume sudo mount -a ```
In this example, the shared HANA file systems are deployed on NFS on Azure Files
```bash sudo vi /etc/fstab # Add the following entry
- sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s1 /hana/shared nfs vers=4,minorversion=1,sec=sys 0 0
+ sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s1 /hana/shared nfs nfsvers=4.1,sec=sys 0 0
# Mount all volumes sudo mount -a ```
In this example, the shared HANA file systems are deployed on NFS on Azure Files
```bash sudo vi /etc/fstab # Add the following entries
- sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s2 /hana/shared nfs vers=4,minorversion=1,sec=sys 0 0
+ sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s2 /hana/shared nfs nfsvers=4.1,sec=sys 0 0
# Mount the volume sudo mount -a ```
Create a dummy file system cluster resource, which will monitor and report failu
crm configure primitive fs_HN1_HDB03_fscheck Filesystem \ params device="/hana/shared/HN1/check" \ directory="/hana/check" fstype=nfs4 \
- options="bind,defaults,rw,hard,proto=tcp,intr,noatime,vers=4.1,lock" \
+ options="bind,defaults,rw,hard,proto=tcp,noatime,nfsvers=4.1,lock" \
op monitor interval=120 timeout=120 on-fail=fence \ op_params OCF_CHECK_LEVEL=20 \ op start interval=0 timeout=120 op stop interval=0 timeout=120
sap Sap Hana Scale Out Standby Netapp Files Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-scale-out-standby-netapp-files-rhel.md
vm-windows Previously updated : 04/06/2023 Last updated : 04/25/2023
Configure and prepare your OS by doing the following steps:
# if using NFSv3 for this volume, mount with the following command mount <b>10.9.0.4</b>:/<b>HN1</b>-shared /mnt/tmp # if using NFSv4.1 for this volume, mount with the following command
- mount -t nfs -o sec=sys,vers=4.1 <b>10.9.0.4</b>:/<b>HN1</b>-shared /mnt/tmp
+ mount -t nfs -o sec=sys,nfsvers=4.1 <b>10.9.0.4</b>:/<b>HN1</b>-shared /mnt/tmp
cd /mnt/tmp mkdir shared usr-sap-<b>hanadb1</b> usr-sap-<b>hanadb2</b> usr-sap-<b>hanadb3</b> # unmount /hana/shared
Configure and prepare your OS by doing the following steps:
<pre><code> sudo vi /etc/fstab # Add the following entries
- 10.9.0.4:/<b>HN1</b>-data-mnt00001 /hana/data/<b>HN1</b>/mnt00001 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
- 10.9.0.4:/<b>HN1</b>-data-mnt00002 /hana/data/<b>HN1</b>/mnt00002 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
- 10.9.0.4:/<b>HN1</b>-log-mnt00001 /hana/log/<b>HN1</b>/mnt00001 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
- 10.9.0.4:/<b>HN1</b>-log-mnt00002 /hana/log/HN1/mnt00002 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
- 10.9.0.4:/<b>HN1</b>-shared/shared /hana/shared nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
+ 10.9.0.4:/<b>HN1</b>-data-mnt00001 /hana/data/<b>HN1</b>/mnt00001 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
+ 10.9.0.4:/<b>HN1</b>-data-mnt00002 /hana/data/<b>HN1</b>/mnt00002 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
+ 10.9.0.4:/<b>HN1</b>-log-mnt00001 /hana/log/<b>HN1</b>/mnt00001 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
+ 10.9.0.4:/<b>HN1</b>-log-mnt00002 /hana/log/HN1/mnt00002 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
+ 10.9.0.4:/<b>HN1</b>-shared/shared /hana/shared nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
# Mount all volumes sudo mount -a </code></pre>
Configure and prepare your OS by doing the following steps:
<pre><code> sudo vi /etc/fstab # Add the following entries
- 10.9.0.4:/<b>HN1</b>-shared/usr-sap-<b>hanadb1</b> /usr/sap/<b>HN1</b> nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
+ 10.9.0.4:/<b>HN1</b>-shared/usr-sap-<b>hanadb1</b> /usr/sap/<b>HN1</b> nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
# Mount the volume sudo mount -a </code></pre>
Configure and prepare your OS by doing the following steps:
<pre><code> sudo vi /etc/fstab # Add the following entries
- 10.9.0.4:/<b>HN1</b>-shared/usr-sap-<b>hanadb2</b> /usr/sap/<b>HN1</b> nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
+ 10.9.0.4:/<b>HN1</b>-shared/usr-sap-<b>hanadb2</b> /usr/sap/<b>HN1</b> nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
# Mount the volume sudo mount -a </code></pre>
Configure and prepare your OS by doing the following steps:
<pre><code> sudo vi /etc/fstab # Add the following entries
- 10.9.0.4:/<b>HN1</b>-shared/usr-sap-<b>hanadb3</b> /usr/sap/<b>HN1</b> nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
+ 10.9.0.4:/<b>HN1</b>-shared/usr-sap-<b>hanadb3</b> /usr/sap/<b>HN1</b> nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
# Mount the volume sudo mount -a </code></pre>
sap Sap Hana Scale Out Standby Netapp Files Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-scale-out-standby-netapp-files-suse.md
vm-windows Previously updated : 11/15/2022 Last updated : 04/25/2023
Configure and prepare your OS by doing the following steps:
# if using NFSv3 for this volume, mount with the following command mount <b>10.23.1.4</b>:/<b>HN1</b>-shared /mnt/tmp # if using NFSv4.1 for this volume, mount with the following command
- mount -t nfs -o sec=sys,vers=4.1 <b>10.23.1.4</b>:/<b>HN1</b>-shared /mnt/tmp
+ mount -t nfs -o sec=sys,nfsvers=4.1 <b>10.23.1.4</b>:/<b>HN1</b>-shared /mnt/tmp
cd /mnt/tmp mkdir shared usr-sap-<b>hanadb1</b> usr-sap-<b>hanadb2</b> usr-sap-<b>hanadb3</b> # unmount /hana/shared
Configure and prepare your OS by doing the following steps:
<pre><code> sudo vi /etc/fstab # Add the following entries
- 10.23.1.5:/<b>HN1</b>-data-mnt00001 /hana/data/<b>HN1</b>/mnt00001 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
- 10.23.1.6:/<b>HN1</b>-data-mnt00002 /hana/data/<b>HN1</b>/mnt00002 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
- 10.23.1.4:/<b>HN1</b>-log-mnt00001 /hana/log/<b>HN1</b>/mnt00001 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
- 10.23.1.6:/<b>HN1</b>-log-mnt00002 /hana/log/HN1/mnt00002 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
- 10.23.1.4:/<b>HN1</b>-shared/shared /hana/shared nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
+ 10.23.1.5:/<b>HN1</b>-data-mnt00001 /hana/data/<b>HN1</b>/mnt00001 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
+ 10.23.1.6:/<b>HN1</b>-data-mnt00002 /hana/data/<b>HN1</b>/mnt00002 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
+ 10.23.1.4:/<b>HN1</b>-log-mnt00001 /hana/log/<b>HN1</b>/mnt00001 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
+ 10.23.1.6:/<b>HN1</b>-log-mnt00002 /hana/log/HN1/mnt00002 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
+ 10.23.1.4:/<b>HN1</b>-shared/shared /hana/shared nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
# Mount all volumes sudo mount -a </code></pre>
Configure and prepare your OS by doing the following steps:
<pre><code> sudo vi /etc/fstab # Add the following entries
- 10.23.1.4:/<b>HN1</b>-shared/usr-sap-<b>hanadb1</b> /usr/sap/<b>HN1</b> nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
+ 10.23.1.4:/<b>HN1</b>-shared/usr-sap-<b>hanadb1</b> /usr/sap/<b>HN1</b> nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
# Mount the volume sudo mount -a </code></pre>
Configure and prepare your OS by doing the following steps:
<pre><code> sudo vi /etc/fstab # Add the following entries
- 10.23.1.4:/<b>HN1</b>-shared/usr-sap-<b>hanadb2</b> /usr/sap/<b>HN1</b> nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
+ 10.23.1.4:/<b>HN1</b>-shared/usr-sap-<b>hanadb2</b> /usr/sap/<b>HN1</b> nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
# Mount the volume sudo mount -a </code></pre>
Configure and prepare your OS by doing the following steps:
<pre><code> sudo vi /etc/fstab # Add the following entries
- 10.23.1.4:/<b>HN1</b>-shared/usr-sap-<b>hanadb3</b> /usr/sap/<b>HN1</b> nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
+ 10.23.1.4:/<b>HN1</b>-shared/usr-sap-<b>hanadb3</b> /usr/sap/<b>HN1</b> nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
# Mount the volume sudo mount -a </code></pre>
sentinel Ai Analyst Darktrace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/ai-analyst-darktrace.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
2. Forward Common Event Format (CEF) logs to Syslog agent
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
4. Secure your machine
Make sure to configure the machine's security according to your organization's s
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/darktrace1655286944672.darktrace_mss?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/darktrace1655286944672.darktrace_mss?tab=Overview) in the Azure Marketplace.
sentinel Akamai Security Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/akamai-security-events.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
2. Forward Common Event Format (CEF) logs to Syslog agent
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
4. Secure your machine
sentinel Aruba Clearpass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/aruba-clearpass.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
2. Forward Aruba ClearPass logs to a Syslog agent
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
4. Secure your machine
Make sure to configure the machine's security according to your organizationΓÇÖs
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-arubaclearpass?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-arubaclearpass?tab=Overview) in the Azure Marketplace.
sentinel Atlassian Confluence Audit Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/atlassian-confluence-audit-using-azure-function.md
Use this method for automated deployment of the Confluence Audit data connector
[![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-confluenceauditapi-azuredeploy) 2. Select the preferred **Subscription**, **Resource Group** and **Location**. > **NOTE:** Within the same resource group, you can't mix Windows and Linux apps in the same region. Select existing resource group without Windows apps in it or create new resource group.
-3. Enter the **ConfluenceAccessToken**, **ConfluenceUsername**, **ConfluenceHomeSiteName** (short site name part, as example HOMESITENAME from https://HOMESITENAME.atlassian.net) and deploy.
+3. Enter the **ConfluenceAccessToken**, **ConfluenceUsername**, **ConfluenceHomeSiteName** (short site name part, as example HOMESITENAME from `https://HOMESITENAME.atlassian.net`) and deploy.
4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. 5. Click **Purchase** to deploy.
sentinel Awake Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/awake-security.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
2. Forward Awake Adversarial Model match results to a CEF collector.
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
4. Secure your machine
Make sure to configure the machine's security according to your organization's s
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/arista-networks.awake-security?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/arista-networks.awake-security?tab=Overview) in the Azure Marketplace.
sentinel Box Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/box-using-azure-function.md
To integrate with Box (using Azure Functions) make sure you have:
**STEP 1 - Configuration of the Box events collection**
-See documentation to [setup JWT authentication](https://developer.box.com/guides/applications/custom-apps/jwt-setup/) and [obtain JSON file with credentials](https://developer.box.com/guides/authentication/jwt/with-sdk/#prerequisites).
+See documentation to [setup JWT authentication](https://developer.box.com/guides/authentication/jwt/jwt-setup/) and [obtain JSON file with credentials](https://developer.box.com/guides/authentication/jwt/with-sdk/#prerequisites).
**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function**
sentinel Braodcom Symantec Dlp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/braodcom-symantec-dlp.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
2. Forward Symantec DLP logs to a Syslog agent
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
4. Secure your machine
sentinel Cisco Asa Ftd Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cisco-asa-ftd-via-ama.md
Enable data collection rule
Run the following command to install and apply the Cisco ASA/FTD collector:
- sudo wget -O Forwarder_AMA_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/Syslog/Forwarder_AMA_installer.py&&sudo python Forwarder_AMA_installer.py
+ sudo wget -O Forwarder_AMA_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/Syslog/Forwarder_AMA_installer.py python Forwarder_AMA_installer.py
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-ciscoasa?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-ciscoasa?tab=Overview) in the Azure Marketplace.
sentinel Cisco Asa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cisco-asa.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
2. Forward Cisco ASA logs to Syslog agent
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
4. Secure your machine
Make sure to configure the machine's security according to your organization's s
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-ciscoasa?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-ciscoasa?tab=Overview) in the Azure Marketplace.
sentinel Cisco Firepower Estreamer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cisco-firepower-estreamer.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
2. Install the Firepower eNcore client
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
4. Secure your machine
Make sure to configure the machine's security according to your organization's s
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/cisco.cisco-firepower-estreamer?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/cisco.cisco-firepower-estreamer?tab=Overview) in the Azure Marketplace.
sentinel Cisco Secure Email Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cisco-secure-email-gateway.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
2. Forward Common Event Format (CEF) logs to Syslog agent
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
4. Secure your machine
Make sure to configure the machine's security according to your organization's s
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-ciscoseg?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-ciscoseg?tab=Overview) in the Azure Marketplace.
sentinel Citrix Adc Former Netscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/citrix-adc-former-netscaler.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
2. Configure Citrix ADC to use CEF logging
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
5. Secure your machine
Make sure to configure the machine's security according to your organization's s
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-citrixadc?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-citrixadc?tab=Overview) in the Azure Marketplace.
sentinel Citrix Waf Web App Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/citrix-waf-web-app-firewall.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
2. Forward Common Event Format (CEF) logs to Syslog agent
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
4. Secure your machine
Make sure to configure the machine's security according to your organization's s
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/citrix.citrix_waf_mss?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/citrix.citrix_waf_mss?tab=Overview) in the Azure Marketplace.
sentinel Claroty https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/claroty.md
# Claroty connector for Microsoft Sentinel
-The [Claroty](https://claroty.com/) data connector provides the capability to ingest [Continuous Threat Detection](https://claroty.com/continuous-threat-detection/) and [Secure Remote Access](https://claroty.com/secure-remote-access/) events into Microsoft Sentinel.
+The [Claroty](https://claroty.com/) data connector provides the capability to ingest [Continuous Threat Detection](https://claroty.com/resources/datasheets/continuous-threat-detection) and [Secure Remote Access](https://claroty.com/secure-remote-access/) events into Microsoft Sentinel.
## Connector attributes
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
2. Configure Claroty to send logs using CEF
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
4. Secure your machine
Make sure to configure the machine's security according to your organization's s
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-claroty?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-claroty?tab=Overview) in the Azure Marketplace.
sentinel Contrast Protect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/contrast-protect.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
2. Forward Common Event Format (CEF) logs to Syslog agent
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
4. Secure your machine
sentinel Corelight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/corelight.md
# Corelight connector for Microsoft Sentinel
-The [Corelight](https://corelight.com/) data connector enables incident responders and threat hunters who use Microsoft Sentinel to work faster and more effectively. The data connector enables ingestion of events from [Zeek](https://zeek.org/) and [Suricata](https://suricata-ids.org/) via Corelight Sensors into Microsoft Sentinel.
+The [Corelight](https://corelight.com/) data connector enables incident responders and threat hunters who use Microsoft Sentinel to work faster and more effectively. The data connector enables ingestion of events from [Zeek](https://zeek.org/) and [Suricata](https://suricata.io/) via Corelight Sensors into Microsoft Sentinel.
## Connector attributes
sentinel Crowdstrike Falcon Endpoint Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/crowdstrike-falcon-endpoint-protection.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
2. Forward CrowdStrike Falcon Event Stream logs to a Syslog agent
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
4. Secure your machine
sentinel Cyberark Enterprise Password Vault Epv Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cyberark-enterprise-password-vault-epv-events.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
2. Forward Common Event Format (CEF) logs to Syslog agent
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
4. Secure your machine
Make sure to configure the machine's security according to your organization's sec
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/cyberark.cyberark_epv_events_mss?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/cyberark.cyberark_epv_events_mss?tab=Overview) in the Azure Marketplace.
sentinel Delinea Secret Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/delinea-secret-server.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
2. Forward Common Event Format (CEF) logs to Syslog agent
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
4. Secure your machine
sentinel Exabeam Advanced Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/exabeam-advanced-analytics.md
Configure the custom log directory to be collected
3. Configure Exabeam event forwarding to Syslog
-[Follow these instructions](https://docs.exabeam.com/en/advanced-analytics/i54/advanced-analytics-administration-guide/113254-configure-advanced-analytics.html#UUID-7ce5ff9d-56aa-93f0-65de-c5255b682a08) to send Exabeam Advanced Analytics activity log data via syslog.
+[Follow these instructions](https://docs.exabeam.com/en/advanced-analytics/i56/advanced-analytics-administration-guide/125371-configure-advanced-analytics.html#UUID-6d28da8d-6d3e-5aa7-7c12-e67dc804f894) to send Exabeam Advanced Analytics activity log data via syslog.
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-exabeamadvancedanalytics?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-exabeamadvancedanalytics?tab=Overview) in the Azure Marketplace.
sentinel Extrahop Reveal X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/extrahop-reveal-x.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
2. Forward ExtraHop Networks logs to Syslog agent
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
4. Secure your machine
sentinel F5 Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/f5-networks.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
2. Forward Common Event Format (CEF) logs to Syslog agent
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
4. Secure your machine
sentinel Fireeye Network Security Nx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/fireeye-network-security-nx.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
2. Configure FireEye NX to send logs using CEF
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
4. Secure your machine
sentinel Forcepoint Casb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/forcepoint-casb.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
2. Forward Common Event Format (CEF) logs to Syslog agent
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
4. Secure your machine
sentinel Forcepoint Csg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/forcepoint-csg.md
This integration requires the Linux Syslog agent to collect your Forcepoint Clou
Your Data Connector Syslog Agent Installation Command is:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
2. Implementation options
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
4. Secure your machine
Make sure to configure the machine's security according to your organization's s
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/microsoftsentinelcommunity.azure-sentinel-solution-forcepoint-csg?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/microsoftsentinelcommunity.azure-sentinel-solution-forcepoint-csg?tab=Overview) in the Azure Marketplace.
sentinel Forcepoint Ngfw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/forcepoint-ngfw.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
2. Forward Common Event Format (CEF) logs to Syslog agent
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
4. Secure your machine
sentinel Fortinet Fortiweb Web Application Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/fortinet-fortiweb-web-application-firewall.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
2. Forward Common Event Format (CEF) logs to Syslog agent
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
4. Secure your machine
sentinel Fortinet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/fortinet.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
2. Forward Fortinet logs to Syslog agent
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
4. Secure your machine
Make sure to configure the machine's security according to your organization's s
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-fortinetfortigate?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-fortinetfortigate?tab=Overview) in the Azure Marketplace.
sentinel Iboss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/iboss.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
2. Forward Common Event Format (CEF) logs
sentinel Illumio Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/illumio-core.md
# Illumio Core connector for Microsoft Sentinel
-The [Illumio Core](https://www.illumio.com/products/core) data connector provides the capability to ingest Illumio Core logs into Microsoft Sentinel.
+The [Illumio Core](https://www.illumio.com/products/illumio-core) data connector provides the capability to ingest Illumio Core logs into Microsoft Sentinel.
## Connector attributes
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
2. Configure Illumio Core to send logs using CEF
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
4. Secure your machine
sentinel Illusive Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/illusive-platform.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
2. Forward Illusive Common Event Format (CEF) logs to Syslog agent
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
4. Secure your machine
sentinel Infoblox Cloud Data Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/infoblox-cloud-data-connector.md
InfobloxCDC
## Vendor installation instructions
->**IMPORTANT:** This data connector depends on a parser based on a Kusto Function to work as expected called [**InfobloxCDC**](https://aka.ms/sentinel-InfobloxCloudDataConnector-parser) which is deployed with the Microsoft Sentinel Solution.
+>**IMPORTANT:** This data connector depends on a parser based on a Kusto Function to work as expected called [**InfobloxCDC**](https://raw.githubusercontent.com/sschuur/Azure-Sentinel/master/Solutions/Infoblox%20Cloud%20Data%20Connector/Parsers/InfobloxCDC.txt) which is deployed with the Microsoft Sentinel Solution.
>**IMPORTANT:** This Sentinel data connector assumes an Infoblox Data Connector host has already been created and configured in the Infoblox Cloud Services Portal (CSP). As the [**Infoblox Data Connector**](https://docs.infoblox.com/display/BloxOneThreatDefense/Deploying+the+Data+Connector+Solution) is a feature of BloxOne Threat Defense, access to an appropriate BloxOne Threat Defense subscription is required. See this [**quick-start guide**](https://www.infoblox.com/wp-content/uploads/infoblox-deployment-guide-data-connector.pdf) for more information and licensing requirements.
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
2. Configure Infoblox BloxOne to send Syslog data to the Infoblox Cloud Data Connector to forward to the Syslog agent
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
4. Secure your machine
sentinel Kaspersky Security Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/kaspersky-security-center.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
2. Configure Kaspersky Security Center to send logs using CEF
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
4. Secure your machine
sentinel Morphisec Utpp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/morphisec-utpp.md
Integrate vital insights from your security products with the Morphisec Data Con
| Connector attribute | Description | | | |
-| **Kusto function url** | https://aka.ms/sentinel-morphisecutpp-parser |
+| **Kusto function url** | https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/Morphisec/Parsers/Morphisec |
| **Log Analytics table(s)** | CommonSecurityLog (Morphisec)<br/> | | **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [Morphisec](https://support.morphisec.com/support/home) |
+| **Supported by** | [Morphisec](https://support.morphisec.com/hc) |
## Query samples
Morphisec
These queries and workbooks are dependent on Kusto functions based on Kusto to work as expected. Follow the steps to use the Kusto functions alias "Morphisec"
-in queries and workbooks. [Follow steps to get this Kusto function.](https://aka.ms/sentinel-morphisecutpp-parser)
+in queries and workbooks. [Follow steps to get this Kusto function.](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/Morphisec/Parsers/Morphisec)
1. Linux Syslog agent configuration
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
2. Forward Common Event Format (CEF) logs to Syslog agent
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
4. Secure your machine
sentinel Netskope Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/netskope-using-azure-function.md
Netskope
To integrate with Netskope (using Azure Function) make sure you have: - **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).-- **Netskope API Token**: A Netskope API Token is required. [See the documentation to learn more about Netskope API](https://innovatechcloud.goskope.com/docs/Netskope_Help/en/rest-api-v1-overview.html). **Note:** A Netskope account is required-
+- **Netskope API Token**: A Netskope API Token is required. [See the documentation to learn more about Netskope API](https://innovatechcloud.goskope.com). **Note:** A Netskope account is required.
## Vendor installation instructions
sentinel Netwrix Auditor Formerly Stealthbits Privileged Activity Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/netwrix-auditor-formerly-stealthbits-privileged-activity-manager.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
2. Configure Netwrix Auditor to send logs using CEF
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
4. Secure your machine
sentinel Nozomi Networks N2os https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/nozomi-networks-n2os.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
2. Forward Common Event Format (CEF) logs to Syslog agent
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
4. Secure your machine
sentinel Nxlog Aix Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/nxlog-aix-audit.md
The NXLog [AIX Audit](https://nxlog.co/documentation/nxlog-user-guide/im_aixaudi
| | | | **Log Analytics table(s)** | AIX_Audit_CL<br/> | | **Data collection rules support** | Not currently supported |
-| **Supported by** | [NXLog](https://nxlog.co/user?destination=node/add/support-ticket) |
+| **Supported by** | [NXLog](https://nxlog.co/community-forum/t/819-support-tickets) |
## Query samples
sentinel Nxlog Bsm Macos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/nxlog-bsm-macos.md
The NXLog [BSM](https://nxlog.co/documentation/nxlog-user-guide/im_bsm.html) mac
| | | | **Log Analytics table(s)** | BSMmacOS_CL<br/> | | **Data collection rules support** | Not currently supported |
-| **Supported by** | [NXLog](https://nxlog.co/user?destination=node/add/support-ticket) |
+| **Supported by** | [NXLog](https://nxlog.co/community-forum/t/819-support-tickets) |
## Query samples
sentinel Nxlog Dns Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/nxlog-dns-logs.md
The NXLog DNS Logs data connector uses Event Tracing for Windows ([ETW](/windows
| | | | **Log Analytics table(s)** | NXLog_DNS_Server_CL<br/> | | **Data collection rules support** | Not currently supported |
-| **Supported by** | [NXLog](https://nxlog.co/user?destination=node/add/support-ticket) |
+| **Supported by** | [NXLog](https://nxlog.co/community-forum/t/819-support-tickets) |
## Query samples
Follow the step-by-step instructions in the *NXLog User Guide* Integration Topic
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/nxlogltd1589381969261.nxlog_dns_logs?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/nxlogltd1589381969261.nxlog_dns_logs?tab=Overview) in the Azure Marketplace.
sentinel Nxlog Linuxaudit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/nxlog-linuxaudit.md
The NXLog [LinuxAudit](https://nxlog.co/documentation/nxlog-user-guide/im_linuxa
| | | | **Log Analytics table(s)** | LinuxAudit_CL<br/> | | **Data collection rules support** | Not currently supported |
-| **Supported by** | [NXLog](https://nxlog.co/user?destination=node/add/support-ticket) |
+| **Supported by** | [NXLog](https://nxlog.co/community-forum/t/819-support-tickets) |
## Query samples
sentinel Ossec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/ossec.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
2. Forward Common Event Format (CEF) logs to Syslog agent
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
4. Secure your machine
sentinel Palo Alto Networks Cortex Data Lake Cdl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/palo-alto-networks-cortex-data-lake-cdl.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
2. Configure Cortex Data Lake to forward logs to a Syslog Server using CEF
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
4. Secure your machine
sentinel Palo Alto Networks Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/palo-alto-networks-firewall.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
2. Forward Palo Alto Networks logs to Syslog agent
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
4. Secure your machine
sentinel Pingfederate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/pingfederate.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
2. Forward Common Event Format (CEF) logs to Syslog agent
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
4. Secure your machine
sentinel Rsa Securid Authentication Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/rsa-securid-authentication-manager.md
RSASecurIDAMEvent
> [!NOTE]
- > This data connector depends on a parser based on a Kusto Function to work as expected [**RSASecurIDAMEvent**](https://aka.ms/sentinel-rsasecuridam-parser) which is deployed with the Microsoft Sentinel Solution.
+ > This data connector depends on a parser based on a Kusto Function to work as expected [**RSASecurIDAMEvent**](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/RSA%20SecurID/Parsers/RSASecurIDAMEvent.txt) which is deployed with the Microsoft Sentinel Solution.
> [!NOTE]
sentinel Senservapro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/senservapro.md
The SenservaPro data connector provides a viewing experience for your SenservaPr
| | | | **Log Analytics table(s)** | SenservaPro_CL<br/> | | **Data collection rules support** | Not currently supported |
-| **Supported by** | [Senserva](https://www.senserva.com/contact/) |
+| **Supported by** | [Senserva](https://www.senserva.com/support/) |
## Query samples
let timeframe = 14d;
1. Setup the data connection
-Visit [Senserva Setup](https://www.senserva.com/senserva-setup/) for information on setting up the Senserva data connection, support, or any other questions. The Senserva installation will configure a Log Analytics Workspace for output. Deploy Microsoft Sentinel onto the configured Log Analytics Workspace to finish the data connection setup by following [this onboarding guide.](/azure/sentinel/quickstart-onboard)
-
+Visit [Senserva Setup](https://www.senserva.com/portal/) for information on setting up the Senserva data connection, support, or any other questions. The Senserva installation will configure a Log Analytics Workspace for output. Deploy Microsoft Sentinel onto the configured Log Analytics Workspace to finish the data connection setup by following [this onboarding guide.](/azure/sentinel/quickstart-onboard)
sentinel Sonicwall Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/sonicwall-firewall.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
2. Forward SonicWall Firewall Common Event Format (CEF) logs to Syslog agent
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
4. Secure your machine
sentinel Tenable Io Vulnerability Management Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/tenable-io-vulnerability-management-using-azure-function.md
Tenable_IO_Assets_CL
To integrate with Tenable.io Vulnerability Management (using Azure Function) make sure you have: - **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).-- **REST API Credentials/permissions**: Both a **TenableAccessKey** and a **TenableSecretKey** is required to access the Tenable REST API. [See the documentation to learn more about API](https://developer.tenable.com/reference#vulnerability-management). Check all [requirements and follow the instructions](https://docs.tenable.com/tenableio/vulnerabilitymanagement/Content/Settings/GenerateAPIKey.htm) for obtaining credentials.
+- **REST API Credentials/permissions**: Both a **TenableAccessKey** and a **TenableSecretKey** is required to access the Tenable REST API. [See the documentation to learn more about API](https://developer.tenable.com/reference#vulnerability-management). Check all [requirements and follow the instructions](https://docs.tenable.com/tenableio/Content/Platform/Settings/MyAccount/GenerateAPIKey.htm) for obtaining credentials.
## Vendor installation instructions
To integrate with Tenable.io Vulnerability Management (using Azure Function) mak
**STEP 1 - Configuration steps for Tenable.io**
- [Follow the instructions](https://docs.tenable.com/tenableio/vulnerabilitymanagement/Content/Settings/GenerateAPIKey.htm) to obtain the required API credentials.
+ [Follow the instructions](https://docs.tenable.com/tenableio/Content/Platform/Settings/MyAccount/GenerateAPIKey.htm) to obtain the required API credentials.
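Before wiring the keys into the connector, it can help to confirm they work against the Tenable REST API directly. The following is a minimal sketch only, not part of the vendor instructions; the `X-ApiKeys` header format and the `https://cloud.tenable.com/assets` endpoint are assumptions to be checked against the Tenable developer documentation linked above.

```bash
# Hypothetical sanity check for the TenableAccessKey / TenableSecretKey pair.
# Replace the placeholder values; the endpoint and header format are assumptions.
TENABLE_ACCESS_KEY="<your-access-key>"
TENABLE_SECRET_KEY="<your-secret-key>"

curl -s \
  -H "X-ApiKeys: accessKey=${TENABLE_ACCESS_KEY}; secretKey=${TENABLE_SECRET_KEY}" \
  "https://cloud.tenable.com/assets" | head -c 300
```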
sentinel Threat Intelligence Upload Indicators Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/threat-intelligence-upload-indicators-api.md
Follow These Steps to Connect to your Threat Intelligence:
1. Get AAD Access Token
-To send request to the APIs, you need to acquire Azure Active Directory access token. You can follow instruction in this page: https://learn.microsoft.com/azure/databricks/dev-tools/api/latest/aad/app-aad-token#get-an-azure-ad-access-token
+To send requests to the APIs, you need to acquire an Azure Active Directory access token. You can follow the instructions on this page: [Get Azure AD tokens for users by using MSAL](/azure/databricks/dev-tools/api/latest/aad/app-aad-token#get-an-azure-ad-access-token)
- Notice: Please request the AAD access token with scope value `https://management.azure.com/.default`
2. Send indicators to Sentinel
You can send indicators by calling our Upload Indicators API. For more informati
>HTTP method: POST
->Endpoint: https://apis.sentinelus.net/[WorkspaceID]/threatintelligence:upload-indicators?api-version=2022-07-01
+>Endpoint: `https://apis.sentinelus.net/[WorkspaceID]/threatintelligence:upload-indicators?api-version=2022-07-01`
>WorkspaceID: the workspace that the indicators are uploaded to.
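As a rough illustration of the two steps above, the sketch below acquires a token with the Azure CLI and posts to the endpoint shown. Treat it as a hedged example rather than the official walkthrough: the workspace ID is a placeholder, and the exact request body schema (a `sourcesystem` name plus an `indicators` array of STIX objects) is an assumption to verify against the Upload Indicators API reference.

```bash
# Sketch only: acquire an Azure AD token (scope https://management.azure.com/.default)
# and call the Upload Indicators endpoint. Body fields are assumptions; consult the API reference.
WORKSPACE_ID="<your-workspace-id>"

TOKEN=$(az account get-access-token \
          --resource "https://management.azure.com" \
          --query accessToken --output tsv)

curl -X POST \
  "https://apis.sentinelus.net/${WORKSPACE_ID}/threatintelligence:upload-indicators?api-version=2022-07-01" \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"sourcesystem": "MyThreatIntelPlatform", "indicators": []}'   # add STIX indicator objects to the array
```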
sentinel Trend Micro Apex One https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/trend-micro-apex-one.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py
+ python cef_installer.py {0} {1}
2. Forward Common Event Format (CEF) logs to Syslog agent
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py
+ python cef_troubleshoot.py {0}
4. Secure your machine
sentinel Trend Micro Deep Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/trend-micro-deep-security.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py
+ python cef_installer.py {0} {1}
2. Forward Trend Micro Deep Security logs to Syslog agent
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py
+ python cef_troubleshoot.py {0}
4. Secure your machine
sentinel Trend Micro Tippingpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/trend-micro-tippingpoint.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py
+ python cef_installer.py {0} {1}
2. Forward Trend Micro TippingPoint SMS logs to Syslog agent
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py
+ python cef_troubleshoot.py {0}
4. Secure your machine
sentinel Varmour Application Controller https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/varmour-application-controller.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py
+ python cef_installer.py {0} {1}
2. Configure the vArmour Application Controller to forward Common Event Format (CEF) logs to the Syslog agent
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py
+ python cef_troubleshoot.py {0}
4. Secure your machine
sentinel Vectra Ai Detect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/vectra-ai-detect.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py
+ python cef_installer.py {0} {1}
2. Forward AI Vectra Detect logs to Syslog agent in CEF format
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py
+ python cef_troubleshoot.py {0}
4. Secure your machine
sentinel Vmware Vcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/vmware-vcenter.md
vCenter
**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected, which is deployed as part of the solution. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, click **Functions**, search for the alias **VMware vCenter**, and load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/VMware%20vCenter/Parsers/vCenter.txt). On the second line of the query, enter the hostname(s) of your VMware vCenter device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update.
-> 1. If you have not installed the vCenter solution from ContentHub then [Follow the steps](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/VCenter/Parsers/vCenter.txt) to use the Kusto function alias, **vCenter**
+> 1. If you have not installed the vCenter solution from ContentHub then [Follow the steps](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/VMware%20vCenter/Parsers/vCenter.txt) to use the Kusto function alias, **vCenter**
1. Install and onboard the agent for Linux
sentinel Wirex Network Forensics Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/wirex-network-forensics-platform.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py
+ python cef_installer.py {0} {1}
2. Forward Common Event Format (CEF) logs to Syslog agent
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py
+ python cef_troubleshoot.py {0}
4. Secure your machine
sentinel Withsecure Elements Via Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/withsecure-elements-via-connector.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py
+ python cef_installer.py {0} {1}
For python3 use command below:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python3 cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py
+ python3 cef_installer.py {0} {1}
2. Forward data from WithSecure Elements Connector to Syslog agent
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py
+ python cef_troubleshoot.py {0}
For python3 use command below:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python3 cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py
+ python3 cef_troubleshoot.py {0}
4. Secure your machine
sentinel Zero Networks Segment Audit Function Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/zero-networks-segment-audit-function-using-azure-function.md
The [Zero Networks Segment](https://zeronetworks.com/product/) Audit data connec
| Connector attribute | Description |
| --- | --- |
| **Application settings** | APIToken<br/>WorkspaceID<br/>WorkspaceKey<br/>logAnalyticsUri (optional)<br/>uri<br/>tableName |
-| **Azure functions app code** | https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/ZeroNetworks/SegmentFunctionConnector/AzureFunction_ZeroNetworks_Segment_Audit.zip |
+| **Azure functions app code** | https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/ZeroNetworks/Data%20Connectors/SegmentFunctionConnector/AzureFunction_ZeroNetworks_Segment_Audit.zip |
| **Log Analytics table(s)** | ZNSegmentAudit_CL<br/> |
| **Data collection rules support** | Not currently supported |
| **Supported by** | [Zero Networks](https://zeronetworks.com) |
Use the following step-by-step instructions to deploy the Zero Networks Segment
> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-powershell#prerequisites) for Azure function development.
-1. Download the [Azure Function App](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/ZeroNetworks/SegmentFunctionConnector/AzureFunction_ZeroNetworks_Segment_Audit.zip) file. Extract archive to your local development computer.
+1. Download the [Azure Function App](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/ZeroNetworks/Data%20Connectors/SegmentFunctionConnector/AzureFunction_ZeroNetworks_Segment_Audit.zip) file. Extract archive to your local development computer.
2. Start VS Code. Choose File in the main menu and select Open Folder.
3. Select the top level folder from extracted files.
4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button.
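After the Function App is deployed, the application settings listed in the connector attributes table still need to be populated. A hedged Azure CLI sketch follows; the resource names and values are placeholders, and you can equally set them in the portal under **Configuration**.

```azurecli
# Sketch only: populate the application settings from the connector attributes table.
# All names and values below are placeholders.
az functionapp config appsettings set \
    --name <function-app-name> \
    --resource-group <resource-group-name> \
    --settings \
      APIToken=<zero-networks-api-token> \
      WorkspaceID=<log-analytics-workspace-id> \
      WorkspaceKey=<log-analytics-workspace-key> \
      uri=<zero-networks-api-uri> \
      tableName=<destination-table-name>
```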
sentinel Zoom Reports Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/zoom-reports-using-azure-function.md
# Zoom Reports (using Azure Function) connector for Microsoft Sentinel
-The [Zoom](https://zoom.us/) Reports data connector provides the capability to ingest [Zoom Reports](https://marketplace.zoom.us/docs/api-reference/zoom-api/reports/) events into Microsoft Sentinel through the REST API. Refer to [API documentation](https://marketplace.zoom.us/docs/api-reference/introduction) for more information. The connector provides ability to get events which helps to examine potential security risks, analyze your team's use of collaboration, diagnose configuration problems and more.
+The [Zoom](https://zoom.us/) Reports data connector provides the capability to ingest [Zoom Reports](https://developers.zoom.us/docs/api/) events into Microsoft Sentinel through the REST API. Refer to the [API documentation](https://developers.zoom.us/docs/api/) for more information. The connector provides the ability to get events, which helps you examine potential security risks, analyze your team's use of collaboration, diagnose configuration problems, and more.
## Connector attributes
Zoom
To integrate with Zoom Reports (using Azure Function) make sure you have:
- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
-- **REST API Credentials/permissions**: **ZoomApiKey** and **ZoomApiSecret** are required for Zoom API. [See the documentation to learn more about API](https://marketplace.zoom.us/docs/guides/auth/jwt). Check all [requirements and follow the instructions](https://marketplace.zoom.us/docs/guides/auth/jwt) for obtaining credentials.
+- **REST API Credentials/permissions**: **ZoomApiKey** and **ZoomApiSecret** are required for Zoom API. [See the documentation to learn more about API](https://developers.zoom.us/docs/internal-apps/jwt/). Check all [requirements and follow the instructions](https://developers.zoom.us/docs/internal-apps/jwt/) for obtaining credentials.
## Vendor installation instructions
To integrate with Zoom Reports (using Azure Function) make sure you have:
**STEP 1 - Configuration steps for the Zoom API**
- [Follow the instructions](https://marketplace.zoom.us/docs/guides/auth/jwt) to obtain the credentials.
+ [Follow the instructions](https://developers.zoom.us/docs/internal-apps/jwt/) to obtain the credentials.
sentinel Zscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/zscaler.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py
+ python cef_installer.py {0} {1}
2. Forward Common Event Format (CEF) logs to Syslog agent
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py
+ python cef_troubleshoot.py {0}
4. Secure your machine
sentinel Forward Syslog Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/forward-syslog-monitor-agent.md
In Microsoft Sentinel or Azure Monitor, verify that the Azure Monitor agent is r
```kusto
Heartbeat
- | where Computer == "vm-ubuntu"
+ | where Computer == "vm-linux"
| take 10
```
After you configured your linux-based device to send logs to your VM, verify tha
```kusto
Syslog
- | where Computer == "vm-ubuntu"
+ | where Computer == "vm-linux"
| summarize by HostName
```
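If the Syslog table stays empty, one quick check (an assumption on my part, not part of the official steps) is to emit a test message from the forwarder VM with `logger` and rerun the query above after a few minutes; the facility you use must be one that your data collection rule collects.

```bash
# Sketch only: generate a test syslog event on the forwarder VM.
# local4/warning is an example facility/severity; match it to your data collection rule.
logger -p local4.warning "Azure Monitor agent syslog forwarding test $(date)"
```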
site-recovery Vmware Azure Troubleshoot Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-troubleshoot-replication.md
description: This article provides troubleshooting information for common replic
Previously updated : 04/20/2023 Last updated : 04/26/2023
This article describes some common issues and specific errors you might encounte
Site Recovery uses the [process server](vmware-physical-azure-config-process-server-overview.md#process-server) to receive and optimize replicated data, and send it to Azure.
-We recommend that you monitor the health of process servers in portal, to ensure that they're connected and working properly, and that replication is progressing for the source machines that are associated with the process server.
+We recommend that you monitor the health of process servers in the portal to ensure that they are connected and working properly, and that replication is progressing for the source machines that are associated with the process server.
- [Learn about](vmware-physical-azure-monitor-process-server.md) monitoring process servers.
-- [Review best practices](vmware-physical-azure-troubleshoot-process-server.md#best-practices-for-process-server-deployment)
+- [Review best practices](vmware-physical-azure-troubleshoot-process-server.md#best-practices-for-process-server-deployment)
- [Troubleshoot](vmware-physical-azure-troubleshoot-process-server.md#check-process-server-health) process server health.
## Step 2: Troubleshoot connectivity and replication issues
-Initial and ongoing replication failures often are caused by connectivity issues between the source server and the process server or between the process server and Azure.
+Connectivity issues between the source server and the process server or between the process server and Azure often cause initial and ongoing replication failures.
To solve these issues, [troubleshoot connectivity and replication](vmware-physical-azure-troubleshoot-process-server.md#check-connectivity-and-replication).
To solve these issues, [troubleshoot connectivity and replication](vmware-physic
When you try to select the source machine to enable replication by using Site Recovery, the machine might not be available for one of the following reasons:
-* **Two virtual machines with same instance UUID**: If two virtual machines under the vCenter have the same instance UUID, the first virtual machine discovered by the configuration server is shown in the Azure portal. To resolve this issue, ensure that no two virtual machines have the same instance UUID. This scenario is commonly seen in instances where a backup VM becomes active and is logged into our discovery records. Refer to [Azure Site Recovery VMware-to-Azure: How to clean up duplicate or stale entries](https://social.technet.microsoft.com/wiki/contents/articles/32026.asr-vmware-to-azure-how-to-cleanup-duplicatestale-entries.aspx) to resolve.
+* **Two virtual machines with the same instance UUID**: If two virtual machines under the vCenter have the same instance UUID, the first virtual machine discovered by the configuration server is displayed in the Azure portal. To resolve this issue, ensure that no two virtual machines have the same instance UUID. This scenario is commonly seen in instances where a backup VM becomes active and is logged into our discovery records. Refer to [Azure Site Recovery VMware-to-Azure: How to clean up duplicate or stale entries](https://social.technet.microsoft.com/wiki/contents/articles/32026.asr-vmware-to-azure-how-to-cleanup-duplicatestale-entries.aspx) to resolve.
* **Incorrect vCenter user credentials**: Ensure that you added the correct vCenter credentials when you set up the configuration server by using the OVF template or unified setup. To verify the credentials that you added during setup, see [Modify credentials for automatic discovery](vmware-azure-manage-configuration-server.md#modify-credentials-for-automatic-discovery).
* **vCenter insufficient privileges**: If the account provided to access vCenter doesn't have the required permissions, discovery of virtual machines might fail. Ensure that the permissions described in [Prepare an account for automatic discovery](vmware-azure-tutorial-prepare-on-premises.md#prepare-an-account-for-automatic-discovery) are added to the vCenter user account.
-* **Azure Site Recovery management servers**: If the virtual machine is used as management server under one or more of the following roles - Configuration server /scale-out process server / Master target server, then you won't be able to choose the virtual machine from portal. Managements servers can't be replicated.
+* **Azure Site Recovery management servers**: If the virtual machine is used as a management server under one or more of the following roles - Configuration server /scale-out process server / Master target server, then you won't be able to choose the virtual machine from the portal. Management servers cannot be replicated.
* **Already protected/failed over through Azure Site Recovery services**: If the virtual machine is already protected or failed over through Site Recovery, the virtual machine isn't available to select for protection in the portal. Ensure that the virtual machine you're looking for in the portal isn't already protected by any other user or under a different subscription.
-* **vCenter not connected**: Check if vCenter is in connected state. To verify, go to Recovery Services vault > Site Recovery Infrastructure > Configuration Servers > Click on respective configuration server > a blade opens on your right with details of associated servers. Check if vCenter is connected. If it's in a "*Not Connected*" state, resolve the issue, and then [refresh the configuration server](vmware-azure-manage-configuration-server.md#refresh-configuration-server) on the portal. After this, the virtual machine is listed on the portal.
-* **ESXi powered off**: If ESXi host under which the virtual machine resides is in powered off state, then virtual machine isn't listed or cannot be selected on the Azure portal. Power on the ESXi host, [refresh the configuration server](vmware-azure-manage-configuration-server.md#refresh-configuration-server) on the portal. After this, the virtual machine is listed on the portal.
-* **Pending reboot**: If there's a pending reboot on the virtual machine, then you won't be able to select the machine on Azure portal. Ensure to complete the pending reboot activities, [refresh the configuration server](vmware-azure-manage-configuration-server.md#refresh-configuration-server). After this, the virtual machine is listed on the portal.
-* **IP not found or Machine does not have IP address**: If the virtual machine doesn't have a valid IP address associated with it, then you won't be able to select the machine on Azure portal. Ensure to assign a valid IP address to the virtual machine, [refresh the configuration server](vmware-azure-manage-configuration-server.md#refresh-configuration-server). It could also be caused if the machine does not have a valid IP address associated with one of its NICs. Either assign a valid IP address to all NICs or remove the NIC that's missing the IP. After this, virtual machine will be listed on the portal.
-
+* **vCenter not connected**: Check if vCenter is in a connected state. To verify, go to Recovery Services vault > Site Recovery Infrastructure > Configuration Servers, and select the respective configuration server. A blade opens on the right with details of the associated servers. Check if vCenter is connected. If it's in a "Not Connected" state, resolve the issue, and then [refresh the configuration server](vmware-azure-manage-configuration-server.md#refresh-configuration-server) on the portal. After this, the virtual machine is listed on the portal.
+* **ESXi powered off**: If the ESXi host under which the virtual machine resides is in a powered-off state, then the virtual machine is not listed or is not selectable on the Azure portal. Power on the ESXi host, and [refresh the configuration server](vmware-azure-manage-configuration-server.md#refresh-configuration-server) on the portal. After this, the virtual machine is listed on the portal.
+* **Pending reboot**: If there is a pending reboot on the virtual machine, then you won't be able to select the machine on the Azure portal. Complete the pending reboot activities and [refresh the configuration server](vmware-azure-manage-configuration-server.md#refresh-configuration-server). After this, the virtual machine is listed on the portal.
+* **IP not found or machine does not have an IP address**: If the virtual machine doesn't have a valid IP address associated with it, you can't select the machine on the Azure portal. Assign a valid IP address to the virtual machine, and [refresh the configuration server](vmware-azure-manage-configuration-server.md#refresh-configuration-server). This issue can also occur if the machine doesn't have a valid IP address associated with one of its NICs. Either assign a valid IP address to all NICs or remove the NIC that's missing the IP. After this, the virtual machine is listed on the portal.
### Troubleshoot protected virtual machines greyed out in the portal
Virtual machines that are replicated under Site Recovery aren't available in the Azure portal if there are duplicate entries in the system. [Learn more](https://social.technet.microsoft.com/wiki/contents/articles/32026.asr-vmware-to-azure-how-to-cleanup-duplicatestale-entries.aspx) about deleting stale entries and resolving the issue.
-Another reason could be that the machine was cloned. When machines move between hypervisor and if BIOS ID changes, then the mobility agent blocks replication. Site Recovery doesn't support replication of cloned machines.
-## No crash consistent recovery point available for the VM in the last 'XXX' minutes
+Another reason could be that the machine was cloned. When machines move between hypervisors and the BIOS ID changes, the mobility agent blocks replication. Site Recovery doesn't support replication of cloned machines.
+
+## No crash-consistent recovery point available for the VM in the last 'XXX' minutes
-The following is a list of some of the common issues:
+The following is a list of some of the most common issues:
### Initial replication issues [error 78169]
-Over an above ensuring that there are no connectivity, bandwidth or time sync related issues, ensure that:
+In addition to ensuring that there are no connectivity, bandwidth, or time sync-related issues, ensure that:
- No anti-virus software is blocking Azure Site Recovery. Learn [more](vmware-azure-set-up-source.md#azure-site-recovery-folder-exclusions-from-antivirus-program) on folder exclusions required for Azure Site Recovery.
### Source machines with high churn [error 78188]
-**Possible causes**:
--- The data change rate (write bytes/sec) on the listed disks of the virtual machine is more than the [Azure Site Recovery supported limits](site-recovery-vmware-deployment-planner-analyze-report.md#azure-site-recovery-limits) for the replication target storage account type.-- There's a sudden spike in the churn rate due to which high amount of data is pending for upload.
-**To resolve the issue**:
+**Possible Causes:**
-- Ensure that the target storage account type (Standard or Premium) is provisioned as per the churn rate requirement at source.-- If you're already replicating to a Premium managed disk (asrseeddisk type), ensure that the size of the disk supports the observed churn rate as per Site Recovery limits. You can increase the size of the asrseeddisk if necessary. Follow the given steps:
+- The data change rate (write bytes/sec) on the listed disks of the virtual machine is more than the [Azure Site Recovery supported limits](site-recovery-vmware-deployment-planner-analyze-report.md#azure-site-recovery-limits) for the replication target storage account type.
+- There is a sudden spike in the churn rate due to which a high amount of data is pending upload.
+**To resolve the issue:**
+- Ensure that the target storage account type (Standard or Premium) is provisioned as per the churn rate requirement at the source.
+- If you are already replicating to a Premium managed disk (asrseeddisk type), ensure that the size of the disk supports the observed churn rate as per Site Recovery limits. You can increase the size of the asrseeddisk if necessary. Follow these steps:
- Navigate to the Disks blade of the impacted replicated machine and copy the replica disk name
- Navigate to this replica managed disk
- - You may see a banner on the Overview blade saying that a SAS URL has been generated. Click on this banner and cancel the export. Ignore this step if you don't see the banner.
- - As soon as the SAS URL is revoked, go to Configuration blade of the Managed Disk and increase the size so that Azure Site Recovery supports the observed churn rate on source disk
-- If the observed churn is temporary, wait for a few hours for the pending data upload to catch up and to create recovery points.-- If the disk contains noncritical data like temporary logs, test data etc., consider moving this data elsewhere or completely exclude this disk from replication
+ - You may see a banner on the Overview blade saying that a SAS URL has been generated. Click on this banner and cancel the export. Ignore this step if you do not see the banner.
+ - As soon as the SAS URL is revoked, go to the **Configuration** blade of the Managed Disk and increase the size so that Azure Site Recovery supports the observed churn rate on the source disk.
+- If the observed churn is temporary, wait for a few hours for the pending data upload to catch up and create recovery points.
+- If the disk contains non-critical data like temporary logs, test data, etc., consider moving this data elsewhere or completely excluding this disk from replication.
+ - If the problem persists, use the Site Recovery [deployment planner](site-recovery-deployment-planner.md#overview) to help plan replication.
### Source machines with no heartbeat [error 78174]
To resolve the issue, use the following steps to verify the network connectivity
In case there's no heartbeat from the Process Server (PS), check that:
1. PS VM is up and running
-2. Check following logs on the PS for error details:
+2. Check the following logs on the PS for error details:
*C:\ProgramData\ASR\home\svsystems\eventmanager\*.log*\ and\
To resolve the issue, use the following steps to verify the service status:
- Check the logs at the location for error details: *C:\Program Files (X86)\Microsoft Azure Site Recovery\agent\svagents\*.log*
-3. To register master target with configuration server, navigate to folder **%PROGRAMDATA%\ASR\Agent**, and run the following on command prompt:
+3. To register the master target with the configuration server, navigate to folder **%PROGRAMDATA%\ASR\Agent**, and run the following on command prompt:
```cmd
cdpcli.exe --registermt
To resolve the issue, you can associate the policy with the configuration server
Enhancements have been made in mobility agent [9.23](vmware-physical-mobility-service-overview.md#mobility-service-agent-version-923-and-higher) & [9.27](site-recovery-whats-new.md#update-rollup-39) versions to handle VSS installation failure behaviors. Ensure that you're on the latest versions for best guidance on troubleshooting VSS failures.
-The following is a list of the most common issues:
+Some of the most common issues are listed below:
#### Cause 1: Known issue in SQL Server 2008/2008 R2
-**How to fix** : There's a known issue with SQL server 2008/2008 R2. Refer this KB article [Azure Site Recovery Agent or other noncomponent VSS backup fails for a server hosting SQL Server 2008 R2](https://support.microsoft.com/help/4504103/non-component-vss-backup-fails-for-server-hosting-sql-server-2008-r2)
+**How to fix**: There is a known issue with SQL Server 2008/2008 R2. Refer to this KB article: [Azure Site Recovery Agent or other non-component VSS backup fails for a server hosting SQL Server 2008 R2](https://support.microsoft.com/help/4504103/non-component-vss-backup-fails-for-server-hosting-sql-server-2008-r2)
#### Cause 2: Azure Site Recovery jobs fail on servers hosting any version of SQL Server instances with AUTO_CLOSE DBs
+**How to fix**: Refer to Kb [article](https://support.microsoft.com/help/4504104/non-component-vss-backups-such-as-azure-site-recovery-jobs-fail-on-ser)
+ **How to fix** : Refer KB [article](https://support.microsoft.com/help/4504104/non-component-vss-backups-such-as-azure-site-recovery-jobs-fail-on-ser) #### Cause 3: Known issue in SQL Server 2016 and 2017
-**How to fix** : Refer KB [article](https://support.microsoft.com/help/4493364/fix-error-occurs-when-you-back-up-a-virtual-machine-with-non-component)
+**How to fix**: Refer to Kb [article](https://support.microsoft.com/help/4493364/fix-error-occurs-when-you-back-up-a-virtual-machine-with-non-component)
#### Cause 4: App-Consistency not enabled on Linux servers
-**How to fix** : Azure Site Recovery for Linux Operation System supports application custom scripts for app-consistency. The custom script with pre and post options is used by the Azure Site Recovery Mobility Agent for app-consistency. [Here](./site-recovery-faq.yml) are the steps to enable it.
+**How to fix**: Azure Site Recovery for the Linux operating system supports application custom scripts for app-consistency. The custom script with pre and post options is used by the Azure Site Recovery Mobility Agent for app-consistency. [Here](./site-recovery-faq.yml) are the steps to enable it.
### More causes due to VSS related issues:
Search for the string "vacpError" by opening the vacp.log file in an editor
`Ex: `**`vacpError`**`:220#Following disks are in FilteringStopped state [\\.\PHYSICALDRIVE1=5, ]#220|^|224#FAILED: CheckWriterStatus().#2147754994|^|226#FAILED to revoke tags.FAILED: CheckWriterStatus().#2147754994|^|`
-In the given example **2147754994** is the error code that tells you about the failure as follows:
+In the above example, **2147754994** is the error code that tells you about the failure, as shown below:
#### VSS writer is not installed - Error 2147221164
-*How to fix*: To generate application consistency tag, Azure Site Recovery uses Microsoft Volume Shadow copy Service (VSS). It installs a VSS Provider for its operation to take app consistency snapshots. This VSS Provider is installed as a service. In case the VSS Provider service isn't installed, the application consistency snapshot creation fails with the error ID 0x80040154 "Class not registered". </br>
+
+*How to fix*: To generate an application consistency tag, Azure Site Recovery uses Microsoft Volume Shadow copy Service (VSS). It installs a VSS Provider for its operation to take app consistency snapshots. This VSS Provider is installed as a service. In case the VSS Provider service is not installed, the application consistency snapshot creation fails with the error ID 0x80040154 "Class not registered". </br>
+ Refer [article for VSS writer installation troubleshooting](./vmware-azure-troubleshoot-push-install.md#vss-installation-failures) #### VSS writer is disabled - Error 2147943458
-**How to fix**: To generate application consistency tag, Azure Site Recovery uses Microsoft Volume Shadow copy Service (VSS). It installs a VSS Provider for its operation to take app consistency snapshots. This VSS Provider is installed as a service. In case the VSS Provider service is disabled, the application consistency snapshot creation fails with the error ID "The specified service is disabled and can't be started(0x80070422)". </br>
+**How to fix**: To generate an application consistency tag, Azure Site Recovery uses Microsoft Volume Shadow copy Service (VSS). It installs a VSS Provider for its operation to take app consistency snapshots. This VSS Provider is installed as a service. In case the VSS Provider service is disabled, the application consistency snapshot creation fails with the error ID "The specified service is disabled and cannot be started(0x80070422)". </br>
+ - If VSS is disabled, - Verify that the startup type of the VSS Provider service is set to **Automatic**.
Refer [article for VSS writer installation troubleshooting](./vmware-azure-troub
#### VSS PROVIDER NOT_REGISTERED - Error 2147754756
-**How to fix**: To generate application consistency tag, Azure Site Recovery uses Microsoft Volume Shadow copy Service (VSS).
+**How to fix**: To generate an application consistency tag, Azure Site Recovery uses Microsoft Volume Shadow copy Service (VSS).
Check if the Azure Site Recovery VSS Provider service is installed or not. </br> - Retry the Provider installation using the following commands:
Verify that the startup type of the VSS Provider service is set to **Automatic**
This error occurs when trying to enable replication and the application folders don't have enough permissions.
-**How to fix**: To resolve this issue, make sure the IUSR user has owner role for all the following folders -
+**How to fix**: To resolve this issue, make sure the IUSR user has the owner role for all the following folders -
+ - *C\ProgramData\Microsoft Azure Site Recovery\private*-- The installation directory. For example, if installation directory is F drive, then provide the correct permissions to -
+- The installation directory. For example, if the installation directory is F drive, then provide the correct permissions to:
- *F:\Program Files (x86)\Microsoft Azure Site Recovery\home\svsystems*-- The *\pushinstallsvc* folder in installation directory. For example, if installation directory is F drive, provide the correct permissions to -
+- The *\pushinstallsvc* folder in the installation directory. For example, if the installation directory is F drive, provide the correct permissions to -
- *F:\Program Files (x86)\Microsoft Azure Site Recovery\home\svsystems\pushinstallsvc*-- The *\etc* folder in installation directory. For example, if installation directory is F drive, provide the correct permissions to -
+- The *\etc* folder in the installation directory. For example, if the installation directory is F drive, provide the correct permissions to -
- *F:\Program Files (x86)\Microsoft Azure Site Recovery\home\svsystems\etc* - *C:\Temp* - *C:\thirdparty\php5nts*-- All the items under the below path -
+- All the items under the following path -
- *C:\thirdparty\rrdtool-1.2.15-win32-perl58\rrdtool\Release\**
## Troubleshoot and handle time changes on replicated servers
-This error occurs when the source machine's time moves forward and then moves back in short time, to correct the change. You may not notice the change as the time is corrected quickly.
+This error occurs when the source machine's time moves forward and then moves back in a short time, to correct the change. You may not notice the change as the time is corrected very quickly.
**How to fix**: To resolve this issue, wait until the system time crosses the skewed future time. Another option is to disable and enable replication once again, which is only feasible for forward replication (data replicated from on-premises to Azure) and isn't applicable for reverse replication (data replicated from Azure to on-premises).
## Next steps
-If you need more help, post your question in the [Microsoft Q&A question page for Azure Site Recovery](/answers/topics/azure-site-recovery.html). We have an active community, and one of our engineers can assist you.
+If you need more help, post your question on the [Microsoft Q&A question page for Azure Site Recovery](/answers/topics/azure-site-recovery.html). We have an active community, and one of our engineers can assist you.
spring-apps How To Bind Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-bind-cosmos.md
Instead of manually configuring your Spring Boot applications, you can automatic
#### Use the Azure CLI
-Use the Azure CLI to configure your Spring app to connect to a Cosmos SQL Database by using the `az spring connection create` command, as shown in the following example:
+Use the Azure CLI to configure your Spring app to connect to a Cosmos NoSQL Database by using the `az spring connection create` command, as shown in the following example. Be sure to replace the variables in the example with actual values.
> [!NOTE] > Updating Azure Cosmos DB database settings can take a few minutes to complete.
+> [!NOTE]
+> If you're using Cosmos Cassandra, use `--key_space` instead of `--database`. If you're using Cosmos Table, use `--table` instead of `--database`. For more information, see [Quickstart: Create a service connection in Azure Spring Apps with the Azure CLI](../service-connector/quickstart-cli-spring-cloud-connection.md).
+ ```azurecli
az spring connection create cosmos-sql \
    --resource-group $AZURE_SPRING_APPS_RESOURCE_GROUP \
    --service $AZURE_SPRING_APPS_SERVICE_INSTANCE_NAME \
    --app $APP_NAME \
- --deployment $DEPLOYMENT_NAME \
    --target-resource-group $COSMOSDB_RESOURCE_GROUP \
    --account $COSMOSDB_ACCOUNT_NAME \
    --database $DATABASE_NAME \
az spring connection create cosmos-sql \
> [!NOTE] > If you're using [Service Connector](../service-connector/overview.md) for the first time, start by running the command `az provider register --namespace Microsoft.ServiceLinker` to register the Service Connector resource provider.
->
-> If you're using Cosmos Cassandra, use a `--key_space` instead of `--database`.
> [!TIP] > Run the command `az spring connection list-support-types --output table` to get a list of supported target services and authentication methods for Azure Spring Apps. If the `az spring` command isn't recognized by the system, check that you have installed the required extension by running `az extension add --name spring`.
spring-apps How To Bind Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-bind-mysql.md
With Azure Spring Apps, you can connect selected Azure services to your applicat
### [Service Connector](#tab/Service-Connector)
-Follow these steps to configure your Spring app to connect to an Azure Database for MySQL Flexible Server with a system-assigned managed identity.
+Follow these steps to configure your Spring app to connect to an Azure Database for MySQL Flexible Server with database username and password.
-1. Install the Service Connector passwordless extension for the Azure CLI.
-
- ```azurecli
- az extension add --name serviceconnector-passwordless --upgrade
- ```
-
-1. Run the `az spring connection create` command, as shown in the following example.
+1. Run the `az spring connection create` command, as shown in the following example. Be sure to replace the variables in the example with actual values.
```azurecli
az spring connection create mysql-flexible \
    --resource-group $AZURE_SPRING_APPS_RESOURCE_GROUP \
    --service $AZURE_SPRING_APPS_SERVICE_INSTANCE_NAME \
    --app $APP_NAME \
- --deployment $DEPLOYMENT_NAME \
    --target-resource-group $MYSQL_RESOURCE_GROUP \
    --server $MYSQL_SERVER_NAME \
    --database $DATABASE_NAME \
- --system-identity mysql-identity-id=$AZ_IDENTITY_RESOURCE_ID
+ --secret name=$DATABASE_USERNAME secret=$DATABASE_PASSWORD
```
+
+> [!NOTE]
+> Alternatively, you can use a system-assigned identity for a passwordless experience. For more information, see [Deploy a Spring application to Azure Spring Apps with a passwordless connection to an Azure database](/azure/developer/java/spring-framework/deploy-passwordless-spring-database-app).
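For reference, the passwordless variant that this note points to looked like the following in the previous version of this article. Treat it as a sketch: it assumes the Service Connector passwordless extension is installed and that `$AZ_IDENTITY_RESOURCE_ID` points to the managed identity to use.

```azurecli
# Sketch of the passwordless (system-assigned managed identity) variant.
az extension add --name serviceconnector-passwordless --upgrade

az spring connection create mysql-flexible \
    --resource-group $AZURE_SPRING_APPS_RESOURCE_GROUP \
    --service $AZURE_SPRING_APPS_SERVICE_INSTANCE_NAME \
    --app $APP_NAME \
    --target-resource-group $MYSQL_RESOURCE_GROUP \
    --server $MYSQL_SERVER_NAME \
    --database $DATABASE_NAME \
    --system-identity mysql-identity-id=$AZ_IDENTITY_RESOURCE_ID
```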
### [Terraform](#tab/Terraform)
spring-apps Tutorial Managed Identities Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/tutorial-managed-identities-functions.md
Previously updated : 07/10/2020 Last updated : 04/24/2023
# Tutorial: Use a managed identity to invoke Azure Functions from an Azure Spring Apps app
Both Azure Functions and App Services have built in support for Azure Active Dir
- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
- [Azure CLI](/cli/azure/install-azure-cli) version 2.45.0 or higher.
- [Apache Maven](https://maven.apache.org/download.cgi) version 3.0 or higher.
-- [Install the Azure Functions Core Tools](../azure-functions/functions-run-local.md#install-the-azure-functions-core-tools) version 3.0.2009 or higher.
+- [Install the Azure Functions Core Tools](../azure-functions/functions-run-local.md#install-the-azure-functions-core-tools) version 4.x.
## Create a resource group
az functionapp create \
    --os-type windows \
    --runtime node \
    --storage-account <storage-account-name> \
- --functions-version 3
+ --functions-version 4
```
Make a note of the returned `hostNames` value, which is in the format *https://\<your-functionapp-name>.azurewebsites.net*. Use this value in the Function app's root URL for testing the Function app.
Use the following steps to enable Azure Active Directory authentication to acces
1. In the navigation pane, select **Authentication** and then select **Add identity provider** on the main pane.
1. On the **Add an identity provider** page, select **Microsoft** from the **Identity provider** dropdown menu.
- :::image type="content" source="media/spring-cloud-tutorial-managed-identities-functions/add-identity-provider.png" alt-text="Screenshot of the Azure portal showing the Add an identity provider page with Microsoft highlighted in the identity provider dropdown menu." lightbox="media/spring-cloud-tutorial-managed-identities-functions/add-identity-provider.png":::
+ :::image type="content" source="media/tutorial-managed-identities-functions/add-identity-provider.png" alt-text="Screenshot of the Azure portal showing the Add an identity provider page with Microsoft highlighted in the identity provider dropdown menu." lightbox="media/tutorial-managed-identities-functions/add-identity-provider.png":::
1. Select **Add**.
1. For the **Basics** settings on the **Add an identity provider** page, set **Supported account types** to **Any Azure AD directory - Multi-tenant**.
1. Set **Unauthenticated requests** to **HTTP 401 Unauthorized: recommended for APIs**. This setting ensures that all unauthenticated requests are denied (401 response).
- :::image type="content" source="media/spring-cloud-tutorial-managed-identities-functions/identity-provider-settings.png" alt-text="Screenshot of the Azure portal showing the settings page for adding an identity provider. This page highlights the 'supported account types' setting set to the 'Any Azure AD directory Multi tenant' option and also highlights the 'Unauthenticated requests' setting set to the 'HTTP 401 Unauthorized recommended for APIs' option." lightbox="media/spring-cloud-tutorial-managed-identities-functions/identity-provider-settings.png":::
+ :::image type="content" source="media/tutorial-managed-identities-functions/identity-provider-settings.png" alt-text="Screenshot of the Azure portal showing the Add an identity provider page with Support account types and Unauthenticated requests highlighted." lightbox="media/tutorial-managed-identities-functions/identity-provider-settings.png":::
1. Select **Add**.
After you add the settings, the Function app restarts and all subsequent requests are prompted to sign in through Azure AD. You can test that unauthenticated requests are currently being rejected with the Function app's root URL (returned in the `hostNames` output of the `az functionapp create` command). You should then be redirected to your organization's Azure Active Directory sign-in screen.
+You need the Application ID and the Application ID URI for later use. In the Azure portal, navigate to the Function app you created.
+
+To get the Application ID, select **Authentication** in the navigation pane, and then copy the **App (client) ID** value for the identity provider that includes the name of the Function app.
++
+To get the Application ID URI, select **Expose an API** in the navigation pane, and then copy the **Application ID URI** value.
## Create an HTTP triggered function
In an empty local directory, use the following commands to create a new function app and add an HTTP triggered function.
This sample invokes the HTTP triggered function by first requesting an access to
```text
azure.function.uri=https://<function-app-name>.azurewebsites.net
azure.function.triggerPath=httptrigger
+ azure.function.application-id.uri=<function-app-application-ID-uri>
```
1. Use the following command to package your sample app.
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md
Title: Configure Azure Storage firewalls and virtual networks
-description: Configure layered network security for your storage account using Azure Storage firewalls and Azure Virtual Network.
+description: Configure layered network security for your storage account by using Azure Storage firewalls and Azure Virtual Network.
# Configure Azure Storage firewalls and virtual networks
-Azure Storage provides a layered security model. This model enables you to secure and control the level of access to your storage accounts that your applications and enterprise environments demand, based on the type and subset of networks or resources used. When network rules are configured, only applications requesting data over the specified set of networks or through the specified set of Azure resources can access a storage account. You can limit access to your storage account to requests originating from specified IP addresses, IP ranges, subnets in an Azure Virtual Network (VNet), or resource instances of some Azure services.
+Azure Storage provides a layered security model. This model enables you to control the level of access to your storage accounts that your applications and enterprise environments demand, based on the type and subset of networks or resources that you use.
-Storage accounts have a public endpoint that is accessible through the internet. You can also create [Private Endpoints for your storage account](storage-private-endpoints.md), which assigns a private IP address from your VNet to the storage account, and secures all traffic between your VNet and the storage account over a private link. The Azure storage firewall provides access control for the public endpoint of your storage account. You can also use the firewall to block all access through the public endpoint when using private endpoints. Your storage firewall configuration also enables select trusted Azure platform services to access the storage account securely.
+When you configure network rules, only applications that request data over the specified set of networks or through the specified set of Azure resources can access a storage account. You can limit access to your storage account to requests that come from specified IP addresses, IP ranges, subnets in an Azure virtual network, or resource instances of some Azure services.
-An application that accesses a storage account when network rules are in effect still requires proper authorization for the request. Authorization is supported with Azure Active Directory (Azure AD) credentials for blobs and queues, with a valid account access key, or with an SAS token. When a blob container is configured for anonymous public access, requests to read data in that container do not need to be authorized, but the firewall rules remain in effect and will block anonymous traffic.
+Storage accounts have a public endpoint that's accessible through the internet. You can also create [private endpoints for your storage account](storage-private-endpoints.md). Creating private endpoints assigns a private IP address from your virtual network to the storage account. It helps secure traffic between your virtual network and the storage account over a private link.
-> [!IMPORTANT]
-> Turning on firewall rules for your storage account blocks incoming requests for data by default, unless the requests originate from a service operating within an Azure Virtual Network (VNet) or from allowed public IP addresses. Requests that are blocked include those from other Azure services, from the Azure portal, from logging and metrics services, and so on.
->
-> You can grant access to Azure services that operate from within a VNet by allowing traffic from the subnet hosting the service instance. You can also enable a limited number of scenarios through the exceptions mechanism described below. To access data from the storage account through the Azure portal, you would need to be on a machine within the trusted boundary (either IP or VNet) that you set up.
+The Azure Storage firewall provides access control for the public endpoint of your storage account. You can also use the firewall to block all access through the public endpoint when you're using private endpoints. Your firewall configuration also enables trusted Azure platform services to access the storage account.
+
+An application that accesses a storage account when network rules are in effect still requires proper authorization for the request. Authorization is supported with Azure Active Directory (Azure AD) credentials for blobs and queues, with a valid account access key, or with a shared access signature (SAS) token. When you configure a blob container for anonymous public access, requests to read data in that container don't need to be authorized. The firewall rules remain in effect and will block anonymous traffic.
+
+Turning on firewall rules for your storage account blocks incoming requests for data by default, unless the requests originate from a service that operates within an Azure virtual network or from allowed public IP addresses. Requests that are blocked include those from other Azure services, from the Azure portal, and from logging and metrics services.
+
+You can grant access to Azure services that operate from within a virtual network by allowing traffic from the subnet that hosts the service instance. You can also enable a limited number of scenarios through the exceptions mechanism that this article describes. To access data from the storage account through the Azure portal, you need to be on a machine within the trusted boundary (either IP or virtual network) that you set up.
[!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
## Scenarios
-To secure your storage account, you should first configure a rule to deny access to traffic from all networks (including internet traffic) on the public endpoint, by default. Then, you should configure rules that grant access to traffic from specific VNets. You can also configure rules to grant access to traffic from selected public internet IP address ranges, enabling connections from specific internet or on-premises clients. This configuration enables you to build a secure network boundary for your applications.
+To secure your storage account, you should first configure a rule to deny access to traffic from all networks (including internet traffic) on the public endpoint, by default. Then, you should configure rules that grant access to traffic from specific virtual networks. You can also configure rules to grant access to traffic from selected public internet IP address ranges, enabling connections from specific internet or on-premises clients. This configuration helps you build a secure network boundary for your applications.
-You can combine firewall rules that allow access from specific virtual networks and from public IP address ranges on the same storage account. Storage firewall rules can be applied to existing storage accounts, or when creating new storage accounts.
+You can combine firewall rules that allow access from specific virtual networks and from public IP address ranges on the same storage account. You can apply storage firewall rules to existing storage accounts or when you create new storage accounts.
Storage firewall rules apply to the public endpoint of a storage account. You don't need any firewall access rules to allow traffic for private endpoints of a storage account. The process of approving the creation of a private endpoint grants implicit access to traffic from the subnet that hosts the private endpoint.
-Network rules are enforced on all network protocols for Azure storage, including REST and SMB. To access data using tools such as the Azure portal, Storage Explorer, and AzCopy, explicit network rules must be configured.
+Network rules are enforced on all network protocols for Azure Storage, including REST and SMB. To access data by using tools such as the Azure portal, Azure Storage Explorer, and AzCopy, you must configure explicit network rules.
-Once network rules are applied, they're enforced for all requests. SAS tokens that grant access to a specific IP address serve to limit the access of the token holder, but don't grant new access beyond configured network rules.
+After you apply network rules, they're enforced for all requests. SAS tokens that grant access to a specific IP address serve to limit the access of the token holder, but they don't grant new access beyond configured network rules.
-Virtual machine disk traffic (including mount and unmount operations, and disk IO) is not affected by network rules. REST access to page blobs is protected by network rules.
+Network rules don't affect virtual machine (VM) disk traffic, including mount and unmount operations and disk I/O. Network rules help protect REST access to page blobs.
-Classic storage accounts do not support firewalls and virtual networks.
+Classic storage accounts don't support firewalls and virtual networks.
-You can use unmanaged disks in storage accounts with network rules applied to back up and restore VMs by creating an exception. This process is documented in the [Manage Exceptions](#manage-exceptions) section of this article. Firewall exceptions aren't applicable with managed disks as they're already managed by Azure.
+You can use unmanaged disks in storage accounts with network rules applied to back up and restore VMs by creating an exception. The [Manage exceptions](#manage-exceptions) section of this article documents this process. Firewall exceptions aren't applicable with managed disks, because Azure already manages them.
## Change the default network access rule
-By default, storage accounts accept connections from clients on any network. You can limit access to selected networks **or** prevent traffic from all networks and permit access only through a [private endpoint](storage-private-endpoints.md).
+By default, storage accounts accept connections from clients on any network. You can limit access to selected networks *or* prevent traffic from all networks and permit access only through a [private endpoint](storage-private-endpoints.md).
-> [!WARNING]
-> Changing this setting can impact your application's ability to connect to Azure Storage. Make sure to grant access to any allowed networks or set up access through a [private endpoint](storage-private-endpoints.md) before you change this setting.
+You must set the default rule to **deny**, or network rules have no effect. However, changing this setting can affect your application's ability to connect to Azure Storage. Be sure to grant access to any allowed networks or set up access through a private endpoint before you change this setting.
### [Portal](#tab/azure-portal)
-1. Go to the storage account you want to secure.
+1. Go to the storage account that you want to secure.
-2. Locate the **Networking** settings under **Security + networking**.
+2. Locate the **Networking** settings under **Security + networking**.
-3. Choose which type of public network access you want to allow.
+3. Choose which type of public network access you want to allow:
- To allow traffic from all networks, select **Enabled from all networks**.
-
- - To allow traffic only from specific virtual networks, select **Enabled from selected virtual networks and IP addresses**.
-
+
+ - To allow traffic only from specific virtual networks, select **Enabled from selected virtual networks and IP addresses**.
+ - To block traffic from all networks, select **Disabled**. 4. Select **Save** to apply your changes.
By default, storage accounts accept connections from clients on any network. You
### [PowerShell](#tab/azure-powershell)
-1. Install the [Azure PowerShell](/powershell/azure/install-Az-ps) and [sign in](/powershell/azure/authenticate-azureps).
+1. Install [Azure PowerShell](/powershell/azure/install-Az-ps) and [sign in](/powershell/azure/authenticate-azureps).
-2. Choose which type of public network access you want to allow.
+2. Choose which type of public network access you want to allow:
- - To allow traffic from all networks, use the `Update-AzStorageAccountNetworkRuleSet` command, and set the `-DefaultAction` parameter to `Allow`.
+ - To allow traffic from all networks, use the `Update-AzStorageAccountNetworkRuleSet` command and set the `-DefaultAction` parameter to `Allow`:
```powershell Update-AzStorageAccountNetworkRuleSet -ResourceGroupName "myresourcegroup" -Name "mystorageaccount" -DefaultAction Allow ```
- - To allow traffic only from specific virtual networks, use the `Update-AzStorageAccountNetworkRuleSet` command and set the `-DefaultAction` parameter to `Deny`.
+ - To allow traffic only from specific virtual networks, use the `Update-AzStorageAccountNetworkRuleSet` command and set the `-DefaultAction` parameter to `Deny`:
- ```powershell
- Update-AzStorageAccountNetworkRuleSet -ResourceGroupName "myresourcegroup" -Name "mystorageaccount" -DefaultAction Deny
- ```
+ ```powershell
+ Update-AzStorageAccountNetworkRuleSet -ResourceGroupName "myresourcegroup" -Name "mystorageaccount" -DefaultAction Deny
+ ```
+
+ - To block traffic from all networks, use the `Set-AzStorageAccount` command and set the `-PublicNetworkAccess` parameter to `Disabled`. Traffic will be allowed only through a [private endpoint](storage-private-endpoints.md). You'll have to create that private endpoint.
- - To block traffic from all networks, use the `Set-AzStorageAccount` command and set the `-PublicNetworkAccess` parameter to `Disabled`. Traffic will be allowed only through a [private endpoint](storage-private-endpoints.md). You'll have to create that private endpoint.
-
- ```powershell
- Set-AzStorageAccount -ResourceGroupName "myresourcegroup" -Name "mystorageaccount" -PublicNetworkAccess Disabled
- ```
+ ```powershell
+ Set-AzStorageAccount -ResourceGroupName "myresourcegroup" -Name "mystorageaccount" -PublicNetworkAccess Disabled
+ ```
### [Azure CLI](#tab/azure-cli) 1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli).
-2. Choose which type of public network access you want to allow.
+2. Choose which type of public network access you want to allow:
- - To allow traffic from all networks, use the `az storage account update` command, and set the `--default-action` parameter to `Allow`.
+ - To allow traffic from all networks, use the `az storage account update` command and set the `--default-action` parameter to `Allow`:
```azurecli az storage account update --resource-group "myresourcegroup" --name "mystorageaccount" --default-action Allow ```
-
- - To allow traffic only from specific virtual networks, use the `az storage account update` command and set the `--default-action` parameter to `Deny`.
- ```azurecli
- az storage account update --resource-group "myresourcegroup" --name "mystorageaccount" --default-action Deny
- ```
+ - To allow traffic only from specific virtual networks, use the `az storage account update` command and set the `--default-action` parameter to `Deny`:
- - To block traffic from all networks, use the `az storage account update` command and set the `--public-network-access` parameter to `Disabled`. Traffic will be allowed only through a [private endpoint](storage-private-endpoints.md). You'll have to create that private endpoint.
+ ```azurecli
+ az storage account update --resource-group "myresourcegroup" --name "mystorageaccount" --default-action Deny
+ ```
+
+ - To block traffic from all networks, use the `az storage account update` command and set the `--public-network-access` parameter to `Disabled`. Traffic will be allowed only through a [private endpoint](storage-private-endpoints.md). You'll have to create that private endpoint.
- ```azurecli
- az storage account update --name MyStorageAccount --resource-group MyResourceGroup --public-network-access Disabled
- ```
+ ```azurecli
+ az storage account update --name MyStorageAccount --resource-group MyResourceGroup --public-network-access Disabled
+ ```
> [!CAUTION]
-> By design, access to a storage account from trusted services takes the highest precedence over other network access restrictions. For this reason, if you set **Public network access** to **Disabled** after previously setting it to **Enabled from selected virtual networks and IP addresses**, any [resource instances](#grant-access-from-azure-resource-instances) and [exceptions](#manage-exceptions) you had previously configured, including [Allow Azure services on the trusted services list to access this storage account](#grant-access-to-trusted-azure-services), will remain in effect. As a result, those resources and services may still have access to the storage account after setting **Public network access** to **Disabled**.
+> By design, access to a storage account from trusted services takes the highest precedence over other network access restrictions. If you set **Public network access** to **Disabled** after previously setting it to **Enabled from selected virtual networks and IP addresses**, any [resource instances](#grant-access-from-azure-resource-instances) and [exceptions](#manage-exceptions) that you previously configured, including [Allow Azure services on the trusted services list to access this storage account](#grant-access-to-trusted-azure-services), will remain in effect. As a result, those resources and services might still have access to the storage account.
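If you need to confirm what remains in effect after changing this setting, you can inspect the account's network configuration. The following Azure CLI sketch uses placeholder resource names, and the queried property names are assumptions based on current CLI output rather than values taken from this article:

```azurecli
# Check the public network access setting, firewall exceptions, and resource instance rules.
# Property names (publicNetworkAccess, networkRuleSet.bypass, networkRuleSet.resourceAccessRules)
# are assumed from current Azure CLI output and might differ across CLI versions.
az storage account show \
    --resource-group "myresourcegroup" \
    --name "mystorageaccount" \
    --query "{publicNetworkAccess:publicNetworkAccess, bypass:networkRuleSet.bypass, resourceAccessRules:networkRuleSet.resourceAccessRules}"
```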
## Grant access from a virtual network
-You can configure storage accounts to allow access only from specific subnets. The allowed subnets may belong to a VNet in the same subscription or a different subscription, including those belonging to a different Azure Active Directory tenant. With [cross-region service endpoints](#azure-storage-cross-region-service-endpoints), the allowed subnets can also be in different regions from the storage account.
+You can configure storage accounts to allow access only from specific subnets. The allowed subnets can belong to a virtual network in the same subscription or a different subscription, including those that belong to a different Azure AD tenant. With [cross-region service endpoints](#azure-storage-cross-region-service-endpoints), the allowed subnets can also be in different regions from the storage account.
-You can enable a [Service endpoint](../../virtual-network/virtual-network-service-endpoints-overview.md) for Azure Storage within the VNet. The service endpoint routes traffic from the VNet through an optimal path to the Azure Storage service. The identities of the subnet and the virtual network are also transmitted with each request. Administrators can then configure network rules for the storage account that allow requests to be received from specific subnets in a VNet. Clients granted access via these network rules must continue to meet the authorization requirements of the storage account to access the data.
+You can enable a [service endpoint](../../virtual-network/virtual-network-service-endpoints-overview.md) for Azure Storage within the virtual network. The service endpoint routes traffic from the virtual network through an optimal path to the Azure Storage service. The identities of the subnet and the virtual network are also transmitted with each request. Administrators can then configure network rules for the storage account that allow requests to be received from specific subnets in a virtual network. Clients granted access via these network rules must continue to meet the authorization requirements of the storage account to access the data.
-Each storage account supports up to 200 virtual network rules, which may be combined with [IP network rules](#grant-access-from-an-internet-ip-range).
+Each storage account supports up to 200 virtual network rules. You can combine these rules with [IP network rules](#grant-access-from-an-internet-ip-range).
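As a minimal sketch of combining both rule types on one account, assuming placeholder resource names and an example address range, you might run:

```azurecli
# Add a virtual network rule and an IP network rule to the same storage account.
# Resource names and the address range are placeholders.
subnetid=$(az network vnet subnet show --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --query id --output tsv)
az storage account network-rule add --resource-group "myresourcegroup" --account-name "mystorageaccount" --subnet $subnetid
az storage account network-rule add --resource-group "myresourcegroup" --account-name "mystorageaccount" --ip-address "16.17.18.0/24"
```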
> [!IMPORTANT]
-> If you delete a subnet that has been included in a network rule, it will be removed from the network rules for the storage account. If you create a new subnet by the same name, it will not have access to the storage account. To allow access, you must explicitly authorize the new subnet in the network rules for the storage account.
+> If you delete a subnet that's included in a network rule, it will be removed from the network rules for the storage account. If you create a new subnet by the same name, it won't have access to the storage account. To allow access, you must explicitly authorize the new subnet in the network rules for the storage account.
### Required permissions
-To apply a virtual network rule to a storage account, the user must have the appropriate permissions for the subnets being added. Applying a rule can be performed by a [Storage Account Contributor](../../role-based-access-control/built-in-roles.md#storage-account-contributor) or a user that has been given permission to the `Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action` [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftnetwork) via a custom Azure role.
+To apply a virtual network rule to a storage account, the user must have the appropriate permissions for the subnets that are being added. A [Storage Account Contributor](../../role-based-access-control/built-in-roles.md#storage-account-contributor), or a user who has been granted permission to the `Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action` [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftnetwork) through a custom Azure role, can apply a rule.
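For illustration, a hypothetical custom role could be limited to just that operation. The role name and subscription ID in the following Azure CLI sketch are placeholders, not values from this article:

```azurecli
# Hypothetical custom role that grants only the permission needed to join subnets
# to the Azure Storage service endpoint. Replace the name and subscription ID with your own values.
az role definition create --role-definition '{
    "Name": "Storage Network Rule Join (example)",
    "Description": "Can join subnets via the Azure Storage service endpoint.",
    "Actions": [ "Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action" ],
    "AssignableScopes": [ "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" ]
}'
```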
-Storage account and the virtual networks granted access may be in different subscriptions, including subscriptions that are a part of a different Azure AD tenant.
+The storage account and the virtual networks that get access can be in different subscriptions, including subscriptions that are a part of a different Azure AD tenant.
-> [!NOTE]
-> Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure Active Directory tenant are currently only supported through PowerShell, CLI and REST APIs. Such rules cannot be configured through the Azure portal, though they may be viewed in the portal.
+Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure AD tenant is currently supported only through PowerShell, the Azure CLI, and REST APIs. You can't configure such rules through the Azure portal, though you can view them in the portal.
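As a sketch of the CLI approach, you can pass the fully qualified resource ID of the other tenant's subnet directly to the network rule command. The ID and account names here are placeholders:

```azurecli
# Allow a subnet from a virtual network in another Azure AD tenant by supplying its fully qualified resource ID.
# Substitute the subscription, resource group, virtual network, and subnet values from the other tenant.
subnetid="/subscriptions/<subscription-ID>/resourceGroups/<resourceGroup-Name>/providers/Microsoft.Network/virtualNetworks/<vNet-name>/subnets/<subnet-name>"
az storage account network-rule add --resource-group "myresourcegroup" --account-name "mystorageaccount" --subnet $subnetid
```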
### Azure Storage cross-region service endpoints
-Cross-region service endpoints for Azure Storage became generally available in April of 2023. They work between virtual networks and storage service instances in any region. With cross-region service endpoints, subnets will no longer use a public IP address to communicate with any storage account, including those in another region. Instead, all the traffic from subnets to storage accounts will use a private IP address as a source IP. As a result, any storage accounts that use IP network rules to permit traffic from those subnets will no longer have an effect.
+Cross-region service endpoints for Azure Storage became generally available in April 2023. They work between virtual networks and storage service instances in any region. With cross-region service endpoints, subnets no longer use a public IP address to communicate with any storage account, including those in another region. Instead, all the traffic from subnets to storage accounts uses a private IP address as a source IP. As a result, IP network rules on storage accounts that permit traffic from those subnets no longer have any effect.
Configuring service endpoints between virtual networks and service instances in a [paired region](../../best-practices-availability-paired-regions.md) can be an important part of your disaster recovery plan. Service endpoints allow continuity during a regional failover and access to read-only geo-redundant storage (RA-GRS) instances. Network rules that grant access from a virtual network to a storage account also grant access to any RA-GRS instance.
-When planning for disaster recovery during a regional outage, you should create the VNets in the paired region in advance. Enable service endpoints for Azure Storage, with network rules granting access from these alternative virtual networks. Then apply these rules to your geo-redundant storage accounts.
+When you're planning for disaster recovery during a regional outage, you should create the virtual networks in the paired region in advance. Enable service endpoints for Azure Storage, with network rules granting access from these alternative virtual networks. Then apply these rules to your geo-redundant storage accounts.
-> [!IMPORTANT]
-> Local and cross-region service endpoints cannot coexist on the same subnet.
->
-> To replace existing service endpoints with cross-region ones, delete the existing **Microsoft.Storage** endpoints and recreate them as cross-region endpoints (**Microsoft.Storage.Global**).
+Local and cross-region service endpoints can't coexist on the same subnet. To replace existing service endpoints with cross-region ones, delete the existing `Microsoft.Storage` endpoints and re-create them as cross-region endpoints (`Microsoft.Storage.Global`).
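For example, the following Azure CLI sketch (placeholder names) re-creates a subnet's endpoint as a cross-region endpoint by setting the subnet's service endpoint list to `Microsoft.Storage.Global`:

```azurecli
# Replace the subnet's existing service endpoints with the cross-region storage endpoint.
# Passing only Microsoft.Storage.Global sets the subnet's service endpoint list to that value; resource names are placeholders.
az network vnet subnet update --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --service-endpoints "Microsoft.Storage.Global"
```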
### Managing virtual network rules
-You can manage virtual network rules for storage accounts through the Azure portal, PowerShell, or CLIv2.
+You can manage virtual network rules for storage accounts through the Azure portal, PowerShell, or the Azure CLI v2.
-> [!NOTE]
-> If you want to enable access to your storage account from a virtual network/subnet in another Azure AD tenant, you must use PowerShell or the Azure CLI. The Azure portal does not show subnets in other Azure AD tenants.
+If you want to enable access to your storage account from a virtual network or subnet in another Azure AD tenant, you must use PowerShell or the Azure CLI. The Azure portal does not show subnets in other Azure AD tenants.
#### [Portal](#tab/azure-portal)
-1. Go to the storage account you want to secure.
+1. Go to the storage account that you want to secure.
+
+2. Select **Networking**.
-2. Select on the settings menu called **Networking**.
+3. Check that you've chosen to allow access from **Selected networks**.
-3. Check that you've selected to allow access from **Selected networks**.
+4. To grant access to a virtual network by using a new network rule, under **Virtual networks**, select **Add existing virtual network**. Select the **Virtual networks** and **Subnets** options, and then select **Add**.
-4. To grant access to a virtual network with a new network rule, under **Virtual networks**, select **Add existing virtual network**, select **Virtual networks** and **Subnets** options, and then select **Add**. To create a new virtual network and grant it access, select **Add new virtual network**. Provide the information necessary to create the new virtual network, and then select **Create**.
+ To create a new virtual network and grant it access, select **Add new virtual network**. Provide the necessary information to create the new virtual network, and then select **Create**.
- > [!NOTE]
- > If a service endpoint for Azure Storage wasn't previously configured for the selected virtual network and subnets, you can configure it as part of this operation.
- >
- > Presently, only virtual networks belonging to the same Azure Active Directory tenant are shown for selection during rule creation. To grant access to a subnet in a virtual network belonging to another tenant, please use PowerShell, Azure CLI or REST APIs.
+ If a service endpoint for Azure Storage wasn't previously configured for the selected virtual network and subnets, you can configure it as part of this operation.
-5. To remove a virtual network or subnet rule, select **...** to open the context menu for the virtual network or subnet, and select **Remove**.
+ Presently, only virtual networks that belong to the same Azure AD tenant appear for selection during rule creation. To grant access to a subnet in a virtual network that belongs to another tenant, use PowerShell, the Azure CLI, or REST APIs.
-6. select **Save** to apply your changes.
+5. To remove a virtual network or subnet rule, select the ellipsis (**...**) to open the context menu for the virtual network or subnet, and then select **Remove**.
+
+6. Select **Save** to apply your changes.
#### [PowerShell](#tab/azure-powershell)
-1. Install the [Azure PowerShell](/powershell/azure/install-Az-ps) and [sign in](/powershell/azure/authenticate-azureps).
+1. Install [Azure PowerShell](/powershell/azure/install-Az-ps) and [sign in](/powershell/azure/authenticate-azureps).
-2. List virtual network rules.
+2. List virtual network rules:
```powershell (Get-AzStorageAccountNetworkRuleSet -ResourceGroupName "myresourcegroup" -AccountName "mystorageaccount").VirtualNetworkRules ```
-3. Enable service endpoint for Azure Storage on an existing virtual network and subnet.
+3. Enable a service endpoint for Azure Storage on an existing virtual network and subnet:
```powershell Get-AzVirtualNetwork -ResourceGroupName "myresourcegroup" -Name "myvnet" | Set-AzVirtualNetworkSubnetConfig -Name "mysubnet" -AddressPrefix "10.0.0.0/24" -ServiceEndpoint "Microsoft.Storage.Global" | Set-AzVirtualNetwork ```
-4. Add a network rule for a virtual network and subnet.
+4. Add a network rule for a virtual network and subnet:
```powershell $subnet = Get-AzVirtualNetwork -ResourceGroupName "myresourcegroup" -Name "myvnet" | Get-AzVirtualNetworkSubnetConfig -Name "mysubnet" Add-AzStorageAccountNetworkRule -ResourceGroupName "myresourcegroup" -Name "mystorageaccount" -VirtualNetworkResourceId $subnet.Id ```
- > [!TIP]
- > To add a network rule for a subnet in a VNet belonging to another Azure AD tenant, use a fully-qualified **VirtualNetworkResourceId** parameter in the form "/subscriptions/subscription-ID/resourceGroups/resourceGroup-Name/providers/Microsoft.Network/virtualNetworks/vNet-name/subnets/subnet-name".
+ To add a network rule for a subnet in a virtual network that belongs to another Azure AD tenant, use a fully qualified `VirtualNetworkResourceId` parameter in the form `/subscriptions/subscription-ID/resourceGroups/resourceGroup-Name/providers/Microsoft.Network/virtualNetworks/vNet-name/subnets/subnet-name`.
-5. Remove a network rule for a virtual network and subnet.
+5. Remove a network rule for a virtual network and subnet:
```powershell $subnet = Get-AzVirtualNetwork -ResourceGroupName "myresourcegroup" -Name "myvnet" | Get-AzVirtualNetworkSubnetConfig -Name "mysubnet" Remove-AzStorageAccountNetworkRule -ResourceGroupName "myresourcegroup" -Name "mystorageaccount" -VirtualNetworkResourceId $subnet.Id ```
-> [!IMPORTANT]
-> Be sure to [set the default rule](#change-the-default-network-access-rule) to **deny**, or network rules have no effect.
- #### [Azure CLI](#tab/azure-cli) 1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli).
-2. List virtual network rules.
+2. List virtual network rules:
```azurecli az storage account network-rule list --resource-group "myresourcegroup" --account-name "mystorageaccount" --query virtualNetworkRules ```
-3. Enable service endpoint for Azure Storage on an existing virtual network and subnet.
+3. Enable a service endpoint for Azure Storage on an existing virtual network and subnet:
```azurecli az network vnet subnet update --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --service-endpoints "Microsoft.Storage.Global" ```
-4. Add a network rule for a virtual network and subnet.
+4. Add a network rule for a virtual network and subnet:
```azurecli subnetid=$(az network vnet subnet show --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --query id --output tsv) az storage account network-rule add --resource-group "myresourcegroup" --account-name "mystorageaccount" --subnet $subnetid ```
- > [!TIP]
- > To add a rule for a subnet in a VNet belonging to another Azure AD tenant, use a fully-qualified subnet ID in the form "/subscriptions/\<subscription-ID\>/resourceGroups/\<resourceGroup-Name\>/providers/Microsoft.Network/virtualNetworks/\<vNet-name\>/subnets/\<subnet-name\>".
- >
- > You can use the **subscription** parameter to retrieve the subnet ID for a VNet belonging to another Azure AD tenant.
+ To add a rule for a subnet in a virtual network that belongs to another Azure AD tenant, use a fully qualified subnet ID in the form `/subscriptions/<subscription-ID>/resourceGroups/<resourceGroup-Name>/providers/Microsoft.Network/virtualNetworks/<vNet-name>/subnets/<subnet-name>`. You can use the `subscription` parameter to retrieve the subnet ID for a virtual network that belongs to another Azure AD tenant.
-5. Remove a network rule for a virtual network and subnet.
+5. Remove a network rule for a virtual network and subnet:
```azurecli subnetid=$(az network vnet subnet show --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --query id --output tsv) az storage account network-rule remove --resource-group "myresourcegroup" --account-name "mystorageaccount" --subnet $subnetid ```
-> [!IMPORTANT]
-> Be sure to [set the default rule](#change-the-default-network-access-rule) to **deny**, or network rules have no effect.
- ## Grant access from an internet IP range
-You can use IP network rules to allow access from specific public internet IP address ranges by creating IP network rules. Each storage account supports up to 200 rules. These rules grant access to specific internet-based services and on-premises networks and blocks general internet traffic.
+You can allow access from specific public internet IP address ranges by creating IP network rules. Each storage account supports up to 200 rules. These rules grant access to specific internet-based services and on-premises networks and block general internet traffic.
-The following restrictions apply to IP address ranges.
+The following restrictions apply to IP address ranges:
-- IP network rules are allowed only for **public internet** IP addresses.
+- IP network rules are allowed only for *public internet* IP addresses.
- IP address ranges reserved for private networks (as defined in [RFC 1918](https://tools.ietf.org/html/rfc1918#section-3)) aren't allowed in IP rules. Private networks include addresses that start with *10.**, *172.16.** - *172.31.**, and *192.168.**.
+ IP address ranges reserved for private networks (as defined in [RFC 1918](https://tools.ietf.org/html/rfc1918#section-3)) aren't allowed in IP rules. Private networks include addresses that start with 10, 172.16 to 172.31, and 192.168.
-- You must provide allowed internet address ranges using [CIDR notation](https://tools.ietf.org/html/rfc4632) in the form *16.17.18.0/24* or as individual IP addresses like *16.17.18.19*.
+- You must provide allowed internet address ranges by using [CIDR notation](https://tools.ietf.org/html/rfc4632) in the form 16.17.18.0/24 or as individual IP addresses like 16.17.18.19.
-- Small address ranges using "/31" or "/32" prefix sizes are not supported. These ranges should be configured using individual IP address rules.
+- Small address ranges that use /31 or /32 prefix sizes are not supported. Configure these ranges by using individual IP address rules.
-- Only IPV4 addresses are supported for configuration of storage firewall rules.
+- Only IPv4 addresses are supported for configuration of storage firewall rules.
-IP network rules can't be used in the following cases:
+You can't use IP network rules in the following cases:
- To restrict access to clients in the same Azure region as the storage account.
- IP network rules have no effect on requests originating from the same Azure region as the storage account. Use [Virtual network rules](#grant-access-from-a-virtual-network) to allow same-region requests.
+ IP network rules have no effect on requests that originate from the same Azure region as the storage account. Use [Virtual network rules](#grant-access-from-a-virtual-network) to allow same-region requests.
-- To restrict access to clients in a [paired region](../../availability-zones/cross-region-replication-azure.md) which are in a VNet that has a service endpoint.
+- To restrict access to clients in a [paired region](../../availability-zones/cross-region-replication-azure.md) that are in a virtual network that has a service endpoint.
- To restrict access to Azure services deployed in the same region as the storage account.
- Services deployed in the same region as the storage account use private Azure IP addresses for communication. Thus, you can't restrict access to specific Azure services based on their public outbound IP address range.
+ Services deployed in the same region as the storage account use private Azure IP addresses for communication. So, you can't restrict access to specific Azure services based on their public outbound IP address range.
### Configuring access from on-premises networks
-To grant access from your on-premises networks to your storage account with an IP network rule, you must identify the internet facing IP addresses used by your network. Contact your network administrator for help.
+To grant access from your on-premises networks to your storage account by using an IP network rule, you must identify the internet-facing IP addresses that your network uses. Contact your network administrator for help.
-If you are using [ExpressRoute](../../expressroute/expressroute-introduction.md) from your premises, for public peering or Microsoft peering, you will need to identify the NAT IP addresses that are used. For public peering, each ExpressRoute circuit by default uses two NAT IP addresses applied to Azure service traffic when the traffic enters the Microsoft Azure network backbone. For Microsoft peering, the NAT IP addresses used are either customer provided or are provided by the service provider. To allow access to your service resources, you must allow these public IP addresses in the resource IP firewall setting. To find your public peering ExpressRoute circuit IP addresses, [open a support ticket with ExpressRoute](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) via the Azure portal. Learn more about [NAT for ExpressRoute public and Microsoft peering.](../../expressroute/expressroute-nat.md#nat-requirements-for-azure-public-peering)
+If you're using [Azure ExpressRoute](../../expressroute/expressroute-introduction.md) from your premises, for public peering or Microsoft peering, you need to identify the NAT IP addresses that are used. For public peering, each ExpressRoute circuit (by default) uses two NAT IP addresses applied to Azure service traffic when the traffic enters the Microsoft Azure network backbone. For Microsoft peering, either the service provider or the customer provides the NAT IP addresses.
+
+To allow access to your service resources, you must allow these public IP addresses in the firewall setting for resource IPs. To find your IP addresses for public-peering ExpressRoute circuits, [open a support ticket with ExpressRoute](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) via the Azure portal. [Learn more about NAT for ExpressRoute public peering and Microsoft peering](../../expressroute/expressroute-nat.md#nat-requirements-for-azure-public-peering).
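After you identify those NAT IP addresses, you can add them as IP network rules. The following Azure CLI sketch uses placeholder addresses; substitute the values that you identify for your circuit:

```azurecli
# Allow the NAT IP addresses reported for an ExpressRoute circuit.
# The addresses below are placeholders, not real circuit values.
az storage account network-rule add --resource-group "myresourcegroup" --account-name "mystorageaccount" --ip-address "203.0.113.10"
az storage account network-rule add --resource-group "myresourcegroup" --account-name "mystorageaccount" --ip-address "203.0.113.11"
```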
### Managing IP network rules
-You can manage IP network rules for storage accounts through the Azure portal, PowerShell, or CLIv2.
+You can manage IP network rules for storage accounts through the Azure portal, PowerShell, or the Azure CLI v2.
#### [Portal](#tab/azure-portal)
-1. Go to the storage account you want to secure.
+1. Go to the storage account that you want to secure.
-2. Select on the settings menu called **Networking**.
+2. Select **Networking**.
-3. Check that you've selected to allow access from **Selected networks**.
+3. Check that you've chosen to allow access from **Selected networks**.
4. To grant access to an internet IP range, enter the IP address or address range (in CIDR format) under **Firewall** > **Address Range**.
-5. To remove an IP network rule, select the trash can icon next to the address range.
+5. To remove an IP network rule, select the delete icon (:::image type="icon" source="media/storage-network-security/delete-icon.png":::) next to the address range.
6. Select **Save** to apply your changes. #### [PowerShell](#tab/azure-powershell)
-1. Install the [Azure PowerShell](/powershell/azure/install-Az-ps) and [sign in](/powershell/azure/authenticate-azureps).
+1. Install [Azure PowerShell](/powershell/azure/install-Az-ps) and [sign in](/powershell/azure/authenticate-azureps).
-2. List IP network rules.
+2. List IP network rules:
```powershell (Get-AzStorageAccountNetworkRuleSet -ResourceGroupName "myresourcegroup" -AccountName "mystorageaccount").IPRules ```
-3. Add a network rule for an individual IP address.
+3. Add a network rule for an individual IP address:
```powershell Add-AzStorageAccountNetworkRule -ResourceGroupName "myresourcegroup" -AccountName "mystorageaccount" -IPAddressOrRange "16.17.18.19" ```
-4. Add a network rule for an IP address range.
+4. Add a network rule for an IP address range:
```powershell Add-AzStorageAccountNetworkRule -ResourceGroupName "myresourcegroup" -AccountName "mystorageaccount" -IPAddressOrRange "16.17.18.0/24" ```
-5. Remove a network rule for an individual IP address.
+5. Remove a network rule for an individual IP address:
```powershell Remove-AzStorageAccountNetworkRule -ResourceGroupName "myresourcegroup" -AccountName "mystorageaccount" -IPAddressOrRange "16.17.18.19" ```
-6. Remove a network rule for an IP address range.
+6. Remove a network rule for an IP address range:
```powershell Remove-AzStorageAccountNetworkRule -ResourceGroupName "myresourcegroup" -AccountName "mystorageaccount" -IPAddressOrRange "16.17.18.0/24" ```
-> [!IMPORTANT]
-> Be sure to [set the default rule](#change-the-default-network-access-rule) to **deny**, or network rules have no effect.
- #### [Azure CLI](#tab/azure-cli) 1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli).
-1. List IP network rules.
+1. List IP network rules:
```azurecli az storage account network-rule list --resource-group "myresourcegroup" --account-name "mystorageaccount" --query ipRules ```
-2. Add a network rule for an individual IP address.
+1. Add a network rule for an individual IP address:
```azurecli az storage account network-rule add --resource-group "myresourcegroup" --account-name "mystorageaccount" --ip-address "16.17.18.19" ```
-3. Add a network rule for an IP address range.
+1. Add a network rule for an IP address range:
```azurecli az storage account network-rule add --resource-group "myresourcegroup" --account-name "mystorageaccount" --ip-address "16.17.18.0/24" ```
-4. Remove a network rule for an individual IP address.
+1. Remove a network rule for an individual IP address:
```azurecli az storage account network-rule remove --resource-group "myresourcegroup" --account-name "mystorageaccount" --ip-address "16.17.18.19" ```
-5. Remove a network rule for an IP address range.
+1. Remove a network rule for an IP address range:
```azurecli az storage account network-rule remove --resource-group "myresourcegroup" --account-name "mystorageaccount" --ip-address "16.17.18.0/24" ```
-> [!IMPORTANT]
-> Be sure to [set the default rule](#change-the-default-network-access-rule) to **deny**, or network rules have no effect.
- <a id="grant-access-specific-instances"></a> ## Grant access from Azure resource instances
-In some cases, an application might depend on Azure resources that cannot be isolated through a virtual network or an IP address rule. However, you'd still like to secure and restrict storage account access to only your application's Azure resources. You can configure storage accounts to allow access to specific resource instances of some Azure services by creating a resource instance rule.
+In some cases, an application might depend on Azure resources that can't be isolated through a virtual network or an IP address rule. But you still want to secure and restrict storage account access to only your application's Azure resources. You can configure storage accounts to allow access to specific resource instances of some Azure services by creating a resource instance rule.
-The types of operations that a resource instance can perform on storage account data is determined by the Azure role assignments of the resource instance. Resource instances must be from the same tenant as your storage account, but they can belong to any subscription in the tenant.
+The Azure role assignments of the resource instance determine the types of operations that a resource instance can perform on storage account data. Resource instances must be from the same tenant as your storage account, but they can belong to any subscription in the tenant.
### [Portal](#tab/azure-portal)
-You can add or remove resource network rules in the Azure portal.
+You can add or remove resource network rules in the Azure portal:
-1. Sign in to the [Azure portal](https://portal.azure.com/) to get started.
+1. Sign in to the [Azure portal](https://portal.azure.com/).
2. Locate your storage account and display the account overview.
-3. Select **Networking** to display the configuration page for networking.
+3. Select **Networking**.
-4. Under **Firewalls and virtual networks**, for **Selected networks**, select to allow access.
+4. Under **Firewalls and virtual networks**, for **Selected networks**, select the option to allow access.
-5. Scroll down to find **Resource instances**, and in the **Resource type** dropdown list, choose the resource type of your resource instance.
+5. Scroll down to find **Resource instances**. In the **Resource type** dropdown list, select the resource type of your resource instance.
-6. In the **Instance name** dropdown list, choose the resource instance. You can also choose to include all resource instances in the active tenant, subscription, or resource group.
+6. In the **Instance name** dropdown list, select the resource instance. You can also choose to include all resource instances in the active tenant, subscription, or resource group.
-7. Select **Save** to apply your changes. The resource instance appears in the **Resource instances** section of the network settings page.
+7. Select **Save** to apply your changes. The resource instance appears in the **Resource instances** section of the page for network settings.
To remove the resource instance, select the delete icon (:::image type="icon" source="media/storage-network-security/delete-icon.png":::) next to the resource instance.
To remove the resource instance, select the delete icon (:::image type="icon" so
You can use PowerShell commands to add or remove resource network rules.
-> [!IMPORTANT]
-> Be sure to [set the default rule](#change-the-default-network-access-rule) to **deny**, or network rules have no effect.
- #### Grant access
-Add a network rule that grants access from a resource instance.
+Add a network rule that grants access from a resource instance:
```powershell $resourceId = "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myResourceGroup/providers/Microsoft.DataFactory/factories/myDataFactory"
Add-AzStorageAccountNetworkRule -ResourceGroupName $resourceGroupName -Name $acc
```
-Specify multiple resource instances at once by modifying the network rule set.
+Specify multiple resource instances at once by modifying the network rule set:
```powershell $resourceId1 = "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myResourceGroup/providers/Microsoft.DataFactory/factories/myDataFactory"
Update-AzStorageAccountNetworkRuleSet -ResourceGroupName $resourceGroupName -Nam
#### Remove access
-Remove a network rule that grants access from a resource instance.
+Remove a network rule that grants access from a resource instance:
```powershell $resourceId = "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myResourceGroup/providers/Microsoft.DataFactory/factories/myDataFactory"
$accountName = "mystorageaccount"
Remove-AzStorageAccountNetworkRule -ResourceGroupName $resourceGroupName -Name $accountName -TenantId $tenantId -ResourceId $resourceId ```
-Remove all network rules that grant access from resource instances.
+Remove all network rules that grant access from resource instances:
```powershell $resourceGroupName = "myResourceGroup"
Update-AzStorageAccountNetworkRuleSet -ResourceGroupName $resourceGroupName -Nam
#### View a list of allowed resource instances
-View a complete list of resource instances that have been granted access to the storage account.
+View a complete list of resource instances that have access to the storage account:
```powershell $resourceGroupName = "myResourceGroup"
You can use Azure CLI commands to add or remove resource network rules.
#### Grant access
-Add a network rule that grants access from a resource instance.
+Add a network rule that grants access from a resource instance:
```azurecli az storage account network-rule add \
az storage account network-rule add \
#### Remove access
-Remove a network rule that grants access from a resource instance.
+Remove a network rule that grants access from a resource instance:
```azurecli az storage account network-rule remove \
az storage account network-rule remove \
#### View a list of allowed resource instances
-View a complete list of resource instances that have been granted access to the storage account.
+View a complete list of resource instances that have access to the storage account:
```azurecli az storage account network-rule list \
az storage account network-rule list \
## Grant access to trusted Azure services
-Some Azure services operate from networks that can't be included in your network rules. You can grant a subset of such trusted Azure services access to the storage account, while maintaining network rules for other apps. These trusted services will then use strong authentication to securely connect to your storage account.
+Some Azure services operate from networks that you can't include in your network rules. You can grant a subset of such trusted Azure services access to the storage account, while maintaining network rules for other apps. These trusted services will then use strong authentication to connect to your storage account.
-You can grant access to trusted Azure services by creating a network rule exception. For step-by-step guidance, see the [Manage exceptions](#manage-exceptions) section of this article.
-
-When you grant access to trusted Azure services, you grant the following types of access:
--- Trusted access for select operations to resources that are registered in your subscription.-- Trusted access to resources based on a managed identity.
+You can grant access to trusted Azure services by creating a network rule exception. The [Manage exceptions](#manage-exceptions) section of this article provides step-by-step guidance.
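As a minimal sketch, assuming placeholder resource names, the trusted-services exception corresponds to the `AzureServices` bypass value in the Azure CLI:

```azurecli
# Keep the default action set to deny while allowing trusted Azure services through the firewall.
# Resource names are placeholders; Logging and Metrics are separate exception values that you can also include if needed.
az storage account update --resource-group "myresourcegroup" --name "mystorageaccount" --default-action Deny --bypass AzureServices
```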
<a id="trusted-access-resources-in-subscription"></a> ### Trusted access for resources registered in your subscription
-Resources of some services, **when registered in your subscription**, can access your storage account **in the same subscription** for select operations, such as writing logs or backup. The following table describes each service and the operations allowed.
+Resources of some services that are registered in your subscription can access your storage account *in the same subscription* for selected operations, such as writing logs or running backups. The following table describes each service and the allowed operations.
-| Service | Resource Provider Name | Operations allowed |
+| Service | Resource provider name | Allowed operations |
|: |:-- |:- |
-| Azure Backup | Microsoft.RecoveryServices | Run backups and restores of unmanaged disks in IAAS virtual machines. (not required for managed disks). [Learn more](../../backup/backup-overview.md). |
-| Azure Data Box | Microsoft.DataBox | Enables import of data to Azure using Data Box. [Learn more](../../databox/data-box-overview.md). |
-| Azure DevTest Labs | Microsoft.DevTestLab | Custom image creation and artifact installation. [Learn more](../../devtest-labs/devtest-lab-overview.md). |
-| Azure Event Grid | Microsoft.EventGrid | Enable Blob Storage event publishing and allow Event Grid to publish to storage queues. Learn about [blob storage events](../../event-grid/overview.md#event-sources) and [publishing to queues](../../event-grid/event-handlers.md). |
-| Azure Event Hubs | Microsoft.EventHub | Archive data with Event Hubs Capture. [Learn More](../../event-hubs/event-hubs-capture-overview.md). |
-| Azure File Sync | Microsoft.StorageSync | Enables you to transform your on-premises file server to a cache for Azure File shares. Allowing for multi-site sync, fast disaster-recovery, and cloud-side backup. [Learn more](../file-sync/file-sync-planning.md) |
-| Azure HDInsight | Microsoft.HDInsight | Provision the initial contents of the default file system for a new HDInsight cluster. [Learn more](../../hdinsight/hdinsight-hadoop-use-blob-storage.md). |
-| Azure Import Export | Microsoft.ImportExport | Enables import of data to Azure Storage or export of data from Azure Storage using the Azure Storage Import/Export service. [Learn more](../../import-export/storage-import-export-service.md). |
-| Azure Monitor | Microsoft.Insights | Allows writing of monitoring data to a secured storage account, including resource logs, Azure Active Directory sign-in and audit logs, and Microsoft Intune logs. [Learn more](../../azure-monitor/roles-permissions-security.md). |
-| Azure Networking | Microsoft.Network | Store and analyze network traffic logs, including through the Network Watcher and Traffic Analytics services. [Learn more](../../network-watcher/network-watcher-nsg-flow-logging-overview.md). |
-| Azure Site Recovery | Microsoft.SiteRecovery | Enable replication for disaster-recovery of Azure IaaS virtual machines when using firewall-enabled cache, source, or target storage accounts. [Learn more](../../site-recovery/azure-to-azure-tutorial-enable-replication.md). |
+| Azure Backup | `Microsoft.RecoveryServices` | Run backups and restores of unmanaged disks in infrastructure as a service (IaaS) virtual machines (not required for managed disks). [Learn more](../../backup/backup-overview.md). |
+| Azure Data Box | `Microsoft.DataBox` | Import data to Azure. [Learn more](../../databox/data-box-overview.md). |
+| Azure DevTest Labs | `Microsoft.DevTestLab` | Create custom images and install artifacts. [Learn more](../../devtest-labs/devtest-lab-overview.md). |
+| Azure Event Grid | `Microsoft.EventGrid` | Enable [Azure Blob Storage event publishing](../../event-grid/overview.md#event-sources) and allow [publishing to storage queues](../../event-grid/event-handlers.md). |
+| Azure Event Hubs | `Microsoft.EventHub` | Archive data by using Event Hubs Capture. [Learn More](../../event-hubs/event-hubs-capture-overview.md). |
+| Azure File Sync | `Microsoft.StorageSync` | Transform your on-premises file server to a cache for Azure file shares. This capability allows multiple-site sync, fast disaster recovery, and cloud-side backup. [Learn more](../file-sync/file-sync-planning.md). |
+| Azure HDInsight | `Microsoft.HDInsight` | Provision the initial contents of the default file system for a new HDInsight cluster. [Learn more](../../hdinsight/hdinsight-hadoop-use-blob-storage.md). |
+| Azure Import/Export | `Microsoft.ImportExport` | Import data to Azure Storage or export data from Azure Storage. [Learn more](../../import-export/storage-import-export-service.md). |
+| Azure Monitor | `Microsoft.Insights` | Write monitoring data to a secured storage account, including resource logs, Azure AD sign-in and audit logs, and Microsoft Intune logs. [Learn more](../../azure-monitor/roles-permissions-security.md). |
+| Azure networking services | `Microsoft.Network` | Store and analyze network traffic logs, including through the Azure Network Watcher and Traffic Analytics services. [Learn more](../../network-watcher/network-watcher-nsg-flow-logging-overview.md). |
+| Azure Site Recovery | `Microsoft.SiteRecovery` | Enable replication for disaster recovery of Azure IaaS virtual machines when you're using firewall-enabled cache, source, or target storage accounts. [Learn more](../../site-recovery/azure-to-azure-tutorial-enable-replication.md). |
<a id="trusted-access-system-assigned-managed-identity"></a> <a id="trusted-access-based-on-system-assigned-managed-identity"></a> ### Trusted access based on a managed identity
-The following table lists services that can have access to your storage account data if the resource instances of those services are given the appropriate permission.
-
-If your account does not have the hierarchical namespace feature enabled on it, you can grant permission, by explicitly assigning an Azure role to the [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) for each resource instance. In this case, the scope of access for the instance corresponds to the Azure role assigned to the managed identity.
-
-You can use the same technique for an account that has the hierarchical namespace feature enable on it. However, you don't have to assign an Azure role if you add the managed identity to the access control list (ACL) of any directory or blob contained in the storage account. In that case, the scope of access for the instance corresponds to the directory or file to which the managed identity has been granted access. You can also combine Azure roles and ACLs together. To learn more about how to combine them together to grant access, see [Access control model in Azure Data Lake Storage Gen2](../blobs/data-lake-storage-access-control-model.md).
-
-> [!TIP]
-> The recommended way to grant access to specific resources is to use resource instance rules. To grant access to specific resource instances, see the [Grant access from Azure resource instances](#grant-access-specific-instances) section of this article.
+The following table lists services that can access your storage account data if the resource instances of those services have the appropriate permission.
-| Service | Resource Provider Name | Purpose |
+| Service | Resource provider name | Purpose |
| :-- | :- | :-- |
-| Azure API Management | Microsoft.ApiManagement/service | Enables API Management service access to storage accounts behind firewall using policies. [Learn more](../../api-management/authentication-managed-identity-policy.md#use-managed-identity-in-send-request-policy). |
-| Azure Cache for Redis | Microsoft.Cache/Redis | Allows access to storage accounts through Azure Cache for Redis. [Learn more](../../azure-cache-for-redis/cache-managed-identity.md)|
-| Azure Cognitive Search | Microsoft.Search/searchServices | Enables Cognitive Search services to access storage accounts for indexing, processing and querying. |
-| Azure Cognitive Services | Microsoft.CognitiveService/accounts | Enables Cognitive Services to access storage accounts. [Learn more](../..//cognitive-services/cognitive-services-virtual-networks.md).|
-| Azure Container Registry Tasks | Microsoft.ContainerRegistry/registries | ACR Tasks can access storage accounts when building container images. |
-| Azure Data Factory | Microsoft.DataFactory/factories | Allows access to storage accounts through the ADF runtime. |
-| Azure Data Share | Microsoft.DataShare/accounts | Allows access to storage accounts through Data Share. |
-| Azure DevTest Labs | Microsoft.DevTestLab/labs | Allows access to storage accounts through DevTest Labs. |
-| Azure Event Grid | Microsoft.EventGrid/topics | Allows access to storage accounts through the Azure Event Grid. |
-| Azure Healthcare APIs | Microsoft.HealthcareApis/services | Allows access to storage accounts through Azure Healthcare APIs. |
-| Azure IoT Central Applications | Microsoft.IoTCentral/IoTApps | Allows access to storage accounts through Azure IoT Central Applications. |
-| Azure IoT Hub | Microsoft.Devices/IotHubs | Allows data from an IoT hub to be written to Blob storage. [Learn more](../../iot-hub/virtual-network-support.md#egress-connectivity-from-iot-hub-to-other-azure-resources) |
-| Azure Logic Apps | Microsoft.Logic/workflows | Enables logic apps to access storage accounts. [Learn more](../../logic-apps/create-managed-service-identity.md#authenticate-access-with-managed-identity). |
-| Azure Machine Learning Service | Microsoft.MachineLearningServices | Authorized Azure Machine Learning workspaces write experiment output, models, and logs to Blob storage and read the data. [Learn more](../../machine-learning/how-to-network-security-overview.md#secure-the-workspace-and-associated-resources). |
-| Azure Media Services | Microsoft.Media/mediaservices | Allows access to storage accounts through Media Services. |
-| Azure Migrate | Microsoft.Migrate/migrateprojects | Allows access to storage accounts through Azure Migrate. |
-| Microsoft Purview | Microsoft.Purview/accounts | Allows Microsoft Purview to access storage accounts. |
-| Azure Site Recovery | Microsoft.RecoveryServices/vaults | Allows access to storage accounts through Site Recovery. |
-| Azure SQL Database | Microsoft.Sql | Allows [writing](/azure/azure-sql/database/audit-write-storage-account-behind-vnet-firewall) audit data to storage accounts behind firewall. |
-| Azure Synapse Analytics | Microsoft.Sql | Allows import and export of data from specific SQL databases using the COPY statement or PolyBase (in dedicated pool), or the `openrowset` function and external tables in serverless pool. [Learn more](/azure/azure-sql/database/vnet-service-endpoint-rule-overview). |
-| Azure Stream Analytics | Microsoft.StreamAnalytics | Allows data from a streaming job to be written to Blob storage. [Learn more](../../stream-analytics/blob-output-managed-identity.md). |
-| Azure Synapse Analytics | Microsoft.Synapse/workspaces | Enables access to data in Azure Storage from Azure Synapse Analytics. |
-
-## Grant access to storage analytics
-
-In some cases, access to read resource logs and metrics is required from outside the network boundary. When configuring trusted services access to the storage account, you can allow read-access for the log files, metrics tables, or both by creating a network rule exception. For step-by-step guidance, see the **Manage exceptions** section below. To learn more about working with storage analytics, see [Use Azure Storage analytics to collect logs and metrics data](./storage-analytics.md).
+| Azure API Management | `Microsoft.ApiManagement/service` | Enables access to storage accounts behind firewalls via policies. [Learn more](../../api-management/authentication-managed-identity-policy.md#use-managed-identity-in-send-request-policy). |
+| Azure Cache for Redis | `Microsoft.Cache/Redis` | Enables access to storage accounts. [Learn more](../../azure-cache-for-redis/cache-managed-identity.md).|
+| Azure Cognitive Search | `Microsoft.Search/searchServices` | Enables access to storage accounts for indexing, processing, and querying. |
+| Azure Cognitive Services | `Microsoft.CognitiveServices/accounts` | Enables access to storage accounts. [Learn more](../../cognitive-services/cognitive-services-virtual-networks.md).|
+| Azure Container Registry | `Microsoft.ContainerRegistry/registries` | Through the ACR Tasks suite of features, enables access to storage accounts when you're building container images. |
+| Azure Data Factory | `Microsoft.DataFactory/factories` | Enables access to storage accounts through the Data Factory runtime. |
+| Azure Data Share | `Microsoft.DataShare/accounts` | Enables access to storage accounts. |
+| Azure DevTest Labs | `Microsoft.DevTestLab/labs` | Enables access to storage accounts. |
+| Azure Event Grid | `Microsoft.EventGrid/topics` | Enables access to storage accounts. |
+| Azure Healthcare APIs | `Microsoft.HealthcareApis/services` | Enables access to storage accounts. |
+| Azure IoT Central | `Microsoft.IoTCentral/IoTApps` | Enables access to storage accounts. |
+| Azure IoT Hub | `Microsoft.Devices/IotHubs` | Allows data from an IoT hub to be written to Blob Storage. [Learn more](../../iot-hub/virtual-network-support.md#egress-connectivity-from-iot-hub-to-other-azure-resources). |
+| Azure Logic Apps | `Microsoft.Logic/workflows` | Enables logic apps to access storage accounts. [Learn more](../../logic-apps/create-managed-service-identity.md#authenticate-access-with-managed-identity). |
+| Azure Machine Learning | `Microsoft.MachineLearningServices` | Enables authorized Azure Machine Learning workspaces to write experiment output, models, and logs to Blob Storage and read the data. [Learn more](../../machine-learning/how-to-network-security-overview.md#secure-the-workspace-and-associated-resources). |
+| Azure Media Services | `Microsoft.Media/mediaservices` | Enables access to storage accounts. |
+| Azure Migrate | `Microsoft.Migrate/migrateprojects` | Enables access to storage accounts. |
+| Microsoft Purview | `Microsoft.Purview/accounts` | Enables access to storage accounts. |
+| Azure Site Recovery | `Microsoft.RecoveryServices/vaults` | Enables access to storage accounts. |
+| Azure SQL Database | `Microsoft.Sql` | Allows [writing audit data to storage accounts behind a firewall](/azure/azure-sql/database/audit-write-storage-account-behind-vnet-firewall). |
+| Azure Synapse Analytics | `Microsoft.Sql` | Allows import and export of data from specific SQL databases via the `COPY` statement or PolyBase (in a dedicated pool), or the `openrowset` function and external tables in a serverless pool. [Learn more](/azure/azure-sql/database/vnet-service-endpoint-rule-overview). |
+| Azure Stream Analytics | `Microsoft.StreamAnalytics` | Allows data from a streaming job to be written to Blob Storage. [Learn more](../../stream-analytics/blob-output-managed-identity.md). |
+| Azure Synapse Analytics | `Microsoft.Synapse/workspaces` | Enables access to data in Azure Storage. |
+
+If your account doesn't have the hierarchical namespace feature enabled on it, you can grant permission by explicitly assigning an Azure role to the [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) for each resource instance. In this case, the scope of access for the instance corresponds to the Azure role that's assigned to the managed identity.
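+
+If role assignment is the route you take, a minimal Azure CLI sketch follows. The resource group, account name, role, and principal ID are placeholders; the principal ID is the object ID of the managed identity on the resource instance that needs access, and you should pick a role appropriate for the service.
+
+```azurecli
+# Sketch: grant a resource instance's managed identity data access to the storage account.
+storageAccountId=$(az storage account show \
+    --resource-group "myresourcegroup" \
+    --name "mystorageaccount" \
+    --query id --output tsv)
+
+az role assignment create \
+    --assignee-object-id "<managed-identity-principal-id>" \
+    --assignee-principal-type ServicePrincipal \
+    --role "Storage Blob Data Contributor" \
+    --scope $storageAccountId
+```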
+
+You can use the same technique for an account that has the hierarchical namespace feature enabled on it. However, you don't have to assign an Azure role if you add the managed identity to the access control list (ACL) of any directory or blob that the storage account contains. In that case, the scope of access for the instance corresponds to the directory or file to which the managed identity has access.
+
+You can also combine Azure roles and ACLs together to grant access. To learn more, see [Access control model in Azure Data Lake Storage Gen2](../blobs/data-lake-storage-access-control-model.md).
+
+We recommend that you [use resource instance rules to grant access to specific resources](#grant-access-from-azure-resource-instances).
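+
+If you do use resource instance rules, the following Azure CLI sketch shows the general shape of adding one. The resource ID and tenant ID are placeholders, and it's worth confirming the parameters against `az storage account network-rule add --help` for your CLI version.
+
+```azurecli
+# Sketch: allow a specific resource instance (for example, a Data Factory) through the
+# storage account firewall by its full resource ID.
+az storage account network-rule add \
+    --resource-group "myresourcegroup" \
+    --account-name "mystorageaccount" \
+    --resource-id "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.DataFactory/factories/<factory-name>" \
+    --tenant-id "<tenant-id>"
+```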
<a id="manage-exceptions"></a> ## Manage exceptions
-You can manage network rule exceptions through the Azure portal, PowerShell, or Azure CLI v2.
+In some cases, like storage analytics, access to read resource logs and metrics is required from outside the network boundary. When you configure trusted services to access the storage account, you can allow read access for the log files, metrics tables, or both by creating a network rule exception. You can manage network rule exceptions through the Azure portal, PowerShell, or the Azure CLI v2.
+
+To learn more about working with storage analytics, see [Use Azure Storage analytics to collect logs and metrics data](./storage-analytics.md).
#### [Portal](#tab/azure-portal)
-1. Go to the storage account you want to secure.
+1. Go to the storage account that you want to secure.
-2. Select on the settings menu called **Networking**.
+2. Select **Networking**.
-3. Check that you've selected to allow access from **Selected networks**.
+3. Check that you've chosen to allow access from **Selected networks**.
-4. Under **Exceptions**, select the exceptions you wish to grant.
+4. Under **Exceptions**, select the exceptions that you want to grant.
5. Select **Save** to apply your changes. #### [PowerShell](#tab/azure-powershell)
-1. Install the [Azure PowerShell](/powershell/azure/install-Az-ps) and [sign in](/powershell/azure/authenticate-azureps).
+1. Install [Azure PowerShell](/powershell/azure/install-Az-ps) and [sign in](/powershell/azure/authenticate-azureps).
-2. Display the exceptions for the storage account network rules.
+2. Display the exceptions for the storage account's network rules:
```powershell (Get-AzStorageAccountNetworkRuleSet -ResourceGroupName "myresourcegroup" -Name "mystorageaccount").Bypass ```
-3. Configure the exceptions to the storage account network rules.
+3. Configure the exceptions to the storage account's network rules:
```powershell Update-AzStorageAccountNetworkRuleSet -ResourceGroupName "myresourcegroup" -Name "mystorageaccount" -Bypass AzureServices,Metrics,Logging ```
-4. Remove the exceptions to the storage account network rules.
+4. Remove the exceptions to the storage account's network rules:
```powershell Update-AzStorageAccountNetworkRuleSet -ResourceGroupName "myresourcegroup" -Name "mystorageaccount" -Bypass None ```
-> [!IMPORTANT]
-> Be sure to [set the default rule](#change-the-default-network-access-rule) to **deny**, or removing exceptions have no effect.
- #### [Azure CLI](#tab/azure-cli) 1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli).
-2. Display the exceptions for the storage account network rules.
+2. Display the exceptions for the storage account's network rules:
```azurecli az storage account show --resource-group "myresourcegroup" --name "mystorageaccount" --query networkRuleSet.bypass ```
-3. Configure the exceptions to the storage account network rules.
+3. Configure the exceptions to the storage account's network rules:
```azurecli az storage account update --resource-group "myresourcegroup" --name "mystorageaccount" --bypass Logging Metrics AzureServices ```
-4. Remove the exceptions to the storage account network rules.
+4. Remove the exceptions to the storage account's network rules:
```azurecli az storage account update --resource-group "myresourcegroup" --name "mystorageaccount" --bypass None ```
-> [!IMPORTANT]
-> Be sure to [set the default rule](#change-the-default-network-access-rule) to **deny**, or removing exceptions have no effect.
- ## Next steps
-Learn more about Azure Network service endpoints in [Service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md).
+Learn more about [Azure network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md).
-Dig deeper into Azure Storage security in [Azure Storage security guide](../blobs/security-recommendations.md).
+Dig deeper into [Azure Storage security](../blobs/security-recommendations.md).
storage Storage Use Azurite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azurite.md
Title: Use Azurite emulator for local Azure Storage development
description: The Azurite open-source emulator provides a free local environment for testing your Azure storage applications. Previously updated : 08/04/2022 Last updated : 04/26/2023
There are several different ways to install and run Azurite on your local system
### [Visual Studio](#tab/visual-studio)
-Azurite is automatically available with [Visual Studio 2022](https://visualstudio.microsoft.com/vs/). If you're running an earlier version of Visual Studio, you can install Azurite by using either Node Package Manager, DockerHub, or by cloning the Azurite GitHub repository.
+Azurite is automatically available with [Visual Studio 2022](https://visualstudio.microsoft.com/vs/). The Azurite executable is updated as part of new Visual Studio version releases. If you're running an earlier version of Visual Studio, you can install Azurite by using Node Package Manager or DockerHub, or by cloning the Azurite GitHub repository.
### [Visual Studio Code](#tab/visual-studio-code)
storage File Sync Networking Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-networking-endpoints.md
description: Learn how to configure Azure File Sync network endpoints.
Previously updated : 11/01/2022 Last updated : 04/26/2023
Additionally:
- If you intend to use the Azure CLI, [install the latest version](/cli/azure/install-azure-cli). ## Create the private endpoints
-When you are creating a private endpoint for an Azure resource, the following resources are deployed:
+When you create a private endpoint for an Azure resource, the following resources are deployed:
-- **A private endpoint**: An Azure resource representing either the private endpoint for the storage account or the Storage Sync Service. You can think of this as a resource that connects your Azure resource and a network interface.
+- **A private endpoint**: An Azure resource representing either the private endpoint for the storage account or the Storage Sync Service. Think of this as a resource that connects your Azure resource and a network interface.
- **A network interface (NIC)**: The network interface that maintains a private IP address within the specified virtual network/subnet. This is the exact same resource that gets deployed when you deploy a virtual machine, however instead of being assigned to a VM, it's owned by the private endpoint. - **A private DNS zone**: If you've never deployed a private endpoint for this virtual network before, a new private DNS zone will be deployed for your virtual network. A DNS A record will also be created for Azure resource in this DNS zone. If you've already deployed a private endpoint in this virtual network, a new A record for Azure resource will be added to the existing DNS zone. Deploying a DNS zone is optional, however highly recommended to simplify the DNS management required.
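As an illustration only (not the article's step-by-step procedure), a private endpoint for a Storage Sync Service can be created from the Azure CLI roughly as follows. The resource names, virtual network, and subnet are placeholders; `Afs` is the group ID used for Azure File Sync private endpoints, and DNS zone configuration is a separate step.

```azurecli
# Sketch: create a private endpoint for a Storage Sync Service (placeholder names).
storageSyncServiceId=$(az resource show \
    --resource-group "myresourcegroup" \
    --name "mystoragesyncservice" \
    --resource-type "Microsoft.StorageSync/storageSyncServices" \
    --query id --output tsv)

az network private-endpoint create \
    --resource-group "myresourcegroup" \
    --name "mystoragesyncservice-pe" \
    --vnet-name "myvnet" \
    --subnet "mysubnet" \
    --private-connection-resource-id $storageSyncServiceId \
    --group-id Afs \
    --connection-name "mystoragesyncservice-connection"
```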
When you restrict the storage account to specific virtual networks, you are allo
### Disable access to the Storage Sync Service public endpoint
-Azure File Sync enables you to restrict access to specific virtual networks through private endpoints only; Azure File Sync does not support service endpoints for restricting access to the public endpoint to specific virtual networks. This means that the two states for the Storage Sync Service's public endpoint are enabled and disabled.
+Azure File Sync enables you to restrict access to specific virtual networks through private endpoints only; Azure File Sync doesn't support service endpoints for restricting access to the public endpoint to specific virtual networks. This means that the two states for the Storage Sync Service's public endpoint are **enabled** and **disabled**.
+
+> [!IMPORTANT]
+> You must create a private endpoint before disabling access to the public endpoint. If the public endpoint is disabled and there's no private endpoint configured, sync can't work.
# [Portal](#tab/azure-portal)
-This is not possible through the Azure portal. Please select the Azure PowerShell tab to get instructions on how to disable the Storage Sync Service public endpoint.
+To disable access to the Storage Sync Service's public endpoint, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com?azure-portal=true).
+1. Navigate to the Storage Sync Service and select **Settings** > **Network** from the left navigation.
+1. Under **Allow access from**, select **Private endpoints only**.
+1. Select a private endpoint from the **Private endpoint connections** list.
# [PowerShell](#tab/azure-powershell)
-To disable access to the Storage Sync Service's public endpoint, we will set the `incomingTrafficPolicy` property on the Storage Sync Service to `AllowVirtualNetworksOnly`. If you would like to enable access to the Storage Sync Service's public endpoint, set `incomingTrafficPolicy` to `AllowAllTraffic` instead. Remember to replace `<storage-sync-service-resource-group>` and `<storage-sync-service>`.
+To disable access to the Storage Sync Service's public endpoint, set the `incomingTrafficPolicy` property on the Storage Sync Service to `AllowVirtualNetworksOnly`. If you want to enable access to the Storage Sync Service's public endpoint, set `incomingTrafficPolicy` to `AllowAllTraffic` instead. Remember to replace `<storage-sync-service-resource-group>` and `<storage-sync-service>` with your own values.
```powershell $storageSyncServiceResourceGroupName = "<storage-sync-service-resource-group>"
storage Storage Quickstart Queues Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-quickstart-queues-nodejs.md
The following diagram shows the relationship between these resources.
Use the following JavaScript classes to interact with these resources: -- [`QueueServiceClient`](/javascript/api/@azure/storage-queue/queueserviceclient): The `QueueServiceClient` allows you to manage the all queues in your storage account.-- [`QueueClient`](/javascript/api/@azure/storage-queue/queueclient): The `QueueClient` class allows you to manage and manipulate an individual queue and its messages.-- [`QueueMessage`](/javascript/api/preview-docs/@azure/storage-queue/queuemessage): The `QueueMessage` class represents the individual objects returned when calling [`ReceiveMessages`](/javascript/api/@azure/storage-queue/queueclient#receivemessages-queuereceivemessageoptions-) on a queue.
+- [`QueueServiceClient`](/javascript/api/@azure/storage-queue/queueserviceclient): A `QueueServiceClient` instance represents a connection to a given storage account in the Azure Storage Queue service. This client allows you to manage all the queues in your storage account.
+- [`QueueClient`](/javascript/api/@azure/storage-queue/queueclient): A `QueueClient` instance represents a single queue in a storage account. This client allows you to manage and manipulate an individual queue and its messages.
## Code examples
console.log("Queue created, requestId:", createQueueResponse.requestId);
### Add messages to a queue
-The following code snippet adds messages to queue by calling the [`sendMessage`](/javascript/api/@azure/storage-queue/queueclient#sendmessage-string--queuesendmessageoptions-) method. It also saves the [`QueueMessage`](/javascript/api/preview-docs/@azure/storage-queue/queuemessage) returned from the third `sendMessage` call. The returned `sendMessageResponse` is used to update the message content later in the program.
+The following code snippet adds messages to queue by calling the [`sendMessage`](/javascript/api/@azure/storage-queue/queueclient#sendmessage-string--queuesendmessageoptions-) method. It also saves the [`QueueSendMessageResponse`](/javascript/api/@azure/storage-queue/queuesendmessageresponse) returned from the third `sendMessage` call. The returned `sendMessageResponse` is used to update the message content later in the program.
Add this code to the end of the `main` function:
storage Table Storage Design Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-design-patterns.md
Consider the following points when deciding how to store log data:
## Implementation considerations
-This section discusses some of the considerations to bear in mind when you implement the patterns described in the previous sections. Most of this section uses examples written in C# that use the Storage Client Library (version 4.3.0 at the time of writing).
+This section discusses some of the considerations to bear in mind when you implement the patterns described in the previous sections. Most of this section uses examples written in C# that use the Storage client library (version 4.3.0 at the time of writing).
## Retrieving entities
-As discussed in the section Design for querying, the most efficient query is a point query. However, in some scenarios you may need to retrieve multiple entities. This section describes some common approaches to retrieving entities using the Storage Client Library.
+As discussed in the section Design for querying, the most efficient query is a point query. However, in some scenarios you may need to retrieve multiple entities. This section describes some common approaches to retrieving entities using the Storage client library.
-### Executing a point query using the Storage Client Library
+### Executing a point query using the Storage client library
The easiest way to execute a point query is to use the **GetEntityAsync** method as shown in the following C# code snippet that retrieves an entity with a **PartitionKey** of value "Sales" and a **RowKey** of value "212":
Notice how this example expects the entity it retrieves to be of type **Employee
You can use LINQ to retrieve multiple entities from the Table service when working with Microsoft Azure Cosmos DB Table Standard Library. ```azurecli
-dotnet add package Microsoft.Azure.Cosmos.Table
+dotnet add package Azure.Data.Tables
``` To make the below examples work, you'll need to include namespaces: ```csharp using System.Linq;
-using Azure.Data.Table
+using Azure.Data.Tables
```
-The employeeTable is a CloudTable object that implements a CreateQuery\<ITableEntity>() method, which returns a TableQuery\<ITableEntity>. Objects of this type implement an IQueryable and allow using both LINQ Query Expressions and dot notation syntax.
+Retrieving multiple entities can be achieved by specifying a query with a **filter** clause. To avoid a table scan, you should always include the **PartitionKey** value in the filter clause, and if possible the **RowKey** value to avoid table and partition scans. The table service supports a limited set of comparison operators (greater than, greater than or equal, less than, less than or equal, equal, and not equal) to use in the filter clause.
-Retrieving multiple entities and be achieved by specifying a query with a **filter** clause. To avoid a table scan, you should always include the **PartitionKey** value in the filter clause, and if possible the **RowKey** value to avoid table and partition scans. The table service supports a limited set of comparison operators (greater than, greater than or equal, less than, less than or equal, equal, and not equal) to use in the filter clause.
-
-The following C# code snippet finds all the employees whose last name starts with "B" (assuming that the **RowKey** stores the last name) in the sales department (assuming the **PartitionKey** stores the department name):
+In the following example, `employeeTable` is a [TableClient](/dotnet/api/azure.data.tables.tableclient) object. This example finds all the employees whose last name starts with "B" (assuming that the **RowKey** stores the last name) in the sales department (assuming the **PartitionKey** stores the department name):
```csharp var employees = employeeTable.Query<EmployeeEntity>(e => (e.PartitionKey == "Sales" && e.RowKey.CompareTo("B") >= 0 && e.RowKey.CompareTo("C") < 0));
You should always fully test the performance of your application in such scenari
A query against the table service may return a maximum of 1,000 entities at one time and may execute for a maximum of five seconds. If the result set contains more than 1,000 entities, if the query did not complete within five seconds, or if the query crosses the partition boundary, the Table service returns a continuation token to enable the client application to request the next set of entities. For more information about how continuation tokens work, see [Query Timeout and Pagination](/rest/api/storageservices/Query-Timeout-and-Pagination).
-If you are using the Table Client Library, it can automatically handle continuation tokens for you as it returns entities from the Table service. The following C# code sample using the Table Client Library automatically handles continuation tokens if the table service returns them in a response:
+If you are using the Azure Tables client library, it can automatically handle continuation tokens for you as it returns entities from the Table service. The following C# code sample using the client library automatically handles continuation tokens if the table service returns them in a response:
```csharp var employees = employeeTable.Query<EmployeeEntity>("PartitionKey eq 'Sales'")
foreach (var emp in employees)
} ```
-The following C# code handles continuation tokens explicitly:
+You can also specify the maximum number of entities that are returned per page. The following example shows how to query entities with `maxPerPage`:
```csharp
-TableContinuationToken continuationToken = null;
-do
+var employees = employeeTable.Query<EmployeeEntity>(maxPerPage: 10);
+
+// Iterate the Pageable object by page
+foreach (var page in employees.AsPages())
{
- var employees = employeeTable.Query<EmployeeEntity>("PartitionKey eq 'Sales'");
- foreach (var emp in employees.AsPages())
+ // Iterate the entities returned for this page
+ foreach (var emp in page.Values)
{ // ...
- continuationToken = emp.ContinuationToken;
}
-
-} while (continuationToken != null);
+}
+```
+
+In more advanced scenarios, you may want to store the continuation token returned from the service so that your code controls exactly when the next page is fetched. The following example shows a basic scenario of how the token can be fetched and applied to paginated results:
+
+```csharp
+string continuationToken = null;
+bool moreResultsAvailable = true;
+while (moreResultsAvailable)
+{
+ var page = employeeTable
+ .Query<EmployeeEntity>()
+ .AsPages(continuationToken, pageSizeHint: 10)
+ .FirstOrDefault(); // pageSizeHint limits the number of results in a single page, so we only enumerate the first page
+
+ if (page == null)
+ break;
+
+ // Get the continuation token from the page
+ // Note: This value can be stored so that the next page query can be executed later
+ continuationToken = page.ContinuationToken;
+
+ var pageResults = page.Values;
+ moreResultsAvailable = pageResults.Any() && continuationToken != null;
+
+ // Iterate the results for this page
+ foreach (var result in pageResults)
+ {
+ // ...
+ }
+}
``` By using continuation tokens explicitly, you can control when your application retrieves the next segment of data. For example, if your client application enables users to page through the entities stored in a table, a user may decide not to page through all the entities retrieved by the query so your application would only use a continuation token to retrieve the next segment when the user had finished paging through all the entities in the current segment. This approach has several benefits:
By using continuation tokens explicitly, you can control when your application r
> >
-The following C# code shows how to modify the number of entities returned inside a segment:
-
-```csharp
-employees.max = 50;
-```
- ### Server-side projection A single entity can have up to 255 properties and be up to 1 MB in size. When you query the table and retrieve entities, you may not need all the properties and can avoid transferring data unnecessarily (to help reduce latency and cost). You can use server-side projection to transfer just the properties you need. The following example retrieves just the **Email** property (along with **PartitionKey**, **RowKey**, **Timestamp**, and **ETag**) from the entities selected by the query.
Notice how the **RowKey** value is available even though it was not included in
## Modifying entities
-The Storage Client Library enables you to modify your entities stored in the table service by inserting, deleting, and updating entities. You can use EGTs to batch multiple inserts, update, and delete operations together to reduce the number of round trips required and improve the performance of your solution.
+The Storage client library enables you to modify your entities stored in the table service by inserting, deleting, and updating entities. You can use EGTs to batch multiple inserts, update, and delete operations together to reduce the number of round trips required and improve the performance of your solution.
-Exceptions thrown when the Storage Client Library executes an EGT typically include the index of the entity that caused the batch to fail. This is helpful when you are debugging code that uses EGTs.
+Exceptions thrown when the Storage client library executes an EGT typically include the index of the entity that caused the batch to fail. This is helpful when you are debugging code that uses EGTs.
You should also consider how your design affects how your client application handles concurrency and update operations.
The techniques discussed in this section are especially relevant to the discussi
> >
-The remainder of this section describes some of the features in the Storage Client Library that facilitate working with multiple entity types in the same table.
+The remainder of this section describes some of the features in the Storage client library that facilitate working with multiple entity types in the same table.
### Retrieving heterogeneous entity types
-If you are using the Table Client Library, you have three options for working with multiple entity types.
+If you are using the Table client library, you have three options for working with multiple entity types.
-If you know the type of the entity stored with a specific **RowKey** and **PartitionKey** values, then you can specify the entity type when you retrieve the entity as shown in the previous two examples that retrieve entities of type **EmployeeEntity**: [Executing a point query using the Storage Client Library](#executing-a-point-query-using-the-storage-client-library) and [Retrieving multiple entities using LINQ](#retrieving-multiple-entities-using-linq).
+If you know the type of the entity stored with a specific **RowKey** and **PartitionKey** values, then you can specify the entity type when you retrieve the entity as shown in the previous two examples that retrieve entities of type **EmployeeEntity**: [Executing a point query using the Storage client library](#executing-a-point-query-using-the-storage-client-library) and [Retrieving multiple entities using LINQ](#retrieving-multiple-entities-using-linq).
The second option is to use the **TableEntity** type (a property bag) instead of a concrete POCO entity type (this option may also improve performance because there is no need to serialize and deserialize the entity to .NET types). The following C# code potentially retrieves multiple entities of different types from the table, but returns all entities as **TableEntity** instances. It then uses the **EntityType** property to determine the type of each entity:
It is possible to generate a SAS token that grants access to a subset of the ent
Provided you are spreading your requests across multiple partitions, you can improve throughput and client responsiveness by using asynchronous or parallel queries. For example, you might have two or more worker role instances accessing your tables in parallel. You could have individual worker roles responsible for particular sets of partitions, or simply have multiple worker role instances, each able to access all the partitions in a table.
-Within a client instance, you can improve throughput by executing storage operations asynchronously. The Storage Client Library makes it easy to write asynchronous queries and modifications. For example, you might start with the synchronous method that retrieves all the entities in a partition as shown in the following C# code:
+Within a client instance, you can improve throughput by executing storage operations asynchronously. The Storage client library makes it easy to write asynchronous queries and modifications. For example, you might start with the synchronous method that retrieves all the entities in a partition as shown in the following C# code:
```csharp private static void ManyEntitiesQuery(TableClient employeeTable, string department)
synapse-analytics Tutorial Automl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-automl.md
Sign in to the [Azure portal](https://portal.azure.com/).
For this tutorial, you need a Spark table. The following notebook creates one:
-1. Download the notebook [Create-Spark-Table-NYCTaxi- Data.ipynb](https://go.microsoft.com/fwlink/?linkid=2149229).
+1. Download the notebook [Create-Spark-Table-NYCTaxi- Data.ipynb](https://github.com/Azure-Samples/Synapse/blob/ec6faf976d580b793548a4e137b71a0c7e0d287a/MachineLearning/Create%20Spark%20Table%20with%20NYC%20Taxi%20Data.ipynb).
1. Import the notebook to Synapse Studio.
update-center Manage Multiple Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-multiple-machines.md
Title: Manage multiple machines in update management center (preview) description: The article details how to use Update management center (preview) in Azure to manage multiple supported machines and view their compliance state in the Azure portal. Previously updated : 04/11/2023 Last updated : 04/26/2023
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
+> [!IMPORTANT]
+> - For a seamless scheduled patching experience, we recommend that for all Azure VMs, you update the patch mode to *Azure orchestrated with user managed schedules (preview)* before **May 19, 2023**. If you fail to update the patch mode before **May 19, 2023**, you might experience a disruption in business continuity because the schedules will fail to patch the VMs. [Learn more](prerequsite-for-schedule-patching.md).
+> - To update the patch mode, go to **Update management center (Preview)** home page > **Update Settings**. In **Change update settings**, add the machines and under **Patch orchestration**, select *Azure-orchestrated-safe deployment*.
+ This article describes the various features that update management center (Preview) offers to manage the system updates on your machines. Using the update management center (preview), you can: - Quickly assess the status of available operating system updates.
Instead of performing these actions from a selected Azure VM or Arc-enabled serv
- **Reboot Required** - pending a reboot for the updates to take effect. - **No updates data** - no assessment data is available for these machines.
- There following could be the reasons for no assessment data:
+ The following could be the reasons for no assessment data:
- No assessment has been done over the last seven days - The machine has an unsupported OS - The machine is in an unsupported region and you can't perform an assessment.
When the Resource Graph Explorer opens, it is automatically populated with the s
## Next steps * To set up and manage recurring deployment schedules, see [Schedule recurring updates](scheduled-patching.md)
-* To view update assessment and deployment logs generated by update management center (preview), see [query logs](query-logs.md).
+* To view update assessment and deployment logs generated by update management center (preview), see [query logs](query-logs.md).
update-center Prerequsite For Schedule Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/prerequsite-for-schedule-patching.md
+
+ Title: Configure schedule patching on Azure VMs to ensure business continuity in update management center (preview).
+description: The article describes the new prerequisites to configure scheduled patching to ensure business continuity in Update management center (preview).
+ Last updated : 04/26/2023+++++
+# Configure schedule patching on Azure VMs to ensure business continuity
+
+**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: Azure VMs.
+
+This article provides an overview of how to configure schedule patching and automatic guest VM patching on Azure VMs by using the new prerequisite to ensure business continuity. The steps to configure both patching options on Arc-enabled VMs remain the same.
+
+Currently, you can enable [Automatic guest VM patching](../virtual-machines/automatic-vm-guest-patching.md) (Autopatch) by setting the patch mode to **Azure-orchestrated**/**AutomaticByPlatform** on Azure portal/REST API respectively, where patches are automatically applied during off-peak hours.
+
+To customize control over your patch installation, you can use [schedule patching](updates-maintenance-schedules.md#scheduled-patching) to define your maintenance window. You can [enable schedule patching](scheduled-patching.md#schedule-recurring-updates-on-single-vm) by setting the patch mode to **Azure orchestrated**/**AutomaticByPlatform** and attaching a schedule to the Azure VM. As a result, the VM properties couldn't be used to differentiate between **schedule patching** and **Automatic guest VM patching**, because both had the patch mode set to *Azure-orchestrated*.
+
+Additionally, in some instances, when you remove the schedule from a VM, the VM might be autopatched and rebooted. To overcome these limitations, we have introduced a new VM property, **BypassPlatformSafetyChecksOnUserSchedule**, which can be set to *true* to identify a VM that uses schedule patching. VMs with this property set to *true* are no longer autopatched when they don't have an associated maintenance configuration.
+
+> [!IMPORTANT]
+> For a continued scheduled patching experience, you must ensure that the new VM property, *BypassPlatformSafetyChecksOnUserSchedule*, is enabled on all your Azure VMs (existing or new) that have schedules attached to them **before May 19, 2023**. This setting ensures that machines are patched by using your configured schedules and aren't autopatched. If you don't enable the prerequisite, you'll get an error that the prerequisites aren't met.
+
+## Find VMs with associated schedules
+
+To identify the list of VMs with associated schedules for which you have to enable the new VM property, follow these steps:
+
+1. Go to **Update management center (Preview)** home page and select **Machines** tab.
+1. In **Patch orchestration** filter, select **Azure-orchestrated safe deployment**.
+1. Use the **Select all** option to select the machines and then select **Export to CSV**.
+1. Open the CSV file and in the column **Associated schedules**, select the rows that have an entry.
+
+ In the corresponding **Name** column, you can view the list of VMs for which you need to enable the **BypassPlatformSafetyChecksOnUserSchedule** flag. A command-line alternative is sketched after these steps.
++
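+As a rough alternative to the CSV export, the following Azure Resource Graph query sketches how to list Azure VMs whose patch mode is set to platform orchestration. It assumes the Azure CLI `resource-graph` extension is installed; the property paths mirror the VM REST model, so validate the output against the portal before acting on it.
+
+```azurecli
+# Sketch: list VMs with patch mode AutomaticByPlatform (Azure-orchestrated).
+# Requires: az extension add --name resource-graph
+az graph query -q "
+Resources
+| where type =~ 'microsoft.compute/virtualmachines'
+| where tostring(properties.osProfile.windowsConfiguration.patchSettings.patchMode) =~ 'AutomaticByPlatform'
+    or tostring(properties.osProfile.linuxConfiguration.patchSettings.patchMode) =~ 'AutomaticByPlatform'
+| project name, resourceGroup, subscriptionId"
+```
+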
+## Enable schedule patching on Azure VMs
+
+# [Azure portal](#tab/new-prereq-portal)
+
+**Prerequisite**
+
+Patch orchestration = Customer managed schedules.
+
+Select the patch orchestration option as **Customer managed schedules**.
+The new patch orchestration option enables the following VM properties on your behalf after receiving your consent:
+
+ - Patch mode = Azure-orchestrated
+ - BypassPlatformSafetyChecksOnUserSchedule = TRUE
+
+**Enable for new VMs**
+
+You can select the patch orchestration option for new VMs that would be associated with the schedules:
+
+To update the patch mode, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com)
+1. Go to **Virtual machine**, and select **+Create** to open *Create a virtual machine* page.
+1. In **Basics** tab, complete all the mandatory fields.
+1. In **Management** tab, under **Guest OS updates**, for **Patch orchestration options**, select *Azure-orchestrated*.
+1. Complete the entries in the **Monitoring**, **Advanced**, and **Tags** tabs.
+1. Select **Review + Create** and select **Create** to create a new VM with the appropriate patch orchestration option.
+
+To enable schedule patching on the newly created VMs, follow the procedure from step 2 in **Enable for existing VMs**.
++
+**Enable for existing VMs**
+
+You can update the patch orchestration option for existing VMs that either already have schedules associated or are to be newly associated with a schedule:
+
+> [!NOTE]
+> If **Patch orchestration** is set to *Azure-orchestrated* or *Azure-orchestrated safe deployment (AutomaticByPlatform)*, **BypassPlatformSafetyChecksOnUserSchedule** is set to *false*, and there is no schedule associated, the VM(s) will be autopatched.
+
+To update the patch mode, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com)
+1. Go to **Update management center (Preview)**, select **Update Settings**.
+1. In **Change update settings**, select **+Add machine**.
+1. In **Select resources**, select your VMs and then select **Add**.
+1. In **Change update settings**, under **Patch orchestration**, select *Customer managed schedules* and then select **Save**.
+
+Attach a schedule after you complete the above steps.
+
+To check if the **BypassPlatformSafetyChecksOnUserSchedule** is enabled, go to **Virtual machine** home page > **Overview** tab > **JSON View**.
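+
+If you'd rather check from the command line, a read-only query such as the following sketch surfaces the same patch settings (the Windows property path is shown; use `linuxConfiguration` for Linux VMs). The resource group and VM name are placeholders, and depending on your Azure CLI version, the newest properties may not yet appear in the output.
+
+```azurecli
+# Sketch: inspect a VM's patch settings (placeholder resource group and VM name).
+az vm show \
+    --resource-group "myresourcegroup" \
+    --name "myVirtualMachine" \
+    --query "osProfile.windowsConfiguration.patchSettings"
+```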
+
+# [REST API](#tab/new-prereq-rest-api)
+
+**Prerequisite**
+
+- Patch mode = AutomaticByPlatform
+- BypassPlatformSafetyChecksOnUserSchedule = TRUE
+
+**Enable on Windows VMs**
+
+```
+PUT on `/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVirtualMachine?api-version=2023-03-01`
+```
+
+```json
+{
+  "location": "<location>",
+  "properties": {
+    "osProfile": {
+      "windowsConfiguration": {
+        "provisionVMAgent": true,
+        "enableAutomaticUpdates": true,
+        "patchSettings": {
+          "patchMode": "AutomaticByPlatform",
+          "automaticByPlatformSettings": {
+            "bypassPlatformSafetyChecksOnUserSchedule": true
+          }
+        }
+      }
+    }
+  }
+}
+
+```
+**Enable on Linux VMs**
+
+```
+PUT on `/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVirtualMachine?api-version=2023-03-01`
+```
+
+```json
+{
+  "location": "<location>",
+  "properties": {
+    "osProfile": {
+      "linuxConfiguration": {
+        "provisionVMAgent": true,
+        "enableAutomaticUpdates": true,
+        "patchSettings": {
+          "patchMode": "AutomaticByPlatform",
+          "automaticByPlatformSettings": {
+            "bypassPlatformSafetyChecksOnUserSchedule": true
+          }
+        }
+      }
+    }
+  }
+}
+```
++
+> [!NOTE]
+> Currently, you can only enable the new prerequisite for schedule patching via Azure portal and REST API. It cannot be enabled via Azure CLI and PowerShell.
++
+## Enable automatic guest VM patching on Azure VMs
+
+To enable automatic guest VM patching on your Azure VMs now, follow these steps:
+
+# [Azure portal](#tab/auto-portal)
+
+**Prerequisite**
+
+Patch mode = Azure-orchestrated
+
+**Enable for new VMs**
+
+You can select the patch orchestration option for new VMs that would be associated with the schedules:
+
+To update the patch mode, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com)
+1. Go to **Virtual machine**, and select **+Create** to open *Create a virtual machine* page.
+1. In **Basics** tab, complete all the mandatory fields.
+1. In **Management** tab, under **Guest OS updates**, for **Patch orchestration options**, select *Azure-orchestrated*.
+1. Complete the entries in the **Monitoring**, **Advanced**, and **Tags** tabs.
+1. Select **Review + Create** and select **Create** to create a new VM with the appropriate patch orchestration option.
++
+**Enable for existing VMs**
+
+To update the patch mode, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com)
+1. Go to **Update management center (Preview)**, select **Update Settings**.
+1. In **Change update settings**, select **+Add machine**.
+1. In **Select resources**, select your VMs and then select **Add**.
+1. In **Change update settings**, under **Patch orchestration**, select *Azure-orchestrated-safe deployment* and then select **Save**.
++
+# [REST API](#tab/auto-rest-api)
+
+**Prerequisites**
+
+- Patch mode = AutomaticByPlatform
+- BypassPlatformSafetyChecksOnUserSchedule = FALSE
+
+**Enable on Windows VMs**
+
+```
+PUT on `/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVirtualMachine?api-version=2023-03-01`
+```
+
+```json
+{
+  "location": "<location>",
+  "properties": {
+    "osProfile": {
+      "windowsConfiguration": {
+        "provisionVMAgent": true,
+        "enableAutomaticUpdates": true,
+        "patchSettings": {
+          "patchMode": "AutomaticByPlatform",
+          "automaticByPlatformSettings": {
+            "bypassPlatformSafetyChecksOnUserSchedule": false
+          }
+        }
+      }
+    }
+  }
+}
+```
+
+**Enable on Linux VMs**
+
+```
+PUT on `/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVirtualMachine?api-version=2023-03-01`
+```
+
+```json
+{
+  "location": "<location>",
+  "properties": {
+    "osProfile": {
+      "linuxConfiguration": {
+        "provisionVMAgent": true,
+        "enableAutomaticUpdates": true,
+        "patchSettings": {
+          "patchMode": "AutomaticByPlatform",
+          "automaticByPlatformSettings": {
+            "bypassPlatformSafetyChecksOnUserSchedule": false
+          }
+        }
+      }
+    }
+  }
+}
+```
+++
+## User scenarios
+
+**Scenarios** | **Azure-orchestrated** | **BypassPlatformSafetyChecksOnUserSchedule** | **Schedule Associated** |**Expected behavior in Azure** |
+ | | | | |
+Scenario 1 | Yes | True | Yes | The schedule patch runs as defined by user. |
+Scenario 2 | Yes | True | No | Neither autopatch nor the schedule patch will run.|
+Scenario 3 | Yes | False | Yes | Neither autopatch nor schedule patch will run. You'll get an error that the prerequisites for schedule patch aren't met.|
+Scenario 4 | Yes | False | No | The VM is autopatched.|
+Scenario 5 | No | True | Yes | Neither autopatch nor schedule patch will run. You'll get an error that the prerequisites for schedule patch aren't met. |
+Scenario 6 | No | True | No | Neither the autopatch nor the schedule patch will run.|
+Scenario 7 | No | False | Yes | Neither autopatch nor schedule patch will run. You'll get an error that the prerequisites for schedule patch aren't met.|
+Scenario 8 | No | False | No | Neither the autopatch nor the schedule patch will run.|
+
+## Next steps
+
+* To troubleshoot issues with update management center (preview), see [Troubleshoot](troubleshoot.md).
update-center Scheduled Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/scheduled-patching.md
Title: Scheduling recurring updates in Update management center (preview) description: The article details how to use update management center (preview) in Azure to set update schedules that install recurring updates on your machines. Previously updated : 04/11/2023 Last updated : 04/26/2023
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
+> [!IMPORTANT]
+> - For a seamless scheduled patching experience, we recommend that for all Azure VMs, you update the patch mode to *Azure orchestrated with user managed schedules (preview)* before **May 19, 2023**. If you fail to update the patch mode before **May 19, 2023**, you might experience a disruption in business continuity because the schedules will fail to patch the VMs. [Learn more](prerequsite-for-schedule-patching.md).
+> - To update the patch mode, go to **Update management center (Preview)** home page > **Update Settings**. In **Change update settings**, add the machines and under **Patch orchestration**, select *Azure-orchestrated-safe deployment*.
+ You can use update management center (preview) in Azure to create and save recurring deployment schedules. You can create a schedule on a daily, weekly or hourly cadence, specify the machines that must be updated as part of the schedule, and the updates to be installed. This schedule will then automatically install the updates as per the created schedule for single VM and at scale. Update management center (preview) uses maintenance control schedule instead of creating its own schedules. Maintenance control enables customers to manage platform updates. For more information, see [Maintenance control documentation](/azure/virtual-machines/maintenance-control).
Update management center (preview) uses maintenance control schedule instead of
1. Patch orchestration of the Azure machines should be set to **Azure Orchestrated (Automatic By Platform)**. For Azure Arc-enabled machines, it isn't a requirement. > [!Note]
- > If you set the patch orchestration mode to Azure orchestrated (Automatic By Platform) but don't attach a maintenance configuration to an Azure machine, it is treated as [Automatic Guest patching](../virtual-machines/automatic-vm-guest-patching.md) enabled machine and Azure platform will automatically install updates as per its own schedule.
+ > If you set the patch orchestration mode to Azure orchestrated (AutomaticByPlatform) but don't attach a maintenance configuration to an Azure machine, it is treated as [Automatic Guest patching](../virtual-machines/automatic-vm-guest-patching.md) enabled machine and Azure platform will automatically install updates as per its own schedule.
## Schedule recurring updates on single VM
update-center Updates Maintenance Schedules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/updates-maintenance-schedules.md
Title: Updates and maintenance in update management center (preview). description: The article describes the updates and maintenance options available in Update management center (preview). Previously updated : 04/21/2022 Last updated : 04/26/2023
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
+> [!IMPORTANT]
+> - For a seamless scheduled patching experience, we recommend that for all Azure VMs, you update the patch mode to *Azure orchestrated with user managed schedules (preview)* before **May 19, 2023**. If you fail to update the patch mode before **May 19, 2023**, you might experience a disruption in business continuity because the schedules will fail to patch the VMs. [Learn more](prerequsite-for-schedule-patching.md).
+> - To update the patch mode, go to **Update management center (Preview)** home page > **Update Settings**. In **Change update settings**, add the machines and under **Patch orchestration**, select *Azure-orchestrated-safe deployment*.
++ This article provides an overview of the various update and maintenance options available by update management center (preview). Update management center (preview) provides you the flexibility to take an immediate action or schedule an update within a defined maintenance window. It also supports new patching methods such as [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md), [Hotpatching](../automanage/automanage-hotpatch.md?context=%2fazure%2fvirtual-machines%2fcontext%2fcontext) and so on.
update-center Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/whats-new.md
Last updated 03/03/2023
[Update management center (preview)](overview.md) helps you manage and govern updates for all your machines. You can monitor Windows and Linux update compliance across your deployments in Azure, on-premises, and on the other cloud platforms from a single dashboard. This article summarizes new releases and features in Update management center (Preview).
+## April 2023
+
+### New prerequisite for scheduled patching
+
+A new patch mode - **Azure orchestrated with user managed schedules (Preview)** is introduced as a prerequisite to enable scheduled patching on Azure VMs. The new patch mode enables the *Azure-orchestrated using Automatic guest patching* and *BypassPlatformSafetyChecksOnUserSchedule* VM properties on your behalf after receiving your consent. [Learn more](prerequsite-for-schedule-patching.md).
+
+> [!IMPORTANT]
+> For a seamless scheduled patching experience, we recommend that for all Azure VMs, you update the patch mode to *Azure orchestrated with user managed schedules (preview)* before **May 19, 2023**. If you fail to update the patch mode before **May 19, 2023**, you can experience a disruption in business continuity because the schedules will fail to patch the VMs.
++ ## November 2022 ### New region support
virtual-desktop Terminology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/terminology.md
The following table goes into more detail about the differences between each typ
|Feature|Personal host pools|Pooled host pools| |||| |Load balancing| User sessions are always load balanced to the session host the user is assigned to. If the user isn't currently assigned to a session host, the user session is load balanced to the next available session host in the host pool. | User sessions are load balanced to session hosts in the host pool based on user session count. You can choose which [load balancing algorithm](host-pool-load-balancing.md) to use: breadth-first or depth-first. |
-|Maximum session limit| One. | As configured by the **Max session limit** value of the properties of a host pool. |
+|Maximum session limit| One. | As configured by the **Max session limit** value of the properties of a host pool. In rare cases under high concurrent connection load, when multiple users connect to the host pool at the same time, the number of sessions created on a session host can exceed the maximum session limit. |
|User assignment process| Users can either be directly assigned to session hosts or be automatically assigned to the first available session host. Users always have sessions on the session hosts they are assigned to. | Users aren't assigned to session hosts. After a user signs out and signs back in, their user session might get load balanced to a different session host. | |Scaling|None. | [Autoscale](autoscale-scaling-plan.md) for pooled host pools turns VMs on and off based on the capacity thresholds and schedules the customer defines. | |Windows Updates|Updated with Windows Updates, [System Center Configuration Manager (SCCM)](configure-automatic-updates.md), or other software distribution configuration tools.|Updated by redeploying session hosts from updated images instead of traditional updates.|
virtual-desktop Connect Windows Azure Virtual Desktop App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/connect-windows-azure-virtual-desktop-app.md
The Azure Virtual Desktop Store app is available from the Microsoft Store. To do
1. Once the app has finished downloading and installing, select **Open**. The first time the app runs, it will install the *Azure Virtual Desktop (HostApp)* dependency automatically.
+> [!IMPORTANT]
+> If you have the Azure Virtual Desktop app and the [Remote Desktop client for Windows](connect-windows.md) installed on the same device, you may see the message that begins **A version of this application called Azure Virtual Desktop was installed from the Microsoft Store**. Both apps are supported, and you have the option to choose **Continue anyway**. However, it could be confusing to use the same remote resource across both apps. We recommend using only one version of the app at a time.
+ ## Subscribe to a workspace A workspace combines all the desktops and applications that have been made available to you by your admin. To be able to see these in the Azure Virtual Desktop app, you need to subscribe to the workspace by following these steps:
virtual-desktop Connect Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/connect-windows.md
Once you've downloaded the Remote Desktop client, you'll need to install it by f
1. If you left the box for **Launch Remote Desktop when setup exits** selected, the Remote Desktop client will automatically open. Alternatively to launch the client after installation, use the Start menu to search for and select **Remote Desktop**.
+> [!IMPORTANT]
+> If you have the Remote Desktop client for Windows and the [Azure Virtual Desktop app](connect-windows-azure-virtual-desktop-app.md) installed on the same device, you may see the message that begins **A version of this application called Azure Virtual Desktop was installed from the Microsoft Store**. Both apps are supported, and you have the option to choose **Continue anyway**. However, it could be confusing to use the same remote resource across both apps. We recommend using only one version of the app at a time.
+ ## Subscribe to a workspace A workspace combines all the desktops and applications that have been made available to you by your admin. To be able to see these in the Remote Desktop client, you need to subscribe to the workspace by following these steps:
virtual-machines Azure Cli Change Subscription Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/azure-cli-change-subscription-marketplace.md
az group delete --name $destinationResourceGroup --subscription $destinationSubs
## Next steps - [Move VMs to another Azure region](../site-recovery/azure-to-azure-tutorial-migrate.md)-- [Move a VM to another subscription or resource group](./linux/move-vm.md)
+- [Move a VM to another subscription or resource group](/azure/azure-resource-manager/management/move-resource-group-and-subscription#use-azure-cli)
virtual-machines Capacity Reservation Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-create.md
Previously updated : 11/22/2022- Last updated : 04/24/2023+
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Uniform scale set :heavy_check_mark: Flexible scale sets
-Capacity Reservation is always created as part of a Capacity Reservation group. The first step is to create a group if a suitable one doesnΓÇÖt exist already, then create reservations. Once successfully created, reservations are immediately available for use with virtual machines. The capacity is reserved for your use as long as the reservation is not deleted.
+Capacity Reservation is always created as part of a Capacity Reservation group. The first step is to create a group if a suitable one doesn't exist already, then create reservations. Once successfully created, reservations are immediately available for use with virtual machines. The capacity is reserved for your use as long as the reservation isn't deleted.
-A well-formed request for Capacity Reservation group should always succeed as it does not reserve any capacity. It just acts as a container for reservations. However, a request for Capacity Reservation could fail if you do not have the required quota for the VM series or if Azure doesnΓÇÖt have enough capacity to fulfill the request. Either request more quota or try a different VM size, location, or zone combination.
+A well-formed request for a Capacity Reservation group should always succeed as it doesn't reserve any capacity. It just acts as a container for reservations. However, a request for Capacity Reservation could fail if you don't have the required quota for the VM series or if Azure doesn't have enough capacity to fulfill the request. Either request more quota or try a different VM size, location, or zone combination.
-A Capacity Reservation creation succeeds or fails in its entirety. For a request to reserve 10 instances, success is returned only if all 10 could be allocated. Otherwise, the Capacity Reservation creation will fail.
+A Capacity Reservation creation succeeds or fails in its entirety. For a request to reserve 10 instances, success is returned only if all 10 could be allocated. Otherwise, the Capacity Reservation creation fails.
## Considerations The Capacity Reservation must meet the following rules: -- The location parameter must match the location property for the parent Capacity Reservation group. A mismatch will result in an error. -- The VM size must be available in the target region. Otherwise, the reservation creation will fail.
+- The location parameter must match the location property for the parent Capacity Reservation group. A mismatch results in an error.
+- The VM size must be available in the target region. Otherwise, the reservation creation fails.
- The subscription must have available quota equal to or more than the quantity of VMs being reserved for the VM series and for the region overall. If needed, [request more quota](../azure-portal/supportability/per-vm-quota-requests.md).
- - As needed to satisfy existing quota limits, single VMs can be done in stages. Create a capacity reservation with a smaller quantity and reallocate that quantity of virtual machines. This will free up quota to increase the quantity reserved and add more virtual machines. Alternatively, if the subscription uses different VM sizes in the same series, reserve and redeploy VMs for the first size. Then add a reservation to the group for another size and redeploy the VMs for the new size to the reservation group. Repeat until complete.
- - For Scale Sets, available quota will be required unless the Scale Set or its VM instances are deleted, capacity is reserved, and the Scale Set instances are added using reserved capacity. If the Scale Set is updated using blue green deployment, then reserve the capacity and deploy the new Scale Set to the reserved capacity at the next update.
-- Each Capacity Reservation group can have exactly one reservation for a given VM size. For example, only one Capacity Reservation can be created for the VM size `Standard_D2s_v3`. Attempt to create a second reservation for `Standard_D2s_v3` in the same Capacity Reservation group will result in an error. However, another reservation can be created in the same group for other VM sizes, such as `Standard_D4s_v3`, `Standard_D8s_v3`, and so on.
+ - As needed to satisfy existing quota limits, single VMs can be done in stages. Create a capacity reservation with a smaller quantity and reallocate that quantity of virtual machines. This frees up quota to increase the quantity reserved and add more virtual machines. Alternatively, if the subscription uses different VM sizes in the same series, reserve and redeploy VMs for the first size. Then add a reservation to the group for another size and redeploy the VMs for the new size to the reservation group. Repeat until complete.
+ - For Scale Sets, available quota is required unless the Scale Set or its VM instances are deleted, capacity is reserved, and the Scale Set instances are added back using reserved capacity. If the Scale Set is updated using blue-green deployment, then reserve the capacity and deploy the new Scale Set to the reserved capacity at the next update.
+- Each Capacity Reservation group can have exactly one reservation for a given VM size. For example, only one Capacity Reservation can be created for the VM size `Standard_D2s_v3`. Attempting to create a second reservation for `Standard_D2s_v3` in the same Capacity Reservation group results in an error. However, another reservation can be created in the same group for other VM sizes, such as `Standard_D4s_v3`, `Standard_D8s_v3`, and so on.
- For a Capacity Reservation group that supports zones, each reservation type is defined by the combination of **VM size** and **zone**. For example, one Capacity Reservation for `Standard_D2s_v3` in `Zone 1`, another Capacity Reservation for `Standard_D2s_v3` in `Zone 2`, and a third Capacity Reservation for `Standard_D2s_v3` in `Zone 3` is supported.
Before you create a capacity reservation, you can check the reservation availabl
This group is created to contain reservations for the US East location.
- The group in the following example will only support regional reservations, because zones were not specified at the time of creation. To create a zonal group, pass an extra parameter *zones* in the request body:
+ The group in the following example only supports regional reservations, because zones weren't specified at the time of creation. To create a zonal group, pass an extra parameter *zones* in the request body:
```json {
Before you create a capacity reservation, you can check the reservation availabl
} ```
- The above request creates a reservation in the East US location for 5 quantities of the D2s_v3 VM size.
+ The above request creates a reservation in the East US location for five quantities of the D2s_v3 VM size.
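For reference, a minimal sketch of a reservation request body is shown below. The property names are assumptions based on the Capacity Reservations REST API and aren't taken from this article; check the current API reference before relying on them.

```json
{
    "location": "eastus",
    "sku": {
        "name": "Standard_D2s_v3",
        "capacity": 5
    }
}
```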
### [Portal](#tab/portal2)
Before you create a capacity reservation, you can check the reservation availabl
-g myResourceGroup ```
-1. Now create a Capacity Reservation group with `az capacity reservation group create`. The following example creates a group *myCapacityReservationGroup* in the East US location for all 3 availability zones.
+1. Now create a Capacity Reservation group with `az capacity reservation group create`. The following example creates a group *myCapacityReservationGroup* in the East US location for all three availability zones.
```azurecli-interactive az capacity reservation group create
Before you create a capacity reservation, you can check the reservation availabl
--zones 1 2 3 ```
-1. Once the Capacity Reservation group is created, create a new Capacity Reservation with `az capacity reservation create`. The following example creates *myCapacityReservation* for 5 quantities of Standard_D2s_v3 VM size in Zone 1 of East US location.
+1. Once the Capacity Reservation group is created, create a new Capacity Reservation with `az capacity reservation create`. The following example creates *myCapacityReservation* for five quantities of Standard_D2s_v3 VM size in Zone 1 of East US location.
```azurecli-interactive az capacity reservation create
Before you create a capacity reservation, you can check the reservation availabl
-Location "eastus" ```
-1. Now create a Capacity Reservation group with `New-AzCapacityReservationGroup`. The following example creates a group *myCapacityReservationGroup* in the East US location for all 3 availability zones.
+1. Now create a Capacity Reservation group with `New-AzCapacityReservationGroup`. The following example creates a group *myCapacityReservationGroup* in the East US location for all three availability zones.
```powershell-interactive New-AzCapacityReservationGroup
Before you create a capacity reservation, you can check the reservation availabl
-Name "myCapacityReservationGroup" ```
-1. Once the Capacity Reservation group is created, create a new Capacity Reservation with `New-AzCapacityReservation`. The following example creates *myCapacityReservation* for 5 quantities of Standard_D2s_v3 VM size in Zone 1 of East US location.
+1. Once the Capacity Reservation group is created, create a new Capacity Reservation with `New-AzCapacityReservation`. The following example creates *myCapacityReservation* for five quantities of Standard_D2s_v3 VM size in Zone 1 of East US location.
```powershell-interactive New-AzCapacityReservation
An [ARM template](../azure-resource-manager/templates/overview.md) is a Java
ARM templates let you deploy groups of related resources. In a single template, you can create Capacity Reservation group and Capacity Reservations. You can deploy templates through the Azure portal, Azure CLI, or Azure PowerShell, or from continuous integration/continuous delivery (CI/CD) pipelines.
-If your environment meets the prerequisites and you are familiar with using ARM templates, use any of the following templates:
+If your environment meets the prerequisites and you're familiar with using ARM templates, use any of the following templates:
- [Create Zonal Capacity Reservation](https://github.com/Azure/on-demand-capacity-reservation/blob/main/ZonalCapacityReservation.json) - [Create VM with Capacity Reservation](https://github.com/Azure/on-demand-capacity-reservation/blob/main/VirtualMachineWithReservation.json)
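As a rough illustration only, the fragment below sketches how a group and a single reservation could be declared together in the `resources` array of a template. The `apiVersion` value and exact property names are assumptions; the linked templates are the authoritative examples.

```json
{
    "type": "Microsoft.Compute/capacityReservationGroups",
    "apiVersion": "2022-08-01",
    "name": "myCapacityReservationGroup",
    "location": "eastus",
    "zones": [ "1" ]
},
{
    "type": "Microsoft.Compute/capacityReservationGroups/capacityReservations",
    "apiVersion": "2022-08-01",
    "name": "myCapacityReservationGroup/myCapacityReservation",
    "location": "eastus",
    "zones": [ "1" ],
    "dependsOn": [
        "[resourceId('Microsoft.Compute/capacityReservationGroups', 'myCapacityReservationGroup')]"
    ],
    "sku": {
        "name": "Standard_D2s_v3",
        "capacity": 5
    }
}
```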
To learn more, go to Azure PowerShell command [Get-AzCapacityReservation](/power
1. From the list, select the Capacity Reservation group name you just created 1. Select **Overview** 1. Select **Reservations**
-1. In this view, you will be able to see all the reservations in the group along with the VM size and quantity reserved
+1. In this view, you can see all the reservations in the group along with the VM size and quantity reserved
<!-- The three dashes above show that your section of tabbed content is complete. Do not remove them :) -->
virtual-machines Capacity Reservation Overallocate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-overallocate.md
Previously updated : 11/22/2022- Last updated : 04/24/2023+
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Uniform scale set :heavy_check_mark: Flexible scale sets
-Azure permits association of extra VMs beyond the reserved count of a Capacity Reservation to facilitate burst and other scale-out scenarios, without the overhead of managing around the limits of reserved capacity. The only difference is that the count of VMs beyond the quantity reserved does not receive the capacity availability SLA benefit. As long as Azure has available capacity that meets the virtual machine requirements, the extra allocations will succeed.
+Azure permits the association of extra VMs beyond the reserved quantity of a Capacity Reservation. These VMs allow for burst and other scale-out scenarios without the limits of reserved capacity. The only difference is that the count of VMs beyond the quantity reserved doesn't receive the capacity availability SLA benefit. As long as Azure has available capacity that meets the virtual machine requirements, the extra allocation succeeds.
The Instance View of a Capacity Reservation group provides a snapshot of usage for each member Capacity Reservation. You can use the Instance View to see how overallocation works.
This article assumes you have created a Capacity Reservation group (`myCapacityR
## Instance View for Capacity Reservation group
-The Instance View for a Capacity Reservation group will look like this:
+The Instance View for a Capacity Reservation group looks like this:
```rest GET
https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{reso
Let's say we create another virtual machine named *myVM2* and associate it with the above Capacity Reservation group.
-The Instance View for the Capacity Reservation group will now look like this:
+The Instance View for the Capacity Reservation group now looks like this:
```json {
The Instance View for the Capacity Reservation group will now look like this:
Notice that the length of `virtualMachinesAllocated` (2) is greater than `capacity` (1). This valid state is referred to as *overallocated*. > [!IMPORTANT]
-> Azure will not stop allocations just because a Capacity Reservation is fully consumed. Auto-scale rules, temporary scale-out, and related requirements will work beyond the quantity of reserved capacity as long as Azure has available capacity and other constraints such as available quota are met.
+> Azure won't stop allocations just because a Capacity Reservation is fully consumed. Auto-scale rules, temporary scale-out, and related requirements will work beyond the quantity of reserved capacity as long as Azure has available capacity and other constraints such as available quota are met.
## States and considerations
There are three valid states for a given Capacity Reservations:
| State | Status | Considerations | |||| | Reserved capacity available | Length of `virtualMachinesAllocated` < `capacity` | Is all the reserved capacity needed? Optionally reduce the capacity to reduce costs. |
-| Reservation consumed | Length of `virtualMachinesAllocated` == `capacity` | Additional VMs will not receive the capacity SLA unless some existing VMs are deallocated. Optionally try to increase the capacity so extra planned VMs will receive an SLA. |
-| Reservation overallocated | Length of `virtualMachinesAllocated` > `capacity` | Additional VMs will not receive the capacity SLA. Also, the quantity of VMs (Length of `virtualMachinesAllocated` ΓÇô `capacity`) will not receive a capacity SLA if deallocated. Optionally increase the capacity to add capacity SLA to more of the existing VMs. |
+| Reservation consumed | Length of `virtualMachinesAllocated` == `capacity` | Additional VMs won't receive the capacity SLA unless some existing VMs are deallocated. Optionally try to increase the capacity so extra planned VMs will receive an SLA. |
+| Reservation overallocated | Length of `virtualMachinesAllocated` > `capacity` | Additional VMs won't receive the capacity SLA. Also, the quantity of VMs (Length of `virtualMachinesAllocated` - `capacity`) won't receive a capacity SLA if deallocated. Optionally increase the capacity to add capacity SLA to more of the existing VMs. |
## Next steps
virtual-machines Capacity Reservation Remove Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-remove-vm.md
Previously updated : 11/22/2022- Last updated : 04/24/2023+
virtual-machines Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/custom-domain.md
Last updated 02/23/2023-+
virtual-machines Cli Ps Findimage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cli-ps-findimage.md
You can also browse available images and offers using the [Azure Marketplace](ht
A Marketplace image in Azure has the following attributes:
-* **Publisher**: The organization that created the image. Examples: Canonical, RedHat, SUSE
-* **Offer**: The name of a group of related images created by a publisher. Examples: UbuntuServer, RHEL, sles-12-sp5
-* **SKU**: An instance of an offer, such as a major release of a distribution. Examples: 18.04-LTS, 7_9, gen2
+* **Publisher**: The organization that created the image. Examples: Canonical, RedHat, SUSE.
+* **Offer**: The name of a group of related images created by a publisher. Examples: 0001-com-ubuntu-server-jammy, RHEL, sles-15-sp3.
+* **SKU**: An instance of an offer, such as a major release of a distribution. Examples: 22_04-lts-gen2, 8-lvm-gen2, gen2.
* **Version**: The version number of an image SKU. These values can be passed individually or as an image *URN*, combining the values separated by the colon (:). For example: *Publisher*:*Offer*:*Sku*:*Version*. You can replace the version number in the URN with `latest` to use the latest version of the image.
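For example, a URN can be passed straight to `az vm create` when creating a VM. This is a sketch with placeholder resource names; the image URN shown corresponds to one of the Ubuntu aliases discussed below.

```azurecli-interactive
az vm create \
    --resource-group myResourceGroup \
    --name myVM \
    --image Canonical:0001-com-ubuntu-server-jammy:22_04-lts-gen2:latest \
    --generate-ssh-keys
```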
You can run the [az vm image list --all](/cli/azure/vm/image) to see all of the
az vm image list --output table ```
-The output includes the image URN. You can also use the *UrnAlias*, which is a shortened version created for popular images like *UbuntuLTS*.
+The output includes the image URN. You can also use the *UrnAlias*, which is a shortened version created for popular images like *Ubuntu2204*.
+The Linux image alias names and their details output by this command are:
```output Architecture Offer Publisher Sku Urn UrnAlias Version -- - - - -- x64 CentOS OpenLogic 7.5 OpenLogic:CentOS:7.5:latest CentOS latest
+x64 CentOS OpenLogic 8_5-gen2 OpenLogic:CentOS:8_5-gen2:latest CentOS85Gen2 latest
x64 debian-10 Debian 10 Debian:debian-10:10:latest Debian latest
+x64 Debian11 Debian 11-backports-gen2 Debian:debian-11:11-backports-gen2:latest Debian-11 latest
x64 flatcar-container-linux-free kinvolk stable kinvolk:flatcar-container-linux-free:stable:latest Flatcar latest
+x64 flatcar-container-linux-free kinvolk stable-gen2 kinvolk:flatcar-container-linux-free:stable-gen2:latest FlatcarLinuxFreeGen2 latest
x64 opensuse-leap-15-3 SUSE gen2 SUSE:opensuse-leap-15-3:gen2:latest openSUSE-Leap latest
+x64 opensuse-leap-15-4 SUSE gen2 SUSE:opensuse-leap-15-4:gen2:latest OpenSuseLeap154Gen2 latest
x64 RHEL RedHat 7-LVM RedHat:RHEL:7-LVM:latest RHEL latest
+x64 RHEL RedHat 8-lvm-gen2 RedHat:RHEL:8-lvm-gen2:latest RHELRaw8LVMGen2 latest
x64 sles-15-sp3 SUSE gen2 SUSE:sles-15-sp3:gen2:latest SLES latest x64 UbuntuServer Canonical 18.04-LTS Canonical:UbuntuServer:18.04-LTS:latest UbuntuLTS latest
-[...]
+x64 0001-com-ubuntu-server-jammy Canonical 22_04-lts-gen2 Canonical:0001-com-ubuntu-server-jammy:22_04-lts-gen2:latest Ubuntu2204 latest
```
+The Windows image alias names and their details output by this command are:
+
+```output
+Architecture Offer Publisher Sku Urn Alias Version
+-- - - - --
+x64 WindowsServer MicrosoftWindowsServer 2022-Datacenter MicrosoftWindowsServer:WindowsServer:2022-Datacenter:latest Win2022Datacenter latest
+x64 WindowsServer MicrosoftWindowsServer 2022-datacenter-azure-edition-core MicrosoftWindowsServer:WindowsServer:2022-datacenter-azure-edition-core:latest Win2022AzureEditionCore latest
+x64 WindowsServer MicrosoftWindowsServer 2019-Datacenter MicrosoftWindowsServer:WindowsServer:2019-Datacenter:latest Win2019Datacenter latest
+x64 WindowsServer MicrosoftWindowsServer 2016-Datacenter MicrosoftWindowsServer:WindowsServer:2016-Datacenter:latest Win2016Datacenter latest
+x64 WindowsServer MicrosoftWindowsServer 2012-R2-Datacenter MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:latest Win2012R2Datacenter latest
+x64 WindowsServer MicrosoftWindowsServer 2012-Datacenter MicrosoftWindowsServer:WindowsServer:2012-Datacenter:latest Win2012Datacenter latest
+```
++ ## Find specific images You can filter the list of images by `--publisher` or another parameter to limit the results.
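For example, here's a sketch of narrowing the list to a single publisher and offer (the values shown are illustrative):

```azurecli-interactive
az vm image list \
    --publisher Canonical \
    --offer 0001-com-ubuntu-server-jammy \
    --all \
    --output table
```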
virtual-machines No Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/no-agent.md
Previously updated : 09/01/2020 Last updated : 04/11/2023 -+
Microsoft Azure provides provisioning agents for Linux VMs in the form of the [walinuxagent](https://github.com/Azure/WALinuxAgent) or [cloud-init](https://github.com/canonical/cloud-init) (recommended). But there could be a scenario when you don't want to use either of these applications for your provisioning agent, such as: -- Your Linux distro/version does not support cloud-init/Linux Agent.
+- Your Linux distro/version doesn't support cloud-init/Linux Agent.
- You require specific VM properties to be set, such as hostname. > [!NOTE] > > If you do not require any properties to be set or any form of provisioning to happen, you should consider creating a specialized image.
-This article shows how you can setup your VM image to satisfy the Azure platform requirements and set the hostname, without installing a provisioning agent.
+This article shows how you can set up your VM image to satisfy the Azure platform requirements and set the hostname, without installing a provisioning agent.
## Networking and reporting ready
-In order to have your Linux VM communicating with Azure components, you will require a DHCP client to retrieve a host IP from the virtual network, as well as DNS resolution and route management. Most distros ship with these utilities out-of-the-box. Tools that have been tested on Azure by Linux distro vendors include `dhclient`, `network-manager`, `systemd-networkd` and others.
+In order to have your Linux VM communicate with Azure components, a DHCP client is required. The client is used to retrieve a host IP address from the virtual network; DNS resolution and route management are also required. Most distros ship with these utilities out-of-the-box. Tools that are tested on Azure by Linux distro vendors include `dhclient`, `network-manager`, `systemd-networkd`, and others.
> [!NOTE] > > Currently creating generalized images without a provisioning agent only supports DHCP-enabled VMs.
-After networking has been setup and configured, you must "report ready". This will tell Azure that the VM has been successfully provisioning.
+After networking has been set up and configured, the VM must "report ready". This tells Azure that the VM has been successfully provisioned.
> [!IMPORTANT] >
After networking has been setup and configured, you must "report ready". This wi
## Demo/sample
-This demo will show how you can take an existing Marketplace image (in this case, a Debian Buster VM) and remove the Linux Agent (walinuxagent), but also creating the most basic process to report to Azure that the VM is "ready".
+This demo shows how to take an existing Marketplace image (in this case, a Debian Buster VM), remove the Linux Agent (walinuxagent), and add a minimal custom script that reports to Azure that the VM is "ready".
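As a rough illustration of what such a reporting process does, the sketch below is based on the Azure wireserver goal-state protocol. It isn't the exact script used by this demo; the endpoint paths and XML schema should be verified against the sample referenced in the following steps.

```bash
#!/bin/bash
# Query the goal state from the Azure wireserver (well-known address 168.63.129.16)
goalstate=$(curl -s -H "x-ms-version: 2012-11-30" "http://168.63.129.16/machine?comp=goalstate")
container_id=$(echo "$goalstate" | grep -oP '(?<=<ContainerId>)[^<]+')
instance_id=$(echo "$goalstate" | grep -oP '(?<=<InstanceId>)[^<]+')

# Report the VM as "Ready" so the Azure platform marks provisioning as successful
curl -s -H "x-ms-version: 2012-11-30" -H "Content-Type: text/xml;charset=utf-8" \
  -d "<Health>
        <GoalStateIncarnation>1</GoalStateIncarnation>
        <Container>
          <ContainerId>${container_id}</ContainerId>
          <RoleInstanceList>
            <Role>
              <InstanceId>${instance_id}</InstanceId>
              <Health><State>Ready</State></Health>
            </Role>
          </RoleInstanceList>
        </Container>
      </Health>" \
  "http://168.63.129.16/machine?comp=health"
```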
### Create the resource group and base VM:
$ az vm create \
### Remove the image provisioning Agent
-Once the VM is provisioning, you can SSH into it and remove the Linux Agent:
+Once the VM is provisioned, you can connect to it via SSH and remove the Linux Agent:
```bash $ sudo apt purge -y waagent
$ az image create \
--name demo1img ```
-Now we are ready to create a new VM (or multiple VMs) from the image:
+Now we're ready to create a new VM from the image. The same image can also be used to create multiple VMs:
```azurecli $ IMAGE_ID=$(az image show -g demo1 -n demo1img --query id -o tsv)
$ az vm create \
> > It is important to set `--enable-agent` to `false` because walinuxagent doesn't exist on this VM that is going to be created from the image.
-This VM should provisioning successfully. Logging into the newly-provisioning VM, you should be able to see the output of the report ready systemd service:
+The VM should be provisioned successfully. After logging into the newly provisioned VM, you should be able to see the output of the report ready systemd service:
```bash $ sudo journalctl -u azure-provisioning.service
Jun 11 20:28:56 thstringnopa2 systemd[1]: Started Azure Provisioning.
## Support
-If you implement your own provisioning code/agent, then you own the support of this code, Microsoft support will only investigate issues relating to the provisioning interfaces not being available. We are continually making improvements and changes in this area, so you must monitor for changes in cloud-init and Azure Linux Agent for provisioning API changes.
+If you implement your own provisioning code/agent, then you own the support of this code; Microsoft support will only investigate issues relating to the provisioning interfaces not being available. We're continually making improvements and changes in this area, so you must monitor cloud-init and the Azure Linux Agent for changes to the provisioning APIs.
## Next steps
virtual-machines Redhat Create Upload Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/redhat-create-upload-vhd.md
vm-linux Previously updated : 03/15/2023 Last updated : 04/25/2023
This section assumes that you've already obtained an ISO file from the Red Hat w
sudo ln -s /etc/udev/rules.d/75-persistent-net-generator.rules sudo rm -f /etc/udev/rules.d/70-persistent-net.rules ```-
+> [!NOTE]
+> When using Accelerated Networking (AN), the synthetic interface that is created must be configured to be unmanaged by using a udev rule. This prevents NetworkManager from assigning the same IP address to it as the primary interface. <br>
+ To apply it:<br>
+```bash
+cat <<EOF | sudo tee /etc/udev/rules.d/68-azure-sriov-nm-unmanaged.rules
+# Accelerated Networking on Azure exposes a new SRIOV interface to the VM.
+# This interface is transparently bonded to the synthetic interface,
+# so NetworkManager should just ignore any SRIOV interfaces.
+SUBSYSTEM=="net", DRIVERS=="hv_pci", ACTION=="add", ENV{NM_UNMANAGED}="1"
+EOF
+```
7. Ensure that the network service will start at boot time by running the following command: ```bash
This section shows you how to use KVM to prepare a [RHEL 6](#rhel-6-using-kvm) o
sudo ln -s /etc/udev/rules.d/75-persistent-net-generator.rules sudo rm -f /etc/udev/rules.d/70-persistent-net.rules ```-
+> [!NOTE]
+> When using Accelerated Networking (AN), the synthetic interface that is created must be configured to be unmanaged by using a udev rule. This prevents NetworkManager from assigning the same IP address to it as the primary interface. <br>
+ To apply it:<br>
+```bash
+cat <<EOF | sudo tee /etc/udev/rules.d/68-azure-sriov-nm-unmanaged.rules
+# Accelerated Networking on Azure exposes a new SRIOV interface to the VM.
+# This interface is transparently bonded to the synthetic interface,
+# so NetworkManager should just ignore any SRIOV interfaces.
+SUBSYSTEM=="net", DRIVERS=="hv_pci", ACTION=="add", ENV{NM_UNMANAGED}="1"
+EOF
+```
7. Ensure that the network service will start at boot time by running the following command: ```bash
This section assumes that you have already installed a RHEL virtual machine in V
sudo ln -s /etc/udev/rules.d/75-persistent-net-generator.rules sudo rm -f /etc/udev/rules.d/70-persistent-net.rules ```
+> [!NOTE]
+> When using Accelerated Networking (AN), the synthetic interface that is created must be configured to be unmanaged by using a udev rule. This prevents NetworkManager from assigning the same IP address to it as the primary interface. <br>
+ To apply it:<br>
+```bash
+cat <<EOF | sudo tee /etc/udev/rules.d/68-azure-sriov-nm-unmanaged.rules
+# Accelerated Networking on Azure exposes a new SRIOV interface to the VM.
+# This interface is transparently bonded to the synthetic interface,
+# so NetworkManager should just ignore any SRIOV interfaces.
+SUBSYSTEM=="net", DRIVERS=="hv_pci", ACTION=="add", ENV{NM_UNMANAGED}="1"
+EOF
+```
5. Ensure that the network service will start at boot time by running the following command:
virtual-machines Cli Ps Findimage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/cli-ps-findimage.md
A Marketplace image in Azure has the following attributes:
These values can be passed individually or as an image *URN*, combining the values separated by the colon (:). For example: *Publisher*:*Offer*:*Sku*:*Version*. You can replace the version number in the URN with `latest` to use the latest version of the image.
-If the image publisher provides additional license and purchase terms, then you must accept those before you can use the image. For more information, see [Accept purchase plan terms](#accept-purchase-plan-terms).
+If the image publisher provides other license and purchase terms, then you must accept those before you can use the image. For more information, see [Accept purchase plan terms](#accept-purchase-plan-terms).
+
+## Default Images
+
+PowerShell offers several predefined image aliases to make the resource creation process easier. There are different images for resources with either a Windows or Linux operating system. Several PowerShell cmdlets, such as `New-AzVM` and `New-AzVmss`, accept an alias name for the image parameter.
+For example:
+
+```powershell
+$rgname = "<Resource Group Name>"
+$location = "<Azure Region>"
+$vmName = "v" + $rgname
+$domainNameLabel = "d" + $rgname
+$securePassword = "<Password>" | ConvertTo-SecureString -AsPlainText -Force
+$username = "<Username>"
+$credential = New-Object System.Management.Automation.PSCredential ($username, $securePassword)
+New-AzVM -ResourceGroupName $rgname -Location $location -Name $vmName -Image CentOS85Gen2 -Credential $credential -DomainNameLabel $domainNameLabel
+```
+
+The Linux image alias names and their details are:
+```output
+Alias Architecture Offer Publisher Sku Urn Version
+-- -- - - -
+CentOS x64 CentOS OpenLogic 7.5 OpenLogic:CentOS:7.5:latest latest
+CentOS85Gen2 x64 CentOS OpenLogic 8_5-gen2 OpenLogic:CentOS:8_5-gen2:latest latest
+Debian11 x64 Debian-11 Debian 11-backports-gen2 Debian:debian-11:11-backports-gen2:latest latest
+Debian10 x64 Debian-10 Debian 10 Debian:debian-10:10:latest latest
+FlatcarLinuxFreeGen2 x64 flatcar-container-linux-free kinvolk stable kinvolk:flatcar-container-linux-free:stable:latest latest
+openSUSE-Leap x64 opensuse-leap-15-3 SUSE gen2 SUSE:opensuse-leap-15-3:gen2:latest latest
+OpenSuseLeap154Gen2 x64 opensuse-leap-15-4 SUSE gen2 SUSE:opensuse-leap-15-4:gen2:latest latest
+RHEL x64 RHEL RedHat 7-LVM RedHat:RHEL:7-LVM:latest latest
+RHELRaw8LVMGen2 x64 RHEL RedHat 8-lvm-gen2 RedHat:RHEL:8-lvm-gen2:latest latest
+SLES x64 sles-15-sp3 SUSE gen2 SUSE:sles-15-sp3:gen2:latest latest
+UbuntuLTS x64 UbuntuServer Canonical 16.04-LTS Canonical:UbuntuServer:16.04-LTS:latest latest
+Ubuntu2204 x64 0001-com-ubuntu-server-jammy Canonical 22_04-lts-gen2 Canonical:0001-com-ubuntu-server-jammy:22_04-lts-gen2:latest latest
+```
+
+The Windows image alias names and their details are:
+```output
+Alias Architecture Offer Publisher Sku Urn Version
+-- -- - - -
+Win2022Datacenter x64 WindowsServer MicrosoftWindowsServer 2022-Datacenter MicrosoftWindowsServer:WindowsServer:2022-Datacenter:latest latest
+Win2022AzureEditionCore x64 WindowsServer MicrosoftWindowsServer 2022-datacenter-azure-edition-core MicrosoftWindowsServer:WindowsServer:2022-datacenter-azure-edition-core:latest latest
+Win10 x64 Windows MicrosoftVisualStudio Windows-10-N-x64 MicrosoftVisualStudio:Windows:Windows-10-N-x64:latest latest
+Win2019Datacenter x64 WindowsServer MicrosoftWindowsServer 2019-Datacenter MicrosoftWindowsServer:WindowsServer:2019-Datacenter:latest latest
+Win2016Datacenter x64 WindowsServer MicrosoftWindowsServer 2016-Datacenter MicrosoftWindowsServer:WindowsServer:2016-Datacenter:latest latest
+Win2012R2Datacenter x64 WindowsServer MicrosoftWindowsServer 2012-R2-Datacenter MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:latest latest
+Win2012Datacenter x64 WindowsServer MicrosoftWindowsServer 2012-Datacenter MicrosoftWindowsServer:WindowsServer:2012-Datacenter:latest latest
+```
## List images
-You can use PowerShell to narrow down a list of images. Replace the values of the variables to meet your needs.
+You can use PowerShell to narrow down a list of images if you want to use a specific image that isn't provided by default. Replace the values of the variables below to meet your needs.
1. List the image publishers using [Get-AzVMImagePublisher](/powershell/module/az.compute/get-azvmimagepublisher).
You can use PowerShell to narrow down a list of images. Replace the values of th
Now you can combine the selected publisher, offer, SKU, and version into a URN (values separated by :). Pass this URN with the `-Image` parameter when you create a VM with the [New-AzVM](/powershell/module/az.compute/new-azvm) cmdlet. You can also replace the version number in the URN with `latest` to get the latest version of the image.
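For example, here's a sketch of building the URN from the values selected in the previous steps and passing it to `New-AzVM`; the resource group, VM name, and credential prompt are placeholders.

```powershell
# Combine publisher, offer, and SKU into a URN and use the latest image version
$urn = "$pubName:$offerName:$skuName:latest"
New-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM" -Image $urn -Credential (Get-Credential)
```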
-If you deploy a VM with a Resource Manager template, then you'll set the image parameters individually in the `imageReference` properties. See the [template reference](/azure/templates/microsoft.compute/virtualmachines).
+If you deploy a VM with a Resource Manager template, then you must set the image parameters individually in the `imageReference` properties. See the [template reference](/azure/templates/microsoft.compute/virtualmachines).
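In a template, those values map to the `imageReference` object, roughly as follows (the values shown are illustrative):

```json
"imageReference": {
    "publisher": "MicrosoftWindowsServer",
    "offer": "WindowsServer",
    "sku": "2022-Datacenter",
    "version": "latest"
}
```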
## View purchase plan properties
-Some VM images in the Azure Marketplace have additional license and purchase terms that you must accept before you can deploy them programmatically. You'll need to accept the image's terms once per subscription.
+Some VM images in the Azure Marketplace have other license and purchase terms that you must accept before you can deploy them programmatically. You need to accept the image's terms once per subscription.
To view an image's purchase plan information, run the `Get-AzVMImage` cmdlet. If the `PurchasePlan` property in the output is not `null`, the image has terms you need to accept before programmatic deployment.
$version = "2016.127.20170406"
Get-AzVMImage -Location $locName -PublisherName $pubName -Offer $offerName -Skus $skuName -Version $version ```
-The output will look similar to the following:
+The output is similar to the following:
```output Id : /Subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/Providers/Microsoft.Compute/Locations/westus/Publishers/MicrosoftWindowsServer/ArtifactTypes/VMImage/Offers/WindowsServer/Skus/2016-Datacenter/Versions/2019.0.20190115
The example below shows a similar command for the *Data Science Virtual Machine
Get-AzVMImage -Location "westus" -PublisherName "microsoft-ads" -Offer "windows-data-science-vm" -Skus "windows2016" -Version "0.2.02" ```
-The output will look similar to the following:
+The output is similar to the following:
``` Id : /Subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/Providers/Microsoft.Compute/Locations/westus/Publishers/microsoft-ads/ArtifactTypes/VMImage/Offers/windows-data-science-vm/Skus/windows2016/Versions/19.01.14
$vm = Get-azvm `
$vm.Plan ```
-If you didn't get the plan information before the original VM was deleted, you can file a [support request](https://portal.azure.com/#create/Microsoft.Support). They will need the VM name, subscription ID and the time stamp of the delete operation.
+If you didn't get the plan information before the original VM was deleted, you can file a [support request](https://portal.azure.com/#create/Microsoft.Support). The support request must include, at minimum, the VM name, subscription ID, and the time stamp of the delete operation.
To create a VM using a VHD, refer to this article [Create a VM from a specialized VHD](create-vm-specialized.md) and add in a line to add the plan information to the VM configuration using [Set-AzVMPlan](/powershell/module/az.compute/set-azvmplan) similar to the following:
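A sketch of that step is shown below; the plan values are placeholders and should come from the `$vm.Plan` output captured earlier.

```powershell
# Attach the purchase plan information to the VM configuration before calling New-AzVM
$vmConfig = Set-AzVMPlan -VM $vmConfig `
    -Publisher "microsoft-ads" `
    -Product "windows-data-science-vm" `
    -Name "windows2016"
```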
virtual-machines Oracle Oci Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-oci-overview.md
Previously updated : 06/01/2020 Last updated : 04/11/2023 # Oracle application solutions integrating Microsoft Azure and Oracle Cloud Infrastructure
-**Applies to:** :heavy_check_mark: Linux VMs
+**Applies to:** :heavy_check_mark: Linux VMs
-Microsoft and Oracle have partnered to provide low latency, high throughput cross-cloud connectivity, allowing you to take advantage of the best of both clouds.
+Microsoft and Oracle have partnered to provide low latency, high throughput cross-cloud connectivity, allowing you to take advantage of the best of both clouds.
-Using this cross-cloud connectivity, you can partition a multi-tier application to run your database tier on Oracle Cloud Infrastructure (OCI), and the application and other tiers on Microsoft Azure. The experience is similar to running the entire solution stack in a single cloud.
+Using this cross-cloud connectivity, you can partition a multi-tier application to run your database tier on Oracle Cloud Infrastructure (OCI), and the application and other tiers on Microsoft Azure. The experience is similar to running the entire solution stack in a single cloud.
-If you are interested in running your middleware, including WebLogic Server, on Azure infrastructure, but have the Oracle database running within OCI, see [WebLogic Server Azure Applications](oracle-weblogic.md).
+If you're interested in running your middleware, including WebLogic Server, on Azure infrastructure, but have the Oracle database running within OCI, see [WebLogic Server Azure Applications](oracle-weblogic.md).
-If you are interested in deploying Oracle solutions entirely on Azure infrastructure, see [Oracle VM images and their deployment on Microsoft Azure](oracle-vm-solutions.md).
+If you're interested in deploying Oracle solutions entirely on Azure infrastructure, see [Oracle VM images and their deployment on Microsoft Azure](oracle-vm-solutions.md).
## Scenario overview
-Cross-cloud connectivity provides a solution for you to run OracleΓÇÖs industry-leading applications, and your own custom applications, on Azure virtual machines while enjoying the benefits of hosted database services in OCI.
+*Cross-cloud connectivity* provides a solution for you to run Oracle's industry-leading applications and your own custom applications on Azure virtual machines while enjoying the benefits of hosted database services in OCI.
-As of May 2020, the following applications are certified in a cross-cloud configuration:
+The following applications are certified in a cross-cloud configuration:
-* E-Business Suite
-* JD Edwards EnterpriseOne
-* PeopleSoft
-* Oracle Retail applications
-* Oracle Hyperion Financial Management
+- E-Business Suite
+- JD Edwards EnterpriseOne
+- PeopleSoft
+- Oracle Retail applications
+- Oracle Hyperion Financial Management
-The following diagram is a high-level overview of the connected solution. For simplicity, the diagram shows only an application tier and a data tier. Depending on the application architecture, your solution could include additional tiers such as a WebLogic Server cluster or web tier in Azure. For more information, see the following sections.
+The following diagram is a high-level overview of the connected solution. For simplicity, the diagram shows only an application tier and a data tier. Depending on the application architecture, your solution could include other tiers such as a WebLogic Server cluster or web tier in Azure.
-![Azure OCI solution overview](media/oracle-oci-overview/crosscloud.png)
-## Region Availability
+## Region availability
Cross-cloud connectivity is limited to the following regions:
-* Azure East US (EastUS) & OCI Ashburn, VA (US East)
-* Azure UK South (UKSouth) & OCI London (UK South)
-* Azure Canada Central (CanadaCentral) & OCI Toronto (Canada Southeast)
-* Azure West Europe (WestEurope) & OCI Amsterdam (Netherlands Northwest)
-* Azure Japan East (JapanEast) & OCI Tokyo (Japan East)
-* Azure West US (WestUS) & OCI San Jose (US West)
-* Azure Germany West Central & OCI Germany Central (Frankfurt)
-* Azure West US 3 & OCI US West ((Phoenix)
-* Azure Korea Central region & OCI South Korea Central (Seoul)
-* Azure Southeast Asia region & OCI Singapore (Singapore)
-* Azure Brazil South (BrazilSouth) & OCI Vinhedo (Brazil Southeast)
+- Azure Brazil South & OCI Vinhedo (Brazil Southeast)
+- Azure Canada Central & OCI Toronto (Canada Southeast)
+- Azure East US & OCI Ashburn, VA (US East)
+- Azure Germany West Central & OCI Germany Central (Frankfurt)
+- Azure Japan East & OCI Tokyo (Japan East)
+- Azure Korea Central region & OCI South Korea Central (Seoul)
+- Azure South Africa North & South Africa Central (Johannesburg)
+- Azure Southeast Asia region & OCI Singapore (Singapore)
+- Azure UK South & OCI London (UK South)
+- Azure West Europe & OCI Amsterdam (Netherlands Northwest)
+- Azure West US & OCI San Jose (US West)
+- Azure West US 3 & OCI US West (Phoenix)
## Networking
-Enterprise customers often choose to diversify and deploy workloads over multiple clouds for various business and operational reasons. To diversify, customers interconnect cloud networks using the internet, IPSec VPN, or using the cloud providerΓÇÖs direct connectivity solution via your on-premises network. Interconnecting cloud networks can require significant investments in time, money, design, procurement, installation, testing, and operations.
+Enterprise customers often choose to diversify and deploy workloads over multiple clouds for various business and operational reasons. To diversify, you can interconnect cloud networks using the internet, IPSec VPN, or using the cloud provider's direct connectivity solution with your on-premises network. Interconnecting cloud networks can require significant investments in time, money, design, procurement, installation, testing, and operations.
-To address these customer challenges, Oracle and Microsoft have enabled an integrated multi-cloud experience. Cross-cloud networking is established by connecting an [ExpressRoute](../../../expressroute/expressroute-introduction.md) circuit in Microsoft Azure with a [FastConnect](https://docs.cloud.oracle.com/iaas/Content/Network/Concepts/fastconnectoverview.htm) circuit in OCI. This connectivity is possible where an Azure ExpressRoute peering location is in proximity to or in the same peering location as the OCI FastConnect. This setup allows for secure, fast connectivity between the two clouds without the need for an intermediate service provider.
+To address these challenges, Oracle and Microsoft have enabled an integrated multicloud experience. Establish *cross-cloud networking* by connecting an [ExpressRoute](../../../expressroute/expressroute-introduction.md) circuit in Microsoft Azure with a [FastConnect](https://docs.cloud.oracle.com/iaas/Content/Network/Concepts/fastconnectoverview.htm) circuit in OCI. This connectivity is possible where an Azure ExpressRoute peering location is in proximity to or in the same peering location as the OCI FastConnect. This setup allows for secure, fast connectivity between the two clouds without the need for an intermediate service provider.
-Using ExpressRoute and FastConnect, customers can peer a virtual network in Azure with a virtual cloud network in OCI, provided that the private IP address space does not overlap. Peering the two networks allows a resource in the virtual network to communicate to a resource in the OCI virtual cloud network as if they are both in the same network.
+Using ExpressRoute and FastConnect, you can peer a virtual network in Azure with a virtual cloud network in OCI, if the private IP address space doesn't overlap. Peering the two networks allows a resource in the virtual network to communicate to a resource in the OCI virtual cloud network as if they're both in the same network.
## Network security
-Network security is a crucial component of any enterprise application, and is central to this multi-cloud solution. Any traffic going over ExpressRoute and FastConnect passes over a private network. This configuration allows for secure communication between an Azure virtual network and an Oracle virtual cloud network. You don't need to provide a public IP address to any virtual machines in Azure. Similarly, you don't need an internet gateway in OCI. All communication happens via the private IP address of the machines.
+Network security is a crucial component of any enterprise application, and is central to this multicloud solution. Any traffic going over ExpressRoute and FastConnect passes over a private network. This configuration allows for secure communication between an Azure virtual network and an Oracle virtual cloud network. You don't need to provide a public IP address to any virtual machines in Azure. Similarly, you don't need an internet gateway in OCI. All communication happens by using the private IP address of the machines.
-Additionally, you can set up [security lists](https://docs.cloud.oracle.com/iaas/Content/Network/Concepts/securitylists.htm) on your OCI virtual cloud network and security rules (attached to Azure [network security groups](../../../virtual-network/network-security-groups-overview.md)). Use these rules to control the traffic flowing between machines in the virtual networks. Network security rules can be added at a machine level, at a subnet level, as well as at the virtual network level.
+Additionally, you can set up [security lists](https://docs.cloud.oracle.com/iaas/Content/Network/Concepts/securitylists.htm) on your OCI virtual cloud network and security rules, attached to Azure [network security groups](../../../virtual-network/network-security-groups-overview.md). Use these rules to control the traffic flowing between machines in the virtual networks. You can add network security rules at a machine level, at a subnet level, and at the virtual network level.
+
+The [WebLogic Server Azure Applications](oracle-weblogic.md) each create a network security group preconfigured to work with WebLogic Server's port configurations.
-The [WebLogic Server Azure Applications](oracle-weblogic.md) each create a network security group pre-configured to work with WebLogic Server's port configurations.
-
## Identity
-Identity is one of the core pillars of the partnership between Microsoft and Oracle. Significant work has been done to integrate [Oracle Identity Cloud Service](https://docs.oracle.com/en/cloud/paas/identity-cloud/https://docsupdatetracker.net/index.html) (IDCS) with [Azure Active Directory](../../../active-directory/index.yml) (Azure AD). Azure AD is MicrosoftΓÇÖs cloud-based identity and access management service. Your users can sign in and access various resources with help from Azure AD. Azure AD also allows you to manage your users and their permissions.
+Identity is one of the core pillars of the partnership between Microsoft and Oracle. Significant work has been done to integrate [Oracle Identity Cloud Service](https://docs.oracle.com/en/cloud/paas/identity-cloud/https://docsupdatetracker.net/index.html) (IDCS) with [Azure Active Directory](../../../active-directory/index.yml) (Azure AD). Azure AD is Microsoft's cloud-based identity and access management service. Your users can sign in and access various resources with help from Azure AD. Azure AD also allows you to manage your users and their permissions.
-Currently, this integration allows you to manage in one central location, which is Azure Active Directory. Azure AD synchronizes any changes in the directory with the corresponding Oracle directory and is used for single sign-on to cross-cloud Oracle solutions.
+Currently, this integration allows you to manage identities in one central location, Azure Active Directory. Azure AD synchronizes any changes in the directory with the corresponding Oracle directory and is used for single sign-on to cross-cloud Oracle solutions.
## Next steps
-Get started with a [cross-cloud network](configure-azure-oci-networking.md) between Azure and OCI.
-
-For more information and whitepapers about OCI, see the [Oracle Cloud](https://docs.cloud.oracle.com/iaas/Content/home.htm) documentation.
+- Get started with a [cross-cloud network](configure-azure-oci-networking.md) between Azure and OCI.
+- For more information and whitepapers about OCI, see [Oracle Cloud Infrastructure](https://docs.cloud.oracle.com/iaas/Content/home.htm).
virtual-network Tutorial Create Route Table Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-create-route-table-cli.md
Azure automatically routes traffic between all subnets within a virtual network,
* Create a route * Create a virtual network with multiple subnets * Associate a route table to a subnet
-* Create an NVA that routes traffic
+* Create a basic NVA that routes traffic from an Ubuntu VM
* Deploy virtual machines (VM) into different subnets * Route traffic from one subnet to another through an NVA
az network vnet subnet update \
## Create an NVA
-An NVA is a VM that performs a network function, such as routing, firewalling, or WAN optimization.
+An NVA is a VM that performs a network function, such as routing, firewalling, or WAN optimization. For demonstration purposes, we create a basic NVA from a general-purpose Ubuntu VM.
-Create an NVA in the *DMZ* subnet with [az vm create](/cli/azure/vm). When you create a VM, Azure creates and assigns a network interface *myVmNvaVMNic* and a public IP address to the VM, by default. The `--public-ip-address ""` parameter instructs Azure not to create and assign a public IP address to the VM, since the VM doesn't need to be connected to from the internet. If SSH keys do not already exist in a default key location, the command creates them. To use a specific set of keys, use the `--ssh-key-value` option.
+Create a VM to be used as the NVA in the *DMZ* subnet with [az vm create](/cli/azure/vm). When you create a VM, Azure creates and assigns a network interface *myVmNvaVMNic* and a public IP address to the VM, by default. The `--public-ip-address ""` parameter instructs Azure not to create and assign a public IP address to the VM, since the VM doesn't need to be connected to from the internet. If SSH keys do not already exist in a default key location, the command creates them. To use a specific set of keys, use the `--ssh-key-value` option.
```azurecli-interactive az vm create \
az network nic update \
--ip-forwarding true ```
-Within the VM, the operating system, or an application running within the VM, must also be able to forward network traffic. Enable IP forwarding within the VM's operating system with [az vm extension set](/cli/azure/vm/extension):
+Within the VM, the operating system, or an application running within the VM, must also be able to forward network traffic. We will use the `sysctl` command to enable the Linux kernel to forward packets. To run this command without logging onto the VM, we will use the [Custom Script extension](/azure/virtual-machines/extensions/custom-script-linux) with [az vm extension set](/cli/azure/vm/extension):
```azurecli-interactive az vm extension set \
az vm extension set \
--settings '{"commandToExecute":"sudo sysctl -w net.ipv4.ip_forward=1"}' ```
-The command may take up to a minute to execute.
+The command may take up to a minute to execute. This change doesn't persist after a VM reboot, so if the NVA VM is rebooted for any reason, the script needs to be run again.
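If you want the forwarding setting to survive reboots (not required for this tutorial), one common approach is to persist it in a sysctl drop-in file on the NVA VM:

```bash
# Persist kernel IP forwarding across reboots
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ip-forward.conf
sudo sysctl --system
```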
## Create virtual machines
Take note of the **publicIpAddress**. This address is used to access the VM from
## Route traffic through an NVA
-Use the following command to create an SSH session with the *myVmPrivate* VM. Replace *\<publicIpAddress>* with the public IP address of your VM. In the example above, the IP address is *13.90.242.231*.
+Using an SSH client of your choice, connect to the VMs created above. For example, the following command can be used from a command line interface such as [WSL](/windows/wsl/install) to create an SSH session with the *myVmPrivate* VM. Replace *\<publicIpAddress>* with the public IP address of your VM. In the example above, the IP address is *13.90.242.231*.
```bash ssh azureuser@<publicIpAddress>
When prompted for a password, enter the password you selected in [Create virtual
Use the following command to install trace route on the *myVmPrivate* VM: ```bash
-sudo apt-get update
-sudo apt-get upgrade
-sudo apt-get install traceroute
+sudo apt update
+sudo apt install traceroute
``` -- Use the following command to test routing for network traffic to the *myVmPublic* VM from the *myVmPrivate* VM. ```bash