Updates from: 03/23/2023 02:14:22
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Application Provisioning Config Problem Scim Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-config-problem-scim-compatibility.md
Following the steps below will delete your existing customappsso job and create
11. In the results of the last step, copy the full "ID" string that begins with "scim". Optionally, reapply your old attribute-mappings by running the command below, replacing [new-job-id] with the new job ID you copied, and entering the JSON output from step #7 as the request body.
- `POST https://graph.microsoft.com/beta/servicePrincipals/[object-id]/synchronization/jobs/[new-job-id]/schema`
+ `PUT https://graph.microsoft.com/beta/servicePrincipals/[object-id]/synchronization/jobs/[new-job-id]/schema`
`{ <your-schema-json-here> }` 12. Return to the first web browser window, and select the **Provisioning** tab for your application.
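For reference, here's a minimal sketch of issuing that schema request from a script instead of Graph Explorer. It assumes you already hold a Microsoft Graph access token with synchronization permissions; the placeholder names and the use of the `requests` library are illustrative, not part of the original steps:

```python
import requests

# Placeholders: your service principal object ID, the new job ID copied in
# step 11, and a valid Microsoft Graph access token (all hypothetical here).
OBJECT_ID = "[object-id]"
NEW_JOB_ID = "[new-job-id]"
ACCESS_TOKEN = "<graph-access-token>"

url = (
    "https://graph.microsoft.com/beta/servicePrincipals/"
    f"{OBJECT_ID}/synchronization/jobs/{NEW_JOB_ID}/schema"
)

# The request body is the schema JSON you saved from step 7. Note the method
# is PUT, not POST, per the corrected step above.
schema_json = {}  # paste the saved JSON output from step 7 here

response = requests.put(
    url,
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/json",
    },
    json=schema_json,
)
response.raise_for_status()
```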
active-directory Customize Application Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/customize-application-attributes.md
Previously updated : 03/21/2023 Last updated : 03/22/2023
The Azure AD provisioning service can be deployed in both "green field" scenario
- **Matching attributes should be unique:** Customers often use attributes such as userPrincipalName, mail, or object ID as the matching attribute. - **Multiple attributes can be used as matching attributes:** You can define multiple attributes to be evaluated when matching users and the order in which they're evaluated (defined as matching precedence in the UI). If for example, you define three attributes as matching attributes, and a user is uniquely matched after evaluating the first two attributes, the service won't evaluate the third attribute. The service will evaluate matching attributes in the order specified and stop evaluating when a match is found. - **The value in the source and the target don't have to match exactly:** The value in the target can be a function of the value in the source. So, one could have an emailAddress attribute in the source and the userPrincipalName in the target, and match by a function of the emailAddress attribute that replaces some characters with some constant value. -- **Matching based on a combination of attributes isn't supported:** Most applications don't support querying based on two properties. Therefore, it's not possible to match based on a combination of attributes. It is possible to evaluate single properties on after another.
+- **Matching based on a combination of attributes isn't supported:** Most applications don't support querying based on two properties. Therefore, it's not possible to match based on a combination of attributes. It's possible to evaluate single properties one after another.
- **All users must have a value for at least one matching attribute:** If you define one matching attribute, all users must have a value for that attribute in the source system. If for example, you define userPrincipalName as the matching attribute, all users must have a userPrincipalName. If you define multiple matching attributes (for example, both extensionAttribute1 and mail), not all users have to have the same matching attribute. One user could have a extensionAttribute1 but not mail while another user could have mail but no extensionAttribute1. - **The target application must support filtering on the matching attribute:** Application developers allow filtering for a subset of attributes on their user or group API. For applications in the gallery, we ensure that the default attribute mapping is for an attribute that the target application's API does support filtering on. When changing the default matching attribute for the target application, check the third party API documentation to ensure that the attribute can be filtered on.
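To make the matching behavior in the list above concrete, here's a small conceptual sketch (hypothetical data and function, not the provisioning service's actual implementation) that evaluates matching attributes in precedence order and stops at the first unique match:

```python
def find_match(source_user, target_users, matching_attributes):
    """Evaluate matching attributes in precedence order and stop as soon as
    exactly one target user matches, mirroring the behavior described above."""
    for attribute in matching_attributes:
        value = source_user.get(attribute)
        if value is None:
            continue  # this user has no value for the matching attribute
        matches = [t for t in target_users if t.get(attribute) == value]
        if len(matches) == 1:
            return matches[0]  # uniquely matched; later attributes are skipped
    return None  # no unique match found

# Hypothetical example: userPrincipalName has higher matching precedence than mail.
source = {"userPrincipalName": "ada@contoso.com", "mail": "ada@contoso.com"}
targets = [{"userPrincipalName": "ada@contoso.com"}, {"mail": "grace@contoso.com"}]
print(find_match(source, targets, ["userPrincipalName", "mail"]))
```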
Group provisioning can be optionally enabled or disabled by selecting the group
The attributes provisioned as part of Group objects can be customized in the same manner as User objects, described previously. > [!TIP]
-> Provisioning of group objects (properties and members) is a distinct concept from [assigning groups](../manage-apps/assign-user-or-group-access-portal.md) to an application. It is possible to assign a group to an application, but only provision the user objects contained in the group. Provisioning of full group objects isn't required to use groups in assignments.
+> Provisioning of group objects (properties and members) is a distinct concept from [assigning groups](../manage-apps/assign-user-or-group-access-portal.md) to an application. It's possible to assign a group to an application, but only provision the user objects contained in the group. Provisioning of full group objects isn't required to use groups in assignments.
## Editing the list of supported attributes
Applications and systems that support customization of the attribute list includ
- SuccessFactors to Active Directory / SuccessFactors to Azure Active Directory - Azure Active Directory ([Azure AD Graph API default attributes](/previous-versions/azure/ad/graph/api/entity-and-complex-type-reference#user-entity) and custom directory extensions are supported). Learn more about [creating extensions](./user-provisioning-sync-attributes-for-mapping.md) and [known limitations](./known-issues.md). - Apps that support [SCIM 2.0](https://tools.ietf.org/html/rfc7643)-- For Azure Active Directory writeback to Workday or SuccessFactors, it is supported to update relevant metadata for supported attributes (XPATH and JSONPath), but it isn't supported to add new Workday or SuccessFactors attributes beyond those included in the default schema
+- For Azure Active Directory writeback to Workday or SuccessFactors, it's supported to update relevant metadata for supported attributes (XPATH and JSONPath), but it isn't supported to add new Workday or SuccessFactors attributes beyond those included in the default schema
> [!NOTE] > Editing the list of supported attributes is only recommended for administrators who have customized the schema of their applications and systems, and have first-hand knowledge of how their custom attributes have been defined or if a source attribute isn't automatically displayed in the Azure Portal UI. This sometimes requires familiarity with the APIs and developer tools provided by an application or system. The ability to edit the list of supported attributes is locked down by default, but customers can enable the capability by navigating to the following URL: https://portal.azure.com/?Microsoft_AAD_Connect_Provisioning_forceSchemaEditorEnabled=true . You can then navigate to your application to view the attribute list as described [above](#editing-the-list-of-supported-attributes). > [!NOTE]
-> When a directory extension attribute in Azure AD does not show up automatically in your attribute mapping drop-down, you can manually add it to the "Azure AD attribute list". When manually adding Azure AD directory extension attributes to your provisioning app, note that directory extension attribute names are case-sensitive. For example: If you have a directory extension attribute named `extension_53c9e2c0exxxxxxxxxxxxxxxx_acmeCostCenter`, make sure you enter it in the same format as defined in the directory.
+> When a directory extension attribute in Azure AD doesn't show up automatically in your attribute mapping drop-down, you can manually add it to the "Azure AD attribute list". When manually adding Azure AD directory extension attributes to your provisioning app, note that directory extension attribute names are case-sensitive. For example: If you have a directory extension attribute named `extension_53c9e2c0exxxxxxxxxxxxxxxx_acmeCostCenter`, make sure you enter it in the same format as defined in the directory.
-When you are editing the list of supported attributes, the following properties are provided:
+When you're editing the list of supported attributes, the following properties are provided:
- **Name** - The system name of the attribute, as defined in the target object's schema. - **Type** - The type of data the attribute stores, as defined in the target object's schema, which can be one of the following types:
The SCIM RFC defines a core user and group schema, while also allowing for exten
For SCIM applications, the attribute name must follow the pattern shown in the example below. The "CustomExtensionName" and "CustomAttribute" can be customized per your application's requirements, for example: urn:ietf:params:scim:schemas:extension:CustomExtensionName:2.0:User:CustomAttribute
-These instructions are only applicable to SCIM-enabled applications. Applications such as ServiceNow and Salesforce are not integrated with Azure AD using SCIM, and therefore they don't require this specific namespace when adding a custom attribute.
+These instructions are only applicable to SCIM-enabled applications. Applications such as ServiceNow and Salesforce aren't integrated with Azure AD using SCIM, and therefore they don't require this specific namespace when adding a custom attribute.
-Custom attributes can't be referential attributes, multi-value or complex-typed attributes. Custom multi-value and complex-typed extension attributes are currently supported only for applications in the gallery. The custom extension schema header is omitted in the example below as it isn't sent in requests from the Azure AD SCIM client. This issue will be fixed in the future and the header will be sent in the request.
+Custom attributes can't be referential attributes, multi-value or complex-typed attributes. Custom multi-value and complex-typed extension attributes are currently supported only for applications in the gallery. The custom extension schema header is omitted in the example because it isn't sent in requests from the Azure AD SCIM client. This issue will be fixed in the future and the header will be sent in the request.
**Example representation of a user with an extension attribute:**
Custom attributes can't be referential attributes, multi-value or complex-typed
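As a rough illustration of the naming pattern above, a SCIM user carrying such an extension attribute might look like the following sketch (shown as a Python dict; the extension name and values are hypothetical placeholders):

```python
# Minimal SCIM user payload with a custom extension attribute. Per the note
# above, the Azure AD SCIM client currently omits the extension URN from the
# "schemas" list in its requests; it's included here for completeness.
scim_user = {
    "schemas": [
        "urn:ietf:params:scim:schemas:core:2.0:User",
        "urn:ietf:params:scim:schemas:extension:CustomExtensionName:2.0:User",
    ],
    "userName": "ada@contoso.com",
    "active": True,
    "urn:ietf:params:scim:schemas:extension:CustomExtensionName:2.0:User": {
        "CustomAttribute": "example-value"
    },
}
```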
## Provisioning a role to a SCIM app Use the steps below to provision roles for a user to your application. Note that the description below is specific to custom SCIM applications. For gallery applications such as Salesforce and ServiceNow, use the pre-defined role mappings. The bullets below describe how to transform the AppRoleAssignments attribute to the format your application expects. -- Mapping an appRoleAssignment in Azure AD to a role in your application requires that you transform the attribute using an [expression](../app-provisioning/functions-for-customizing-application-data.md). The appRoleAssignment attribute **should not be mapped directly** to a role attribute without using an expression to parse the role details.
+- Mapping an appRoleAssignment in Azure AD to a role in your application requires that you transform the attribute using an [expression](../app-provisioning/functions-for-customizing-application-data.md). The appRoleAssignment attribute **shouldn't be mapped directly** to a role attribute without using an expression to parse the role details.
- **SingleAppRoleAssignment** - **When to use:** Use the SingleAppRoleAssignment expression to provision a single role for a user and to specify the primary role.
Use the steps below to provision roles for a user to your application. Note that
![Add SingleAppRoleAssignment](./media/customize-application-attributes/edit-attribute-singleapproleassignment.png) - **Things to consider**
- - Ensure that multiple roles are not assigned to a user. We cannot guarantee which role will be provisioned.
+ - Ensure that multiple roles aren't assigned to a user. We can't guarantee which role will be provisioned.
- SingleAppRoleAssignment isn't compatible with setting scope to "Sync All users and groups." - **Example request (POST)**
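As a hedged sketch of the shape such a request takes, the `roles` portion of the SCIM POST produced by a SingleAppRoleAssignment mapping might look like this (the role value is hypothetical; verify against your application's SCIM schema):

```python
# Illustrative "roles" fragment of a SCIM POST for a single assigned role,
# shown as a Python dict.
single_role_payload = {
    "roles": [
        {
            "primary": "True",
            "type": "WindowsAzureActiveDirectoryRole",
            "value": "Admin",  # hypothetical role name
        }
    ]
}
```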
The request formats in the PATCH and POST differ. To ensure that POST and PATCH
![Add AppRoleAssignmentsComplex](./media/customize-application-attributes/edit-attribute-approleassignmentscomplex.png)<br> - **Things to consider** - All roles will be provisioned as primary = false.
- - The POST contains the role type. The PATCH request does not contain type. We are working on sending the type in both POST and PATCH requests.
+ - The POST contains the role type. The PATCH request doesn't contain type. We're working on sending the type in both POST and PATCH requests.
- AppRoleAssignmentsComplex isn't compatible with setting scope to "Sync All users and groups." - **Example output**
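And a comparable sketch for AppRoleAssignmentsComplex, reflecting the considerations above: every role is provisioned with primary = false, and the POST carries the role type (role names here are hypothetical):

```python
# Illustrative "roles" fragment of a SCIM POST for multiple assigned roles.
complex_roles_payload = {
    "roles": [
        {"primary": "False", "type": "WindowsAzureActiveDirectoryRole", "value": "Admin"},
        {"primary": "False", "type": "WindowsAzureActiveDirectoryRole", "value": "User"},
    ]
}
```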
Selecting this option will effectively force a resynchronization of all users wh
- Microsoft Azure AD provides an efficient implementation of a synchronization process. In an initialized environment, only objects requiring updates are processed during a synchronization cycle. - Updating attribute-mappings has an impact on the performance of a synchronization cycle. An update to the attribute-mapping configuration requires all managed objects to be reevaluated. - A recommended best practice is to keep the number of consecutive changes to your attribute-mappings at a minimum.-- Adding a photo attribute to be provisioned to an app isn't supported today as you cannot specify the format to sync the photo. You can request the feature on [User Voice](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789)-- The attribute IsSoftDeleted is often part of the default mappings for an application. IsSoftdeleted can be true in one of four scenarios (the user is out of scope due to being unassigned from the application, the user is out of scope due to not meeting a scoping filter, the user has been soft deleted in Azure AD, or the property AccountEnabled is set to false on the user). It isn't recommended to remove the IsSoftDeleted attribute from your attribute mappings.-- The Azure AD provisioning service does not support provisioning null values.-- They primary key, typically "ID", should not be included as a target attribute in your attribute mappings.
+- Adding a photo attribute to be provisioned to an app isn't supported today as you can't specify the format to sync the photo. You can request the feature on [User Voice](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789).
+- The attribute IsSoftDeleted is often part of the default mappings for an application. IsSoftDeleted can be true in one of four scenarios (the user is out of scope due to being unassigned from the application, the user is out of scope due to not meeting a scoping filter, the user has been soft deleted in Azure AD, or the property AccountEnabled is set to false on the user). It's not recommended to remove the IsSoftDeleted attribute from your attribute mappings.
+- The Azure AD provisioning service doesn't support provisioning null values.
+- The primary key, typically "ID", shouldn't be included as a target attribute in your attribute mappings.
- The role attribute typically needs to be mapped using an expression, rather than a direct mapping. See the section above for more details on role mapping. - While you can disable groups from your mappings, disabling users isn't supported.
active-directory Application Proxy Deployment Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-deployment-plan.md
The following design elements should increase the success of your pilot implemen
* Restrict visibility of the pilot application's icon to a pilot group by hiding its launch icon from the Azure MyApps portal. When ready for production you can scope the app to its respective targeted audience, either in the same pre-production tenant, or by also publishing the application in your production tenant. **Single sign-on settings**:
-Some SSO settings have specific dependencies that can take time to set up, so avoid change control delays by ensuring dependencies are addressed ahead of time. This includes domain joining connector hosts to perform SSO using Kerberos Constrained Delegation (KCD) and taking care of other time-consuming activities. For example, Setting up a PING Access instance, if needing header-based SSO.
+Some SSO settings have specific dependencies that can take time to set up, so avoid change control delays by ensuring dependencies are addressed ahead of time. This includes domain joining connector hosts to perform SSO using Kerberos Constrained Delegation (KCD) and taking care of other time-consuming activities.
**TLS Between Connector Host and Target Application**: Security is paramount, so TLS between the connector host and target applications should always be used, particularly if the web application is configured for forms-based authentication (FBA), as user credentials are then effectively transmitted in clear text.
active-directory Concept Authentication Methods Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-methods-manage.md
To manage the Authentication methods policy, click **Security** > **Authenticati
Only the [converged registration experience](concept-registration-mfa-sspr-combined.md) is aware of the Authentication methods policy. Users in scope of the Authentication methods policy but not the converged registration experience won't see the correct methods to register.
->[!NOTE]
->Some pieces of the Authentication methods policy experience are in preview. This includes management of Email OTP, third party software OATH tokens, SMS, and voice call as noted in the portal. Also, use of the authentication methods policy alone with the legacy MFA and SSPR polices disabled is a preview experience.
- ## Legacy MFA and SSPR policies Two other policies, located in **Multifactor authentication** settings and **Password reset** settings, provide a legacy way to manage some authentication methods for all users in the tenant. You can't control who uses an enabled authentication method, or how the method can be used. A [Global Administrator](../roles/permissions-reference.md#global-administrator) is needed to manage these policies.
For users who are enabled for **Mobile phone** for SSPR, the independent control
Similarly, let's suppose you enable **Voice calls** for a group. After you enable it, you find that even users who aren't group members can sign in with a voice call. In this case, it's likely those users are enabled for **Mobile phone** in the legacy SSPR policy or **Call to phone** in the legacy MFA policy.
-## Migration between policies (preview)
+## Migration between policies
The Authentication methods policy provides a migration path toward unified administration of all authentication methods. All desired methods can be enabled in the Authentication methods policy. Methods in the legacy MFA and SSPR policies can be disabled. Migration has three settings to let you move at your own pace, and avoid problems with sign-in or SSPR during the transition. After migration is complete, you'll centralize control over authentication methods for both sign-in and SSPR in a single place, and the legacy MFA and SSPR policies will be disabled.
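If you want to inspect the Authentication methods policy programmatically while planning a migration, here's a minimal sketch against Microsoft Graph (assumes a token with permission to read authentication method policies; the `requests` usage and placeholder token are illustrative):

```python
import requests

ACCESS_TOKEN = "<graph-access-token>"  # placeholder

# Read the tenant's Authentication methods policy, which includes the
# per-method configurations and (where exposed) the migration state.
response = requests.get(
    "https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
response.raise_for_status()
policy = response.json()
print(policy.get("policyMigrationState"))
```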
Tenants are set to either Pre-migration or Migration in Progress by default, dep
> In the future, both of these features will be integrated with the Authentication methods policy. ## Known issues and limitations-- Some customers may see the control to enable Voice call grayed out due to a licensing requirement, despite having a premium license. This is a known issue that we are actively working to fix.-- As a part of the public preview we removed the ability to target individual users. Previously targeted users will remain in the policy but we recommend moving them to a targeted group.
+- In recent updates we removed the ability to target individual users. Previously targeted users will remain in the policy but we recommend moving them to a targeted group.
## Next steps
active-directory Concept Sspr Howitworks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-sspr-howitworks.md
Previously updated : 01/29/2023 Last updated : 03/22/2023
To improve awareness of password events, SSPR lets you configure notifications f
### Notify users on password resets
-If this option is set to **Yes**, users resetting their password receive an email notifying them that their password has been changed. The email is sent via the SSPR portal to their primary and alternate email addresses that are stored in Azure AD. No one else is notified of the reset event.
+If this option is set to **Yes**, users resetting their password receive an email notifying them that their password has been changed. The email is sent via the SSPR portal to their primary and alternate email addresses that are stored in Azure AD. If no primary or alternate email address is defined, SSPR will attempt email notification via the user's User Principal Name (UPN). No one else is notified of the reset event.
### Notify all admins when other admins reset their passwords
active-directory Concept System Preferred Multifactor Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-system-preferred-multifactor-authentication.md
description: Learn how to use system-preferred multifactor authentication
Previously updated : 03/20/2023 Last updated : 03/22/2023
When a user signs in, the authentication process checks which authentication met
1. [Temporary Access Pass](howto-authentication-temporary-access-pass.md) 1. [Certificate-based authentication](concept-certificate-based-authentication.md) 1. [FIDO2 security key](concept-authentication-passwordless.md#fido2-security-keys)
+1. [Microsoft Authenticator push notifications](concept-authentication-authenticator-app.md)
1. [Time-based one-time password (TOTP)](concept-authentication-oath-tokens.md)<sup>1</sup> 1. [Telephony](concept-authentication-phone-options.md)<sup>2</sup>
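A conceptual sketch of that precedence check, using the order in the list above (the method names here are illustrative labels, not the service's actual identifiers):

```python
# System-preferred MFA: most secure method first, per the list above.
PRECEDENCE = [
    "temporaryAccessPass",
    "certificateBasedAuthentication",
    "fido2SecurityKey",
    "authenticatorPushNotification",
    "totp",
    "telephony",
]

def most_secure_method(registered_methods):
    """Return the most secure method the user has registered, or None."""
    for method in PRECEDENCE:
        if method in registered_methods:
            return method
    return None

print(most_secure_method({"totp", "authenticatorPushNotification"}))
# -> authenticatorPushNotification
```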
active-directory Howto Authentication Passwordless Phone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-phone.md
Previously updated : 01/29/2023 Last updated : 03/21/2023 -+
To enable the authentication method for passwordless phone sign-in, complete the
## User registration
-Users register themselves for the passwordless authentication method of Azure AD. For users who already registered the Microsoft Authenticator app for [multi-factor authentication](./concept-mfa-howitworks.md), skip to the next section, [enable phone sign-in](#enable-phone-sign-in). To register the Microsoft Authenticator app, follow these steps:
+Users register themselves for the passwordless authentication method of Azure AD. For users who already registered the Microsoft Authenticator app for [multi-factor authentication](./concept-mfa-howitworks.md), skip to the next section, [enable phone sign-in](#enable-phone-sign-in).
+
+### Direct phone sign-in registration
+Users can register for passwordless phone sign-in directly within the Microsoft Authenticator app without first needing to register Microsoft Authenticator with their account, and without ever using a password. Here's how:
+1. Acquire a [Temporary Access Pass](../authentication/howto-authentication-temporary-access-pass.md) from your admin or organization.
+2. Download and install the Microsoft Authenticator app on your mobile device.
+3. Open Microsoft Authenticator, select **Add account**, and then choose **Work or school account**.
+4. Choose **Sign in**.
+5. Follow the instructions to sign in with your account using the Temporary Access Pass provided by your admin or organization.
+6. Once signed in, continue following the remaining steps to set up phone sign-in.
+
+### Guided registration with My Sign-ins
+To register the Microsoft Authenticator app, follow these steps:
1. Browse to [https://aka.ms/mysecurityinfo](https://aka.ms/mysecurityinfo). 1. Sign in, then select **Add method** > **Authenticator app** > **Add** to add Microsoft Authenticator.
active-directory Howto Sspr Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-windows.md
Previously updated : 01/29/2023 Last updated : 03/22/2023
The following limitations apply to using SSPR from the Windows sign-in screen:
- This feature doesn't work for networks with 802.1x network authentication deployed and the option "Perform immediately before user logon". For networks with 802.1x network authentication deployed, it's recommended to use machine authentication to enable this feature. - Hybrid Azure AD joined machines must have network connectivity line of sight to a domain controller to use the new password and update cached credentials. This means that devices must either be on the organization's internal network or on a VPN with network access to an on-premises domain controller. - If using an image, prior to running sysprep ensure that the web cache is cleared for the built-in Administrator prior to performing the CopyProfile step. More information about this step can be found in the support article [Performance poor when using custom default user profile](https://support.microsoft.com/help/4056823/performance-issue-with-custom-default-user-profile).-- The following settings are known to interfere with the ability to use and reset passwords on Windows 10 devices:
+- The following settings are known to interfere with the ability to use and reset passwords on Windows devices:
- If lock screen notifications are turned off, **Reset password** won't work. - *HideFastUserSwitching* is set to enabled or 1 - *DontDisplayLastUserName* is set to enabled or 1
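To check the two registry-backed settings called out above on a specific machine, here's a small sketch using Python's standard `winreg` module (Windows only; the policies key path is the conventional location for these values and should be treated as an assumption to verify for your environment):

```python
import winreg

# Conventional location of the Winlogon policy values named above.
KEY_PATH = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System"

def read_policy(value_name):
    """Return the DWORD policy value, or None if it isn't set."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
            value, _type = winreg.QueryValueEx(key, value_name)
            return value
    except FileNotFoundError:
        return None

for name in ("HideFastUserSwitching", "DontDisplayLastUserName"):
    # A value of 1 indicates the setting that interferes with SSPR is enabled.
    print(name, "=", read_policy(name))
```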
active-directory Msal Error Handling Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-error-handling-python.md
Previously updated : 11/26/2020 Last updated : 03/16/2023
In MSAL for Python, most errors are conveyed as a return value from the API call
* A successful response contains the `"access_token"` key. The format of the response is defined by the OAuth2 protocol. For more information, see [5.1 Successful Response](https://tools.ietf.org/html/rfc6749#section-5.1) * An error response contains `"error"` and usually `"error_description"`. The format of the response is defined by the OAuth2 protocol. For more information, see [5.2 Error Response](https://tools.ietf.org/html/rfc6749#section-5.2)
-When an error is returned, the `"error_description"` key contains a human-readable message; which in turn typically contains a Microsoft identity platform error code. For details about the various error codes, see [Authentication and authorization error codes](./reference-aadsts-error-codes.md).
+When an error is returned, the `"error"` key contains a machine-readable code. If the `"error"` is, for example, an `"interaction_required"`, you may prompt the user to provide additional information to complete the authentication process. If the `"error"` is `"invalid_grant"`, you may prompt the user to reenter their credentials. The following snippet is an example of error handling in MSAL for Python.
-In MSAL for Python, exceptions are rare because most errors are handled by returning an error value. The `ValueError` exception is only thrown when there is an issue with how you are attempting to use the library, such as when API parameter(s) are malformed.
+```python
+
+from msal import ConfidentialClientApplication
+
+authority_url = "https://login.microsoftonline.com/your_tenant_id"
+client_id = "your_client_id"
+client_secret = "your_client_secret"
+scopes = ["https://graph.microsoft.com/.default"]
+
+app = ConfidentialClientApplication(client_id, authority=authority_url, client_credential=client_secret)
+
+# Look in the token cache first; acquire_token_silent returns None when no
+# suitable token is cached.
+result = app.acquire_token_silent(scopes=scopes, account=None)
+
+if not result:
+    # No cached token: acquire a new one with the client credentials flow.
+    result = app.acquire_token_for_client(scopes=scopes)
+
+if "access_token" in result:
+    print("Access token: %s" % result["access_token"])
+else:
+    print("Error: %s" % result.get("error"))
+    print("Description: %s" % result.get("error_description"))
+
+```
+
+When an error is returned, the `"error_description"` key also contains a human-readable message, and there is typically also an `"error_codes"` key containing a list of machine-readable Microsoft identity platform error codes. For more information about the various Microsoft identity platform error codes, see [Authentication and authorization error codes](./reference-aadsts-error-codes.md).
+
+In MSAL for Python, exceptions are rare because most errors are handled by returning an error value. The `ValueError` exception is only thrown when there's an issue with how you're attempting to use the library, such as when API parameter(s) are malformed.
[!INCLUDE [Active directory error handling claims challenges](../../../includes/active-directory-develop-error-handling-claims-challenges.md)]
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Office 365 E3_USGOV_DOD | ENTERPRISEPACK_USGOV_DOD | b107e5a3-3e60-4c0d-a184-a7e4395eb44c | EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS_AR_DOD (fd500458-c24c-478e-856c-a6067a8376cd)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)| Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams for DOD (AR) (fd500458-c24c-478e-856c-a6067a8376cd)<br/>Office 365 ProPlus (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Office Online (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SharePoint Online (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) | | Office 365 E3_USGOV_GCCHIGH | ENTERPRISEPACK_USGOV_GCCHIGH | aea38a85-9bd5-4981-aa00-616b411205bf | EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS_AR_GCCHIGH (9953b155-8aef-4c56-92f3-72b0487fce41)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) | Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams for GCCHigh (AR) (9953b155-8aef-4c56-92f3-72b0487fce41)<br/>Office 365 ProPlus (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Office Online (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SharePoint Online (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) | | Office 365 E4 | ENTERPRISEWITHSCAL | 1392051d-0cb9-4b7a-88d5-621fee5e8711 | BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>FORMS_PLAN_E3 (2789c901-c14e-48ab-a76a-be334d9d793a)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MCOVOICECONF (27216c54-caf8-4d0d-97e2-517afb5c08f6)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>MICROSOFT STAFFHUB 
(8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE ONLINE (PLAN 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW FOR OFFICE 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>MICROSOFT FORMS (PLAN E3) (2789c901-c14e-48ab-a76a-be334d9d793a)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 3) (27216c54-caf8-4d0d-97e2-517afb5c08f6)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>POWERAPPS FOR OFFICE 365(c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>MICROSOFT PLANNER(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT AZURE ACTIVE DIRECTORY RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>MICROSOFT STREAM FOR O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) |
-| Office 365 E5 | ENTERPRISEPREMIUM | c7df2760-2c81-4ef7-b578-5b5392b571df | DYN365_CDS_O365_P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>CDS_O365_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>GRAPH_CONNECTORS_SEARCH_INDEX (a6520331-d7d4-4276-95f5-15c0933bc757)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>EXCEL_PREMIUM (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>FORMS_PLAN_E5 (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>POWER_VIRTUAL_AGENTS_O365_P3 (ded3d325-1bdc-453e-8432-5bac26d7a014)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>PROJECT_O365_P3 (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>VIVAENGAGE_CORE (a82fbf69-b4d7-49f4-83a6-915b2cf354f4)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Common Data Service - O365 P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>Common Data Service for Teams_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>Customer Lockbox 
(9f431833-0334-42de-a7dc-70aa40db46db)<br/>Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Graph Connectors Search with Index (a6520331-d7d4-4276-95f5-15c0933bc757)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics – Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection and Governance Analytics – Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 – Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 – Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>M365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Apps for enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Excel Advanced Analytics (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>Microsoft Forms (Plan E5) (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Kaizala (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E5 SKU (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office for the web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Automate for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>Power Virtual Agents for Office 365 P3 (ded3d325-1bdc-453e-8432-5bac26d7a014)<br/>PowerApps for Office 365 Plan 3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>Project for Office (Plan E5) (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>Microsoft Communications Compliance (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway 
(a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>VIVAENGAGE_CORE (a82fbf69-b4d7-49f4-83a6-915b2cf354f4)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) |
+| Office 365 E5 | ENTERPRISEPREMIUM | c7df2760-2c81-4ef7-b578-5b5392b571df | DYN365_CDS_O365_P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>CDS_O365_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>GRAPH_CONNECTORS_SEARCH_INDEX (a6520331-d7d4-4276-95f5-15c0933bc757)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>EXCEL_PREMIUM (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>FORMS_PLAN_E5 (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>POWER_VIRTUAL_AGENTS_O365_P3 (ded3d325-1bdc-453e-8432-5bac26d7a014)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>PROJECT_O365_P3 (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>VIVAENGAGE_CORE (a82fbf69-b4d7-49f4-83a6-915b2cf354f4)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Common Data Service - O365 P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>Common Data Service for Teams_P3 
(afa73018-811e-46e9-988f-f75d2b1b8430)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Graph Connectors Search with Index (a6520331-d7d4-4276-95f5-15c0933bc757)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics – Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection and Governance Analytics – Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 – Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 – Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>M365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Apps for enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Excel Advanced Analytics (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>Microsoft Forms (Plan E5) (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Kaizala (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E5 SKU (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office for the web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Automate for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>Power Virtual Agents for Office 365 P3 (ded3d325-1bdc-453e-8432-5bac26d7a014)<br/>PowerApps for Office 365 Plan 3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>Project for Office (Plan E5) (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>Microsoft Communications Compliance (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online 
(Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>VIVAENGAGE_CORE (a82fbf69-b4d7-49f4-83a6-915b2cf354f4)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) |
| Office 365 E5 without Audio Conferencing | ENTERPRISEPREMIUM_NOPSTNCONF | 26d45bd9-adf1-46cd-a9e1-51e9a5524128 | RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>CDS_O365_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>GRAPH_CONNECTORS_SEARCH_INDEX (a6520331-d7d4-4276-95f5-15c0933bc757)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>EXCEL_PREMIUM (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>FORMS_PLAN_E5 (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>PROJECT_O365_P3 (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>DYN365_CDS_O365_P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>POWER_VIRTUAL_AGENTS_O365_P3 (ded3d325-1bdc-453e-8432-5bac26d7a014) | Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Common Data Service for Teams (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>Customer Lockbox 
(9f431833-0334-42de-a7dc-70aa40db46db)<br/>Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Graph Connectors Search with Index (a6520331-d7d4-4276-95f5-15c0933bc757)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics - Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection and Governance Analytics – Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft 365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Excel Advanced Analytics (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>Microsoft Forms (Plan E5) (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Kaizala Pro (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Cloud App Security (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Office 365 (Plan 3) (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>Project for Office (Plan E5) (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>RETIRED - Microsoft Communications Compliance (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Yammer Enterprise 
(7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>Common Data Service (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>Power Automate for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Power Virtual Agents for Office 365 (ded3d325-1bdc-453e-8432-5bac26d7a014) | | Office 365 F3 | DESKLESSPACK | 4b585984-651b-448a-9e53-3b10f069cf7f | DYN365_CDS_O365_F1 (ca6e61ec-d4f4-41eb-8b88-d96e0e14323f)<br/>CDS_O365_F1 (90db65a7-bf11-4904-a79f-ef657605145b)<br/>EXCHANGE_S_DESKLESS (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>RMS_S_BASIC (31cf2cfc-6b0d-4adc-a336-88b724ed8122)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>FORMS_PLAN_K (f07046bd-2a3c-4b96-b0be-dea79d7cbfb8)<br/>KAIZALA_O365_P1 (73b2a583-6a59-42e3-8e83-54db46bc3278)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_K (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_S1 (e0287f9f-e222-4f98-9a83-f379e249159a)<br/>FLOW_O365_S1 (bd91b1a4-9f94-4ecf-b45b-3a65e5c8128a)<br/>POWER_VIRTUAL_AGENTS_O365_F1 (ba2fdb48-290b-4632-b46a-e4ecc58ac11a)<br/>PROJECT_O365_F3 (7f6f28c2-34bb-4d4b-be36-48ca2e77e1ec)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>MCOIMP (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_FIRSTLINE (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>VIVAENGAGE_CORE (a82fbf69-b4d7-49f4-83a6-915b2cf354f4)<br/>WHITEBOARD_FIRSTLINE1 (36b29273-c6d0-477a-aca6-6fbe24f538e3)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Common Data Service - O365 F1 (ca6e61ec-d4f4-41eb-8b88-d96e0e14323f)<br/>Common Data Service for Teams_F1 (90db65a7-bf11-4904-a79f-ef657605145b)<br/>Exchange Online Kiosk (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>Microsoft Azure Rights Management Service (31cf2cfc-6b0d-4adc-a336-88b724ed8122)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Forms (Plan F1) (f07046bd-2a3c-4b96-b0be-dea79d7cbfb8)<br/>Microsoft Kaizala Pro Plan 1 (73b2a583-6a59-42e3-8e83-54db46bc3278)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 F3 (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Office Mobile Apps for Office 365 (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>Power Apps for Office 365 F3 (e0287f9f-e222-4f98-9a83-f379e249159a)<br/>Power Automate for Office 365 F3 (bd91b1a4-9f94-4ecf-b45b-3a65e5c8128a)<br/>Power Virtual Agents for Office 365 F1 (ba2fdb48-290b-4632-b46a-e4ecc58ac11a)<br/>Project for Office (Plan F) (7f6f28c2-34bb-4d4b-be36-48ca2e77e1ec)<br/>SharePoint Kiosk (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>Skype for Business Online (Plan 1) (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Firstline) (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>VIVAENGAGE_CORE (a82fbf69-b4d7-49f4-83a6-915b2cf354f4)<br/>Whiteboard (Firstline) 
(36b29273-c6d0-477a-aca6-6fbe24f538e3)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) | | Office 365 G1 GCC | STANDARDPACK_GOV | 3f4babde-90ec-47c6-995d-d223749065d1 | DYN365_CDS_O365_P1_GCC (8eb5e9bc-783f-4425-921a-c65f45dd72c6)<br/>CDS_O365_P1_GCC (959e5dec-6522-4d44-8349-132c27c3795a)<br/>EXCHANGE_S_STANDARD_GOV (e9b4930a-925f-45e2-ac2a-3f7788ca6fdd)<br/>FORMS_GOV_E1 (f4cba850-4f34-4fd2-a341-0fddfdce1e8f)<br/>MYANALYTICS_P2_GOV (6e5b7995-bd4f-4cbd-9d19-0e32010c72f0)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>STREAM_O365_E1_GOV (15267263-5986-449d-ac5c-124f3b49b2d6)<br/>TEAMS_GOV (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>PROJECTWORKMANAGEMENT_GOV (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>OFFICEMOBILE_SUBSCRIPTION_GOV (4ccb60ee-9523-48fd-8f63-4b090f1ad77a)<br/>POWERAPPS_O365_P1_GOV (c42aa49a-f357-45d5-9972-bc29df885fee)<br/>FLOW_O365_P1_GOV (ad6c8870-6356-474c-901c-64d7da8cea48)<br/>SharePoint Plan 1G (f9c43823-deb4-46a8-aa65-8b551f0c4f8a)<br/>MCOSTANDARD_GOV (a31ef4a2-f787-435e-8335-e47eb0cafc94)<br/>BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d) | Common Data Service - O365 P1 GCC (8eb5e9bc-783f-4425-921a-c65f45dd72c6)<br/>Common Data Service for Teams_P1 GCC (959e5dec-6522-4d44-8349-132c27c3795a)<br/>Exchange Online (Plan 1) for Government (e9b4930a-925f-45e2-ac2a-3f7788ca6fdd)<br/>Forms for Government (Plan E1) (f4cba850-4f34-4fd2-a341-0fddfdce1e8f)<br/>Insights by MyAnalytics for Government (6e5b7995-bd4f-4cbd-9d19-0e32010c72f0)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft Stream for O365 for Government (E1) (15267263-5986-449d-ac5c-124f3b49b2d6)<br/>Microsoft Teams for Government (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Planner for Government (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>Office for the Web for Government (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>Office Mobile Apps for Office 365 for GCC (4ccb60ee-9523-48fd-8f63-4b090f1ad77a)<br/>Power Apps for Office 365 for Government (c42aa49a-f357-45d5-9972-bc29df885fee)<br/>Power Automate for Office 365 for Government (ad6c8870-6356-474c-901c-64d7da8cea48)<br/>SharePoint Plan 1G (f9c43823-deb4-46a8-aa65-8b551f0c4f8a)<br/>Skype for Business Online (Plan 2) for Government (a31ef4a2-f787-435e-8335-e47eb0cafc94)<br/>To-Do (Plan 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>Whiteboard (Plan 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d) |
active-directory Cross Tenant Synchronization Configure Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-configure-graph.md
These steps describe how to use Microsoft Graph Explorer (recommended), but you
1. Get the tenant ID of the source and target tenants. The example configuration described in this article uses the following tenant IDs:
- - Source tenant ID: 3d0f5dec-5d3d-455c-8016-e2af1ae4d31a
- - Target tenant ID: 376a1f89-b02f-4a85-8252-2974d1984d14
+ - Source tenant ID: {sourceTenantId}
+ - Target tenant ID: {targetTenantId}
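If you don't have a tenant ID handy, one way to look it up (a sketch, assuming you're signed in to that tenant in Graph Explorer) is the organization endpoint; the `id` property of the returned organization object is the tenant ID:

```http
GET https://graph.microsoft.com/v1.0/organization?$select=id,displayName
```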
## Step 2: Enable user synchronization in the target tenant
These steps describe how to use Microsoft Graph Explorer (recommended), but you
Content-Type: application/json {
- "tenantId": "3d0f5dec-5d3d-455c-8016-e2af1ae4d31a"
+ "tenantId": "{sourceTenantId}"
} ```
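For reference, this body belongs to the call that creates the partner configuration. A sketch of the full request, assuming the partner collection endpoint used in the surrounding steps:

```http
POST https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/partners
Content-Type: application/json

{
  "tenantId": "{sourceTenantId}"
}
```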
These steps describe how to use Microsoft Graph Explorer (recommended), but you
{ "@odata.context": "https://graph.microsoft.com/beta/$metadata#policies/crossTenantAccessPolicy/partners/$entity",
- "tenantId": "3d0f5dec-5d3d-455c-8016-e2af1ae4d31a",
+ "tenantId": "{sourceTenantId}",
"isServiceProvider": null, "inboundTrust": null, "b2bCollaborationOutbound": null,
These steps describe how to use Microsoft Graph Explorer (recommended), but you
**Request** ```http
- PUT https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/partners/3d0f5dec-5d3d-455c-8016-e2af1ae4d31a/identitySynchronization
+ PUT https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/partners/{sourceTenantId}/identitySynchronization
Content-type: application/json {
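A sketch of a minimal body for this call, assuming the `userSyncInbound.isSyncAllowed` property of the identity synchronization policy enables inbound user synchronization:

```http
PUT https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/partners/{sourceTenantId}/identitySynchronization
Content-Type: application/json

{
  "displayName": "Fabrikam",
  "userSyncInbound": {
    "isSyncAllowed": true
  }
}
```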
These steps describe how to use Microsoft Graph Explorer (recommended), but you
**Request** ```http
- PATCH https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/partners/3d0f5dec-5d3d-455c-8016-e2af1ae4d31a
+ PATCH https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/partners/{sourceTenantId}
Content-Type: application/json {
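A sketch of a body for this `PATCH`, assuming the partner configuration's `automaticUserConsentSettings` property is used to automatically redeem invitations:

```http
PATCH https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/partners/{sourceTenantId}
Content-Type: application/json

{
  "automaticUserConsentSettings": {
    "inboundAllowed": true
  }
}
```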
These steps describe how to use Microsoft Graph Explorer (recommended), but you
Content-Type: application/json {
- "tenantId": "376a1f89-b02f-4a85-8252-2974d1984d14"
+ "tenantId": "{targetTenantId}"
} ```
These steps describe how to use Microsoft Graph Explorer (recommended), but you
{ "@odata.context": "https://graph.microsoft.com/beta/$metadata#policies/crossTenantAccessPolicy/partners/$entity",
- "tenantId": "376a1f89-b02f-4a85-8252-2974d1984d14",
+ "tenantId": "{targetTenantId}",
"isServiceProvider": null, "inboundTrust": null, "b2bCollaborationOutbound": null,
These steps describe how to use Microsoft Graph Explorer (recommended), but you
**Request** ```http
- PATCH https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/partners/376a1f89-b02f-4a85-8252-2974d1984d14
+ PATCH https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/partners/{targetTenantId}
Content-Type: application/json {
These steps describe how to use Microsoft Graph Explorer (recommended), but you
"appId": "{appId}", "appDisplayName": "Fabrikam", "applicationTemplateId": "518e5f48-1fc8-4c48-9387-9fdf28b0dfe7",
- "appOwnerTenantId": "376a1f89-b02f-4a85-8252-2974d1984d14",
+ "appOwnerTenantId": "{targetTenantId}",
"appRoleAssignmentRequired": true, "displayName": "Fabrikam", "errorUrl": null,
These steps describe how to use Microsoft Graph Explorer (recommended), but you
"credentials": [ { "key": "CompanyId",
- "value": "376a1f89-b02f-4a85-8252-2974d1984d14"
+ "value": "{targetTenantId}"
}, { "key": "AuthenticationType",
In the source tenant, to enable provisioning, create a provisioning job.
"value": [ { "key": "CompanyId",
- "value": "376a1f89-b02f-4a85-8252-2974d1984d14"
+ "value": "{targetTenantId}"
}, { "key": "AuthenticationType",
Now that you have a configuration, you can test on-demand provisioning with one
{ "id": "{id}", "activityDateTime": "2022-12-11T00:40:37Z",
- "tenantId": "376a1f89-b02f-4a85-8252-2974d1984d14",
+ "tenantId": "{targetTenantId}",
"jobId": "{jobId}", "cycleId": "{cycleId}", "changeId": "{changeId}",
active-directory Servicenow Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/servicenow-provisioning-tutorial.md
To configure automatic user provisioning for ServiceNow in Azure AD:
1. Set **Provisioning Mode** to **Automatic**.
-1. In the **Admin Credentials** section, enter your ServiceNow tenant URL, Client ID, Client Secret and Authorization Endpoint. Select **Test Connection** to ensure that Azure AD can connect to ServiceNow. [This ServiceNow documentation](https://docs.servicenow.com/bundle/utah-platform-security/page/administer/security/task/t_CreateEndpointforExternalClients.html) outlines how to generate these values.
- ![Screenshot that shows the Service Provisioning page, where you can enter admin credentials.](./media/servicenow-provisioning-tutorial/servicenow-provisioning.png)
-
- > [!NOTE]
- > - Tenant URL: https://**InsertInstanceName**.service-now.com/api/now/scim
- > - Authorization Endpoint: https://**InsertInstanceName**.service-now.com/oauth_auth.do?response_type=code&client_id=**InsertClientID**&state=1&scope=useraccount&redirect_uri=https%3A%2F%2Fportal.azure.com%2FTokenAuthorize
- > - Token Endoint: https://**InsertInstanceName**.service-now.com/api/now/scim
+1. In the **Admin Credentials** section, enter your ServiceNow tenant URL, admin username, and admin password. Select **Test Connection** to ensure that Azure AD can connect to ServiceNow. If the connection fails, ensure that your ServiceNow account has admin permissions and try again.
1. In the **Notification Email** box, enter the email address of a person or group that should receive the provisioning error notifications. Then select the **Send an email notification when a failure occurs** check box.
aks Coredns Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/coredns-custom.md
You can customize CoreDNS with AKS to perform on-the-fly DNS name rewrites.
name: coredns-custom namespace: kube-system data:
- test.override: |
+ test.server: |
<domain to be rewritten>.com:53 { log errors
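Pieced together, a complete custom ConfigMap might look like the following sketch; the rewrite rule and domain are placeholders to adapt:

```yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  test.server: | # the key name is arbitrary, but it must end with .server
    <domain to be rewritten>.com:53 {
      log
      errors
      rewrite stop {
        name regex (.*)\.<domain to be rewritten>\.com {1}.default.svc.cluster.local
        answer name (.*)\.default\.svc\.cluster\.local {1}.<domain to be rewritten>.com
      }
      forward . /etc/resolv.conf
    }
```

CoreDNS picks up the change after its pods are re-created, for example with `kubectl delete pod --namespace kube-system -l k8s-app=kube-dns`.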
aks Draft Devx Extension Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/draft-devx-extension-aks.md
+
+ Title: Use Draft and the DevX extension for Visual Studio Code with Azure Kubernetes Service (AKS) (preview)
+description: Learn how to use Draft and the DevX extension for Visual Studio Code with Azure Kubernetes Service (AKS)
+ Last updated : 03/20/2023
+# Use Draft and the DevX extension for Visual Studio Code with Azure Kubernetes Service (AKS) (preview)
+
+[Draft][draft] is an open-source project that streamlines Kubernetes development by taking a non-containerized application and generating the Dockerfiles, Kubernetes manifests, Helm charts, Kustomize configurations, and other artifacts associated with a containerized application. The Azure Kubernetes Service (AKS) DevX extension for Visual Studio Code enhances non-cluster experiences, allowing you to create deployment files to deploy your applications to AKS. Draft is the feature currently available in the DevX extension.
+
+This article shows you how to use Draft with the DevX extension to draft a Dockerfile, draft a Kubernetes deployment and service, and build an image on Azure Container Registry (ACR).
++
+## Before you begin
+
+* You need an Azure resource group and an AKS cluster with an attached ACR. To attach an ACR to your AKS cluster, use `az aks update -n <cluster-name> -g <resource-group-name> --attach-acr <acr-name>` or follow the instructions in [Authenticate with ACR from AKS][aks-acr-authenticate]. An example setup is sketched after this list.
+* Download and install the [Azure Kubernetes Service DevX Extension for Visual Studio Code][devx-extension].
+
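+As a sketch, assuming hypothetical resource names, the full prerequisite setup might look like the following:
+
+```azurecli-interactive
+# Placeholder names; substitute your own resource group, registry, and cluster
+az group create --name myResourceGroup --location eastus
+az acr create --resource-group myResourceGroup --name myDraftACR --sku Basic
+az aks create --resource-group myResourceGroup --name myAKSCluster --attach-acr myDraftACR --generate-ssh-keys
+```
+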
+## Draft with the DevX extension for Visual Studio Code
+
+To get started with Draft in Visual Studio Code, press **Ctrl + Shift + P** in your Visual Studio Code window and enter **AKS Developer**. From here, you'll see available Draft commands:
+
+* Get started
+* Draft a Dockerfile
+* Draft a Kubernetes Deployment and Service
+* Build an Image on Azure Container Registry
+
+### Get started
+
+The `Get started` command shows you all the steps you need to get up and running on AKS.
+
+1. Press **Ctrl + Shift + P** to open the command palette.
+2. Enter **AKS Developer**.
+3. Select **AKS Developer: Get started**.
+
+You'll see a getting started page.
+
+### Draft a Dockerfile
+
+`Draft a Dockerfile` adds the minimum required Dockerfile to your project directory.
+
+1. Press **Ctrl + Shift + P** to open the command palette.
+2. Enter **AKS Developer**.
+3. Select **AKS Developer: Draft a Dockerfile**.
+
+### Draft a Kubernetes Deployment and Service
+
+`Draft a Kubernetes Deployment and Service` adds the appropriate deployment and service files to your application, which allows you to deploy to your AKS cluster. The supported deployment types are Helm, Kustomize, and Kubernetes manifests.
+
+1. Press **Ctrl + Shift + P** to open the command palette.
+2. Enter **AKS Developer**.
+3. Select **AKS Developer: Draft a Kubernetes Deployment and Service**.
+
+### Build an Image on Azure Container Registry
+
+`Build an Image on Azure Container Registry` builds an image on your ACR to use in your deployment files.
+
+1. Press **Ctrl + Shift + P** to open the command palette.
+2. Enter **AKS Developer**.
+3. Select **AKS Developer: Build an Image on Azure Container Registry**.
+
+## Next steps
+
+In this article, you learned how to use Draft and the DevX extension for Visual Studio Code with AKS. To use Draft with the Azure CLI, see [Draft for AKS][draft-aks-cli].
+
+<!-- LINKS -->
+
+[draft-aks-cli]: ../aks/draft.md
+[aks-acr-authenticate]: ../aks/cluster-container-registry-integration.md
+[devx-extension]: https://marketplace.visualstudio.com/items?itemName=ms-kubernetes-tools.aks-devx-tools
+[draft]: https://github.com/Azure/draft
aks Enable Host Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/enable-host-encryption.md
This feature can only be set at cluster creation or node pool creation time.
## Use host-based encryption on new clusters
-Configure the cluster agent nodes to use host-based encryption when the cluster is created.
+Configure the cluster agent nodes to use host-based encryption when the cluster is created.
```azurecli-interactive az aks create --name myAKSCluster --resource-group myResourceGroup -s Standard_DS2_v2 -l westus2 --enable-encryption-at-host
az aks nodepool add --name hostencrypt --cluster-name myAKSCluster --resource-gr
If you want to create new node pools without the host-based encryption feature, you can do so by omitting the `--enable-encryption-at-host` parameter.
-## Next steps
-
-Review [best practices for AKS cluster security][best-practices-security]
-Read more about [host-based encryption](../virtual-machines/disk-encryption.md#encryption-at-hostend-to-end-encryption-for-your-vm-data).
+> [!NOTE]
+> After you enable host-based encryption on your cluster, make sure you provide the proper access to your Azure Key Vault to enable encryption at rest. For more information, see [Control access][control-keys] and [Azure built-in roles for Key Vault data plane operations][akv-built-in-roles].
+## Next steps
+- Review [best practices for AKS cluster security][best-practices-security].
+- Read more about [host-based encryption](../virtual-machines/disk-encryption.md#encryption-at-hostend-to-end-encryption-for-your-vm-data).
<!-- LINKS - external --> <!-- LINKS - internal -->
Read more about [host-based encryption](../virtual-machines/disk-encryption.md#e
[az-feature-register]: /cli/azure/feature#az_feature_register [az-feature-list]: /cli/azure/feature#az_feature_list [az-provider-register]: /cli/azure/provider#az_provider_register
+[control-keys]: ../key-vault/general/best-practices.md#control-access-to-your-vault
+[akv-built-in-roles]: ../key-vault/general/rbac-guide.md#azure-built-in-roles-for-key-vault-data-plane-operations
aks Use Metrics Server Vertical Pod Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-metrics-server-vertical-pod-autoscaler.md
+
+ Title: Configure Metrics Server VPA in Azure Kubernetes Service (AKS)
+description: Learn how to vertically autoscale your Metrics Server pods on an Azure Kubernetes Service (AKS) cluster.
+ Last updated : 03/21/2023
+# Configure Metrics Server VPA in Azure Kubernetes Service (AKS)
+
+[Metrics Server][metrics-server-overview] is a scalable, efficient source of container resource metrics for Kubernetes built-in autoscaling pipelines. With Azure Kubernetes Service (AKS), vertical pod autoscaling is enabled for the Metrics Server. The Metrics Server is commonly used by other Kubernetes add-ons, such as the [Horizontal Pod Autoscaler][horizontal-pod-autoscaler].
+
+Vertical Pod Autoscaler (VPA) enables you to adjust the resource limit when the Metrics Server is experiencing consistent CPU and memory resource constraints.
+
+## Before you begin
+
+Your AKS cluster must be running Kubernetes version 1.24 or higher.
+
+## Metrics server throttling
+
+If the Metrics Server throttling rate is high and the memory usage of its two pods is unbalanced, the Metrics Server requires more resources than the default values specify.
+
+To update the coefficient values, create a ConfigMap in the overlay *kube-system* namespace to override the values in the Metrics Server specification. Perform the following steps to update the Metrics Server.
+
+1. Create a ConfigMap file named *metrics-server-config.yaml* and copy in the following manifest.
+
+ ```yml
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+ name: metrics-server-config
+ namespace: kube-system
+ labels:
+ kubernetes.io/cluster-service: "true"
+ addonmanager.kubernetes.io/mode: EnsureExists
+ data:
+ NannyConfiguration: |-
+ apiVersion: nannyconfig/v1alpha1
+ kind: NannyConfiguration
+ baseCPU: 100m
+ cpuPerNode: 1m
+ baseMemory: 100Mi
+ memoryPerNode: 8Mi
+ ```
+
+ In the ConfigMap example, the resource limit and request are changed to the following:
+
+ * cpu: (100+1n) millicore
+ * memory: (100+8n) mebibyte
+
+ Where *n* is the number of nodes.
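+
+   For example, on a 10-node cluster, the Metrics Server would request 110 millicores of CPU and 180 mebibytes of memory.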
+
+2. Create the ConfigMap using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
+
+ ```bash
+ kubectl apply -f metrics-server-config.yaml
+ ```
+
+3. Restart the Metrics Server pods. There are two Metrics Server pods; run the following command for each pod, replacing *metrics-server-pod-name* with the actual pod name.
+
+ ```bash
+ kubectl -n kube-system delete po metrics-server-pod-name
+ ```
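+
+   Alternatively, assuming the pods carry the `k8s-app=metrics-server` label (an assumption; verify with `kubectl -n kube-system get po --show-labels`), a sketch that deletes both pods in one command:
+
+   ```bash
+   # Delete every pod with the Metrics Server label; the deployment re-creates them
+   kubectl -n kube-system delete po -l k8s-app=metrics-server
+   ```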
+
+4. To verify the updated resources took effect, run the following command to review the Metrics Server VPA log.
+
+ ```bash
+ kubectl -n kube-system logs metrics-server-pod-name -c metrics-server-vpa
+ ```
+
+   The following example output shows that the updated settings were applied.
+
+ ```output
+ ERROR: logging before flag.Parse: I0315 23:12:33.956112 1 pod_nanny.go:68] Invoked by [/pod_nanny --config-dir=/etc/config --cpu=44m --extra-cpu=0.5m --memory=51Mi --extra-memory=4Mi --poll-period=180000 --threshold=5 --deployment=metrics-server --container=metrics-server]
+ ERROR: logging before flag.Parse: I0315 23:12:33.956159 1 pod_nanny.go:69] Version: 1.8.14
+ ERROR: logging before flag.Parse: I0315 23:12:33.956171 1 pod_nanny.go:85] Watching namespace: kube-system, pod: metrics-server-545d8b77b7-5nqq9, container: metrics-server.
+ ERROR: logging before flag.Parse: I0315 23:12:33.956175 1 pod_nanny.go:86] storage: MISSING, extra_storage: 0Gi
+ ERROR: logging before flag.Parse: I0315 23:12:33.957441 1 pod_nanny.go:116] cpu: 100m, extra_cpu: 1m, memory: 100Mi, extra_memory: 8Mi
+ ERROR: logging before flag.Parse: I0315 23:12:33.957456 1 pod_nanny.go:145] Resources: [{Base:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} ExtraPerNode:{i:{value:0 scale:-3} d:{Dec:<nil>} s: Format:DecimalSI} Name:cpu} {Base:{i:{value:104857600 scale:0} d:{Dec:<nil>} s:100Mi Format:BinarySI} ExtraPerNode:{i:{value:0 scale:0} d:{Dec:<nil>} s: Format:BinarySI} Name:memory
+ ```
+
+Be cautious of the *baseCPU*, *cpuPerNode*, *baseMemory*, and the *memoryPerNode*, because the ConfigMap isn't validated by AKS. As a recommended practice, increase the value gradually to avoid unnecessary resource consumption. Proactively monitor resource usage when updating or creating the ConfigMap. A large number of resource requests could negatively impact the node.
+
+## Manually configure Metrics Server resource usage
+
+The Metrics Server VPA adjusts resource usage based on the number of nodes. If the cluster scales up or down often, the Metrics Server might restart frequently. In this case, you can bypass VPA and manually control its resource usage. Use this method instead of, not in addition to, the steps described in the previous section.
+
+If you would like to bypass VPA for Metrics Server and manually control its resource usage, perform the following steps.
+
+1. Create a ConfigMap file named *metrics-server-config.yaml* and copy in the following manifest.
+
+ ```yml
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+ name: metrics-server-config
+ namespace: kube-system
+ labels:
+ kubernetes.io/cluster-service: "true"
+ addonmanager.kubernetes.io/mode: EnsureExists
+ data:
+ NannyConfiguration: |-
+ apiVersion: nannyconfig/v1alpha1
+ kind: NannyConfiguration
+ baseCPU: 100m
+ cpuPerNode: 0m
+ baseMemory: 100Mi
+ memoryPerNode: 0Mi
+ ```
+
+   This ConfigMap example sets the resource limit and request to the following:
+
+ * cpu: 100 millicore
+ * memory: 100 mebibyte
+
+ Changing the number of nodes doesn't trigger autoscaling.
+
+2. Create the ConfigMap using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
+
+   ```bash
+ kubectl apply -f metrics-server-config.yaml
+ ```
+
+3. Restart the Metrics Server pods. There are two Metrics Server pods; run the following command for each pod, replacing *metrics-server-pod-name* with the actual pod name.
+
+ ```bash
+ kubectl -n kube-system delete po metrics-server-pod-name
+ ```
+
+4. To verify the updated resources took effect, run the following command to review the Metrics Server VPA log.
+
+ ```bash
+ kubectl -n kube-system logs metrics-server-pod-name -c metrics-server-vpa
+ ```
+
+   The following example output shows that the updated settings were applied.
+
+ ```output
+ ERROR: logging before flag.Parse: I0315 23:12:33.956112 1 pod_nanny.go:68] Invoked by [/pod_nanny --config-dir=/etc/config --cpu=44m
+ --extra-cpu=0.5m --memory=51Mi --extra-memory=4Mi --poll-period=180000 --threshold=5 --deployment=metrics-server --container=metrics-server]
+ ERROR: logging before flag.Parse: I0315 23:12:33.956159 1 pod_nanny.go:69] Version: 1.8.14
+ ERROR: logging before flag.Parse: I0315 23:12:33.956171 1 pod_nanny.go:85] Watching namespace: kube-system, pod: metrics-server-545d8b77b7-5nqq9, container: metrics-server.
+ ERROR: logging before flag.Parse: I0315 23:12:33.956175 1 pod_nanny.go:86] storage: MISSING, extra_storage: 0Gi
+ ERROR: logging before flag.Parse: I0315 23:12:33.957441 1 pod_nanny.go:116] cpu: 100m, extra_cpu: 0m, memory: 100Mi, extra_memory: 0Mi
+ ERROR: logging before flag.Parse: I0315 23:12:33.957456 1 pod_nanny.go:145] Resources: [{Base:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} ExtraPerNode:{i:{value:0 scale:-3} d:{Dec:<nil>} s: Format:DecimalSI} Name:cpu} {Base:{i:{value:104857600 scale:0} d:{Dec:<nil>} s:100Mi Format:BinarySI}
+ ExtraPerNode:{i:{value:0 scale:0} d:{Dec:<nil>} s: Format:BinarySI} Name:memory}]
+ ```
+
+## Troubleshooting
+
+1. If you use the following ConfigMap, the Metrics Server VPA customizations aren't applied. You need to add a unit for `baseCPU` (for example, `100m`).
+
+ ```yml
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+ name: metrics-server-config
+ namespace: kube-system
+ labels:
+ kubernetes.io/cluster-service: "true"
+ addonmanager.kubernetes.io/mode: EnsureExists
+ data:
+ NannyConfiguration: |-
+ apiVersion: nannyconfig/v1alpha1
+ kind: NannyConfiguration
+ baseCPU: 100
+ cpuPerNode: 1m
+ baseMemory: 100Mi
+ memoryPerNode: 8Mi
+ ```
+
+   The following example output shows that the customized settings weren't applied and the default parameters are used instead.
+
+ ```output
+ ERROR: logging before flag.Parse: I0316 23:32:08.383389 1 pod_nanny.go:68] Invoked by [/pod_nanny --config-dir=/etc/config --cpu=44m
+ --extra-cpu=0.5m --memory=51Mi --extra-memory=4Mi --poll-period=180000 --threshold=5 --deployment=metrics-server --container=metrics-server]
+ ERROR: logging before flag.Parse: I0316 23:32:08.383430 1 pod_nanny.go:69] Version: 1.8.14
+ ERROR: logging before flag.Parse: I0316 23:32:08.383441 1 pod_nanny.go:85] Watching namespace: kube-system, pod: metrics-server-7d78876589-hcrff, container: metrics-server.
+ ERROR: logging before flag.Parse: I0316 23:32:08.383446 1 pod_nanny.go:86] storage: MISSING, extra_storage: 0Gi
+ ERROR: logging before flag.Parse: I0316 23:32:08.384554 1 pod_nanny.go:192] Unable to decode Nanny Configuration from config map, using default parameters
+ ERROR: logging before flag.Parse: I0316 23:32:08.384565 1 pod_nanny.go:116] cpu: 44m, extra_cpu: 0.5m, memory: 51Mi, extra_memory: 4Mi
+ ERROR: logging before flag.Parse: I0316 23:32:08.384589 1 pod_nanny.go:145] Resources: [{Base:{i:{value:44 scale:-3} d:{Dec:<nil>} s:44m Format:DecimalSI} ExtraPerNode:{i:{value:5 scale:-4} d:{Dec:<nil>} s: Format:DecimalSI} Name:cpu} {Base:{i:{value:53477376 scale:0} d:{Dec:<nil>} s:51Mi Format:BinarySI} ExtraPerNode:{i:{value:4194304 scale:0}
+ d:{Dec:<nil>} s:4Mi Format:BinarySI} Name:memory}]
+ ```
+
+2. For clusters running Kubernetes version 1.23 and higher, Metrics Server has a *PodDisruptionBudget*, which ensures that at least one Metrics Server pod is available. If you see output like the following after running `kubectl -n kube-system get po`, the customized resource requests might be too small. Increase the coefficient values to resolve the issue.
+
+ ```output
+ metrics-server-679b886d4-pxwdf 1/2 CrashLoopBackOff 6 (36s ago) 6m33s
+ metrics-server-679b886d4-svxxx 1/2 CrashLoopBackOff 6 (54s ago) 6m33s
+ metrics-server-7d78876589-hcrff 2/2 Running 0 37m
+ ```
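+
+   To inspect the budget, a sketch assuming the PodDisruptionBudget is named *metrics-server* (an assumption; verify the name with `kubectl -n kube-system get pdb`):
+
+   ```bash
+   # Show the minimum available pods and allowed disruptions for Metrics Server
+   kubectl -n kube-system get poddisruptionbudget metrics-server
+   ```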
+
+## Next steps
+
+Metrics Server is a component in the core metrics pipeline. For more information, see [Metrics Server API design][metrics-server-api-design].
+
+<!-- EXTERNAL LINKS -->
+[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
+[metrics-server-overview]: https://kubernetes-sigs.github.io/metrics-server/
+[metrics-server-api-design]: https://github.com/kubernetes/design-proposals-archive/blob/main/instrumentation/resource-metrics-api.md
+
+<!-- INTERNAL LINKS -->
+[horizontal-pod-autoscaler]: concepts-scale.md#horizontal-pod-autoscaler
aks Vertical Pod Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/vertical-pod-autoscaler.md
Title: Vertical Pod Autoscaling (preview) in Azure Kubernetes Service (AKS)
description: Learn how to vertically autoscale your pod on an Azure Kubernetes Service (AKS) cluster. Previously updated : 01/12/2023 Last updated : 03/17/2023 # Vertical Pod Autoscaling (preview) in Azure Kubernetes Service (AKS)
Vertical Pod autoscaling uses the `VerticalPodAutoscaler` object to automaticall
The Vertical Pod Autoscaler uses the `lowerBound` and `upperBound` attributes to decide whether to delete a Pod and replace it with a new Pod. If a Pod has requests less than the lower bound or greater than the upper bound, the Vertical Pod Autoscaler deletes the Pod and replaces it with a Pod that meets the target attribute.
-## Metrics server VPA throttling
-
-With AKS clusters version 1.24 and higher, vertical pod autoscaling is enabled for the metrics server. VPA enables you to adjust the resource limit when the metrics server is experiencing consistent CPU and memory resource constraints.
-
-If the metrics server throttling rate is high and the memory usage of its two pods are unbalanced, this indicates the metrics server requires more resources than the default values specified.
-
-To update the coefficient values, create a ConfigMap in the overlay *kube-system* namespace to override the values in the metrics server specification. Perform the following steps to update the metrics server.
-
-1. Create a ConfigMap file named *metrics-server-config.yaml* and copy in the following manifest.
-
- ```yml
- apiVersion: v1
- kind: ConfigMap
- metadata:
- name: metrics-server-config
- namespace: kube-system
- labels:
- kubernetes.io/cluster-service: "true"
- addonmanager.kubernetes.io/mode: EnsureExists
- data:
- NannyConfiguration: |-
- apiVersion: nannyconfig/v1alpha1
- kind: NannyConfiguration
- baseCPU: 100m
- cpuPerNode: 1m
- baseMemory: 100Mi
- memoryPerNode: 8Mi
- ```
-
- In this ConfigMap example, it changes the resource limit and request to the following:
-
- * cpu: (100+1n) millicore
- * memory: (100+8n) mebibyte
-
- Where *n* is the number of nodes.
-
-2. Create the ConfigMap using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
-
- ```bash
- kubectl apply -f metrics-server-config.yaml
- ```
-
-Be cautious of the *baseCPU*, *cpuPerNode*, *baseMemory*, and the *memoryPerNode* as the ConfigMap won't be validated by AKS. As a recommended practice, increase the value gradually to avoid unnecessary resource consumption. Proactively monitor resource usage when updating or creating the ConfigMap. A large number of resource requests could negatively impact the node.
- ## Next steps This article showed you how to automatically scale resource utilization, such as CPU and memory, of cluster nodes to match application requirements. You can also use the horizontal pod autoscaler to automatically adjust the number of pods that run your application. For steps on using the horizontal pod autoscaler, see [Scale applications in AKS][scale-applications-in-aks].
app-service App Service Web Tutorial Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-tutorial-custom-domain.md
Browse to the DNS names that you configured earlier.
If you receive an HTTP 404 (Not Found) error when you browse to the URL of your custom domain, the two most likely causes are: - The browser client has cached the old IP address of your domain. Clear the cache, and test DNS resolution again. On a Windows machine, you clear the cache with `ipconfig /flushdns`.-- You configured an IP-based certificate binding, and the app's IP address has changed because of it. [Remap the A record](configure-ssl-bindings.md#remap-records-for-ip-ssl) in your DNS entries to the new IP address.
+- You configured an IP-based certificate binding, and the app's IP address has changed because of it. [Remap the A record](configure-ssl-bindings.md#2-remap-records-for-ip-based-ssl) in your DNS entries to the new IP address.
If you receive a `Page not secure` warning or error, it's because your domain doesn't have a certificate binding yet. [Add a private certificate for the domain](configure-ssl-certificate.md) and [configure the binding](configure-ssl-bindings.md).
app-service Configure Common https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-common.md
description: Learn to configure common settings for an App Service app. App sett
keywords: azure app service, web app, app settings, environment variables ms.assetid: 9af8a367-7d39-4399-9941-b80cbc5f39a0 Previously updated : 07/11/2022 Last updated : 04/21/2023 ms.devlang: azurecli
App settings are always encrypted when stored (encrypted-at-rest).
![Application Settings](./media/configure-common/open-ui.png)
- By default, values for app settings are hidden in the portal for security. To see a hidden value of an app setting, click its **Value** field. To see the hidden values of all app settings, click the **Show values** button.
+ By default, values for app settings are hidden in the portal for security. To see a hidden value of an app setting, select its **Value** field. To see the hidden values of all app settings, select the **Show values** button.
-1. To add a new app setting, click **New application setting**. To edit a setting, click the **Edit** button on the right side.
+1. To add a new app setting, select **New application setting**. To edit a setting, select the **Edit** button on the right side.
1. In the dialog, you can [stick the setting to the current slot](deploy-staging-slots.md#which-settings-are-swapped).
App settings are always encrypted when stored (encrypted-at-rest).
> In a default Linux app service or a custom Linux container, any nested JSON key structure in the app setting name like `ApplicationInsights:InstrumentationKey` needs to be configured in App Service as `ApplicationInsights__InstrumentationKey` for the key name. In other words, any `:` should be replaced by `__` (double underscore). >
-1. When finished, click **Update**. Don't forget to click **Save** back in the **Configuration** page.
+1. When finished, select **Update**. Don't forget to select **Save** back in the **Configuration** page.
# [Azure CLI](#tab/cli)
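For example, a sketch using the `az webapp config appsettings set` command with placeholder values:

```azurecli-interactive
az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings Key1=Value1 Key2=Value2
```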
Set-AzWebAppSlotConfigName -ResourceGroupName <group-name> -Name <app-name> -App
# [Azure portal](#tab/portal)
-Click the **Advanced edit** button. Edit the settings in the text area. When finished, click **Update**. Don't forget to click **Save** back in the **Configuration** page.
+Select the **Advanced edit** button. Edit the settings in the text area. When finished, select **Update**. Don't forget to select **Save** back in the **Configuration** page.
App settings have the following JSON formatting:
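For example (the `slotSetting` flag marks a setting as sticky to a deployment slot):

```json
[
  {
    "name": "key-value-pair-name",
    "value": "key-value-pair-value",
    "slotSetting": false
  }
]
```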
It's not possible to edit app settings in bulk by using a JSON file with Azure P
### Configure arrays in app settings
-You can also configure arrays in app settings as shown in the table below.
+You can also configure arrays in app settings as shown in the following table.
|App setting name | App setting value | |--|-|
Connection strings are always encrypted when stored (encrypted-at-rest).
![Application Settings](./media/configure-common/open-ui.png)
- By default, values for connection strings are hidden in the portal for security. To see a hidden value of a connection string, click its **Value** field. To see the hidden values of all connection strings, click the **Show value** button.
+ By default, values for connection strings are hidden in the portal for security. To see a hidden value of a connection string, select its **Value** field. To see the hidden values of all connection strings, select the **Show value** button.
-1. To add a new connection string, click **New connection string**. To edit a connection string, click the **Edit** button on the right side.
+1. To add a new connection string, select **New connection string**. To edit a connection string, select the **Edit** button on the right side.
1. In the dialog, you can [stick the connection string to the current slot](deploy-staging-slots.md#which-settings-are-swapped).
-1. When finished, click **Update**. Don't forget to click **Save** back in the **Configuration** page.
+1. When finished, select **Update**. Don't forget to select **Save** back in the **Configuration** page.
# [Azure CLI](#tab/cli)
Set-AzWebAppSlotConfigName -ResourceGroupName <group-name> -Name <app-name> -Con
# [Azure portal](#tab/portal)
-Click the **Advanced edit** button. Edit the connection strings in the text area. When finished, click **Update**. Don't forget to click **Save** back in the **Configuration** page.
+Select the **Advanced edit** button. Edit the connection strings in the text area. When finished, select **Update**. Don't forget to select **Save** back in the **Configuration** page.
Connection strings have the following JSON formatting:
Here, you can configure some common settings for the app. Some settings require
![General settings for Linux containers](./media/configure-common/open-general-linux.png) - **Platform settings**: Lets you configure settings for the hosting platform, including:
+ - **Platform bitness**: 32-bit or 64-bit. For Windows apps only.
- **FTP state**: Allow only FTPS or disable FTP altogether.
- - **Bitness**: 32-bit or 64-bit. For Windows apps only.
- - **WebSocket protocol**: For [ASP.NET SignalR] or [socket.io](https://socket.io/), for example.
- - **Always On**: Keeps the app loaded even when there's no traffic. When **Always On** is not turned on (default), the app is unloaded after 20 minutes without any incoming requests. The unloaded app can cause high latency for new requests because of its warm-up time. When **Always On** is turned on, the front-end load balancer sends a GET request to the application root every five minutes. The continuous ping prevents the app from being unloaded.
-
- Always On is required for continuous WebJobs or for WebJobs that are triggered using a CRON expression.
- **HTTP version**: Set to **2.0** to enable support for [HTTPS/2](https://wikipedia.org/wiki/HTTP/2) protocol. > [!NOTE] > Most modern browsers support HTTP/2 protocol over TLS only, while non-encrypted traffic continues to use HTTP/1.1. To ensure that client browsers connect to your app with HTTP/2, secure your custom DNS name. For more information, see [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md).
+ - **Web sockets**: For [ASP.NET SignalR] or [socket.io](https://socket.io/), for example.
+ - **Always On**: Keeps the app loaded even when there's no traffic. When **Always On** isn't turned on (default), the app is unloaded after 20 minutes without any incoming requests. The unloaded app can cause high latency for new requests because of its warm-up time. When **Always On** is turned on, the front-end load balancer sends a GET request to the application root every five minutes. The continuous ping prevents the app from being unloaded.
+
+ Always On is required for continuous WebJobs or for WebJobs that are triggered using a CRON expression.
- **ARR affinity**: In a multi-instance deployment, ensure that the client is routed to the same instance for the life of the session. You can set this option to **Off** for stateless applications.
+ - **HTTPS Only**: When enabled, all HTTP traffic is redirected to HTTPS.
+ - **Minimum TLS version**: Select the minimum TLS encryption version required by your app.
- **Debugging**: Enable remote debugging for [ASP.NET](troubleshoot-dotnet-visual-studio.md#remotedebug), [ASP.NET Core](/visualstudio/debugger/remote-debugging-azure), or [Node.js](configure-language-nodejs.md#debug-remotely) apps. This option turns off automatically after 48 hours. - **Incoming client certificates**: require client certificates in [mutual authentication](app-service-web-configure-tls-mutual-auth.md).
To show the existing settings, use the [Get-AzWebApp](/powershell/module/az.webs
This setting is only for Windows apps.
-The default document is the web page that's displayed at the root URL of an App Service app. The first matching file in the list is used. If the app uses modules that route based on URL instead of serving static content, there is no need for default documents.
+The default document is the web page that's displayed at the root URL of an App Service app. The first matching file in the list is used. If the app uses modules that route based on URL instead of serving static content, there's no need for default documents.
# [Azure portal](#tab/portal)
The default document is the web page that's displayed at the root URL of an App
![Default documents](./media/configure-common/open-documents.png)
-1. To add a default document, click **New document**. To remove a default document, click **Delete** to its right.
+1. To add a default document, select **New document**. To remove a default document, select **Delete** to its right.
# [Azure CLI](#tab/cli)
By default, App Service starts your app from the root directory of your app code
1. In the [Azure portal], search for and select **App Services**, and then select your app. 1. In the app's left menu, select **Configuration** > **Path mappings**
-1. Click **New virtual application or directory**.
+1. Select **New virtual application or directory**.
- To map a virtual directory to a physical path, leave the **Directory** check box selected. Specify the virtual directory and the corresponding relative (physical) path to the website root (`D:\home`). - To mark a virtual directory as a web application, clear the **Directory** check box. ![Directory check box](./media/configure-common/directory-check-box.png)
-1. Click **OK**.
+1. Select **OK**.
# [Azure CLI](#tab/cli)
To add a custom handler:
![Path mappings](./media/configure-common/open-path.png)
-1. Click **New handler mapping**. Configure the handler as follows:
+1. Select **New handler mapping**. Configure the handler as follows:
   - **Extension**. The file extension you want to handle, such as *\*.php* or *handler.fcgi*. - **Script processor**. The absolute path of the script processor. Requests to files that match the file extension are processed by the script processor. Use the path `D:\home\site\wwwroot` to refer to your app's root directory. - **Arguments**. Optional command-line arguments for the script processor.
-1. Click **OK**.
+1. Select **OK**.
## Configure custom containers
app-service Configure Ssl Bindings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-bindings.md
description: Secure HTTPS access to your custom domain by creating a TLS/SSL bin
tags: buy-ssl-certificates Previously updated : 04/27/2022 Last updated : 04/20/2023
This article shows you how to secure the [custom domain](app-service-web-tutoria
![Web app with custom TLS/SSL certificate](./media/configure-ssl-bindings/app-with-custom-ssl.png)
-Securing a [custom domain](app-service-web-tutorial-custom-domain.md) with a certificate involves two steps:
--- [Add a private certificate to App Service](configure-ssl-certificate.md) that satisfies all the [private certificate requirements](configure-ssl-certificate.md#private-certificate-requirements).-- Create a TLS binding to the corresponding custom domain. This second step is covered by this article.- ## Prerequisites
-To follow this how-to guide:
--- [Create an App Service app](./index.yml)-- [Map a domain name to your app](app-service-web-tutorial-custom-domain.md) or [buy and configure it in Azure](manage-custom-dns-buy-domain.md)-- [Add a private certificate to your app](configure-ssl-certificate.md)-
-> [!NOTE]
-> The easiest way to add a private certificate is to [create a free App Service managed certificate with your custom domain](tutorial-secure-domain-certificate.md).
--
+- [Scale up your App Service app](manage-scale-up.md) to one of the supported pricing tiers: **Basic**, **Standard**, **Premium**.
+- [Map a domain name to your app](app-service-web-tutorial-custom-domain.md) or [buy and configure it in Azure](manage-custom-dns-buy-domain.md).
<a name="upload"></a>
-## Secure a custom domain
-
-Do the following steps:
-
-In the <a href="https://portal.azure.com" target="_blank">Azure portal</a>, from the left menu, select **App Services** > **\<app-name>**.
-
-From the left navigation of your app, start the **TLS/SSL Binding** dialog by:
+## 1. Add the binding
-- Selecting **Custom domains** > **Add binding**-- Selecting **TLS/SSL settings** > **Add TLS/SSL binding**
+In the <a href="https://portal.azure.com" target="_blank">Azure portal</a>:
-![Add binding to domain](./media/configure-ssl-bindings/secure-domain-launch.png)
+1. From the left menu, select **App Services** > **\<app-name>**.
-In **Custom Domain**, select the custom domain you want to add a binding for.
+1. From the left navigation of your app, select **Custom domains**.
-If your app already has a certificate for the selected custom domain, go to [Create binding](#create-binding) directly. Otherwise, keep going.
+1. Next to the custom domain, select **Add binding**.
-### Add a certificate for custom domain
+ :::image type="content" source="media/configure-ssl-bindings/secure-domain-launch.png" alt-text="A screenshot showing how to launch the Add TLS/SSL Binding dialog.":::
-If your app has no certificate for the selected custom domain, then you have two options:
+1. If your app already has a certificate for the selected custom domain, you can select it in **Certificate**. If not, you must add a certificate using one of the selections in **Source**.
-- **Upload PFX Certificate** - Follow the workflow at [Upload a private certificate](configure-ssl-certificate.md#upload-a-private-certificate), then select this option here.-- **Import App Service Certificate** - Follow the workflow at [Import an App Service certificate](configure-ssl-certificate.md#buy-and-import-app-service-certificate), then select this option here.
+ - **Create App Service Managed Certificate** - Let App Service create a managed certificate for your selected domain. This option is the simplest. For more information, see [Create a free managed certificate](configure-ssl-certificate.md#create-a-free-managed-certificate).
+ - **Import App Service Certificate** - In **App Service Certificate**, choose an App Service certificate you've purchased for your selected domain. To purchase an App Service certificate, see [Import an App Service certificate](configure-ssl-certificate.md#buy-and-import-app-service-certificate).
+ - **Upload certificate (.pfx)** - Follow the workflow at [Upload a private certificate](configure-ssl-certificate.md#upload-a-private-certificate) to upload a PFX certificate from your local machine and specify the certificate password.
+ - **Import from Key Vault** - Select **Select key vault certificate** and select the certificate in the dialog.
-> [!NOTE]
-> You can also [Create a free certificate](configure-ssl-certificate.md#create-a-free-managed-certificate) or [Import a Key Vault certificate](configure-ssl-certificate.md#import-a-certificate-from-key-vault), but you must do it separately and then return to the **TLS/SSL Binding** dialog.
-
-### Create binding
-
-Use the following table to help you configure the TLS binding in the **TLS/SSL Binding** dialog, then click **Add Binding**.
+1. In **TLS/SSL type**, choose between **SNI SSL** and **IP based SSL**.
+ - **[SNI SSL](https://en.wikipedia.org/wiki/Server_Name_Indication)**: Multiple SNI SSL bindings may be added. This option allows multiple TLS/SSL certificates to secure multiple domains on the same IP address. Most modern browsers (including Internet Explorer, Chrome, Firefox, and Opera) support SNI (for more information, see [Server Name Indication](https://wikipedia.org/wiki/Server_Name_Indication)).
+   - **IP based SSL**: Only one IP based SSL binding may be added. This option allows only one TLS/SSL certificate to secure a dedicated public IP address.
+
-Once the operation is complete, the custom domain's TLS/SSL state is changed to **Secure**.
+1. When adding a new certificate, validate the new certificate by selecting **Validate**.
-![TLS/SSL binding successful](./media/configure-ssl-bindings/secure-domain-finished.png)
+1. Select **Add**.
+ Once the operation is complete, the custom domain's TLS/SSL state is changed to **Secure**.
+
+ :::image type="content" source="media/configure-ssl-bindings/secure-domain-finished.png" alt-text="A screenshot showing the custom domain secured by a certificate binding.":::
+
> [!NOTE] > A **Secure** state in the **Custom domains** means that it is secured with a certificate, but App Service doesn't check if the certificate is self-signed or expired, for example, which can also cause browsers to show an error or warning.
-## Remap records for IP SSL
+## 2. Remap records for IP based SSL
-If you don't use IP SSL in your app, skip to [Test HTTPS for your custom domain](#test-https).
+This step is needed only for IP based SSL. For an SNI SSL binding, skip to [Test HTTPS for your custom domain](#3-test-https).
There are two changes you need to make, potentially:
There are two changes you need to make, potentially:
- If you have an SNI SSL binding to `<app-name>.azurewebsites.net`, [remap any CNAME mapping](app-service-web-tutorial-custom-domain.md#2-create-the-dns-records) to point to `sni.<app-name>.azurewebsites.net` instead (add the `sni` prefix).
-## Test HTTPS
+## 3. Test HTTPS
In various browsers, browse to `https://<your.custom.domain>` to verify that it serves up your app. :::image type="content" source="./media/configure-ssl-bindings/app-with-custom-ssl.png" alt-text="Screenshot showing an example of browsing to your custom domain with the contoso.com URL highlighted.":::
-Your application code can inspect the protocol via the "x-appservice-proto" header. The header will have a value of `http` or `https`.
+Your application code can inspect the protocol via the "x-appservice-proto" header. The header has a value of `http` or `https`.
> [!NOTE] > If your app gives you certificate validation errors, you're probably using a self-signed certificate. > > If that's not the case, you may have left out intermediate certificates when you export your certificate to the PFX file.
-## Prevent IP changes
+## Frequently asked questions
+
+- [How do I make sure that the app's IP address doesn't change when I make changes to the certificate binding?](#how-do-i-make-sure-that-the-apps-ip-address-doesnt-change-when-i-make-changes-to-the-certificate-binding)
+- [Can I disable the forced redirect from HTTP to HTTPS?](#can-i-disable-the-forced-redirect-from-http-to-https)
+- [How can I change the minimum TLS versions for the app?](#how-can-i-change-the-minimum-tls-versions-for-the-app)
+- [How do I handle TLS termination in App Service?](#how-do-i-handle-tls-termination-in-app-service)
+
+<a name="prevent-ip-changes"></a>
+
+#### How do I make sure that the app's IP address doesn't change when I make changes to the certificate binding?
Your inbound IP address can change when you delete a binding, even if that binding is IP SSL. This is especially important when you renew a certificate that's already in an IP SSL binding. To avoid a change in your app's IP address, follow these steps in order:
Your inbound IP address can change when you delete a binding, even if that bindi
2. Bind the new certificate to the custom domain you want without deleting the old one. This action replaces the binding instead of removing the old one. 3. Delete the old certificate.
-## Enforce HTTPS
-
-In your app page, in the left navigation, select **TLS/SSL settings**. Then, in **HTTPS Only**, select **On**.
-
-If selected **HTTPS Only**, **Off** It means anyone can still access your app using HTTP. You can redirect all HTTP requests to the HTTPS port by selecting **On**.
-
-![Enforce HTTPS](./media/configure-ssl-bindings/enforce-https.png)
-
-When the operation is complete, navigate to any of the HTTP URLs that point to your app. For example:
+<a name="enforce-https"></a>
-- `http://<app_name>.azurewebsites.net`-- `http://contoso.com`-- `http://www.contoso.com`
+#### Can I disable the forced redirect from HTTP to HTTPS?
-## Enforce TLS versions
+By default, App Service forces a redirect from HTTP requests to HTTPS. To disable this behavior, see [Configure general settings](configure-common.md#configure-general-settings).
-Your app allows [TLS](https://wikipedia.org/wiki/Transport_Layer_Security) 1.2 by default, which is the recommended TLS level by industry standards, such as [PCI DSS](https://wikipedia.org/wiki/Payment_Card_Industry_Data_Security_Standard). To enforce different TLS versions, follow these steps:
+<a name="enforce-tls-versions"></a>
-In your app page, in the left navigation, select **TLS/SSL settings**. Then, in **TLS version**, select the minimum TLS version you want. This setting controls the inbound calls only.
+#### How can I change the minimum TLS versions for the app?
-![Enforce TLS 1.1 or 1.2](./media/configure-ssl-bindings/enforce-tls1-2.png)
+Your app allows [TLS](https://wikipedia.org/wiki/Transport_Layer_Security) 1.2 by default, which is the recommended TLS level by industry standards, such as [PCI DSS](https://wikipedia.org/wiki/Payment_Card_Industry_Data_Security_Standard). To enforce different TLS versions, see [Configure general settings](configure-common.md#configure-general-settings).
-When the operation is complete, your app rejects all connections with lower TLS versions.
+<a name="handle-tls-termination"></a>
-## Handle TLS termination
+#### How do I handle TLS termination in App Service?
In App Service, [TLS termination](https://wikipedia.org/wiki/TLS_termination_proxy) happens at the network load balancers, so all HTTPS requests reach your app as unencrypted HTTP requests. If your app logic needs to check if the user requests are encrypted or not, inspect the `X-Forwarded-Proto` header.
app-service Configure Ssl Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-certificate.md
The following table lists the options for you to add certificates in App Service
| Purchase an App Service certificate | A private certificate that's managed by Azure. It combines the simplicity of automated certificate management and the flexibility of renewal and export options. | | Import a certificate from Key Vault | Useful if you use [Azure Key Vault](../key-vault/index.yml) to manage your [PKCS12 certificates](https://wikipedia.org/wiki/PKCS_12). See [Private certificate requirements](#private-certificate-requirements). | | Upload a private certificate | If you already have a private certificate from a third-party provider, you can upload it. See [Private certificate requirements](#private-certificate-requirements). |
-| Upload a public certificate | Public certificates are not used to secure custom domains, but you can load them into your code if you need them to access remote resources. |
+| Upload a public certificate | Public certificates aren't used to secure custom domains, but you can load them into your code if you need them to access remote resources. |
> [!NOTE] > After you upload a certificate to an app, the certificate is stored in a deployment unit that's bound to the App Service plan's resource group, region, and operating system combination, internally called a *webspace*. That way, the certificate is accessible to other apps in the same resource group and region combination.
The free certificate comes with the following limitations:
### [Apex domain](#tab/apex) - Must have an A record pointing to your web app's IP address.-- Isn't supported on apps that are not publicly accessible.
+- Isn't supported on apps that aren't publicly accessible.
- Isn't supported with root domains that are integrated with Traffic Manager. - Must meet all the above for successful certificate issuances and renewals.
The free certificate comes with the following limitations:
![Screenshot of "Private Key Certificates" pane with newly created certificate listed.](./media/configure-ssl-certificate/create-free-cert-finished.png)
-1. To secure a custom domain with this certificate, you still have to create a certificate binding. Follow the steps in [Create binding](configure-ssl-bindings.md#create-binding).
+1. To secure a custom domain with this certificate, you still have to create a certificate binding. Follow the steps in [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md).
## Buy and import App Service certificate
If you already have a working App Service certificate, you can complete the foll
### Store certificate in Azure Key Vault
-[Key Vault](../key-vault/general/overview.md) is an Azure service that helps safeguard cryptographic keys and secrets used by cloud applications and services. For App Service certificates, the storage of choice is Key Vault. After you finish the certificate purchase process, you have to complete a few more steps before you start using this certificate.
+[Key Vault](../key-vault/general/overview.md) is an Azure service that helps safeguard cryptographic keys and secrets used by cloud applications and services. For App Service certificates, the storage of choice is Key Vault. After you finish the certificate purchase process, you must complete a few more steps before you start using this certificate.
1. On the [App Service Certificates page](https://portal.azure.com/#blade/HubsExtension/Resources/resourceType/Microsoft.CertificateRegistration%2FcertificateOrders), select the certificate. On the certificate menu, select **Certificate Configuration** > **Step 1: Store**.
The following domain verification methods are supported:
![Screenshot of "Private Key Certificates" pane with purchased certificate listed.](./media/configure-ssl-certificate/import-app-service-cert-finished.png)
-1. To secure a custom domain with this certificate, you still have to create a certificate binding. Follow the steps in [Create binding](configure-ssl-bindings.md#create-binding).
+1. To secure a custom domain with this certificate, you still have to create a certificate binding. Follow the steps in [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md).
## Import a certificate from Key Vault
If you use Azure Key Vault to manage your certificates, you can import a PKCS12
### Authorize App Service to read from the vault
-By default, the App Service resource provider doesn't have access to your key vault. To use a key vault for a certificate deployment, you have to [authorize read access for the resource provider to the key vault](../key-vault/general/assign-access-policy-cli.md).
+By default, the App Service resource provider doesn't have access to your key vault. To use a key vault for a certificate deployment, you must [authorize read access for the resource provider to the key vault](../key-vault/general/assign-access-policy-cli.md).
> [!NOTE] > Currently, a Key Vault certificate supports only the Key Vault access policy, not RBAC model.
By default, the App Service resource provider doesn't have access to your key va
> [!NOTE] > If you update your certificate in Key Vault with a new certificate, App Service automatically syncs your certificate within 24 hours.
-1. To secure a custom domain with this certificate, you still have to create a certificate binding. Follow the steps in [Create binding](configure-ssl-bindings.md#create-binding).
+1. To secure a custom domain with this certificate, you still have to create a certificate binding. Follow the steps in [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md).
## Upload a private certificate
After you get a certificate from your certificate provider, make the certificate
### Merge intermediate certificates
-If your certificate authority gives you multiple certificates in the certificate chain, you have to merge the certificates following the same order.
+If your certificate authority gives you multiple certificates in the certificate chain, you must merge the certificates following the same order.
1. In a text editor, open each received certificate.
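The merged file is a series of Base64-encoded PEM blocks, typically your server certificate first, followed by each intermediate, then the root, in the order your certificate authority specifies (a sketch):

```
-----BEGIN CERTIFICATE-----
<your server certificate>
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
<intermediate certificate>
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
<root certificate>
-----END CERTIFICATE-----
```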
Now, export your merged TLS/SSL certificate with the private key that was used t
openssl pkcs12 -export -out myserver.pfx -inkey <private-key-file> -in <merged-certificate-file> ```
-1. When you're prompted, specify a password for the export operation. When you upload your TLS/SSL certificate to App Service later, you'll have to provide this password.
+1. When you're prompted, specify a password for the export operation. When you upload your TLS/SSL certificate to App Service later, you must provide this password.
1. If you used IIS or _Certreq.exe_ to generate your certificate request, install the certificate to your local computer, and then [export the certificate to a PFX file](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc754329(v=ws.11)).
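Optionally, before you upload, you can sanity-check the exported PFX file with OpenSSL (you're prompted for the export password):

```
openssl pkcs12 -info -in myserver.pfx -noout
```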
You're now ready to upload the certificate to App Service.
![Screenshot of "Private Key Certificates" pane with uploaded certificate listed.](./media/configure-ssl-certificate/create-free-cert-finished.png)
-1. To secure a custom domain with this certificate, you still have to create a certificate binding. Follow the steps in [Create binding](configure-ssl-bindings.md#create-binding).
+1. To secure a custom domain with this certificate, you still have to create a certificate binding. Follow the steps in [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md).
## Upload a public certificate
The downloaded PFX file is a raw PKCS12 file that contains both the public and p
If you delete an App Service certificate, the delete operation is irreversible and final. The result is a revoked certificate, and any binding in App Service that uses this certificate becomes invalid.
-To prevent accidental deletion, Azure puts a lock on the App Service certificate. So, to delete the certificate, you have to first remove the delete lock on the certificate.
+To prevent accidental deletion, Azure puts a lock on the App Service certificate. So, to delete the certificate, you must first remove the delete lock on the certificate.
1. On the [App Service Certificates page](https://portal.azure.com/#blade/HubsExtension/Resources/resourceType/Microsoft.CertificateRegistration%2FcertificateOrders), select the certificate.
app-service Overview Inbound Outbound Ips https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-inbound-outbound-ips.md
nslookup <app-name>.azurewebsites.net
## Get a static inbound IP
-Sometimes you might want a dedicated, static IP address for your app. To get a static inbound IP address, you need to [secure a custom domain](configure-ssl-bindings.md#secure-a-custom-domain). If you don't actually need TLS functionality to secure your app, you can even upload a self-signed certificate for this binding. In an IP-based TLS binding, the certificate is bound to the IP address itself, so App Service provisions a static IP address to make it happen.
+Sometimes you might want a dedicated, static IP address for your app. To get a static inbound IP address, you need to [secure a custom DNS name with an IP-based certificate binding](configure-ssl-bindings.md). If you don't actually need TLS functionality to secure your app, you can even upload a self-signed certificate for this binding. In an IP-based TLS binding, the certificate is bound to the IP address itself, so App Service provisions a static IP address to make it happen.
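For reference, an IP-based binding can be created with the Azure CLI, as in the following sketch (it assumes the certificate is already uploaded to the app):

```azurecli
az webapp config ssl bind --certificate-thumbprint <thumbprint> --ssl-type IP --name <app-name> --resource-group <group-name>
```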
## When outbound IPs change
app-service Overview Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-manage-costs.md
Other cost resources for App Service are (see [App Service pricing](https://azur
- [App Service domains](manage-custom-dns-buy-domain.md) Your subscription is charged for the domain registration on a yearly basis, if you enable automatic renewal. - [App Service certificates](configure-ssl-certificate.md#buy-and-import-app-service-certificate) One-time charge at the time of purchase. If you have multiple subdomains to secure, you can reduce cost by purchasing one wildcard certificate instead of multiple standard certificates.-- [IP-based certificate bindings](configure-ssl-bindings.md#create-binding) The binding is configured on a certificate at the app level. Costs are accrued for each binding. For **Standard** tier and above, the first IP-based binding is not charged.
+- [IP-based SSL binding](configure-ssl-bindings.md) The binding is configured on a certificate at the app level. Costs are accrued for each binding. For **Standard** tier and above, the first IP-based binding is not charged.
At the end of your billing cycle, the charges for each VM instance are aggregated. Your bill or invoice shows a section for all App Service costs. There's a separate line item for each meter.
app-service Troubleshoot Domain Ssl Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-domain-ssl-certificates.md
The App Service certificate was renewed, but the app that uses the App Service c
#### Cause
-App Service automatically syncs your certificate within 48 hours. When you rotate or update a certificate, sometimes the application is still retrieving the old certificate and not the newly updated certificate. The reason is that the job to sync the certificate resource hasn't run yet. To resolve this problem, sync the certificate, which automatically updates the hostname bindings for the certificate in App Service without causing any downtime to your apps.
+App Service automatically syncs your certificate within 24 hours. When you rotate or update a certificate, sometimes the application is still retrieving the old certificate and not the newly updated certificate. The reason is that the job to sync the certificate resource hasn't run yet. To resolve this problem, sync the certificate, which automatically updates the hostname bindings for the certificate in App Service without causing any downtime to your apps.
#### Solution
app-service Tutorial Auth Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-auth-aad.md
if (bearerToken) {
## 9. Clean up resources
-In the preceding steps, you created Azure resources in a resource group.
-
-1. Delete the resource group by running the following command in the Cloud Shell. This command may take a minute to run.
--
- ```azurecli-interactive
- az group delete --name myAuthResourceGroup
- ```
--
-1. Use the authentication apps' **Client ID**, you previously found and made note of in the `Enable authentication and authorization` sections for the backend and frontend apps.
-1. Delete app registrations for both frontend and backend apps.
-
- ```azurecli-interactive
- # delete app - do this for both frontend and backend client ids
- az ad app delete <client-id>
- ```
## Frequently asked questions
app-service Tutorial Connect App App Graph Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-app-graph-javascript.md
+
+ Title: 'Tutorial: Authenticate users E2E to Azure'
+description: Learn how to use App Service authentication and authorization to secure your App Service apps end-to-end to a downstream Azure service.
+keywords: app service, azure app service, authN, authZ, secure, security, multi-tiered, azure active directory, azure ad
+ms.devlang: javascript
+ Last updated : 3/13/2023+
+zone_pivot_groups: app-service-platform-windows-linux
+# Requires non-internal subscription - internal subscriptions don't provide permission to correctly configure AAD apps
++
+# Tutorial: Flow authentication from App Service through back-end API to Microsoft Graph
+
+Learn how to create and configure a backend App Service to accept a frontend app's user credential, then exchange that credential for a token that can reach a downstream Azure service. This allows a user to sign in to a frontend App Service, pass their credential to a backend App Service, and then access an Azure service with the same identity.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+>
+> * Configure the backend authentication app to provide a token scoped for the downstream Azure service
+> * Use JavaScript code to exchange the [signed-in user's access token](configure-authentication-oauth-tokens.md#retrieve-tokens-in-app-code) for a new token for the downstream service.
+> * Use JavaScript code to access the downstream service.
+
+## Prerequisites
+
+Complete the previous tutorial, [Access Microsoft Graph from a secured JavaScript app as the user](tutorial-auth-aad.md), before starting this tutorial, but don't remove the resources at the end of that tutorial. This tutorial assumes you have the two App Services and their corresponding authentication apps.
+
+The previous tutorial used the Azure Cloud Shell as the shell for the Azure CLI. This tutorial continues that usage.
+
+## Architecture
+
+The tutorial shows how to pass the user credential provided by the frontend app to the backend app, then on to an Azure service. In this tutorial, the downstream service is Microsoft Graph. The user's credential is used to get their profile from Microsoft Graph.
++
+**Authentication flow** for a user to get Microsoft Graph information in this architecture:
+
+[Previous tutorial](tutorial-auth-aad.md) covered:
+
+1. Sign in a user to a frontend App Service configured to use Active Directory as the identity provider.
+1. The frontend App Service passes the user's token to the backend App Service (see the sketch after this list).
+1. The backend app is secured to allow the frontend to make an API request. The user's access token has an audience for the backend API and a scope of `user_impersonation`.
+1. The backend app registration already has the Microsoft Graph permission with the `User.Read` scope. This permission is added by default to all app registrations.
+1. At the end of the previous tutorial, a _fake_ profile was returned to the frontend app because Graph wasn't connected.
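+
+A minimal sketch of step 2 (hypothetical; not code from the sample app), showing the frontend forwarding the signed-in user's token to the backend API:
+
+```javascript
+// Hypothetical sketch: the frontend server reads the access token that App Service
+// authentication injects into the request headers, then forwards it to the backend
+// API as a bearer token. `backendUrl` is a placeholder for the backend app's URL.
+const accessToken = req.headers['x-ms-token-aad-access-token'];
+
+const response = await fetch(`${backendUrl}/get-profile`, {
+    headers: { Authorization: `Bearer ${accessToken}` }
+});
+```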
+
+This tutorial extends the architecture:
+
+1. Grant admin consent to bypass the user consent screen for the back-end app.
+1. Change the application code to convert the access token sent from the front-end app to an access token with the required permission for Microsoft Graph.
+1. Provide code to have the backend app **exchange the token** for a new token scoped to a downstream Azure service, such as Microsoft Graph.
+1. Provide code to have the backend app **use the new token** to access the downstream service as the currently authenticated user.
+1. **Redeploy** backend app with `az webapp up`.
+1. At the end of this tutorial, a _real_ profile is returned to the frontend app because Graph is connected.
+
+This tutorial doesn't:
+* Change the frontend app from the previous tutorial.
+* Change the backend authentication app's scope permission because `User.Read` is added by default to all authentication apps.
+
+## 1. Configure admin consent for the backend app
+
+In the previous tutorial, when the user signed in to the frontend app, a pop-up displayed asking for user consent.
+
+In this tutorial, in order to read user profile from Microsoft Graph, the back-end app needs to exchange the signed-in user's [access token](../active-directory/develop/access-tokens.md) for a new access token with the required permissions for Microsoft Graph. Because the user isn't directly connected to the backend app, they can't access the consent screen interactively. You must work around this by configuring the back-end app's app registration in Azure AD to [grant admin consent](../active-directory/manage-apps/grant-admin-consent.md?pivots=portal). This is a setting change typically done by an Active Directory administrator.
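+
+If you prefer to script this step, admin consent can also be granted with the Azure CLI, as in the following sketch (`<backend-client-id>` is a placeholder for the backend authentication app's client ID):
+
+```azurecli-interactive
+# Grant admin consent for the backend authentication app's configured permissions
+az ad app permission admin-consent --id <backend-client-id>
+```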
+
+1. Open the Azure portal and search for the backend App Service.
+1. Find the **Settings -> Authentication** section.
+1. Select the identity provider to go to the authentication app.
+1. In the authentication app, select **Manage -> API permissions**.
+1. Select **Grant admin consent for Default Directory**.
+
+ :::image type="content" source="./media/tutorial-connect-app-app-graph-javascript/azure-portal-authentication-app-api-permission-admin-consent-area.png" alt-text="Screenshot of Azure portal authentication app with admin consent button highlighted.":::
+
+1. In the pop-up window, select **Yes** to confirm the consent.
+1. Verify the **Status** column says **Granted for Default Directory**. With this setting, the back-end app is no longer required to show a consent screen to the signed-in user and can directly request an access token. The signed-in user has access to the `User.Read` scope setting because that is the default scope with which the app registration is created.
+
+ :::image type="content" source="./media/tutorial-connect-app-app-graph-javascript/azure-portal-authentication-app-api-permission-admin-consent-granted.png" alt-text="Screenshot of Azure portal authentication app with admin consent granted in status column.":::
+
+## 2. Install npm packages
+
+In the previous tutorial, the backend app didn't need any npm packages for authentication because the only authentication was provided by configuring the identity provider in the Azure portal. In this tutorial, the signed-in user's access token for the backend API must be exchanged for an access token with Microsoft Graph in its scope. This exchange is completed with two libraries, because the exchange no longer goes through App Service authentication; it uses Azure Active Directory and MSAL.js directly.
+
+* [@azure/msal-node](https://www.npmjs.com/package/@azure/msal-node) - exchange token
+* [@microsoft/microsoft-graph-client](https://www.npmjs.com/package/@microsoft/microsoft-graph-client) - connect to Microsoft Graph
+
+1. Open the Azure Cloud Shell and change into the sample directory's backend app:
+
+ ```azurecli-interactive
+ cd js-e2e-web-app-easy-auth-app-to-app/backend
+ ```
+
+1. Install the Azure MSAL npm package:
+
+ ```azurecli-interactive
+ npm install @azure/msal-node
+ ```
+
+1. Install the Microsoft Graph npm package:
+
+ ```azurecli-interactive
+ npm install @microsoft/microsoft-graph-client
+ ```
+
+## 3. Add code to exchange current token for Microsoft Graph token
+
+The source code to complete this step is provided for you. Use the following steps to include it.
+
+1. Open the `./src/server.js` file.
+1. Uncomment the following dependency at the top of the file:
+
+ ```javascript
+ import { getGraphProfile } from './with-graph/graph';
+ ```
+
+1. In the same file, uncomment the `graphProfile` variable:
+
+ ```javascript
+ let graphProfile={};
+ ```
+
+1. In the same file, uncomment the following `getGraphProfile` lines in the `get-profile` route to get the profile from Microsoft Graph:
+
+ ```javascript
+ // where did the profile come from
+ profileFromGraph=true;
+
+ // get the profile from Microsoft Graph
+ graphProfile = await getGraphProfile(accessToken);
+
+ // log the profile for debugging
+ console.log(`profile: ${JSON.stringify(graphProfile)}`);
+ ```
+
+1. Save the changes: <kbd>Ctrl</kbd> + <kbd>s</kbd>.
+1. Redeploy the backend app:
+
+ ```azurecli-interactive
+ az webapp up --resource-group myAuthResourceGroup --name <back-end-app-name>
+ ```
+
+## 4. Inspect backend code to exchange backend API token for the Microsoft Graph token
+
+In order to exchange the backend API audience token for a Microsoft Graph token, the backend app needs to find the tenant ID and use it as part of the MSAL.js configuration object. Because the backend app was configured with Microsoft as the identity provider, the tenant ID and several other required values are already in the App Service app settings.
+
+The following code is already provided for you in the sample app. You need to understand why it's there and how it works so that you can apply this work to other apps you build that need this same functionality.
+
+### Inspect code for getting the Tenant ID
+
+1. Open the `./backend/src/with-graph/auth.js` file.
+
+2. Review the `getTenantId()` function.
+
+ ```javascript
+ export function getTenantId() {
+
+ const openIdIssuer = process.env.WEBSITE_AUTH_OPENID_ISSUER;
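+ // The issuer has the form https://sts.windows.net/<tenant-guid>/v2.0;
+ // the regular expression below captures the GUID portion.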
+ const backendAppTenantId = openIdIssuer.replace(/https:\/\/sts\.windows\.net\/(.{1,36})\/v2\.0/gm, '$1');
+
+ return backendAppTenantId;
+ }
+ ```
+
+3. This function gets the current tenant ID from the `WEBSITE_AUTH_OPENID_ISSUER` environment variable. The ID is parsed out of the variable with a regular expression.
+
+### Inspect code to get Graph token using MSAL.js
+
+1. Still in the `./backend/src/with-graph/auth.js` file, review the `getGraphToken()` function.
+1. Build the MSAL.js configuration object and use it to create the `clientCredentialAuthority`. Configure the on-behalf-of request. Then use `acquireTokenOnBehalfOf` to exchange the backend API access token for a Graph access token.
+
+ ```javascript
+ // ./backend/src/auth.js
+ // Exchange current bearerToken for Graph API token
+ // Env vars were set by App Service
+ export async function getGraphToken(backEndAccessToken) {
+
+ const config = {
+ // MSAL configuration
+ auth: {
+ // the backend's authentication CLIENT ID
+ clientId: process.env.WEBSITE_AUTH_CLIENT_ID,
+ // the backend's authentication CLIENT SECRET
+ clientSecret: process.env.MICROSOFT_PROVIDER_AUTHENTICATION_SECRET,
+ // OAuth 2.0 authorization endpoint (v2)
+ // should be: https://login.microsoftonline.com/BACKEND-TENANT-ID
+ authority: `https://login.microsoftonline.com/${getTenantId()}`
+ },
+ // used for debugging
+ system: {
+ loggerOptions: {
+ loggerCallback(loglevel, message, containsPii) {
+ console.log(message);
+ },
+ piiLoggingEnabled: true,
+ logLevel: MSAL.LogLevel.Verbose,
+ }
+ }
+ };
+
+ const clientCredentialAuthority = new MSAL.ConfidentialClientApplication(config);
+
+ const oboRequest = {
+ oboAssertion: backEndAccessToken,
+ // this scope must already exist on the backend authentication app registration
+ // and visible in resources.azure.com backend app auth config
+ scopes: ["https://graph.microsoft.com/.default"]
+ }
+
+ // This example relies on App Service validating the token at runtime
+ // from headers that can't be set externally
+
+ // If you aren't using App service's authentication,
+ // you must validate your access token yourself
+ // before calling this code
+ try {
+ const { accessToken } = await clientCredentialAuthority.acquireTokenOnBehalfOf(oboRequest);
+ return accessToken;
+ } catch (error) {
+ console.log(`getGraphToken:error.type = ${error.type} ${error.message}`);
+ }
+ }
+ ```
+
+## 5. Inspect backend code to access Microsoft Graph with the new token
+
+To access Microsoft Graph as a user signed in to the frontend application, the changes include:
+
+* Configuration of the Active Directory app registration with an API permission to the downstream service, Microsoft Graph, with the necessary scope of `User.Read`.
+* Grant admin consent to bypass the user consent screen for the back-end app.
+* Change the application code to convert the access token sent from the front-end app to an access token with the required permission for the downstream service, Microsoft Graph.
+
+Now that the code has the correct token for Microsoft Graph, use it to create a client to Microsoft Graph, then get the user's profile.
+
+1. Open the `./backend/src/graph.js` file.
+
+2. In the `getGraphProfile()` function, get the Graph token, create an authenticated client from that token, and then get the profile.
+
+ ```javascript
+ //
+ import graph from "@microsoft/microsoft-graph-client";
+ import { getGraphToken } from "./auth.js";
+
+ // Create client from token with Graph API scope
+ export function getAuthenticatedClient(accessToken) {
+ const client = graph.Client.init({
+ authProvider: (done) => {
+ done(null, accessToken);
+ }
+ });
+
+ return client;
+ }
+ export async function getGraphProfile(accessToken) {
+ // exchange current backend token for token with
+ // graph api scope
+ const graphToken = await getGraphToken(accessToken);
+
+ // use graph token to get Graph client
+ const graphClient = getAuthenticatedClient(graphToken);
+
+ // get profile of user
+ const profile = await graphClient
+ .api('/me')
+ .get();
+
+ return profile;
+ }
+ ```
+
+## 6. Test your changes
+
+1. Use the frontend website in a browser. The URL is in the format `https://<front-end-app-name>.azurewebsites.net/`. You may need to refresh your token if it's expired.
+1. Select `Get user's profile`. This passes your authentication in the bearer token to the backend.
+1. The backend responds with the _real_ Microsoft Graph profile for your account.
+
+ :::image type="content" source="./media/tutorial-connect-app-app-graph-javascript/browser-profile-from-microsoft-graph.png" alt-text="Screenshot of web browser showing frontend application after successfully getting real profile from backend app.":::
+
+## 7. Clean up
+++
+## Frequently asked questions
+
+#### I got an error `80049217`, what does it mean?
+
+This error, `CompactToken parsing failed with error code: 80049217`, means the backend App Service isn't authorized to return the Microsoft Graph token. This error occurs because the app registration is missing the `User.Read` permission.
+
+#### I got an error `AADSTS65001`, what does it mean?
+
+This error, `AADSTS65001: The user or administrator has not consented to use the application with ID \<backend-authentication-id>. Send an interactive authorization request for this user and resource`, means the backend authentication app hasn't been configured for Admin consent. Because the error shows up in the log for the backend app, the frontend application can't tell the user why they didn't see their profile in the frontend app.
+
+#### How do I connect to a different downstream Azure service as user?
+
+This tutorial demonstrates an API app authenticated to **Microsoft Graph**; however, the same general steps can be applied to access any Azure service on behalf of the user (see the sketch after the following list).
+
+1. Make no changes to the frontend application. Only the backend's authentication app registration and the backend app source code change.
+1. Exchange the user's token, scoped for the backend API, for a token for the downstream service you want to access.
+1. Use the token in the downstream service's SDK to create the client.
+1. Use the downstream client to access service functionality.
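+
+For example, here's a hypothetical sketch based on the `getGraphToken()` pattern above: targeting Azure Storage instead of Microsoft Graph changes only the requested scope in the on-behalf-of request.
+
+```javascript
+// Hypothetical: request a token for Azure Storage rather than Microsoft Graph.
+// The rest of the on-behalf-of flow (ConfidentialClientApplication,
+// acquireTokenOnBehalfOf) stays the same as in getGraphToken().
+const oboRequest = {
+    oboAssertion: backEndAccessToken,
+    scopes: ["https://storage.azure.com/.default"]
+};
+```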
+
+## Next steps
+
+* [Tutorial: Create a secure n-tier app in Azure App Service](tutorial-secure-ntier-app.md)
+* [Deploy a Node.js + MongoDB web app to Azure](tutorial-nodejs-mongodb-app.md)
app-service Tutorial Connect Msi Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-sql-database.md
description: Secure Azure SQL Database connectivity with managed identity from a
ms.devlang: csharp Previously updated : 02/16/2022 Last updated : 04/01/2023 # Tutorial: Connect to SQL Database from .NET App Service without secrets using a managed identity
The steps you follow for your project depends on whether you're using [Entity Fr
1. In Visual Studio, open the Package Manager Console and add the NuGet package [Microsoft.Data.SqlClient](https://www.nuget.org/packages/Microsoft.Data.SqlClient): ```powershell
- Install-Package Microsoft.Data.SqlClient -Version 4.0.1
+ Install-Package Microsoft.Data.SqlClient -Version 5.1.0
``` 1. In the [ASP.NET Core and SQL Database tutorial](tutorial-dotnetcore-sqldb-app.md), the `MyDbConnection` connection string in *appsettings.json* isn't used at all yet. The local environment and the Azure environment both get connection strings from their respective environment variables in order to keep connection secrets out of the source file. But now with Active Directory authentication, there are no more secrets. In *appsettings.json*, replace the value of the `MyDbConnection` connection string with:
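For illustration only, a passwordless connection string for recent Microsoft.Data.SqlClient versions typically looks like the following (an assumption based on the managed-identity pattern; the tutorial's exact string may differ):

```
Server=tcp:<server-name>.database.windows.net;Authentication=Active Directory Default;Database=<database-name>;
```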
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md
Follow the steps below to set up the Azure Developer CLI and provision and deploy
powershell -ex AllSigned -c "Invoke-RestMethod 'https://aka.ms/install-azd.ps1' | Invoke-Expression" ```
- ### [Linux/MacOS](#tab/linuxmac)
+ ### [macOS/Linux](#tab/mac-linux)
```azdeveloper curl -fsSL https://aka.ms/install-azd.sh | bash
applied-ai-services Concept Composed Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-composed-models.md
With composed models, you can assign multiple custom models to a composed model
::: moniker range="form-recog-3.0.0"
-With the introduction of [****custom classifier models****](./concept-custom-classifier.md), you can choose to use [**composed models**](./concept-composed-models.md) or the classifier model as an explicit step before analysis. For a deeper understanding of when to use a classifier or composed model, _see_ [**Custom classifier models**](concept-custom-classifier.md).
+With the introduction of [**custom classification models**](./concept-custom-classifier.md), you can choose to use a [**composed model**](./concept-composed-models.md) or [**classification model**](concept-custom-classifier.md) as an explicit step before analysis. For a deeper understanding of when to use a classification or composed model, _see_ [**Custom classification models**](concept-custom-classifier.md#compare-custom-classification-and-composed-models).
## Compose model limits
applied-ai-services Concept Custom Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-classifier.md
Title: Custom classifier model - Form Recognizer
+ Title: Custom classification model - Form Recognizer
-description: Use the custom classifier model to train a model to identify and split the documents you process within your application.
+description: Use the custom classification model to train a model to identify and split the documents you process within your application.
monikerRange: 'form-recog-3.0.0'
recommendations: false
-# Custom classifier model
+# Custom classification model
**This article applies to:** ![Form Recognizer v3.0 checkmark](media/yes-icon.png) **Form Recognizer v3.0**.
-Custom classifier models are deep-learning-model types that combine layout and language features to accurately detect and identify documents you process within your application. Custom classifier models can classify each page in an input file to identify the document(s) within and can also identify multiple documents or multiple instances of a single document within an input file.
+Custom classification models are deep-learning-model types that combine layout and language features to accurately detect and identify documents you process within your application. Custom classification models can classify each page in an input file to identify the document(s) within and can also identify multiple documents or multiple instances of a single document within an input file.
## Model capabilities
-Custom classifier models can analyze a single- or multi-file documents to identify if any of the trained document types are contained within an input file. Here are the currently supported scenarios:
+Custom classification models can analyze single- or multi-file documents to identify whether any of the trained document types are contained within an input file. Here are the currently supported scenarios:
* A single file containing one document. For instance, a loan application form.
Custom classifier models can analyze a single- or multi-file documents to identi
* A single file containing multiple instances of the same document. For instance, a collection of scanned invoices.
-Training a custom classifier model requires at least two distinct classes and a minimum of five samples per class.
+Training a custom classifier requires at least two distinct classes and a minimum of five samples per class.
-### Compare custom classifier and composed models
+### Compare custom classification and composed models
-A custom classifier model can replace [a composed model](concept-composed-models.md) in some scenarios but there are a few differences to be aware of:
+A custom classification model can replace [a composed model](concept-composed-models.md) in some scenarios but there are a few differences to be aware of:
| Capability | Custom classifier process | Composed model process | |--|--|--|
-|Analyze a single document of unknown type belonging to one of the types trained for extraction model processing.| &#9679; Requires multiple calls. </br> &#9679; Call the classifier models based on the document class. This step allows for a confidence-based check before invoking the extraction model analysis.</br> &#9679; Invoke the extraction model. | &#9679; Requires a single call to a composed model containing the model corresponding to the input document type. |
+|Analyze a single document of unknown type belonging to one of the types trained for extraction model processing.| &#9679; Requires multiple calls. </br> &#9679; Call the classification model based on the document class. This step allows for a confidence-based check before invoking the extraction model analysis.</br> &#9679; Invoke the extraction model. | &#9679; Requires a single call to a composed model containing the model corresponding to the input document type. |
|Analyze a single document of unknown type belonging to several types trained for extraction model processing.| &#9679;Requires multiple calls.</br> &#9679; Make a call to the classifier that ignores documents not matching a designated type for extraction.</br> &#9679; Invoke the extraction model. | &#9679; Requires a single call to a composed model. The service selects a custom model within the composed model with the highest match.</br> &#9679; A composed model can't ignore documents.| |Analyze a file containing multiple documents of known or unknown type belonging to one of the types trained for extraction model processing.| &#9679; Requires multiple calls. </br> &#9679; Call the extraction model for each identified document in the input file.</br> &#9679; Invoke the extraction model. | &#9679; Requires a single call to a composed model.</br> &#9679; The composed model invokes the component model once on the first instance of the document. </br> &#9679;The remaining documents are ignored. | ## Language support
-Classifier models currently only support English language documents.
+Classification models currently only support English language documents.
## Best practices
-Custom classifier models require a minimum of five samples per class to train. If the classes are similar, adding extra training samples improves model accuracy.
+Custom classification models require a minimum of five samples per class to train. If the classes are similar, adding extra training samples improves model accuracy.
## Training a model
-Custom classifier models are only available in the [v3.0 API](v3-migration-guide.md) starting with API version ```2023-02-28-preview```. [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio) provides a no-code user interface to interactively train a custom classifier.
+Custom classification models are only available in the [v3.0 API](v3-migration-guide.md) starting with API version ```2023-02-28-preview```. [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio) provides a no-code user interface to interactively train a custom classifier.
-When using the REST API, if you've organized your documents by folders, you can use the ```azureBlobSource``` property of the request to train a classifier model.
+When using the REST API, if you've organized your documents by folders, you can use the ```azureBlobSource``` property of the request to train a classification model.
```rest
https://{endpoint}/formrecognizer/documentClassifiers:build?api-version=2023-02-28-preview
```
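A request body for this call might look like the following sketch (the container URL and class names are placeholders; each document class points `azureBlobSource` at a folder of training samples):

```json
{
  "classifierId": "my-classifier",
  "docTypes": {
    "car-maint": { "azureBlobSource": { "containerUrl": "<SAS-URL-to-container>", "prefix": "car-maint/" } },
    "cc-auth": { "azureBlobSource": { "containerUrl": "<SAS-URL-to-container>", "prefix": "cc-auth/" } }
  }
}
```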
File list `car-maint.jsonl` contains the following files.
## Next steps
-Learn to create custom classifier models:
+Learn to create custom classification models:
> [!div class="nextstepaction"]
-> [**Build a custom classifier model**](how-to-guides/build-a-custom-classifier.md)
+> [**Build a custom classification model**](how-to-guides/build-a-custom-classifier.md)
> [**Custom models overview**](concept-custom.md)
applied-ai-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom.md
recommendations: false
Form Recognizer uses advanced machine learning technology to identify documents, detect and extract information from forms and documents, and return the extracted data in a structured JSON output. With Form Recognizer, you can use document analysis models, pre-built/pre-trained, or your trained standalone custom models.
-Custom models now include [custom classifier models](./concept-custom-classifier.md) for scenarios where you need to identify the document type prior to invoking the extraction model. Classifier models are available starting with the ```2023-02-28-preview``` API. A classifier model can be paired with a custom extraction model to analyze and extract fields from forms and documents specific to your business to create a document processing solution. Standalone custom extraction models can be combined to create [composed models](concept-composed-models.md).
+Custom models now include [custom classification models](./concept-custom-classifier.md) for scenarios where you need to identify the document type prior to invoking the extraction model. Classifier models are available starting with the ```2023-02-28-preview``` API. A classification model can be paired with a custom extraction model to analyze and extract fields from forms and documents specific to your business to create a document processing solution. Standalone custom extraction models can be combined to create [composed models](concept-composed-models.md).
::: moniker range="form-recog-3.0.0"
The following table compares custom template and custom neural features:
|Document variations | Requires a model per each variation | Uses a single model for all variations | |Language support | Multiple [language support](language-support.md#read-layout-and-custom-form-template-model) | English, with preview support for Spanish, French, German, Italian and Dutch [language support](language-support.md#custom-neural-model) |
-### Custom classifier model
+### Custom classification model
- Document classification is a new scenario supported by Form Recognizer with the ```2023-02-28-preview``` API. Document classifier supports classification and splitting scenarios. Train a classifier model to identify the different types of documents your application supports. The input file for the classifier model can contain multiple documents and classifies each document within an associated page range. See [custom classification](concept-custom-classifier.md) models to learn more.
+ Document classification is a new scenario supported by Form Recognizer with the ```2023-02-28-preview``` API. The document classifier API supports classification and splitting scenarios. Train a classification model to identify the different types of documents your application supports. The input file for the classification model can contain multiple documents, and the model classifies each document within an associated page range. See [custom classification](concept-custom-classifier.md) models to learn more.
## Custom model tools
Extract data from your specific or unique documents using custom models. You nee
1. Review and create your project.
-1. Label your documents to build and test your custom classifier model.
+1. Label your documents to build and test your custom classification model.
> [!div class="nextstepaction"] > [Try Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/document-classifier/projects)
applied-ai-services Concept Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-model-overview.md
recommendations: false
|**Custom models**|| | [Custom model (overview)](#custom-models) | Extract data from forms and documents specific to your business. Custom models are trained for your distinct data and use cases. | | [Custom extraction models](#custom-extraction)| &#9679; **Custom template models** use layout cues to extract values from documents and are suitable to extract fields from highly structured documents with defined visual templates.</br>&#9679; **Custom neural models** are trained on various document types to extract fields from structured, semi-structured and unstructured documents.|
-| [Custom classifier model](#custom-classifier)| The **Custom classifier model** can classify each page in an input file to identify the document(s) within and can also identify multiple documents or multiple instances of a single document within an input file.
+| [Custom classification model](#custom-classifier)| The **Custom classification model** can classify each page in an input file to identify the document(s) within and can also identify multiple documents or multiple instances of a single document within an input file.
| [Composed models](#composed-models) | Combine several custom models into a single model to automate processing of diverse document types with a single composed model. ### Read OCR
-[:::image type="icon" source="media/studio/read-card.png" :::](https://formrecognizer.appliedai.azure.com/studio/read)
The Read API analyzes and extracts lines, words, their locations, detected languages, and handwritten style if detected.
The Read API analyzes and extracts lines, words, their locations, detected langu
### Layout analysis
-[:::image type="icon" source="media/studio/layout.png":::](https://formrecognizer.appliedai.azure.com/studio/layout)
The Layout analysis model analyzes and extracts text, tables, selection marks, and other structure elements like titles, section headings, page headers, page footers, and more.
The Layout analysis model analyzes and extracts text, tables, selection marks, a
### General document
-[:::image type="icon" source="media/studio/general-document.png":::](https://formrecognizer.appliedai.azure.com/studio/document)
-The general document model is ideal for extracting common key-value pairs from forms and documents. It's a pre-trained model and can be directly invoked via the REST API and the SDKs. You can use the general document model as an alternative to training a custom model.
+The general document model is ideal for extracting common key-value pairs from forms and documents. It's a pretrained model and can be directly invoked via the REST API and the SDKs. You can use the general document model as an alternative to training a custom model.
***Sample document processed using the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/document)***:
The general document model is ideal for extracting common key-value pairs from f
### Health insurance card The health insurance card model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract key information from US health insurance cards.
The health insurance card model combines powerful Optical Character Recognition
### W-2
-[:::image type="icon" source="media/studio/w2.png":::](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)
The W-2 form model extracts key information reported in each box on a W-2 form. The model supports standard and customized forms from 2018 to the present, including single and multiple forms on one page.
The W-2 form model extracts key information reported in each box on a W-2 form.
### Invoice
-[:::image type="icon" source="media/studio/invoice.png":::](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)
The invoice model automates processing of invoices to extract customer name, billing address, due date, amount due, line items, and other key data. Currently, the model supports English, Spanish, German, French, Italian, Portuguese, and Dutch invoices.
The invoice model automates processing of invoices to extracts customer name, bi
### Receipt
-[:::image type="icon" source="media/studio/receipt.png":::](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)
Use the receipt model to scan sales receipts for merchant name, dates, line items, quantities, and totals from printed and handwritten receipts. The version v3.0 also supports single-page hotel receipt processing.
Use the receipt model to scan sales receipts for merchant name, dates, line item
### Identity document (ID)
-[:::image type="icon" source="media/studio/id-document.png":::](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)
Use the Identity document (ID) model to process U.S. Driver's Licenses (all 50 states and District of Columbia) and biographical pages from international passports (excluding visa and other travel documents) to extract key fields.
Use the Identity document (ID) model to process U.S. Driver's Licenses (all 50 s
### Business card
-[:::image type="icon" source="media/studio/business-card.png":::](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)
Use the business card model to scan and extract key information from business card images.
Use the business card model to scan and extract key information from business ca
### Custom models
- [:::image type="icon" source="media/studio/custom.png":::](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)
Custom document models analyze and extract data from forms and documents specific to your business. They're trained to recognize form fields within your distinct content and extract key-value pairs and table data. You only need five examples of the same form type to get started.
Version v3.0 custom model supports signature detection in custom forms (template
#### Custom extraction
-[:::image type="icon" source="media/studio/custom-extraction.png":::](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)
Custom extraction model can be one of two types, **custom template** or **custom neural**. To create a custom extraction model, label a dataset of documents with the values you want extracted and train the model on the labeled dataset. You only need five examples of the same form or document type to get started.
Custom extraction model can be one of two types, **custom template** or **custom
#### Custom classifier
-[:::image type="icon" source="media/studio/custom-classifier.png":::](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)
-The custom classifier model enables you to identify the document type prior to invoking the extraction model. The classifier model is available starting with the 2023-02-28-preview. Training a custom classifier model requires at least two distinct classes and a minimum of five samples per class.
+The custom classification model enables you to identify the document type prior to invoking the extraction model. The classification model is available starting with the 2023-02-28-preview. Training a custom classification model requires at least two distinct classes and a minimum of five samples per class.
> [!div class="nextstepaction"]
-> [Learn more: custom classifier model](concept-custom-classifier.md)
+> [Learn more: custom classification model](concept-custom-classifier.md)
#### Composed models
applied-ai-services Build A Custom Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/build-a-custom-classifier.md
Title: "Build and train a custom classifier"
-description: Learn how to label, and build a custom document classifier model.
+description: Learn how to label, and build a custom document classification model.
monikerRange: 'form-recog-3.0.0'
recommendations: false
-# Build and train a custom classifier model
+# Build and train a custom classification model
[!INCLUDE [applies to v3.0](../includes/applies-to-v3-0.md)]
-Custom classifier models can classify each page in an input file to identify the document(s) within. Classifier models can also identify multiple documents or multiple instances of a single document in the input file. Form Recognizer custom models require as few as five training documents per document class to get started. To get started training a custom classifier model, you need at least **five documents** for each class and **two classes** of documents.
+Custom classification models can classify each page in an input file to identify the document(s) within. Classifier models can also identify multiple documents or multiple instances of a single document in the input file. Form Recognizer custom models require as few as five training documents per document class to get started. To get started training a custom classification model, you need at least **five documents** for each class and **two classes** of documents.
-## Custom classifier model input requirements
+## Custom classification model input requirements
Make sure your training data set follows the input requirements for Form Recognizer.
The Form Recognizer Studio provides and orchestrates all the API calls required
1. Start by navigating to the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio). The first time you use the Studio, you need to [initialize your subscription, resource group, and resource](../quickstarts/try-v3-form-recognizer-studio.md). Then, follow the [prerequisites for custom projects](../quickstarts/try-v3-form-recognizer-studio.md#additional-prerequisites-for-custom-projects) to configure the Studio to access your training dataset.
-1. In the Studio, select the **Custom classifier models** tile, on the custom models section of the page and select the **Create a project** button.
+1. In the Studio, select the **Custom classification model** tile, on the custom models section of the page and select the **Create a project** button.
:::image type="content" source="../media/how-to/studio-create-classifier-project.png" alt-text="Screenshot of how to create a classifier project in the Form Recognizer Studio.":::
Once the model training is complete, you can test your model by selecting the mo
1. Validate your model by evaluating the results for each document identified.
-Congratulations you've trained a custom classifier model in the Form Recognizer Studio! Your model is ready for use with the REST API or the SDK to analyze documents.
+Congratulations, you've trained a custom classification model in the Form Recognizer Studio! Your model is ready for use with the REST API or the SDK to analyze documents.
## Next steps
applied-ai-services Try Form Recognizer Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-form-recognizer-studio.md
Prebuilt models help you add Form Recognizer features to your apps without havin
* [**Invoice**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice): extract text, selection marks, tables, key-value pairs, and key information from invoices. * [**Receipt**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt): extract text and key information from receipts.
-* [**Health insurance card**](https://formrecognizer-dogfood.appliedai.azure.com/studio/prebuilt?formType=healthInsuranceCard.us): extract insurer, member, prescription, group number and other key information from US health insurance cards.
+* [**Health insurance card**](https://formrecognizer.appliedai.azure.com/studio): extract insurer, member, prescription, group number and other key information from US health insurance cards.
* [**W-2**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2): extract text and key information from W-2 tax forms. * [**ID document**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument): extract text and key information from driver licenses and international passports. * [**Business card**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard): extract text and key information from business cards. #### Custom
-* [**Custom extraction models**](https://formrecognizer-dogfood.appliedai.azure.com/studio/custommodel/projects): extract information from forms and documents with custom extraction models. Quickly train a model by labeling as few as five sample documents.
-* [**Custom classifier model**](https://formrecognizer-dogfood.appliedai.azure.com/studio/document-classifier/projects): train a custom classifier to distinguish between the different document types within your applications. Quickly train a model with as few as two classes and five samples per class.
+* [**Custom extraction models**](https://formrecognizer.appliedai.azure.com/studio): extract information from forms and documents with custom extraction models. Quickly train a model by labeling as few as five sample documents.
+* [**Custom classification model**](https://formrecognizer.appliedai.azure.com/studio): train a custom classifier to distinguish between the different document types within your applications. Quickly train a model with as few as two classes and five samples per class.
#### Gated preview models
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md
Form Recognizer service is updated on an ongoing basis. Bookmark this page to st
> * West US2 > * East US
-* [**Custom classifier model**](concept-custom-classifier.md) is a new capability within Form Recognizer starting with the ```2023-02-28-preview``` API. Try the document classification capability using the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/document-classifier/projects) or the [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/GetClassifyDocumentResult).
+* [**Custom classification model**](concept-custom-classifier.md) is a new capability within Form Recognizer starting with the ```2023-02-28-preview``` API. Try the document classification capability using the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/document-classifier/projects) or the [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/GetClassifyDocumentResult).
* [**Query fields**](concept-query-fields.md) capabilities, added to the General Document model, use Azure OpenAI models to extract specific fields from documents. Try the **General documents with query fields** feature using the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio). Query fields are currently only active for resources in the `East US` region. * [**Read**](concept-read.md#barcode-extraction) and [**Layout**](concept-layout.md#barcode-extraction) models support **barcode** extraction with the ```2023-02-28-preview``` API. * [**Add-on capabilities**](concept-add-on-capabilities.md)
automation Automation Hrw Run Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hrw-run-runbooks.md
Title: Run Azure Automation runbooks on a Hybrid Runbook Worker
description: This article describes how to run runbooks on machines in your local datacenter or other cloud provider with the Hybrid Runbook Worker. Previously updated : 11/18/2022 Last updated : 03/15/2022
# Run Automation runbooks on a Hybrid Runbook Worker > [!IMPORTANT]
-> Azure Automation Run As Account will retire on September 30, 2023 and will be replaced with Managed Identities. Before that date, you'll need to start migrating your runbooks to use [managed identities](automation-security-overview.md#managed-identities). For more information, see [migrating from an existing Run As accounts to managed identity](https://learn.microsoft.com/azure/automation/migrate-run-as-accounts-managed-identity?tabs=run-as-account#sample-scripts) to start migrating the runbooks from Run As account to managed identities before 30 September 2023.
+> - Azure Automation Agent-based User Hybrid Runbook Worker (Windows and Linux) will retire on **31 August 2024** and won't be supported after that date. You must complete migrating existing Agent-based User Hybrid Runbook Workers to Extension-based Workers before 31 August 2024. Moreover, starting **1 October 2023**, creating new Agent-based Hybrid Workers won't be possible. [Learn more](migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md).
+> - Azure Automation Run As Account will retire on September 30, 2023 and will be replaced with Managed Identities. Before that date, you'll need to start migrating your runbooks to use [managed identities](automation-security-overview.md#managed-identities). For more information, see [migrating from an existing Run As accounts to managed identity](migrate-run-as-accounts-managed-identity.md#sample-scripts) to start migrating the runbooks from Run As account to managed identities before September 30, 2023.
Runbooks that run on a [Hybrid Runbook Worker](automation-hybrid-runbook-worker.md) typically manage resources on the local computer or against resources in the local environment where the worker is deployed. Runbooks in Azure Automation typically manage resources in the Azure cloud. Even though they are used differently, runbooks that run in Azure Automation and runbooks that run on a Hybrid Runbook Worker are identical in structure.
automation Automation Hybrid Runbook Worker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hybrid-runbook-worker.md
Title: Azure Automation Hybrid Runbook Worker overview
description: Know about Hybrid Runbook Worker. How to install and run the runbooks on machines in your local datacenter or cloud provider. Previously updated : 11/09/2022 Last updated : 03/15/2023 # Automation Hybrid Runbook Worker overview
+> [!IMPORTANT]
+> Azure Automation Agent-based User Hybrid Runbook Worker (Windows and Linux) will retire on **31 August 2024** and won't be supported after that date. You must complete migrating existing Agent-based User Hybrid Runbook Workers to Extension-based Workers before 31 August 2024. Moreover, starting **1 October 2023**, creating new Agent-based Hybrid Workers won't be possible. [Learn more](migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md).
+ Runbooks in Azure Automation might not have access to resources in other clouds or in your on-premises environment because they run on the Azure cloud platform. You can use the Hybrid Runbook Worker feature of Azure Automation to run runbooks directly on the machine hosting the role and against resources in the environment to manage those local resources. Runbooks are stored and managed in Azure Automation and then delivered to one or more assigned machines. Azure Automation provides native integration of the Hybrid Runbook Worker role through the Azure virtual machine (VM) extension framework. The Azure VM agent is responsible for management of the extension on Azure VMs on Windows and Linux VMs, and [Azure Connected Machine agent](../azure-arc/servers/agent-overview.md) on Non-Azure machines including [Azure Arc-enabled Servers](../azure-arc/servers/overview.md) and [Azure Arc-enabled VMware vSphere (preview)](../azure-arc/vmware-vsphere/overview.md). Now there are two Hybrid Runbook Workers installation platforms supported by Azure Automation.
automation Automation Linux Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-linux-hrw-install.md
Title: Deploy an agent-based Linux Hybrid Runbook Worker in Automation
description: This article tells how to install an agent-based Hybrid Runbook Worker to run runbooks on Linux-based machines in your local datacenter or cloud environment. Previously updated : 09/24/2021 Last updated : 03/15/2023 # Deploy an agent-based Linux Hybrid Runbook Worker in Automation
+> [!IMPORTANT]
+> Azure Automation Agent-based User Hybrid Runbook Worker (Windows and Linux) will retire on **31 August 2024** and won't be supported after that date. You must complete migrating existing Agent-based User Hybrid Runbook Workers to Extension-based Workers before 31 August 2024. Moreover, starting **1 October 2023**, creating new Agent-based Hybrid Workers won't be possible. [Learn more](migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md).
+ You can use the user Hybrid Runbook Worker feature of Azure Automation to run runbooks directly on an Azure or non-Azure machine, including servers registered with [Azure Arc-enabled servers](../azure-arc/servers/overview.md). From the machine or server that's hosting the role, you can run runbooks directly on it and against resources in the environment to manage those local resources. The Linux Hybrid Runbook Worker executes runbooks as a special user that can be elevated for running commands that need elevation. Azure Automation stores and manages runbooks and then delivers them to one or more chosen machines. This article describes how to install the Hybrid Runbook Worker on a Linux machine, how to remove the worker, and how to remove a Hybrid Runbook Worker group. For user Hybrid Runbook Workers, see also [Deploy an extension-based Windows or Linux user Hybrid Runbook Worker in Automation](./extension-based-hybrid-runbook-worker-install.md).
automation Automation Windows Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-windows-hrw-install.md
Title: Deploy an agent-based Windows Hybrid Runbook Worker in Automation
description: This article tells how to deploy an agent-based Hybrid Runbook Worker that you can use to run runbooks on Windows-based machines in your local datacenter or cloud environment. Previously updated : 12/29/2022 Last updated : 03/15/2023 # Deploy an agent-based Windows Hybrid Runbook Worker in Automation
+> [!IMPORTANT]
+> Azure Automation Agent-based User Hybrid Runbook Worker (Windows and Linux) will retire on **31 August 2024** and won't be supported after that date. You must complete migrating existing Agent-based User Hybrid Runbook Workers to Extension-based Workers before 31 August 2024. Moreover, starting **1 October 2023**, creating new Agent-based Hybrid Workers won't be possible. [Learn more](migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md).
++ You can use the user Hybrid Runbook Worker feature of Azure Automation to run runbooks directly on an Azure or non-Azure machine, including servers registered with [Azure Arc-enabled servers](../azure-arc/servers/overview.md). From the machine or server that's hosting the role, you can run runbooks directly against it and against resources in the environment to manage those local resources. Azure Automation stores and manages runbooks and then delivers them to one or more chosen machines. This article describes how to deploy a user Hybrid Runbook Worker on a Windows machine, how to remove the worker, and how to remove a Hybrid Runbook Worker group. For user Hybrid Runbook Workers, see also [Deploy an extension-based Windows or Linux user Hybrid Runbook Worker in Automation](./extension-based-hybrid-runbook-worker-install.md).
automation Enforce Job Execution Hybrid Worker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/enforce-job-execution-hybrid-worker.md
Title: Enforce job execution on Azure Automation Hybrid Runbook Worker
description: This article tells how to use a custom Azure Policy definition to enforce job execution on an Azure Automation Hybrid Runbook Worker. Previously updated : 05/24/2021 Last updated : 03/15/2023 # Use Azure Policy to enforce job execution on Hybrid Runbook Worker
+> [!IMPORTANT]
+> Azure Automation Agent-based User Hybrid Runbook Worker (Windows and Linux) will retire on **31 August 2024** and won't be supported after that date. You must complete migrating existing Agent-based User Hybrid Runbook Workers to Extension-based Workers before 31 August 2024. Moreover, starting **1 October 2023**, creating new Agent-based Hybrid Workers won't be possible. [Learn more](migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md).
+ When you start a runbook on a Hybrid Runbook Worker, you use the **Run on** option to specify the name of a Hybrid Runbook Worker group, whether you initiate the job from the Azure portal, Azure PowerShell, or the REST API. When a group is specified, one of the workers in that group retrieves and runs the runbook. If your runbook doesn't specify this option, Azure Automation runs the runbook in the Azure sandbox. Anyone in your organization who is a member of the [Automation Job Operator](automation-role-based-access-control.md#automation-job-operator) role or higher can create runbook jobs. To manage runbook execution targeting a Hybrid Runbook Worker group in your Automation account, you can use [Azure Policy](../governance/policy/overview.md). This helps enforce organizational standards and ensure that your automation jobs are controlled and managed by designated users, and that runbooks can't run in an Azure sandbox, only on Hybrid Runbook Workers.
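As a hedged sketch of putting such a policy in place, the following commands create and assign a custom definition from a local *policy.json* file that contains the rule described in this article. The definition name and subscription ID are placeholders.

```powershell
# Create the custom policy definition from the article's rule file,
# then assign it at subscription scope. Names are placeholders.
$definition = New-AzPolicyDefinition `
    -Name 'enforce-jobs-on-hybrid-worker' `
    -DisplayName 'Enforce job execution on Hybrid Runbook Workers' `
    -Policy '.\policy.json' `
    -Mode All

New-AzPolicyAssignment `
    -Name 'enforce-jobs-on-hybrid-worker' `
    -PolicyDefinition $definition `
    -Scope '/subscriptions/<subscription-id>'
```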
automation Migrate Existing Agent Based Hybrid Worker To Extension Based Workers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md
Title: Migrate existing agent-based hybrid workers to extension-based workers
description: This article provides information on how to migrate existing agent-based hybrid workers to extension-based workers. Previously updated : 02/20/2023 Last updated : 03/15/2023 #Customer intent: As a developer, I want to learn about extensions so that I can efficiently migrate agent-based hybrid workers to extension-based workers. # Migrate existing agent-based hybrid workers to extension-based hybrid workers
+> [!IMPORTANT]
+> Azure Automation Agent-based User Hybrid Runbook Worker (Windows and Linux) will retire on **31 August 2024** and won't be supported after that date. You must complete migrating existing Agent-based User Hybrid Runbook Workers to Extension-based Workers before 31 August 2024. Moreover, starting **1 October 2023**, creating new Agent-based Hybrid Workers won't be possible. [Learn more](migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md).
+ This article describes the benefits of Extension-based User Hybrid Runbook Workers and how to migrate existing Agent-based User Hybrid Runbook Workers to Extension-based Hybrid Workers. There are two Hybrid Runbook Worker installation platforms supported by Azure Automation:
automation Hybrid Runbook Worker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/hybrid-runbook-worker.md
Title: Troubleshoot agent-based Hybrid Runbook Worker issues in Azure Automation description: This article tells how to troubleshoot and resolve issues that arise with Azure Automation agent-based Hybrid Runbook Workers. Previously updated : 10/18/2021 Last updated : 03/15/2023 # Troubleshoot agent-based Hybrid Runbook Worker issues in Automation
+> [!IMPORTANT]
+> Azure Automation Agent-based User Hybrid Runbook Worker (Windows and Linux) will retire on **31 August 2024** and won't be supported after that date. You must complete migrating existing Agent-based User Hybrid Runbook Workers to Extension-based Workers before 31 August 2024. Moreover, starting **1 October 2023**, creating new Agent-based Hybrid Workers won't be possible. [Learn more](../migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md).
+ This article provides information on troubleshooting and resolving issues with Azure Automation agent-based Hybrid Runbook Workers. For troubleshooting extension-based workers, see [Troubleshoot extension-based Hybrid Runbook Worker issues in Automation](./extension-based-hybrid-runbook-worker.md). For general information, see [Hybrid Runbook Worker overview](../automation-hybrid-runbook-worker.md). ## General
azure-cache-for-redis Cache How To Premium Persistence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-premium-persistence.md
Previously updated : 01/23/2023 Last updated : 03/23/2023 # Configure data persistence for a Premium Azure Cache for Redis instance
The following list contains answers to commonly asked questions about Azure Cach
- [Will I be charged for the storage being used in Data Persistence](#will-i-be-charged-for-the-storage-being-used-in-data-persistence) - [How frequently does RDB and AOF persistence write to my blobs, and should I enable soft delete?](#how-frequently-does-rdb-and-aof-persistence-write-to-my-blobs-and-should-i-enable-soft-delete) - [Will having firewall exceptions on the storage account affect persistence](#will-having-firewall-exceptions-on-the-storage-account-affect-persistence)
+- [How do I check if soft delete is enabled on my storage account?](#how-do-i-check-if-soft-delete-is-enabled-on-my-storage-account)
### RDB persistence
azure-functions Create First Function Cli Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-python.md
Title: Create a Python function from the command line - Azure Functions description: Learn how to create a Python function from the command line, then publish the local project to serverless hosting in Azure Functions. Previously updated : 06/15/2022 Last updated : 03/22/2023 ms.devlang: python
Verify your prerequisites, which depend on whether you're using Azure CLI or Azu
## <a name="create-venv"></a>Create and activate a virtual environment
-In a suitable folder, run the following commands to create and activate a virtual environment named `.venv`. Make sure that you're using Python 3.8, 3.7 or 3.6, which are supported by Azure Functions.
+In a suitable folder, run the following commands to create and activate a virtual environment named `.venv`. Make sure that you're using Python 3.9, 3.8, or 3.7, which are supported by Azure Functions.
# [bash](#tab/bash)
Use the following commands to create these items. Both Azure CLI and PowerShell
az functionapp create --consumption-plan-location westeurope --runtime python --runtime-version 3.9 --functions-version 4 --name <APP_NAME> --os-type linux --storage-account <STORAGE_NAME> ```
- The [az functionapp create](/cli/azure/functionapp#az-functionapp-create) command creates the function app in Azure. If you're using Python 3.8, 3.7, or 3.6, change `--runtime-version` to `3.8`, `3.7`, or `3.6`, respectively. You must supply `--os-type linux` because Python functions can't run on Windows, which is the default.
+ The [az functionapp create](/cli/azure/functionapp#az-functionapp-create) command creates the function app in Azure. If you're using Python 3.9, 3.8, or 3.7, change `--runtime-version` to `3.9`, `3.8`, or `3.7`, respectively. You must supply `--os-type linux` because Python functions can't run on Windows, which is the default.
# [Azure PowerShell](#tab/azure-powershell)
Use the following commands to create these items. Both Azure CLI and PowerShell
New-AzFunctionApp -Name <APP_NAME> -ResourceGroupName AzureFunctionsQuickstart-rg -StorageAccountName <STORAGE_NAME> -FunctionsVersion 4 -RuntimeVersion 3.9 -Runtime python -Location '<REGION>' ```
- The [New-AzFunctionApp](/powershell/module/az.functions/new-azfunctionapp) cmdlet creates the function app in Azure. If you're using Python 3.8, 3.7, or 3.6, change `-RuntimeVersion` to `3.8`, `3.7`, or `3.6`, respectively.
+ The [New-AzFunctionApp](/powershell/module/az.functions/new-azfunctionapp) cmdlet creates the function app in Azure. If you're using Python 3.9, 3.8, or 3.7, change `-RuntimeVersion` to `3.9`, `3.8`, or `3.7`, respectively.
azure-functions Functions Infrastructure As Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-infrastructure-as-code.md
resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
} { name: 'FUNCTIONS_EXTENSION_VERSION'
- value: '~3'
+ value: '~4'
} { name: 'DOCKER_REGISTRY_SERVER_URL'
resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
}, { "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~3"
+ "value": "~4"
}, { "name": "DOCKER_REGISTRY_SERVER_URL",
resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
appSettings: [ { name: 'FUNCTIONS_EXTENSION_VERSION'
- value: '~3'
+ value: '~4'
} { name: 'AzureWebJobsStorage'
resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
"appSettings": [ { "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~3"
+ "value": "~4"
}, { "name": "AzureWebJobsStorage",
Considerations for custom deployments:
appSettings: [ { name: 'FUNCTIONS_EXTENSION_VERSION'
- value: '~3'
+ value: '~4'
} { name: 'Project'
Considerations for custom deployments:
properties: { AzureWebJobsStorage: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};AccountKey=${storageAccount.listKeys().keys[0].value}' AzureWebJobsDashboard: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};AccountKey=${storageAccount.listKeys().keys[0].value}'
- FUNCTIONS_EXTENSION_VERSION: '~3'
+ FUNCTIONS_EXTENSION_VERSION: '~4'
FUNCTIONS_WORKER_RUNTIME: 'dotnet' Project: 'src' }
Considerations for custom deployments:
"appSettings": [ { "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~3"
+ "value": "~4"
}, { "name": "Project",
Considerations for custom deployments:
"properties": { "AzureWebJobsStorage": "[format('DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}', variables('storageAccountName'), listKeys(variables('storageAccountName'), '2021-09-01').keys[0].value)]", "AzureWebJobsDashboard": "[format('DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}', variables('storageAccountName'), listKeys(variables('storageAccountName'), '2021-09-01').keys[0].value)]",
- "FUNCTIONS_EXTENSION_VERSION": "~3",
+ "FUNCTIONS_EXTENSION_VERSION": "~4",
"FUNCTIONS_WORKER_RUNTIME": "dotnet", "Project": "src" },
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md
More logging methods are available that let you write to the console at differen
To learn more about logging, see [Monitor Azure Functions](functions-monitoring.md).
+### Logging from created threads
+
+To see logs coming from your created threads, include the [`context`](/python/api/azure-functions/azure.functions.context) argument in the function's signature. The `context` argument has a `thread_local_storage` attribute, which stores a local `invocation_id`. In the created thread, set this value to the function's current `invocation_id` so that the thread's log entries are associated with the correct invocation context.
+
+```python
+import azure.functions as func
+import logging
+import threading
+
+
+def main(req: func.HttpRequest, context: func.Context) -> func.HttpResponse:
+    logging.info('Python HTTP trigger function processed a request.')
+    # Hand the invocation context to the created thread.
+    t = threading.Thread(target=log_function, args=(context,))
+    t.start()
+    # Return a response; the thread continues logging independently.
+    return func.HttpResponse('Logging from a created thread.')
+
+
+def log_function(context: func.Context):
+    # Adopt the parent invocation's ID so this thread's log entries are
+    # correlated with the current invocation.
+    context.thread_local_storage.invocation_id = context.invocation_id
+    logging.info('Logging from thread.')
+```
++ ### Log custom telemetry By default, the Functions runtime collects logs and other telemetry data that are generated by your functions. This telemetry ends up as traces in Application Insights. Request and dependency telemetry for certain Azure services are also collected by default by [triggers and bindings](functions-triggers-bindings.md#supported-bindings).
The [`Context`](/python/api/azure-functions/azure.functions.context) class has t
| `function_directory` | The directory in which the function is running. | | `function_name` | The name of the function. | | `invocation_id` | The ID of the current function invocation. |
+| `thread_local_storage` | The thread local storage of the function. Contains a local `invocation_id` for [logging from created threads](#logging-from-created-threads). |
| `trace_context` | The context for distributed tracing. For more information, see [`Trace Context`](https://www.w3.org/TR/trace-context/). | | `retry_context` | The context for retries to the function. For more information, see [`retry-policies`](./functions-bindings-errors.md#retry-policies). |
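As a brief illustration, a minimal HTTP-triggered sketch that reads a few of these properties directly from the `context` argument (the response format is arbitrary):

```python
import azure.functions as func


def main(req: func.HttpRequest, context: func.Context) -> func.HttpResponse:
    # Read documented Context properties for basic diagnostics.
    details = (
        f"function_name={context.function_name}, "
        f"invocation_id={context.invocation_id}, "
        f"function_directory={context.function_directory}"
    )
    return func.HttpResponse(details)
```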
azure-functions Storage Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/storage-considerations.md
Title: Storage considerations for Azure Functions description: Learn about the storage requirements of Azure Functions and about encrypting stored data. Previously updated : 12/13/2022 Last updated : 03/21/2023 # Storage considerations for Azure Functions
To learn more about storage account types, see [Storage account overview](../sto
While you can use an existing storage account with your function app, you must make sure that it meets these requirements. Storage accounts created as part of the function app create flow in the Azure portal are guaranteed to meet these storage account requirements. In the portal, unsupported accounts are filtered out when choosing an existing storage account while creating a function app. In this flow, you're only allowed to choose existing storage accounts in the same region as the function app you're creating. To learn more, see [Storage account location](#storage-account-location).
+Storage accounts secured by using firewalls or virtual private networks also can't be used in the portal creation flow. For more information, see [Restrict your storage account to a virtual network](functions-networking-options.md#restrict-your-storage-account-to-a-virtual-network).
+ <!-- JH: Does using a Premium Storage account improve perf? --> ## Storage account guidance
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
recommendations: false Previously updated : 01/31/2023 Last updated : 03/21/2023 # Azure, Dynamics 365, Microsoft 365, and Power Platform services compliance scope
Microsoft Azure cloud environments meet demanding US government compliance requi
- [Intelligence Community Directive (ICD) 503](http://www.dni.gov/files/documents/ICD/ICD_503.pdf) - [Joint Special Access Program (SAP) Implementation Guide (JSIG)](https://www.dcsa.mil/portals/91/documents/ctp/nao/JSIG_2016April11_Final_(53Rev4).pdf)
-**Azure** (also known as Azure Commercial, Azure Public, or Azure Global) maintains the following authorizations:
+**Azure** (also known as Azure Commercial, Azure Public, or Azure Global) maintains the following authorizations that pertain to all Azure public regions in the United States:
- [FedRAMP High](/azure/compliance/offerings/offering-fedramp) Provisional Authorization to Operate (P-ATO) issued by the FedRAMP Joint Authorization Board (JAB) - [DoD IL2](/azure/compliance/offerings/offering-dod-il2) Provisional Authorization (PA) issued by the Defense Information Systems Agency (DISA)
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Policy](../../governance/policy/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Policy's guest configuration](../../governance/machine-configuration/overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Red Hat OpenShift](../../openshift/index.yml) | &#x2705; | &#x2705; | | | |
-| [Azure Resource Manager](../../azure-resource-manager/management/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Azure Resource Manager](../../azure-resource-manager/management/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Azure Service Manager (RDFE)](/previous-versions/azure/ee460799(v=azure.100)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Sign-up portal](https://signup.azure.com/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Stack Bridge](/azure-stack/operator/azure-stack-usage-reporting) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Bot Service](/azure/bot-service/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Cloud Services](../../cloud-services/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Cloud Shell](../../cloud-shell/overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Cognitive Search](../../search/index.yml) (formerly Azure Search) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Cognitive Search](../../search/index.yml) (formerly Azure Search) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Cognitive | [Cognitive | [Cognitive Services Containers](../../cognitive-services/cognitive-services-container-support.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Container Instances](../../container-instances/index.yml)| &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Container Registry](../../container-registry/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Content Delivery Network (CDN)](../../cdn/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Cost Management and Billing](../../cost-management-billing/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Cost Management and Billing](../../cost-management-billing/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Customer Lockbox](../../security/fundamentals/customer-lockbox-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Data Box](../../databox/index.yml) **&ast;** | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Data Explorer](/azure/data-explorer/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Dynamics 365 Chat (Omnichannel Engagement Hub)](/dynamics365/omnichannel/introduction-omnichannel) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Dynamics 365 Customer Insights](/dynamics365/customer-insights/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Dynamics 365 Customer Service](/dynamics365/customer-service/overview) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Dynamics 365 Customer Voice](/dynamics365/customer-voice/about) (formerly Forms Pro) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Dynamics 365 Customer Voice](/dynamics365/customer-voice/about) (formerly Forms Pro) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Dynamics 365 Field Service](/dynamics365/field-service/overview) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Dynamics 365 Finance](/dynamics365/finance/) | &#x2705; | &#x2705; | &#x2705; | | | | [Dynamics 365 Project Service Automation](/dynamics365/project-operations/psa/overview) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Form Recognizer](../../applied-ai-services/form-recognizer/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Front Door](../../frontdoor/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Functions](../../azure-functions/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [GitHub AE](https://docs.github.com/en/github-ae@latest/admin/overview/about-github-ae) | &#x2705; | &#x2705; | &#x2705; | | |
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [GitHub AE](https://docs.github.com/en/github-ae@latest/admin/overview/about-github-ae) | &#x2705; | &#x2705; | &#x2705; | | |
| [HDInsight](../../hdinsight/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [HPC Cache](../../hpc-cache/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Import/Export](../../import-export/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Media Services](/azure/media-services/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Microsoft 365 Defender](/microsoft-365/security/defender/) (formerly Microsoft Threat Protection) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Microsoft Azure portal](../../azure-portal/index.yml) | &#x2705; | &#x2705; | &#x2705;| &#x2705; | &#x2705; |
-| [Microsoft Azure Government portal](../documentation-government-get-started-connect-with-portal.md) | &#x2705; | &#x2705; | &#x2705;| &#x2705; | |
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Microsoft Azure Government portal](../documentation-government-get-started-connect-with-portal.md) | &#x2705; | &#x2705; | &#x2705;| &#x2705; | |
| [Microsoft Defender for Cloud](../../defender-for-cloud/index.yml) (formerly Azure Security Center) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Microsoft Defender for Cloud Apps](/defender-cloud-apps/) (formerly Microsoft Cloud App Security) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/) (formerly Microsoft Defender Advanced Threat Protection) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Notification Hubs](../../notification-hubs/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Peering Service](../../peering-service/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Planned Maintenance for VMs](../../virtual-machines/maintenance-and-updates.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Power Apps](/powerapps/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Power Apps](/powerapps/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Power Automate](/power-automate/) (formerly Microsoft Flow) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Power BI](/power-bi/fundamentals/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Power BI Embedded](/power-bi/developer/embedded/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Scheduler](../../scheduler/index.yml) (replaced by [Logic Apps](../../logic-apps/index.yml)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Service Bus](../../service-bus-messaging/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Service Fabric](../../service-fabric/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Service Health](../../service-health/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Service Health](../../service-health/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [SignalR Service](../../azure-signalr/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Site Recovery](../../site-recovery/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [SQL Database](/azure/azure-sql/database/sql-database-paas-overview) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [StorSimple](../../storsimple/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Stream Analytics](../../stream-analytics/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Synapse Analytics](../../synapse-analytics/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Synapse Link for Dataverse](/powerapps/maker/data-platform/export-to-data-lake) | &#x2705; | &#x2705; | &#x2705; | | |
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Synapse Link for Dataverse](/powerapps/maker/data-platform/export-to-data-lake) | &#x2705; | &#x2705; | &#x2705; | | |
| [Traffic Manager](../../traffic-manager/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Virtual Machine Scale Sets](../../virtual-machine-scale-sets/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Virtual Machines](../../virtual-machines/index.yml) (incl. [Reserved VM Instances](../../virtual-machines/prepay-reserved-vm-instances.md)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
azure-government Documentation Government Stig Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-stig-linux-vm.md
recommendations: false Previously updated : 01/20/2023 Last updated : 03/21/2023 # Deploy STIG-compliant Linux Virtual Machines (Preview)
For more information about Azure Government, see the following resources:
- [DoD Impact Level 5 – Azure compliance](/azure/compliance/offerings/offering-dod-il5) - [Isolation guidelines for Impact Level 5 workloads](./documentation-government-impact-level-5.md) - [Secure Azure Computing Architecture](./compliance/secure-azure-computing-architecture.md)
+- [Security Technical Implementation Guides (STIGs)](https://public.cyber.mil/stigs/)
azure-government Documentation Government Stig Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-stig-windows-vm.md
recommendations: false Previously updated : 01/20/2023 Last updated : 03/21/2023 # Deploy STIG-compliant Windows Virtual Machines (Preview)
For more information about Azure Government, see the following resources:
- [DoD Impact Level 5 – Azure compliance](/azure/compliance/offerings/offering-dod-il5) - [Isolation guidelines for Impact Level 5 workloads](./documentation-government-impact-level-5.md) - [Secure Azure Computing Architecture](./compliance/secure-azure-computing-architecture.md)
+- [Security Technical Implementation Guides (STIGs)](https://public.cyber.mil/stigs/)
azure-maps Drawing Error Visualizer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-error-visualizer.md
# Using the Azure Maps Drawing Error Visualizer with Creator - The Drawing Error Visualizer is a stand-alone web application that displays [Drawing package warnings and errors](drawing-conversion-error-codes.md) detected during the conversion process. The Error Visualizer web application consists of a static page that you can use without connecting to the internet. You can use the Error Visualizer to fix errors and warnings in accordance with [Drawing package requirements](drawing-requirements.md). The [Azure Maps Conversion API](/rest/api/maps/v2/conversion) returns a response with a link to the Error Visualizer only when an error is detected. ## Prerequisites
-Before you can download the Drawing Error Visualizer, you'll need to:
-
-1. [Create an Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account)
-2. [Obtain a primary subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account), also known as the primary key or the subscription key.
-3. [Create a Creator resource](how-to-manage-creator.md)
+* An [Azure Maps account]
+* A [subscription key]
+* A [Creator resource]
This tutorial uses the [Postman](https://www.postman.com/) application, but you may choose a different API development environment.
Learn more by reading:
> [!div class="nextstepaction"] > [Creator for indoor maps](creator-indoor-maps.md)+
+[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+[Creator resource]: how-to-manage-creator.md
azure-maps Drawing Package Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-package-guide.md
Title: Drawing package guide for Microsoft Azure Maps Creator description: Learn how to prepare a drawing package for the Azure Maps Conversion service-- Previously updated : 01/31/2023++ Last updated : 03/21/2023
+zone_pivot_groups: drawing-package-version
# Conversion drawing package guide + This guide shows you how to prepare your Drawing Package for the [Azure Maps Conversion service] using specific CAD commands to correctly prepare your DWG files and manifest file for the Conversion service. To start with, make sure your Drawing Package is in .zip format, and contains the following files:
The following snippet shows the unit property object that is associated with the
You should now have all the DWG drawings prepared to meet Azure Maps Conversion service requirements. A manifest file has also been created to help describe the facility. All files will need to be zipped into a single archive file, with the `.zip` extension. It's important that the manifest file is named `manifest.json` and is placed in the root directory of the zipped package. All other files can be in any directory of the zipped package if the filename includes the relative path to the manifest. For an example of a drawing package, see the [sample drawing package]. ++
+This guide shows you how to prepare your Drawing Package for the Azure Maps [Conversion service v2]. A Drawing Package contains one or more DWG drawing files for a single facility and a manifest file describing the DWG files.
+
+If you don't have your own package to reference along with this guide, you may download the [sample drawing package v2].
+
+You may choose any CAD software to open and prepare your facility drawing files. However, this guide was created using Autodesk's AutoCAD® software, and any commands referenced in this guide are meant to be executed in Autodesk's AutoCAD® software.
+
+> [!TIP]
+> For more information about drawing package requirements that aren't covered in this guide, see [Drawing Package Requirements].
+
+## Glossary of terms
+
+For easy reference, here are some terms and definitions that are important as you read this guide.
+
+| Term | Definition |
+|:--|:--|
+| Layer | An AutoCAD DWG layer from the drawing file. |
+| Entity | An AutoCAD DWG entity from the drawing file. |
+| Level | An area of a building at a set elevation. For example, the floor of a building. |
+| Feature | An object that combines a geometry with more metadata information. |
+| Feature classes | A common blueprint for features. For example, a *unit* is a feature class, and an *office* is a feature. |
+
+## Step 1: DWG file requirements
+
+When preparing your facility drawing files for the Conversion service, make sure to follow these preliminary requirements and recommendations:
+
+* Facility drawing files must be saved in DWG format, which is the native file format for Autodesk's AutoCAD® software.
+* The Conversion service works with the AutoCAD DWG file format; AC1032 is the internal DWG file format version, and it's a good idea to select it when saving your files.
+* A DWG file can only contain a single floor. A floor of a facility must be provided in its own separate DWG file. So, if you have five floors in a facility, you must create five separate DWG files.
+
+## Step 2: Prepare the DWG files
+
+This part of the guide shows you how to use CAD commands to ensure that your DWG files meet the requirements of the Conversion service.
+
+You may choose any CAD software to open and prepare your facility drawing files. However, this guide was created using Autodesk's AutoCAD® software, and any commands referenced in this guide are meant to be executed in Autodesk's AutoCAD® software.
+
+### Bind External References
+
+Each floor of a facility must be provided as one DWG file. If there are no external references, then nothing more needs to be done. However, if there are any external references, they must be bound to a single drawing. To bind an external reference, you may use the `XREF` command. After binding, each external reference drawing will be added as a block reference. If you need to make changes to any of these layers, remember to explode the block references by using the `XPLODE` command.
+
+### Unit of measurement
+
+The drawings can be created using any unit of measurement. However, all drawings must use the same unit of measurement. So, if one floor of the facility is using millimeters, then all other floors (drawings) must also be in millimeters. You can verify or modify the measurement unit by using the `UNITS` command and setting the "Insertion scale" value.
+
+The following image shows the **Drawing Units** window within Autodesk's AutoCAD® software that you can use to verify the unit of measurement.
++
+### Alignment
+
+Each floor of a facility is provided as an individual DWG file. As a result, it's possible that the floors don't align perfectly, as required by the Azure Maps Conversion service. To verify alignment, use a reference point such as an elevator or column that spans multiple floors. Use the `XATTACH` command to load all floor drawings, then the `MOVE` command with the reference points to realign any floors that require it.
+
+### Layers
+
+Ensure that each layer of a drawing contains entities of one feature class. If a layer contains entities for walls, then it shouldn't have other entities such as units or doors. However, a feature class can be composed of multiple layers. For example, you can have three layers in the drawing that contain wall entities.
+
+For a better understanding of layers and feature classes, see [Drawing Package Requirements].
+
+## Step 3: Prepare the manifest
+
+The drawing package Manifest is a JSON file. The Manifest tells the Azure Maps Conversion service how to read the facility DWG files and metadata. Some examples of this information could be the specific information each DWG layer contains, or the geographical location of the facility.
+
+To achieve a successful conversion, all "required" properties must be defined. A sample manifest file can be found inside the [sample drawing package v2]. This guide doesn't cover properties supported by the manifest. For more information about manifest properties, see [Manifest File Properties].
+
+The manifest can be created manually in any text editor, or can be created using the Azure Maps Creator onboarding tool. This guide provides examples for each.
+
+### The Azure Maps Creator onboarding tool
+
+You can use the [Azure Maps Creator onboarding tool] to create new and edit existing [manifest files].
+
+To process the DWG files, enter the geography of your Azure Maps Creator resource, the subscription key of your Azure Maps account, and the path and filename of the DWG ZIP package, then select **Process**. This process can take several minutes to complete.
++
+### Facility levels
+
+The facility level specifies which DWG file to use for which level. A level must have a level name and an ordinal that describes the vertical order of each level in the facility, along with a **Vertical Extent** describing the height of each level in meters.
+
+The following example is taken from the [sample drawing package v2]. The facility has two levels: ground and level 2. The filename contains the full file name and path of the file relative to the manifest file within the drawing package.
++
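In manifest form, that example corresponds to a `buildingLevels` object like the following sketch, which mirrors the [sample drawing package v2]; the `verticalExtent` values are illustrative:

```json
{
  "buildingLevels": {
    "dwgLayers": [ "GROS$" ],
    "levels": [
      { "filename": "Ground.dwg", "levelName": "level 1", "ordinal": 0, "verticalExtent": 3 },
      { "filename": "Level_2.dwg", "levelName": "level 2", "ordinal": 1, "verticalExtent": 3 }
    ]
  }
}
```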
+### DWG layers
+
+The `dwgLayers` object is used to specify the DWG layer names where feature classes can be found. To receive a properly converted facility, it's important to provide the correct layer names. For example, a DWG wall layer must be provided as a wall layer and not as a unit layer. The drawing can have other layers, such as furniture or plumbing, but the Azure Maps Conversion service ignores anything not specified in the manifest.
+Defining text properties enables you to associate text entities that fall inside the bounds of a feature. Once defined, they can be used to style and display elements on your indoor map.
++
+> [!IMPORTANT]
+> Wayfinding for `Drawing Package 2.0` will be supported in the near future. The following feature classes should be defined (case-insensitive) in order to use [wayfinding]. `Wall` will be treated as an obstruction for a given path request. `Stair` and `Elevator` will be treated as level connectors to navigate across floors:
+>
+> 1. Wall
+> 2. Stair
+> 3. Elevator
+
+### georeference
+
+Georeferencing is used to specify the exterior profile, location and rotation of the facility.
+
+The [facility level] defines the exterior profile as it appears on the map and is selected from the list of DWG layers in the **Exterior** drop-down list.
+
+The **Anchor Point Longitude** and **Anchor Point Latitude** specify the facility's location; the default value is zero (0).
+
+The **Anchor Point Angle** is specified in degrees between true north and the drawing's vertical (Y) axis; the default value is zero (0).
++
+You position the facility's location by entering either an address or longitude and latitude values. You can also pan the map to make minor adjustments to the facility's location.
++
+### Review and download
+
+When finished, select the **Review + Download** button to view the manifest. When you've verified that it's ready, select the **Download** button to save it locally so that you can include it in the drawing package to import into your Azure Maps Creator resource.
++
+## Step 4: Prepare the drawing package
+
+You should now have all the DWG drawings prepared to meet Azure Maps Conversion service requirements. A manifest file has also been created to help describe the facility. All files need to be compressed into a single archive file, with the `.zip` extension. It's important that the manifest file is named `manifest.json` and is placed in the root directory of the drawing package. All other files can be in any directory of the drawing package if the filename includes the relative path to the manifest. For an example of a drawing package, see the [sample drawing package v2].
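As a small sketch of that packaging step, using file names from the [sample drawing package v2] (any tool that produces a standard ZIP with *manifest.json* at the root works):

```powershell
# Compress the prepared files into a drawing package; manifest.json
# must end up at the root of the archive.
Compress-Archive -Path .\manifest.json, .\Ground.dwg, .\Level_2.dwg `
    -DestinationPath .\DrawingPackage.zip -Force
```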
++ ## Next steps > [!div class="nextstepaction"] > [Tutorial: Creating a Creator indoor map]
+<!-- Drawing Package v1 links -->
[Azure Maps Conversion service]: /rest/api/maps/v2/conversion
-[sample drawing package]: https://github.com/Azure-Samples/am-creator-indoor-data-examples
+[sample drawing package]: https://github.com/Azure-Samples/am-creator-indoor-data-examples/tree/master/Drawing%20Package%201.0
[Manifest File Properties]: drawing-requirements.md#manifest-file-requirements [Drawing Package Requirements]: drawing-requirements.md [Tutorial: Creating a Creator indoor map]: tutorial-creator-indoor-maps.md+
+<!-- Drawing Package v2 links -->
+[Conversion service v2]: https://aka.ms/creator-conversion
+[sample drawing package v2]: https://github.com/Azure-Samples/am-creator-indoor-data-examples/tree/master/Drawing%20Package%202.0
+[Azure Maps Creator onboarding tool]: https://azure.github.io/azure-maps-creator-onboarding-tool
+[manifest files]: /azure/azure-maps/drawing-requirements#manifest-file-requirements
+[wayfinding]: creator-indoor-maps.md#wayfinding-preview
+[facility level]: drawing-requirements.md#facility-level
azure-maps Drawing Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-requirements.md
Title: Drawing package requirements in Microsoft Azure Maps Creator description: Learn about the drawing package requirements to convert your facility design files to map data-- Previously updated : 02/17/2023++ Last updated : 03/21/2023 -
+zone_pivot_groups: drawing-package-version
# Drawing package requirements + You can convert uploaded drawing packages into map data by using the Azure Maps [Conversion service]. This article describes the drawing package requirements for the Conversion API. To view a sample package, you can download the sample [Drawing package]. For a guide on how to prepare your drawing package, see [Conversion Drawing Package Guide].
Below is the manifest file for the sample drawing package. Go to the [Sample dra
} ``` ++
+You can convert uploaded drawing packages into map data by using the Azure Maps [Conversion service v2]. This article describes the drawing package requirements for the Conversion API. To view a sample package, you can download the [sample drawing package v2].
+
+For a guide on how to prepare your drawing package, see [Conversion Drawing Package Guide].
+
+## Changes and revisions
+
+- Added support for user defined feature classes.
+- Simplified requirements of DWG layers.
+
+## Prerequisites
+
+The drawing package includes drawings saved in DWG format, which is the native file format for Autodesk's AutoCAD® software.
+
+You can choose any CAD software to produce the drawings in the drawing package.
+
+The [Conversion service v2] converts the drawing package into map data. The Conversion service works with the AutoCAD DWG file format AC1032.
+
+## Glossary of terms
+
+For easy reference, here are some terms and definitions that are important as you read this article.
+
+| Term | Definition |
+|:--|:--|
+| Layer | An AutoCAD DWG layer from the drawing file. |
+| Entity| An AutoCAD DWG entity from the drawing file. |
+| Xref | A file in AutoCAD DWG file format, attached to the primary drawing as an external reference. |
+| Level | An area of a facility at a set elevation. For example, the floor of a facility. |
+|Feature| An instance of an object produced from the Conversion service that combines a geometry with metadata information. |
+|Feature classes| A common blueprint for features. |
+
+## Drawing package structure
+
+A drawing package is a ZIP archive that contains the following files:
+
+- DWG files in AutoCAD DWG file format.
+- A *manifest.json* file that describes the DWG files in the drawing package.
+
+The drawing package must be compressed into a single archive file, with the .zip extension. The DWG files can be organized in any way inside the drawing package, but the manifest file must be in the root directory. The next sections explain the conversion process and requirements for both the DWG and manifest files, and the content of these files. To view a sample package, you can download the [sample drawing package v2].
+
+## DWG file conversion process
+
+The Azure Maps Conversion service converts the DWG files of a facility into map data representing the facility and its features.
+
+The Azure Maps Conversion service creates:
+
+- **Facility Feature**: The top-level feature of a facility that all levels of a facility are associated to.
+- **Level features**: One Level feature is created for each floor of a facility. All features on a level are associated with a level.
+- **User defined features**: DWG layers are mapped to a user defined [feature class](#featureclass) and become instances of the feature class.
+
+## DWG file requirements
+
+Each DWG file must adhere to these requirements:
+
+- The DWG file can't contain features from multiple facilities.
+- The DWG file can't contain features from multiple levels. For example, a facility with three levels has three DWG files in the drawing package.
+- All data of a single level must be contained in a single DWG file. Any external references (*xrefs*) must be bound to the parent drawing.
+- The DWG file must define layer(s) representing the boundary of that level.
+- The DWG must reference the same measurement system and unit of measurement as other DWG files in the drawing package.
+- The DWG file must be aligned when stacked on top of another level from the same facility.
+
+## DWG layer requirements
+
+### Feature classes
+
+One or more DWG layers can be mapped to a user defined feature class. One instance of the feature is created from each entity on a mapped layer. For example, the DWG layers chair, table, and couch are mapped to a feature class called furniture, and a furniture feature is created for every entity from the defined layers (see the manifest sketch after this list). Additionally:
+
+- All layers should be separated to represent different feature types of the facility.
+- All entities must fall inside the bounds of the level perimeter.
+- Supported AutoCAD entity types: text, mtext, point, arc, circle, line, polyline, ellipse.
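A sketch of the furniture example above expressed as a [featureClass] entry in the manifest; the layer and class names come from that example:

```json
{
  "featureClassName": "furniture",
  "dwgLayers": [ "chair", "table", "couch" ]
}
```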
+
+### Feature class properties
+
+Text entities that fall within the bounds of a closed shape can be associated to that feature as a property. For example, a room feature class might have one text entity that describes the room name and another that describes the room type, as in the [sample drawing package v2]. Additionally:
+
+- Only TEXT and MTEXT entities will be associated to the feature as a property. All other entity types will be ignored.
+- TEXT and MTEXT justification point must fall within the bounds of the closed shape.
+- If more than one TEXT entity is within the bounds of the closed shape and they're mapped to the same property, one will be selected at random.
+
+### Facility level
+
+The DWG file for each level must contain a layer to define that level's perimeter. For example, if a facility contains two levels, then it needs to have two DWG files, each with a layer that defines that level's perimeter.
+
+No matter how many entity drawings are in the level perimeter layer, the resulting facility dataset contains only one level feature for each DWG file. Additionally:
+
+- Level perimeters must be drawn as Polygon, Polyline (closed), Circle, or Ellipse (closed).
+- Level perimeters may overlap but are dissolved into one geometry.
+- The resulting level feature must be at least 4 square meters.
+- The resulting level feature must not be greater than 400,000 square meters.
+
+If the layer contains multiple overlapping Polylines, the Polylines are dissolved into a single Level feature. If the layer instead contains
+multiple nonoverlapping Polylines, the resulting Level feature has a multi-polygonal representation.
+
+You can see an example of the Level perimeter layer as the 'GROS$' layer in the [sample drawing package v2].
+
+## Manifest file requirements
+
+The drawing package must contain a manifest file at the root level and the file must be named **manifest.json**. It describes the DWG files,
+allowing the [Conversion service v2] to parse their content. Only the files identified by the manifest are used. Files that are in the drawing package, but aren't properly listed in the manifest, are ignored.
+
+The file paths in the buildingLevels object of the manifest file must be relative to the root of the drawing package. The DWG file name must exactly match the name of the facility level. For example, a DWG file for the "Basement" level is *Basement.dwg*, and a DWG file for level 2 is named *level_2.dwg*. Filenames can't contain spaces; use an underscore in place of any spaces.
+
+Although there are requirements when you use the manifest objects, not all objects are required. The following table shows the required and optional objects for the 2023-03-01-preview [Conversion service v2].
+
+> [!NOTE]
+> Unless otherwise specified, all string properties are limited to one thousand characters.
+
+### Manifest JSON file
+
+| Property | Type | Required | Description  |
+|-|-|-|--|
+| `version` | number | TRUE | Manifest schema version. Currently version 2.0 |
+|`buildingLevels`| [BuildingLevels](#buildinglevels) object  | TRUE | Specifies the levels of the facility and the files containing the design of the levels. |
+|`featureClasses`|Array of [featureClass] objects| TRUE | List of feature class objects that define how layers are read from the DWG drawing file.|
+| `georeference` |[Georeference](#georeference) object| FALSE | Contains numerical geographic information for the facility drawing.     |
+| `facilityName` | string | FALSE | The name of the facility. |
+
+The next sections detail the requirements for each object.
+
+#### buildingLevels
+
+| Property | Type | Required | Description |
+|--||-||
+|`dwgLayers`| Array of strings | TRUE | Names of layers that define the exterior profile of the facility. |
+| `levels` | Array of level objects | TRUE | A level refers to a unique floor in the facility defined in a DWG file, along with the height of each level and the vertical order in which the levels appear. |
+
+#### level
+
+| Property | Type | Required | Description |
+|-||-||
+| `levelName` | string | TRUE | The name of the level. For example: Floor 1, Lobby, Blue Parking, or Basement. |
+| `ordinal` | integer | TRUE | Defines the vertical order of levels. All `ordinal` values must be unique within a facility.|
+| `filename` | string | TRUE | The path and name of the DWG file representing the level in a facility. The path must be relative to the root of the drawing package.  |
+|`verticalExtent`| number | FALSE | Floor-to-ceiling vertical height (thickness) of the level in meters. |
+
+#### featureClass
+
+| Property | Type | Required | Description |
+||-|-||
+| `dwgLayers`| Array of strings| TRUE| The name of each layer that defines the feature class. Each entity on the specified layer is converted to an instance of the feature class. The `dwgLayer` name that a feature is converted from ends up as a property of that feature. |
+| `featureClassName` | String | TRUE | The name of the feature class. Typical examples include room, workspace or wall.|
+|`featureClassProperties`| Array of [featureClassProperty] objects | TRUE | Specifies text layers in the DWG file associated to the feature as a property. For example, a label that falls inside the bounds of a space, such as a room number.|
+
+#### featureClassProperty
+
+| Property | Type | Required | Description |
+|--|--|-|--|
+| `dwgLayers` | Array of strings | TRUE | The name of each layer that defines the feature class property. Each entity on the specified layer is converted to a property. Only the DWG `TEXT` and `MTEXT` entities are converted to properties. All other entities are ignored. |
+|`featureClassPropertyName`| String | TRUE | Name of the feature class property, for example, spaceName or spaceUseType.|
+
+#### georeference
+
+| Property | Type | Required | Description |
+|-|--|-|-|
+| `lat` | number | TRUE | Decimal representation of degrees latitude at the facility drawing's origin. The origin coordinates must be in WGS84 Web Mercator (EPSG:3857). |
+| `lon` | number | TRUE | Decimal representation of degrees longitude at the facility drawing's origin. The origin coordinates must be in WGS84 Web Mercator (EPSG:3857). |
+| `angle` | number | TRUE | The clockwise angle, in degrees, between true north and the drawing's vertical (Y) axis. |
+
+### Sample drawing package manifest
+
+The JSON in this example shows the manifest file for the sample drawing package. Go to the [sample drawing package v2] for Azure Maps Creator on GitHub to download the entire package.
+
+#### Manifest file
+
+```json
+{
+ "version": "2.0",
+ "buildingLevels": {
+ "dwgLayers": [
+ "GROS$"
+ ],
+ "levels": [
+ {
+ "filename": "Ground.dwg",
+ "levelName": "level 1",
+ "ordinal": 0
+ },
+ {
+ "filename": "Level_2.dwg",
+ "levelName": "level 2",
+ "ordinal": 1
+ }
+ ]
+ },
+ "georeference": {
+ "lat": 47.63529901,
+ "lon": -122.13355885,
+ "angle": 0
+ },
+ "featureClasses": [
+ {
+ "featureClassName": "room",
+ "dwgLayers": [
+ "RM$"
+ ],
+ "featureClassProperties": [
+ {
+ "featureClassPropertyName": "name",
+ "dwgLayers": [
+ "A-IDEN-NUMR-EXST"
+ ]
+ },
+ {
+ "featureClassPropertyName": "roomType",
+ "dwgLayers": [
+ "A-IDEN-NAME-EXST"
+ ]
+ }
+ ]
+ },
+ {
+ "featureClassName": "wall",
+ "dwgLayers": [
+ "A-WALL-EXST",
+ "A-WALL-CORE-EXST",
+ "A-GLAZ-SILL-EXST",
+ "A-GLAZ-SHEL-SILL-EXST",
+ "A-GLAZ-SHEL-EXST",
+ "A-GLAZ-EXST"
+ ]
+ },
+ {
+ "featureClassName": "workspace",
+ "dwgLayers": [
+ "A-BOMA"
+ ]
+ },
+ {
+ "featureClassName": "workspaceFurniture",
+ "dwgLayers": [
+ "A-FURN-SYTM-EXST"
+ ]
+ },
+ {
+ "featureClassName": "buildingFurniture",
+ "dwgLayers": [
+ "A-FURN-FREE-EXST"
+ ]
+ }
+ ],
+ "facilityName": "Contoso Building"
+}
+```
++ ## Next steps > [!div class="nextstepaction"]
Learn more by reading:
> [!div class="nextstepaction"] > [Creator for indoor maps](creator-indoor-maps.md)
+<!-- Drawing Package v1 links -->
[Conversion service]: /rest/api/maps/v2/conversion
-[Drawing package]: https://github.com/Azure-Samples/am-creator-indoor-data-examples
+[Drawing package]: https://github.com/Azure-Samples/am-creator-indoor-data-examples/tree/master/Drawing%20Package%201.0
[Conversion Drawing Package Guide]: drawing-package-guide.md
-[sample drawing package]: https://github.com/Azure-Samples/am-creator-indoor-data-examples
+[sample drawing package]: https://github.com/Azure-Samples/am-creator-indoor-data-examples/tree/master/Drawing%20Package%201.0
[OSM Opening Hours]: https://wiki.openstreetmap.org/wiki/Key:opening_hours/specification
+<!-- Drawing Package v2 links -->
+[Conversion service v2]: https://aka.ms/creator-conversion
+[sample drawing package v2]: https://github.com/Azure-Samples/am-creator-indoor-data-examples/tree/master/Drawing%20Package%202.0
+[Georeference]: drawing-package-guide.md#georeference
+[featureClass]: #featureclass
+[featureClassProperty]: #featureclassproperty
azure-maps How To Create Custom Styles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-create-custom-styles.md
Now when you select that unit in the map, the pop-up menu will have the new laye
[Use Creator to create indoor maps]: tutorial-creator-indoor-maps.md [Creators Rest API]: /rest/api/maps-creator/ [style editor]: https://azure.github.io/Azure-Maps-Style-Editor
-[subscription key]: quick-demo-map-app.md#get-the-primary-key-for-your-account
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
[manifest]: drawing-requirements.md#manifest-file-requirements [unitProperties]: drawing-requirements.md#unitproperties [categories]: https://atlas.microsoft.com/sdk/javascript/indoor/0.1/categories.json
azure-maps How To Create Data Registries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-create-data-registries.md
When you register a file in Azure Maps using the data registry API, an MD5 hash
[Get operation]: /rest/api/maps/data-registry/get-operation [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[storage account overview]: ../storage/common/storage-account-overview.md
-[create storage account]: ../storage/common/storage-account-create.md?tabs=azure-portal
-[managed identity]: ../active-directory/managed-identities-azure-resources/overview.md
-[subscription key]: quick-demo-map-app.md#get-the-primary-key-for-your-account
+[storage account overview]: /azure/storage/common/storage-account-overview
+[create storage account]: /azure/storage/common/storage-account-create?tabs=azure-portal
+[managed identity]: /azure/active-directory/managed-identities-azure-resources/overview
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
[Azure portal]: https://portal.azure.com/ [Visual Studio]: https://visualstudio.microsoft.com/downloads/ [geographic scope]: geographic-scope.md
azure-maps How To Dataset Geojson https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dataset-geojson.md
Feature IDs can only contain alpha-numeric (a-z, A-Z, 0-9), hyphen (-), dot (.)
- The top level element is [facility], which defines each building in the file *facility.geojson*. - Each facility has one or more levels, which are defined in the file *levels.geojson*. - Each level must be inside the facility.-- Each [level] contain [units], [structures], [verticalPenetrations] and [openings]. All of the items defined in the level must be fully contained within the Level geometry.
+- Each [level] contains [units], [structures], [verticalPenetrations] and [openings]. All of the items defined in the level must be fully contained within the Level geometry.
- `unit` can consist of an array of items such as hallways, offices and courtyards, which are defined by [area], [line] or [point] elements. Units are defined in the file *unit.geojson*. - All `unit` elements must be fully contained within their level and intersect with their children. - `structure` defines physical, non-overlapping areas that can't be navigated through, such as a wall. Structures are defined in the file *structure.geojson*.
Feature IDs can only contain alpha-numeric (a-z, A-Z, 0-9), hyphen (-), dot (.)
[Convert a drawing package]: tutorial-creator-indoor-maps.md#convert-a-drawing-package [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account [Creator resource]: how-to-manage-creator.md
-[Subscription key]: quick-demo-map-app.md#get-the-primary-key-for-your-account
+[Subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
[Facility Ontology 2.0]: creator-facility-ontology.md?pivots=facility-ontology-v2 [RFC 7946]: https://www.rfc-editor.org/rfc/rfc7946.html [dataset]: creator-indoor-maps.md#datasets
azure-maps How To Dev Guide Csharp Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-csharp-sdk.md
void printReverseBatchAddresses(ReverseSearchAddressBatchResult batchResult)
The [Azure.Maps Namespace][Azure.Maps Namespace] in the .NET documentation. [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[Subscription key]: quick-demo-map-app.md#get-the-primary-key-for-your-account
+[Subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
[authentication]: azure-maps-authentication.md [Host daemon]: ./how-to-secure-daemon-app.md#host-a-daemon-on-non-azure-resources
azure-maps How To Dev Guide Java Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-java-sdk.md
public class Demo{
``` [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[Subscription key]: quick-demo-map-app.md#get-the-primary-key-for-your-account
+[Subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
[authentication]: azure-maps-authentication.md [Java Standard Versions]: https://www.oracle.com/java/technologies/downloads/
azure-maps How To Dev Guide Js Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-js-sdk.md
main().catch(console.error);
[Node.js Release Working Group]: https://github.com/nodejs/release#release-schedule [Node.js]: https://nodejs.org/en/download/ [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[Subscription key]: quick-demo-map-app.md#get-the-primary-key-for-your-account
+[Subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
[authentication]: azure-maps-authentication.md [Azure Identity library]: /javascript/api/overview/azure/identity-readme
azure-maps How To Dev Guide Py Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-py-sdk.md
def fuzzy_search():
        query="Starbucks",         coordinates=(47.61010, -122.34255)     )
-    # Print the search results
-    print("Starbucks search result nearby Seattle:")
-    for result_item in result.results:
-        print(f"* {result_item.address.street_number } {result_item.address.street_name }")
-        print(f" {result_item.address.municipality } {result_item.address.country_code } {result_item.address.postal_code }")
-        print(f" Coordinate: {result_item.position.lat}, {result_item.position.lon}"
- )
+
+    # Print the search results
+    if len(result.results) > 0:
+        print("Starbucks search result nearby Seattle:")
+        for result_item in result.results:
+            print(f"* {result_item.address.street_number } {result_item.address.street_name }")
+            print(f" {result_item.address.municipality } {result_item.address.country_code } {result_item.address.postal_code }")
+            print(f" Coordinate: {result_item.position.lat}, {result_item.position.lon}")
if __name__ == '__main__':     fuzzy_search()
def search_address():
    result = maps_search_client.search_address( query="1301 Alaskan Way, Seattle, WA 98101, US" )
-    print(f"Coordinate: {result.results[0].position.lat}, {result.results[0].position.lon}")
+
+    # Print results, if any
+    if len(result.results) > 0:
+        print(f"Coordinate: {result.results[0].position.lat}, {result.results[0].position.lon}")
+    else:
+        print("No address found")
if __name__ == '__main__':     search_address()
The [Azure Maps Search package client library] in the *Azure SDK for Python Prev
<!> [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[Subscription key]: quick-demo-map-app.md#get-the-primary-key-for-your-account
+[Subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
[authentication]: azure-maps-authentication.md [Azure Maps Search package client library]: /python/api/overview/azure/maps-search-readme?view=azure-python-preview
azure-maps How To Render Custom Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-render-custom-data.md
Similarly, you can change, add, and remove other style modifiers.
- To learn more about Azure Maps Data service, see the [service documentation]. [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[Subscription key]: quick-demo-map-app.md#get-the-primary-key-for-your-account
+[Subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
[Postman]: https://www.postman.com/ [Data service]: /rest/api/maps/data [static image service]: /rest/api/maps/render/getmapimage
azure-maps How To Request Elevation Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-request-elevation-data.md
The Azure Maps [Elevation service](/rest/api/maps/elevation) provides APIs to qu
## Prerequisites
-1. [Make an Azure Maps account in Gen 1 (S1) or Gen 2 pricing tier](quick-demo-map-app.md#create-an-azure-maps-account).
-2. [Obtain a primary subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account), also known as the primary key or the subscription key.
+* An [Azure Maps account]
+* A [subscription key]
For more information about authentication in Azure Maps, see [Manage Authentication in Azure Maps](how-to-manage-authentication.md).
For a complete list of Azure Maps REST APIs, see:
> [!div class="nextstepaction"] > [Azure Maps REST APIs](/rest/api/maps/)+
+[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
azure-maps How To Request Weather Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-request-weather-data.md
This video provides examples for making REST calls to Azure Maps Weather service
## Prerequisites
-1. [Make an Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account)
-2. [Obtain a primary subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account), also known as the primary key or the subscription key. For more information on authentication in Azure Maps, see [manage authentication in Azure Maps](./how-to-manage-authentication.md).
+* An [Azure Maps account]
+* A [subscription key]
>[!IMPORTANT] >The [Get Minute Forecast API](/rest/api/maps/weather/getminuteforecast) requires a Gen 1 (S1) or Gen 2 pricing tier. All other APIs require an S0 pricing tier key.
In this example, you'll use the [Get Minute Forecast API](/rest/api/maps/weather
> [!div class="nextstepaction"] > [Azure Maps Weather services](/rest/api/maps/weather)+
+[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
azure-maps How To Search For Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-search-for-address.md
In this article, you'll learn how to:
## Prerequisites
-1. An [Azure Maps account]
-2. A [subscription key]
+* An [Azure Maps account]
+* A [subscription key]
This tutorial uses the [Postman] application, but you may choose a different API development environment.
In this example, we'll search for a cross street based on the coordinates of an
[Reverse Address Search]: /rest/api/maps/search/getsearchaddressreverse [Search Address Reverse Cross Street]: /rest/api/maps/search/getsearchaddressreversecrossstreet [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[subscription key]: quick-demo-map-app.md#get-the-primary-key-for-your-account
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
[Postman]: https://www.postman.com/ [Get Search Address]: /rest/api/maps/search/getsearchaddress [Post Search Address Batch]: /rest/api/maps/search/postsearchaddressbatch
azure-maps How To Use Best Practices For Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-best-practices-for-routing.md
The Route Directions and Route Matrix APIs in Azure Maps [Route service] can be
## Prerequisites
-1. An [Azure Maps account]
-2. A [subscription key]
+* An [Azure Maps account]
+* A [subscription key]
For more information about the coverage of the Route service, see the [Routing Coverage].
To learn more, please see:
[Route service]: /rest/api/maps/route [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[subscription key]: quick-demo-map-app.md#get-the-primary-key-for-your-account
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
[Routing Coverage]: routing-coverage.md [Postman]: https://www.postman.com/downloads/ [RouteType]: /rest/api/maps/route/postroutedirections#routetype
azure-maps How To Use Best Practices For Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-best-practices-for-search.md
# Best practices for Azure Maps Search service
-Azure Maps [Search service] includes API that offer various capabilities to help developers to search addresses, places, business listings by name or category, and other geographic information. For example, [Search Fuzzy] allows users to search for an address or Point of Interest (POI).
+Azure Maps [Search service] includes APIs that offer various capabilities to help developers search for addresses, places, business listings by name or category, and other geographic information. For example, [Search Fuzzy] allows users to search for an address or Point of Interest (POI).
This article explains how to apply sound practices when you call data from Azure Maps Search service. You'll learn how to: > [!div class="checklist"]
This article explains how to apply sound practices when you call data from Azure
## Prerequisites
-1. An [Azure Maps account]
-2. A [subscription key]
+* An [Azure Maps account]
+* A [subscription key]
This article uses the [Postman] application to build REST calls, but you can choose any API development environment.
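For readers who prefer code over Postman, here's a minimal sketch of a [Search Fuzzy] request made with JavaScript's `fetch`; the query string and subscription key are placeholders you'd replace with your own values:

```javascript
// Minimal sketch of a Get Search Fuzzy call; not an official sample.
const subscriptionKey = "<Your-Azure-Maps-Subscription-key>"; // placeholder
const query = encodeURIComponent("Starbucks");
const url = `https://atlas.microsoft.com/search/fuzzy/json?api-version=1.0&query=${query}&subscription-key=${subscriptionKey}`;

fetch(url)
  .then(response => response.json())
  .then(data => {
    // Each result carries an address and a position (lat/lon).
    for (const result of data.results ?? []) {
      console.log(result.address.freeformAddress, result.position);
    }
  })
  .catch(console.error);
```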
To learn more, please see:
[Search service]: /rest/api/maps/search [Search Fuzzy]: /rest/api/maps/search/getsearchfuzzy [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[subscription key]: quick-demo-map-app.md#get-the-primary-key-for-your-account
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
[Postman]: https://www.postman.com/downloads/ [Geocoding coverage]: geocoding-coverage.md [Search Address Reverse]: /rest/api/maps/search/getsearchaddressreverse
azure-maps How To Use Indoor Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-indoor-module.md
When you create an indoor map using Azure Maps Creator, default styles are appli
- [Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account) - [Azure Maps Creator resource](how-to-manage-creator.md)-- [Subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account).
+- [Subscription key](quick-demo-map-app.md#get-the-subscription-key-for-your-account).
- [Map configuration][mapConfiguration] alias or ID. If you have never used Azure Maps Creator to create an indoor map, you might find the [Use Creator to create indoor maps][tutorial] tutorial helpful. You'll need the map configuration `alias` (or `mapConfigurationId`) to render indoor maps with custom styles via the Azure Maps Indoor Maps module.
Next, instantiate a *Map object* with the map configuration object set to the `a
The *Map object* will be used in the next step to instantiate the *Indoor Manager* object. The code below shows you how to instantiate the *Map object* with `mapConfiguration`, `styleAPIVersion` and map domain set: ```javascript
-const subscriptionKey = "<Your Azure Maps Primary Subscription Key>";
+const subscriptionKey = "<Your Azure Maps Subscription Key>";
const region = "<Your Creator resource region: us or eu>" const mapConfiguration = "<map configuration alias or ID>" atlas.setDomain(`${region}.atlas.microsoft.com`);
When you create an indoor map using Azure Maps Creator, default styles are appli
5. Set the map domain with a prefix matching a location of your Creator resource: `atlas.setDomain('us.atlas.microsoft.com');` if your Creator resource has been created in US region, or `atlas.setDomain('eu.atlas.microsoft.com');` if your Creator resource has been created in EU region. 6. Initialize a *Map object*. The *Map object* supports the following options:
- - `Subscription key` is your Azure Maps primary subscription key.
+ - `Subscription key` is your Azure Maps subscription key.
- `center` defines a latitude and longitude for your indoor map center location. Provide a value for `center` if you don't want to provide a value for `bounds`. Format should appear as `center`: [-122.13315, 47.63637]. - `bounds` is the smallest rectangular shape that encloses the tileset map data. Set a value for `bounds` if you don't want to set a value for `center`. You can find your map bounds by calling the [Tileset List API](/rest/api/maps/v2/tileset/list). The Tileset List API returns the `bbox`, which you can parse and assign to `bounds`. Format should appear as `bounds`: [# west, # south, # east, # north]. - `mapConfiguration` the ID or alias of the map configuration that defines the custom styles you want to display on the map, use the map configuration ID or alias from step 1.
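Putting those options together, a minimal sketch of the *Map object* initialization might look like the following; the map configuration alias, coordinates, and `styleAPIVersion` value are assumptions to replace with your own:

```javascript
// Sketch only: initialize a Map object for indoor maps with custom styles.
atlas.setDomain('us.atlas.microsoft.com'); // or 'eu.atlas.microsoft.com'

const map = new atlas.Map('map', {
    center: [-122.13315, 47.63637],        // assumed center coordinates
    zoom: 19,
    mapConfiguration: '<map configuration alias or ID>',
    styleAPIVersion: '2023-03-01-preview', // assumed API version
    authOptions: {
        authType: 'subscriptionKey',
        subscriptionKey: '<Your Azure Maps Subscription Key>'
    }
});
```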
azure-maps How To Use Map Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-map-control.md
This documentation uses the Azure Maps Web SDK, however the Azure Maps services
To use the Map Control in a web page, you must have one of the following prerequisites:
-* [Make an Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account) and [obtain a primary subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account), also known as the primary key or the subscription key.
-
-* Obtain your Azure Active Directory (AAD) credentials with [authentication options](/javascript/api/azure-maps-control/atlas.authenticationoptions).
+* An [Azure Maps account]
+* A [subscription key]
+* Obtain your Azure Active Directory (AAD) credentials with [authentication options]
## Create a new map in a web page
You can embed a map in a web page by using the Map Control client-side JavaScrip
<script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> ```
- * Load the Azure Maps Web SDK source code locally using the [azure-maps-control](https://www.npmjs.com/package/azure-maps-control) NPM package and host it with your app. This package also includes TypeScript definitions.
+ * Load the Azure Maps Web SDK source code locally using the [azure-maps-control](https://www.npmjs.com/package/azure-maps-control) npm package and host it with your app. This package also includes TypeScript definitions.
> **npm install azure-maps-control**
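After installing the package, a minimal sketch of loading the SDK from the npm module rather than the CDN might look like this; the element ID and key are placeholders:

```javascript
// Sketch: load the Web SDK from the azure-maps-control npm package.
import * as atlas from 'azure-maps-control';
import 'azure-maps-control/dist/atlas.min.css';

const map = new atlas.Map('myMap', { // 'myMap' is the id of your div element
    authOptions: {
        authType: 'subscriptionKey',
        subscriptionKey: '<Your Azure Maps Key>' // placeholder
    }
});
```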
You can embed a map in a web page by using the Map Control client-side JavaScrip
5. Now, we'll initialize the map control. In order to authenticate the control, you'll either need to own an Azure Maps subscription key or use Azure Active Directory (AAD) credentials with [authentication options](/javascript/api/azure-maps-control/atlas.authenticationoptions).
- If you're using a subscription key for authentication, copy and paste the following script element inside the `<head>` element, and below the first `<script>` element. Replace `<Your Azure Maps Key>` with your Azure Maps primary subscription key.
+ If you're using a subscription key for authentication, copy and paste the following script element inside the `<head>` element, and below the first `<script>` element. Replace `<Your Azure Maps Key>` with your Azure Maps subscription key.
```html <script type="text/javascript">
For a list of samples showing how to integrate Azure Active Directory (AAD) with
> [!div class="nextstepaction"] > [Azure AD authentication samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples)+
+[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+[authentication options]: /javascript/api/azure-maps-control/atlas.authenticationoptions
azure-maps How To Use Spatial Io Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-spatial-io-module.md
This video provides an overview of Spatial IO module in the Azure Maps Web SDK.
## Prerequisites
-Before you can use the Spatial IO module, you'll need to [make an Azure Maps account](./quick-demo-map-app.md#create-an-azure-maps-account) and [get the primary subscription key for your account](./quick-demo-map-app.md#get-the-primary-key-for-your-account).
+* An [Azure Maps account]
+* A [subscription key]
## Installing the Spatial IO module
Refer to the Azure Maps Spatial IO documentation:
> [!div class="nextstepaction"] > [Azure Maps Spatial IO package](/javascript/api/azure-maps-spatial-io/)+
+[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
azure-maps Indoor Map Dynamic Styling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/indoor-map-dynamic-styling.md
Learn more by reading:
[Feature State service]: /rest/api/maps/v2/feature-state [Indoor Web module]: how-to-use-indoor-module.md <!--[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[Subscription key]: quick-demo-map-app.md#get-the-primary-key-for-your-account
+[Subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
[A Creator resource]: how-to-manage-creator.md
-[Sample Drawing package]: https://github.com/Azure-Samples/am-creator-indoor-data-examples-->
+[Sample Drawing package]: https://github.com/Azure-Samples/am-creator-indoor-data-examples/tree/master/Drawing%20Package%202.0-->
[How to use the Indoor Map module]: how-to-use-indoor-module.md [Postman]: https://www.postman.com/ [How to create a feature stateset]: how-to-creator-feature-stateset.md
azure-maps Map Add Popup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-popup.md
The String template replaces placeholders with values of the feature properties.
The `numberFormat` option specifies the format of the number to display. If the `numberFormat` isn't specified, then the code will use the popup templates date format. The `numberFormat` option formats numbers using the [Number.toLocaleString](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number/toLocaleString) function. To format large numbers, consider using the `numberFormat` option with functions from [NumberFormat.format](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/NumberFormat/format). For instance, the code snippet below uses `maximumFractionDigits` to limit the number of fraction digits to two.
-> [!Note]
+> [!NOTE]
> There's only one way in which the String template can render images. First, the String template needs to have an image tag in it. The value being passed to the image tag should be a URL to an image. Then, the String template needs to have `isImage` set to true in the `HyperLinkFormatOptions`. The `isImage` option specifies that the hyperlink is for an image, and the hyperlink will be loaded into an image tag. When the hyperlink is clicked, the image will open. ```javascript
azure-maps Map Extruded Polygon Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-extruded-polygon-android.md
This article shows you how to use the polygon extrusion layer to render areas of
Connect the polygon extrusion layer to a data source, then load it on the map. The polygon extrusion layer renders the areas of `Polygon` and `MultiPolygon` features as extruded shapes. The `height` and `base` properties of the polygon extrusion layer define the base distance from the ground and the height of the extruded shape in **meters**. The following code shows how to create a polygon, add it to a data source, and render it using the Polygon extrusion layer class.
-> [!Note]
+> [!NOTE]
> The `base` value defined in the polygon extrusion layer should be less than or equal to that of the `height`. ::: zone pivot="programming-language-java-android"
azure-maps Map Extruded Polygon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-extruded-polygon.md
This article shows you how to use the polygon extrusion layer to render areas of
Connect the [polygon extrusion layer](/javascript/api/azure-maps-control/atlas.layer.polygonextrusionlayer) to a data source, then load it on the map. The polygon extrusion layer renders the areas of `Polygon` and `MultiPolygon` features as extruded shapes. The `height` and `base` properties of the polygon extrusion layer define the base distance from the ground and the height of the extruded shape in **meters**. The following code shows how to create a polygon, add it to a data source, and render it using the Polygon extrusion layer class.
-> [!Note]
+> [!NOTE]
> The `base` value defined in the polygon extrusion layer should be less than or equal to that of the `height`. <br/>
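As a rough illustration of the relationship between `base` and `height`, the following sketch renders a small polygon as a 50-meter extrusion; the coordinates and styling values are placeholders:

```javascript
// Sketch: render a polygon as an extruded shape with the Web SDK.
const dataSource = new atlas.source.DataSource();
map.sources.add(dataSource);

// A small rectangular footprint (placeholder coordinates).
dataSource.add(new atlas.data.Polygon([[
    [-122.1327, 47.6366],
    [-122.1327, 47.6368],
    [-122.1323, 47.6368],
    [-122.1323, 47.6366],
    [-122.1327, 47.6366]
]]));

map.layers.add(new atlas.layer.PolygonExtrusionLayer(dataSource, null, {
    base: 0,    // meters above the ground
    height: 50, // must be >= base
    fillColor: 'dodgerblue',
    fillOpacity: 0.8
}));
```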
azure-maps Migrate From Bing Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-app.md
If developing using a JavaScript framework, one of the following open-source pro
## Prerequisites
-1. Sign in to the [Azure portal](https://portal.azure.com). If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
-2. [Make an Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account)
-3. [Obtain a primary subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account), also known as the primary key or the subscription key. For more information on authentication in Azure Maps, see [manage authentication in Azure Maps](how-to-manage-authentication.md).
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+
+* An [Azure Maps account]
+* A [subscription key]
## Key features support
The following is a collection of code samples for each platform that cover commo
### Load a map
-Loading a map in both SDK’s follows the same set of steps;
+Loading a map in both SDKs follows the same set of steps:
* Add a reference to the Map SDK. * Add a `div` tag to the body of the page that will act as a placeholder for the map.
In Azure Maps, GeoJSON is the main data format used in the web SDK, additional s
### Add drawing tools
-Both Bing and Azure Maps provide a module that add the ability for the user to draw and edit shapes on the map using the mouse or other input device. They both support drawing pushpins, lines, and polygons. Azure Maps also provides options for drawing circles and rectangles.
+Both Bing and Azure Maps provide a module that adds the ability for the user to draw and edit shapes on the map using the mouse or other input device. They both support drawing pushpins, lines, and polygons. Azure Maps also provides options for drawing circles and rectangles.
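On the Azure Maps side, a minimal sketch of wiring up the drawing tools, assuming the `azure-maps-drawing-tools` module is already loaded on the page and `map` is an existing map instance, might look like:

```javascript
// Sketch: add a drawing toolbar to an existing Azure Maps map instance.
const drawingManager = new atlas.drawing.DrawingManager(map, {
    toolbar: new atlas.control.DrawingToolbar({
        buttons: ['draw-point', 'draw-line', 'draw-polygon', 'draw-circle', 'draw-rectangle'],
        position: 'top-right'
    })
});
```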
**Before: Bing Maps**
Learn more about migrating from Bing Maps to Azure Maps.
> [!div class="nextstepaction"] > [Migrate a web service](migrate-from-bing-maps-web-services.md)+
+[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
azure-maps Migrate From Bing Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-services.md
Be sure to also review the following best practices guides:
## Prerequisites
-1. Sign in to the [Azure portal](https://portal.azure.com). If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
-2. [Make an Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account)
-3. [Obtain a primary subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account), also known as the primary key or the subscription key. For more information on authentication in Azure Maps, see [manage authentication in Azure Maps](how-to-manage-authentication.md).
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+
+* An [Azure Maps account]
+* A [subscription key]
+
+> [!NOTE]
+> For more information on authentication in Azure Maps, see [manage authentication in Azure Maps](how-to-manage-authentication.md).
## Geocoding addresses
Learn more about the Azure Maps REST services.
> [!div class="nextstepaction"] > [Best practices for using the search service](how-to-use-best-practices-for-search.md)+
+[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
azure-maps Migrate From Bing Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps.md
In this tutorial, you'll learn:
## Prerequisites
-1. Sign in to the [Azure portal](https://portal.azure.com). If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
-2. [Make an Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account)
-3. [Obtain a primary subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account), also known as the primary key or the subscription key. For more information on authentication in Azure Maps, see [manage authentication in Azure Maps](how-to-manage-authentication.md).
+If you don't have an Azure subscription, create a [free account] before you begin.
+
+* An [Azure Maps account]
+* A [subscription key]
+
+> [!NOTE]
+> For more information on authentication in Azure Maps, see [manage authentication in Azure Maps].
## Azure Maps platform overview
Learn the details of how to migrate your Bing Maps application with these articl
> [!div class="nextstepaction"] > [Migrate a web app](migrate-from-bing-maps-web-app.md)+
+[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+[free account]: https://azure.microsoft.com/free/
+[manage authentication in Azure Maps]: how-to-manage-authentication.md
azure-maps Migrate From Google Maps Android App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-android-app.md
For more information on developing with the Android SDK by Azure Maps, see the [
## Prerequisites
-1. Create an Azure Maps account by signing into the [Azure portal](https://portal.azure.com). If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
-2. [Make an Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account)
-3. [Obtain a primary subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account), also known as the primary key or the subscription key. For more information on authentication in Azure Maps, see [manage authentication in Azure Maps](how-to-manage-authentication.md).
+If you don't have an Azure subscription, create a [free account] before you begin.
+
+* An [Azure Maps account]
+* A [subscription key]
+
+> [!NOTE]
+> For more information on authentication in Azure Maps, see [manage authentication in Azure Maps].
## Load a map
Learn more about the Azure Maps Android SDK:
> [!div class="nextstepaction"] > [Get started with Azure Maps Android SDK](how-to-use-android-map-control-library.md)+
+[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+[free account]: https://azure.microsoft.com/free/
+[manage authentication in Azure Maps]: how-to-manage-authentication.md
azure-maps Migrate From Google Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-app.md
If developing using a JavaScript framework, one of the following open-source pro
## Prerequisites
-1. Sign in to the [Azure portal](https://portal.azure.com). If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
-2. [Make an Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account)
-3. [Obtain a primary subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account), also known as the primary key or the subscription key. For more information on authentication in Azure Maps, see [manage authentication in Azure Maps](how-to-manage-authentication.md).
+If you don't have an Azure subscription, create a [free account] before you begin.
+
+* An [Azure Maps account]
+* A [subscription key]
+
+> [!NOTE]
+> For more information on authentication in Azure Maps, see [manage authentication in Azure Maps].
## Key features support
Learn more about migrating to Azure Maps:
> [!div class="nextstepaction"] > [Migrate a web service](migrate-from-google-maps-web-services.md)+
+[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+[free account]: https://azure.microsoft.com/free/
+[manage authentication in Azure Maps]: how-to-manage-authentication.md
azure-maps Migrate From Google Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-services.md
Azure Maps has several other REST web services that may be of interest:
## Prerequisites
-1. Sign in to the [Azure portal](https://portal.azure.com). If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
-2. [Make an Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account)
-3. [Obtain a primary subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account), also known as the primary key or the subscription key. For more information on authentication in Azure Maps, see [manage authentication in Azure Maps](how-to-manage-authentication.md).
+If you don't have an Azure subscription, create a [free account] before you begin.
+
+* An [Azure Maps account]
+* A [subscription key]
+
+> [!NOTE]
+> For more information on authentication in Azure Maps, see [manage authentication in Azure Maps].
## Geocoding addresses
Learn more about Azure Maps REST
> [!div class="nextstepaction"] > [Best practices for search](how-to-use-best-practices-for-search.md)+
+[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+[free account]: https://azure.microsoft.com/free/
+[manage authentication in Azure Maps]: how-to-manage-authentication.md
azure-maps Migrate From Google Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps.md
This article provides insights on how to migrate web, mobile and server-based ap
## Prerequisites
-1. Sign in to the [Azure portal](https://portal.azure.com). If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
-2. [Make an Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account)
-3. [Obtain a primary subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account), also known as the primary key or the subscription key. For more information on authentication in Azure Maps, see [manage authentication in Azure Maps](how-to-manage-authentication.md).
+If you don't have an Azure subscription, create a [free account] before you begin.
+
+* An [Azure Maps account]
+* A [subscription key]
+
+> [!NOTE]
+> For more information on authentication in Azure Maps, see [manage authentication in Azure Maps].
## Azure Maps platform overview
No resources to be cleaned up.
Learn the details of how to migrate your Google Maps application with these articles: > [!div class="nextstepaction"]
-> [Migrate a web app](migrate-from-google-maps-web-app.md)
+> [Migrate a web app](migrate-from-google-maps-web-app.md)
+
+[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+[free account]: https://azure.microsoft.com/free/
+[manage authentication in Azure Maps]: how-to-manage-authentication.md
azure-maps Quick Demo Map App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/quick-demo-map-app.md
Create a new Azure Maps account with the following steps:
<a id="getkey"></a>
-## Get the primary key for your account
+## Get the subscription key for your account
Once your Azure Maps account is successfully created, retrieve the primary key that enables you to query the Maps APIs.
azure-maps Quick Ios App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/quick-ios-app.md
This article shows you how to add the Azure Maps to an iOS app. It walks you thr
## Prerequisites
-* Create an Azure Maps account by signing into the [Azure portal](https://portal.azure.com/) . If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
-* [Make an Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account)
-* [Obtain a primary subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account) , also known as the primary key or the subscription key. For more information on authentication in Azure Maps, see [manage authentication in Azure Maps](how-to-manage-authentication.md) .
-* Download [Xcode on the Mac App Store](https://apps.apple.com/cz/app/xcode/id497799835?mt=12) for free.
+If you don't have an Azure subscription, create a [free account] before you begin.
+
+* An [Azure Maps account]
+* A [subscription key]
+* [Xcode]
+
+> [!NOTE]
+> For more information on authentication in Azure Maps, see [manage authentication in Azure Maps].
## Create an Azure Maps account
In this quickstart, you created your Azure Maps account and created a demo appli
> [Load GeoJSON data into Azure Maps](tutorial-load-geojson-file-ios.md) -->
+[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+[free account]: https://azure.microsoft.com/free/
+[manage authentication in Azure Maps]: how-to-manage-authentication.md
+[Xcode]: https://apps.apple.com/cz/app/xcode/id497799835?mt=12
azure-maps Spatial Io Core Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-core-operations.md
To use this class, follow the steps below:
- Call the `toString` function to retrieve the delimited string. - Optionally, call the `clear` method to make the writer reusable and reduce its resource allocation, or call the `delete` method to dispose of the writer instance.
-> [!Note]
+> [!NOTE]
> The number of columns written will be constrained to the number of cells in the first row of the data passed to the writer. ## Read XML files
azure-maps Supported Languages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-languages.md
Azure Maps has been localized in a variety of languages across its services. The fol
## Azure Maps supported views
-> [!Note]
+> [!NOTE]
> On August 1, 2019, Azure Maps was released in the following countries/regions: > > * Argentina
azure-maps Supported Map Styles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-map-styles.md
A **road** map is a standard map that displays roads. It also displays natural a
The **blank** and **blank_accessible** map styles provide a blank canvas for visualizing data. The **blank_accessible** style will continue to provide screen reader updates with the map's location details, even though the base map isn't displayed.
-> [!Note]
+> [!NOTE]
> In the Web SDK, you can change the background color of the map by setting the CSS `background-color` style of map DIV element. **Applicable APIs:**
azure-maps Tutorial Create Store Locator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-create-store-locator.md
In this tutorial, you'll learn how to:
## Prerequisites
-1. An [Azure Maps account]
-2. A [subscription key]
+* [Visual Studio Code] is recommended for this tutorial, but you can use any suitable integrated development environment (IDE).
+* An [Azure Maps account]
+* A [subscription key]
-For more information about Azure Maps authentication, see [Manage authentication in Azure Maps].
-
-[Visual Studio Code] is recommended for this tutorial, but you can use any suitable integrated development environment (IDE).
+> [!NOTE]
+> For more information on authentication in Azure Maps, see [manage authentication in Azure Maps].
## Sample code
To add the JavaScript:
center: [-90, 40], zoom: 2,
- //Add your Azure Maps primary subscription key to the map SDK.
+ //Add your Azure Maps subscription key to the map SDK.
authOptions: { authType: 'subscriptionKey', subscriptionKey: '<Your Azure Maps Key>'
To see more code examples and an interactive coding experience:
> [How to use the map control](how-to-use-map-control.md) [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[subscription key]: quick-demo-map-app.md#get-the-primary-key-for-your-account
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
[Manage authentication in Azure Maps]: how-to-manage-authentication.md [Visual Studio Code]: https://code.visualstudio.com [Simple Store Locator]: https://samples.azuremaps.com/?sample=simple-store-locator
azure-maps Tutorial Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-creator-indoor-maps.md
This tutorial describes how to create indoor maps for use in Microsoft Azure Map
## Prerequisites
-1. An [Azure Maps account]
-2. A [subscription key]
-3. A [Creator resource]
-4. Download the [Sample drawing package]
+* An [Azure Maps account]
+* A [subscription key]
+* A [Creator resource]
+* Download the [Sample drawing package]
This tutorial uses the [Postman] application, but you can use a different API development environment.
To convert a drawing package:
5. Enter the following URL to the [Conversion service] (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key and `udid` with the `udid` of the uploaded package): ```http
- https://us.atlas.microsoft.com/conversions?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2.0&udid={udid}&inputType=DWG&outputOntology=facility-2.0
+ https://us.atlas.microsoft.com/conversions?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2023-03-01-preview&udid={udid}&inputType=DWG&dwgPackageVersion=2.0
``` 6. Select **Send**.
To create a dataset:
5. Enter the following URL to the [Dataset service]. The request should look like the following URL (replace `{conversionId}` with the `conversionId` obtained in [Check drawing package conversion status](#check-the-drawing-package-conversion-status)): ```http
- https://us.atlas.microsoft.com/datasets?api-version=2.0&conversionId={conversionId}&subscription-key={Your-Azure-Maps-Subscription-key}
+ https://us.atlas.microsoft.com/datasets?api-version=2023-03-01-preview&conversionId={conversionId}&subscription-key={Your-Azure-Maps-Subscription-key}
``` 6. Select **Send**.
To check the status of the dataset creation process and retrieve the `datasetId`
5. Enter the `status URL` you copied in [Create a dataset](#create-a-dataset). The request should look like the following URL: ```http
- https://us.atlas.microsoft.com/datasets/operations/{operationId}?api-version=2.0&subscription-key={Your-Azure-Maps-Subscription-key}
+ https://us.atlas.microsoft.com/datasets/operations/{operationId}?api-version=2023-03-01-preview&subscription-key={Your-Azure-Maps-Subscription-key}
``` 6. Select **Send**.
To create a tileset:
5. Enter the following URL to the [Tileset service]. The request should look like the following URL (replace `{datasetId}` with the `datasetId` obtained in the [Check the dataset creation status](#check-the-dataset-creation-status) section above): ```http
- https://us.atlas.microsoft.com/tilesets?api-version=2022-09-01-preview&datasetID={datasetId}&subscription-key={Your-Azure-Maps-Primary-Subscription-key}
+ https://us.atlas.microsoft.com/tilesets?api-version=2023-03-01-preview&datasetId={datasetId}&subscription-key={Your-Azure-Maps-Subscription-key}
``` 6. Select **Send**.
To check the status of the tileset creation process and retrieve the `tilesetId`
5. Enter the `status URL` you copied in [Create a tileset](#create-a-tileset). The request should look like the following URL: ```http
- https://us.atlas.microsoft.com/tilesets/operations/{operationId}?api-version=2022-09-01-preview&subscription-key={Your-Azure-Maps-Subscription-key}
+ https://us.atlas.microsoft.com/tilesets/operations/{operationId}?api-version=2023-03-01-preview&subscription-key={Your-Azure-Maps-Subscription-key}
``` 6. Select **Send**.
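The conversion, dataset, and tileset requests in this tutorial are all long-running operations that return a status URL in the `Operation-Location` response header. A minimal sketch of polling such a URL from JavaScript, assuming the status URL already includes the `api-version` parameter:

```javascript
// Sketch: poll an Azure Maps long-running operation until it completes.
async function waitForOperation(statusUrl, subscriptionKey) {
    for (;;) {
        const response = await fetch(`${statusUrl}&subscription-key=${subscriptionKey}`);
        const operation = await response.json();
        // Terminal states for Creator long-running operations.
        if (operation.status === 'Succeeded' || operation.status === 'Failed') {
            return operation;
        }
        await new Promise(resolve => setTimeout(resolve, 5000)); // wait 5 s between polls
    }
}
```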
For more information, see [Map configuration] in the indoor maps concepts articl
> [Use the Azure Maps Indoor Maps module with custom styles](how-to-use-indoor-module.md) [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[subscription key]: quick-demo-map-app.md#get-the-primary-key-for-your-account
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
[Creator resource]: how-to-manage-creator.md [Sample drawing package]: https://github.com/Azure-Samples/am-creator-indoor-data-examples/blob/master/Sample%20-%20Contoso%20Drawing%20Package.zip [Postman]: https://www.postman.com
azure-maps Tutorial Ev Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-ev-routing.md
In this tutorial, you will:
## Prerequisites
-1. [Make an Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account), and [choose either Gen 2 or S1 pricing tier](choose-pricing-tier.md).
-2. [Obtain a primary subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account), also known as the primary key or the subscription key.
+* An [Azure Maps account]
+* A [subscription key]
-For more information on authentication in Azure Maps, see [manage authentication in Azure Maps](how-to-manage-authentication.md).
+> [!NOTE]
+> For more information on authentication in Azure Maps, see [manage authentication in Azure Maps].
## Create an Azure Notebooks project
To learn more about Azure Notebooks, see
> [!div class="nextstepaction"] > [Azure Notebooks](https://notebooks.azure.com)+
+[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+[manage authentication in Azure Maps]: how-to-manage-authentication.md
azure-maps Tutorial Iot Hub Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-iot-hub-maps.md
In this tutorial you will:
## Prerequisites
-1. Sign in to the [Azure portal](https://portal.azure.com).
+If you don't have an Azure subscription, create a [free account] before you begin.
-2. [Create an Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account).
+* An [Azure Maps account]
+* A [subscription key]
+* A [resource group]
+* The [rentalCarSimulation] C# project
-3. [Obtain a primary subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account), also known as the primary key or the subscription key. For more information, see [manage authentication in Azure Maps](how-to-manage-authentication.md).
-
-4. [Create a resource group](../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups). In this tutorial, we'll name our resource group *ContosoRental*, but you can choose whatever name you like.
-
-5. Download the [rentalCarSimulation C# project](https://github.com/Azure-Samples/iothub-to-azure-maps-geofencing/tree/master/src/rentalCarSimulation).
-
-This tutorial uses the [Postman](https://www.postman.com/) application, but you can choose a different API development environment.
+This tutorial uses the [Postman] application, but you can choose a different API development environment.
## Use case: rental car tracking Let's say that a car rental company wants to log location information, distance traveled, and running state for its rental cars. The company also wants to store this information whenever a car leaves its authorized geographic region.
-The rental cars are equipped with IoT devices that regularly send telemetry data to IoT Hub. The telemetry includes the current location and indicates whether the car's engine is running. The device location schema adheres to the IoT [Plug and Play schema for geospatial data](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v1-preview/schemas/geospatial.md). The rental car's device telemetry schema looks like the following JSON code:
+The rental cars are equipped with IoT devices that regularly send telemetry data to IoT Hub. The telemetry includes the current location and indicates whether the car's engine is running. The device location schema adheres to the IoT [Plug and Play schema for geospatial data]. The rental car's device telemetry schema looks like the following JSON code:
```JSON {
The rental cars are equipped with IoT devices that regularly send telemetry data
} ```
-In this tutorial, you only track one vehicle. After you set up the Azure services, you need to download the [rentalCarSimulation C# project](https://github.com/Azure-Samples/iothub-to-azure-maps-geofencing/tree/master/src/rentalCarSimulation) to run the vehicle simulator. The entire process, from event to function execution, is summarized in the following steps:
+In this tutorial, you only track one vehicle. After you set up the Azure services, you need to download the [rentalCarSimulation] C# project to run the vehicle simulator. The entire process, from event to function execution, is summarized in the following steps:
1. The in-vehicle device sends telemetry data to IoT Hub.
In this tutorial, you only track one vehicle. After you set up the Azure service
3. An Azure function is triggered because of its event subscription to device telemetry events.
-4. The function logs the vehicle device location coordinates, event time, and the device ID. It then uses the [Spatial Geofence Get API](/rest/api/maps/spatial/getgeofence) to determine whether the car has driven outside the geofence. If it has traveled outside the geofence boundaries, the function stores the location data received from the event into a blob container. The function also queries the [Search Address Reverse](/rest/api/maps/search/getsearchaddressreverse) to translate the coordinate location to a street address, and stores it with the rest of the device location data.
+4. The function logs the vehicle device location coordinates, event time, and the device ID. It then uses the [Spatial Geofence Get API] to determine whether the car has driven outside the geofence. If it has traveled outside the geofence boundaries, the function stores the location data received from the event into a blob container. The function also queries the [Search Address Reverse] API to translate the coordinate location to a street address, and stores it with the rest of the device location data.
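For orientation, the geofence check in step 4 boils down to a call like the following sketch against the [Spatial Geofence Get API]; the device ID, `udid`, coordinates, and subscription key are all placeholders:

```javascript
// Sketch: ask the Spatial Geofence Get API whether a device is inside a geofence.
const params = new URLSearchParams({
    'api-version': '1.0',
    deviceId: 'InVehicleDevice',
    udid: '<geofence-udid>', // udid of the uploaded geofence (placeholder)
    lat: '47.63529901',      // placeholder device coordinates
    lon: '-122.13355885',
    searchBuffer: '5',
    mode: 'EnterAndExit',
    'subscription-key': '<Your-Azure-Maps-Subscription-key>'
});

fetch(`https://us.atlas.microsoft.com/spatial/geofence/json?${params}`)
    .then(response => response.json())
    // A negative distance in a returned geometry means the device is inside the geofence.
    .then(result => console.log(result.geometries));
```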
The following diagram shows a high-level overview of the system.
The following figure highlights the geofence area in blue. The rental car's rout
## Create an Azure storage account
-To store car violation tracking data, create a [general-purpose v2 storage account](../storage/common/storage-account-overview.md) in your resource group. If you haven't created a resource group, follow the directions in [create a resource group](../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups). In this tutorial, you'll name your resource group *ContosoRental*.
+To store car violation tracking data, create a [general-purpose v2 storage account] in your resource group. If you haven't created a resource group, follow the directions in [create resource groups][resource group]. In this tutorial, you'll name your resource group *ContosoRental*.
-To create a storage account, follow the instructions in [create a storage account](../storage/common/storage-account-create.md?tabs=azure-portal). In this tutorial, name the storage account *contosorentalstorage*, but in general you can name it anything you like.
+To create a storage account, follow the instructions in [create a storage account]. In this tutorial, name the storage account *contosorentalstorage*, but in general you can name it anything you like.
When you successfully create your storage account, you then need to create a container to store logging data.
When you successfully create your storage account, you then need to create a con
## Upload a geofence
-Next, use the [Postman app](https://www.getpostman.com) to [upload the geofence](./geofence-geojson.md) to Azure Maps. The geofence defines the authorized geographical area for our rental vehicle. You'll be using the geofence in your Azure function to determine whether a car has moved outside the geofence area.
+Next, use the [Postman] app to [upload the geofence] to Azure Maps. The geofence defines the authorized geographical area for our rental vehicle. You'll be using the geofence in your Azure function to determine whether a car has moved outside the geofence area.
Follow these steps to upload the geofence by using the Azure Maps Data Upload API:
Follow these steps to upload the geofence by using the Azure Maps Data Upload AP
In the URL path, the `geojson` value against the `dataFormat` parameter represents the format of the data being uploaded.
-3. Select **Body** > **raw** for the input format, and choose **JSON** from the drop-down list. [Open the JSON data file](https://raw.githubusercontent.com/Azure-Samples/iothub-to-azure-maps-geofencing/master/src/Data/geofence.json?token=AKD25BYJYKDJBJ55PT62N4C5LRNN4), and copy the JSON into the body section. Select **Send**.
+3. Select **Body** > **raw** for the input format, and choose **JSON** from the drop-down list. [Open the JSON data file], and copy the JSON into the body section. Select **Send**.
4. Select **Send** and wait for the request to process. After the request completes, go to the **Headers** tab of the response. Copy the value of the **Operation-Location** key, which is the `status URL`.
Follow these steps to upload the geofence by using the Azure Maps Data Upload AP
https://us.atlas.microsoft.com/mapData/operations/{operationId}?api-version=2.0 ```
-5. To check the status of the API call, create a **GET** HTTP request on the `status URL`. You'll need to append your primary subscription key to the URL for authentication. The **GET** request should like the following URL:
+5. To check the status of the API call, create a **GET** HTTP request on the `status URL`. You'll need to append your subscription key to the URL for authentication. The **GET** request should look like the following URL:
```HTTP https://us.atlas.microsoft.com/mapData/{operationId}/status?api-version=2.0&subscription-key={Your-Azure-Maps-Subscription-key}
Follow these steps to upload the geofence by using the Azure Maps Data Upload AP
IoT Hub enables secure and reliable bi-directional communication between an IoT application and the devices it manages. For this tutorial, you want to get information from your in-vehicle device to determine the location of the rental car. In this section, you create an IoT hub within the *ContosoRental* resource group. This hub will be responsible for publishing your device telemetry events.
-To create an IoT hub in the *ContosoRental* resource group, follow the steps in [create an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp#create-an-iot-hub).
+To create an IoT hub in the *ContosoRental* resource group, follow the steps in [create an IoT hub].
## Register a device in your IoT hub
-Devices can't connect to the IoT hub unless they're registered in the IoT hub identity registry. Here, you'll create a single device with the name, *InVehicleDevice*. To create and register the device within your IoT hub, follow the steps in [register a new device in the IoT hub](../iot-hub/iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub). Make sure to copy the primary connection string of your device. You'll need it later.
+Devices can't connect to the IoT hub unless they're registered in the IoT hub identity registry. Here, you'll create a single device with the name, *InVehicleDevice*. To create and register the device within your IoT hub, follow the steps in [register a new device in the IoT hub]. Make sure to copy the primary connection string of your device. You'll need it later.
## Create a function and add an Event Grid subscription
-Azure Functions is a serverless compute service that allows you to run small pieces of code ("functions"), without the need to explicitly provision or manage compute infrastructure. To learn more, see [Azure Functions](../azure-functions/functions-overview.md).
+Azure Functions is a serverless compute service that allows you to run small pieces of code ("functions"), without the need to explicitly provision or manage compute infrastructure. To learn more, see [Azure Functions].
A function is triggered by a certain event. Here, you'll create a function that is triggered by an Event Grid trigger. Create the relationship between trigger and function by creating an event subscription for IoT Hub device telemetry events. When a device telemetry event occurs, your function is called as an endpoint, and receives the relevant data for the device you previously registered in IoT Hub.
-Here's the [C# script code that your function will contain](https://github.com/Azure-Samples/iothub-to-azure-maps-geofencing/blob/master/src/Azure%20Function/run.csx).
+Here's the [C# script] code that your function will contain.
Now, set up your Azure function.
:::image type="content" source="./media/tutorial-iot-hub-maps/rental-app.png" alt-text="Screenshot of create a function app.":::
-1. For **Storage account**, select the storage account you created in [Create an Azure storage account](#create-an-azure-storage-account). Select **Review + create**.
+1. For **Storage account**, select the storage account you created in [Create an Azure storage account]. Select **Review + create**.
1. Review the function app details, and select **Create**.
Now, set up your Azure function.
1. Give the function a name. In this tutorial, you'll use the name, *GetGeoFunction*, but in general you can use any name you like. Select **Create function**.
-1. In the left menu, select the **Code + Test** pane. Copy and paste the [C# script](https://github.com/Azure-Samples/iothub-to-azure-maps-geofencing/blob/master/src/Azure%20Function/run.csx) into the code window.
+1. In the left menu, select the **Code + Test** pane. Copy and paste the [C# script] into the code window.
:::image type="content" source="./media/tutorial-iot-hub-maps/function-code.png" alt-text="Screenshot of copying and pasting code into the function window."::: 1. In the C# code, replace the following parameters:
- * Replace **SUBSCRIPTION_KEY** with your Azure Maps account primary subscription key.
- * Replace **UDID** with the `udid` of the geofence you uploaded in [Upload a geofence](#upload-a-geofence).
- * The `CreateBlobAsync` function in the script creates a blob per event in the data storage account. Replace the **ACCESS_KEY**, **ACCOUNT_NAME**, and **STORAGE_CONTAINER_NAME** with your storage account's access key, account name, and data storage container. These values were generated when you created your storage account in [Create an Azure storage account](#create-an-azure-storage-account).
+ * Replace **SUBSCRIPTION_KEY** with your Azure Maps account subscription key.
+ * Replace **UDID** with the `udid` of the geofence you uploaded in [Upload a geofence].
+ * The `CreateBlobAsync` function in the script creates a blob per event in the data storage account. Replace the **ACCESS_KEY**, **ACCOUNT_NAME**, and **STORAGE_CONTAINER_NAME** with your storage account's access key, account name, and data storage container. These values were generated when you created your storage account in [Create an Azure storage account].
1. In the left menu, select the **Integration** pane. Select **Event Grid Trigger** in the diagram. Type in a name for the trigger, *eventGridEvent*, and select **Create Event Grid subscription**.
Now, set up your Azure function.
## Filter events by using IoT Hub message routing
-When you add an Event Grid subscription to the Azure function, a messaging route is automatically created in the specified IoT hub. Message routing allows you to route different data types to various endpoints. For example, you can route device telemetry messages, device life-cycle events, and device twin change events. For more information, see [Use IoT Hub message routing](../iot-hub/iot-hub-devguide-messages-d2c.md).
+When you add an Event Grid subscription to the Azure function, a messaging route is automatically created in the specified IoT hub. Message routing allows you to route different data types to various endpoints. For example, you can route device telemetry messages, device life-cycle events, and device twin change events. For more information, see [Use IoT Hub message routing].
:::image type="content" source="./media/tutorial-iot-hub-maps/hub-route.png" alt-text="Screenshot of message routing in IoT hub.":::
In your example scenario, you only want to receive messages when the rental car
:::image type="content" source="./media/tutorial-iot-hub-maps/hub-filter.png" alt-text="Screenshot of filter routing messages."::: >[!TIP]
->There are various ways to query IoT device-to-cloud messages. To learn more about message routing syntax, see [IoT Hub message routing](../iot-hub/iot-hub-devguide-routing-query-syntax.md).
+>There are various ways to query IoT device-to-cloud messages. To learn more about message routing syntax, see [IoT Hub message routing].
## Send telemetry data to IoT Hub
-When your Azure function is running, you can now send telemetry data to the IoT hub, which will route it to Event Grid. Use a C# application to simulate location data for an in-vehicle device of a rental car. To run the application, you need [.NET Core SDK 3.1](https://dotnet.microsoft.com/download/dotnet/3.1) on your development computer. Follow these steps to send simulated telemetry data to the IoT hub:
+When your Azure function is running, you can now send telemetry data to the IoT hub, which will route it to Event Grid. Use a C# application to simulate location data for an in-vehicle device of a rental car. To run the application, you need [.NET Core SDK 3.1] on your development computer. Follow these steps to send simulated telemetry data to the IoT hub:
-1. If you haven't done so already, download the [rentalCarSimulation](https://github.com/Azure-Samples/iothub-to-azure-maps-geofencing/tree/master/src/rentalCarSimulation) C# project.
+1. If you haven't done so already, download the [rentalCarSimulation] C# project.
2. Open the `simulatedCar.cs` file in a text editor of your choice, and replace the value of the `connectionString` with the one you saved when you registered the device. Save changes to the file.
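With the connection string saved, a minimal sketch of running the simulation from a terminal (assuming the .NET Core SDK is installed and you start in the folder containing the downloaded project):

```
# Run the rental car simulator; it sends simulated location telemetry to the IoT hub.
cd rentalCarSimulation
dotnet run
```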
The following map shows four vehicle location points outside the geofence. Each
To explore the Azure Maps APIs used in this tutorial, see:
-* [Get Search Address Reverse](/rest/api/maps/search/getsearchaddressreverse)
-* [Get Geofence](/rest/api/maps/spatial/getgeofence)
+* [Get Search Address Reverse]
+* [Get Geofence]
For a complete list of Azure Maps REST APIs, see:
-* [Azure Maps REST APIs](/rest/api/maps/spatial/getgeofence)
+* [Azure Maps REST APIs]
-* [IoT Plug and Play](../iot-develop/index.yml)
+* [IoT Plug and Play]
To get a list of devices that are Azure certified for IoT, visit:
-* [Azure certified devices](https://devicecatalog.azure.com/)
+* [Azure certified devices]
## Clean up resources
To learn more about how to send device-to-cloud telemetry, and the other way aro
> [!div class="nextstepaction"] > [Send telemetry from a device](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp)+
+[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+[free account]: https://azure.microsoft.com/free/
+[resource group]: ../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups
+[rentalCarSimulation]: https://github.com/Azure-Samples/iothub-to-azure-maps-geofencing/tree/master/src/rentalCarSimulation
+[Postman]: https://www.postman.com/
+[Plug and Play schema for geospatial data]: https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v1-preview/schemas/geospatial.md
+[Spatial Geofence Get API]: /rest/api/maps/spatial/getgeofence
+[Search Address Reverse]: /rest/api/maps/search/getsearchaddressreverse
+[general-purpose v2 storage account]: ../storage/common/storage-account-overview.md
+[create a storage account]: ../storage/common/storage-account-create.md?tabs=azure-portal
+[upload the geofence]: ./geofence-geojson.md
+[Open the JSON data file]: https://raw.githubusercontent.com/Azure-Samples/iothub-to-azure-maps-geofencing/master/src/Data/geofence.json?token=AKD25BYJYKDJBJ55PT62N4C5LRNN4
+[create an IoT hub]: ../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp#create-an-iot-hub
+[register a new device in the IoT hub]: ../iot-hub/iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub
+[Azure Functions]: ../azure-functions/functions-overview.md
+[C# script]: https://github.com/Azure-Samples/iothub-to-azure-maps-geofencing/blob/master/src/Azure%20Function/run.csx
+[Create an Azure storage account]: #create-an-azure-storage-account
+[Upload a geofence]: #upload-a-geofence
+[Use IoT Hub message routing]: ../iot-hub/iot-hub-devguide-messages-d2c.md
+[IoT Hub message routing]: ../iot-hub/iot-hub-devguide-routing-query-syntax.md
+[.NET Core SDK 3.1]: https://dotnet.microsoft.com/download/dotnet/3.1
+[Get Search Address Reverse]: /rest/api/maps/search/getsearchaddressreverse
+[Get Geofence]: /rest/api/maps/spatial/getgeofence
+[Azure Maps REST APIs]: /rest/api/maps/spatial/getgeofence
+[IoT Plug and Play]: ../iot-develop/index.yml
+[Azure certified devices]: https://devicecatalog.azure.com/
azure-maps Tutorial Prioritized Routes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-prioritized-routes.md
In this tutorial, you learn how to:
## Prerequisites
-1. An [Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account).
-
-1. A [subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account).
+* An [Azure Maps account]
+* A [subscription key]
> [!NOTE]
-> For more information on authentication in Azure Maps, see [manage authentication in Azure Maps](how-to-manage-authentication.md).
+> For more information on authentication in Azure Maps, see [manage authentication in Azure Maps].
## Create a new web page using the map control API
The next tutorial demonstrates the process of creating a simple store locator us
> [!div class="nextstepaction"] > [Create a store locator using Azure Maps](./tutorial-create-store-locator.md)+
+[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+[manage authentication in Azure Maps]: how-to-manage-authentication.md
azure-maps Tutorial Route Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-route-location.md
# Tutorial: How to display route directions using Azure Maps Route service and Map control
-This tutorial shows you how to use the Azure Maps [Route service API](/rest/api/maps/route) and [Map control](./how-to-use-map-control.md) to display route directions from start to end point. In this tutorial, you'll learn how to:
+This tutorial shows you how to use the Azure Maps [Route service API] and [Map control] to display route directions from start to end point. In this tutorial, you'll learn how to:
> [!div class="checklist"] > * Create and display the Map control on a web page.
-> * Define the display rendering of the route by defining [Symbol layers](map-add-pin.md) and [Line layers](map-add-line-layer.md).
+> * Define the display rendering of the route by defining [Symbol layers] and [Line layers].
> * Create and add GeoJSON objects to the Map to represent start and end points.
-> * Get route directions from start and end points using the [Get Route directions API](/rest/api/maps/route/getroutedirections).
+> * Get route directions from start and end points using the [Get Route directions API].
-See the [route](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/Samples/Tutorials/Route) tutorial in GitHub for the source code. See [Route to a destination](https://samples.azuremaps.com/?sample=route-to-a-destination) for a live sample.
+See the [route] tutorial in GitHub for the source code. See [Route to a destination] for a live sample.
## Prerequisites
-1. An [Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account).
-2. An [Azure Maps primary subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account).
+* An [Azure Maps account]
+* A [subscription key]
<a id="getcoordinates"></a>
The following steps show you how to create and display the Map control in a web
* The `onload` event in the body of the page calls the `GetMap` function when the body of the page has loaded. * The `GetMap` function will contain the inline JavaScript code used to access the Azure Maps APIs. It is added in the next step.
-3. Next, add the following JavaScript code to the `GetMap` function, just beneath the code added in the last step. This code creates a map control and initializes it using your Azure Maps primary subscription keys that you provide. Make sure and replace the string `<Your Azure Maps Key>` with the Azure Maps primary key that you copied from your Maps account.
+3. Next, add the following JavaScript code to the `GetMap` function, just beneath the code added in the last step. This code creates a map control and initializes it using the Azure Maps subscription key that you provide. Make sure to replace the string `<Your Azure Maps Key>` with the Azure Maps subscription key that you copied from your Maps account.
```javascript //Instantiate a map object var map = new atlas.Map('myMap', {
- // Replace <Your Azure Maps Key> with your Azure Maps primary subscription key. https://aka.ms/am-primaryKey
+ // Replace <Your Azure Maps Key> with your Azure Maps subscription key. https://aka.ms/am-primaryKey
authOptions: { authType: 'subscriptionKey', subscriptionKey: '<Your Azure Maps Key>'
The following steps show you how to create and display the Map control in a web
Some things to know about the above JavaScript: * This is the core of the `GetMap` function, which initializes the Map Control API for your Azure Maps account key.
- * [atlas](/javascript/api/azure-maps-control/atlas) is the namespace that contains the Azure Maps API and related visual components.
- * [atlas.Map](/javascript/api/azure-maps-control/atlas.map) provides the control for a visual and interactive web map.
+ * [atlas] is the namespace that contains the Azure Maps API and related visual components.
+ * [atlas.Map] provides the control for a visual and interactive web map.
4. Save your changes to the file and open the HTML page in a browser. The map shown is the most basic map that you can make by calling `atlas.Map` using your account key.
The following steps show you how to create and display the Map control in a web
## Define route display rendering
-In this tutorial, we'll render the route using a line layer. The start and end points will be rendered using a symbol layer. For more information on adding line layers, see [Add a line layer to a map](map-add-line-layer.md). To learn more about symbol layers, see [Add a symbol layer to a map](map-add-pin.md).
+In this tutorial, we'll render the route using a line layer. The start and end points will be rendered using a symbol layer. For more information on adding line layers, see [Add a line layer to a map](map-add-line-layer.md). To learn more about symbol layers, see [Add a symbol layer to a map].
1. In the `GetMap` function, after initializing the map, add the following JavaScript code.
In this tutorial, we'll render the route using a line layer. The start and end p
* In the map control's `ready` event handler, a data source is created to store the route from start to end point. * To define how the route line will be rendered, a line layer is created and attached to the data source. To ensure that the route line doesn't cover up the road labels, we've passed a second parameter with the value of `'labels'`.
- Next, a symbol layer is created and attached to the data source. This layer specifies how the start and end points are rendered.Expressions have been added to retrieve the icon image and text label information from properties on each point object. To learn more about expressions, see [Data-driven style expressions](data-driven-style-expressions-web-sdk.md).
+ Next, a symbol layer is created and attached to the data source. This layer specifies how the start and end points are rendered. Expressions have been added to retrieve the icon image and text label information from properties on each point object. To learn more about expressions, see [Data-driven style expressions].
2. Next, set the start point at Microsoft, and the end point at a gas station in Seattle. Do this by appending the following code in the Map control's `ready` event handler:
In this tutorial, we'll render the route using a line layer. The start and end p
Some things to know about the above JavaScript:
- * This code creates two [GeoJSON Point objects](https://en.wikipedia.org/wiki/GeoJSON) to represent start and end points, which are then added to the data source.
+ * This code creates two [GeoJSON Point objects] to represent start and end points, which are then added to the data source.
 * The last block of code sets the camera view using the latitude and longitude of the start and end points. * The start and end points are added to the data source. * The bounding box for the start and end points is calculated using the `atlas.data.BoundingBox.fromData` function. This bounding box is used to set the map camera's view over the entire route using the `map.setCamera` function. * Padding is added to compensate for the pixel dimensions of the symbol icons.
- For more information about the Map control's setCamera property, see the [setCamera(CameraOptions | CameraBoundsOptions & AnimationOptions)](/javascript/api/azure-maps-control/atlas.map#setcamera-cameraoptionscameraboundsoptionsanimationoptions-) property.
+ For more information about the Map control's setCamera property, see the [setCamera(CameraOptions | CameraBoundsOptions & AnimationOptions)] property.
3. Save **MapRoute.html** and refresh your browser. The map is now centered over Seattle. The blue teardrop pin marks the start point. The blue round pin marks the end point.
In this tutorial, we'll render the route using a line layer. The start and end p
This section shows you how to use the Azure Maps Route Directions API to get route directions and the estimated time of arrival from one point to another. > [!TIP]
-> The Azure Maps Route services offer APIs to plan routes based on different route types such as *fastest*, *shortest*, *eco*, or *thrilling* routes based on distance, traffic conditions, and mode of transport used. The service also lets users plan future routes based on historical traffic conditions. Users can see the prediction of route durations for any given time. For more information, see [Get Route directions API](/rest/api/maps/route/getroutedirections).
+> The Azure Maps Route services offer APIs to plan routes based on different route types such as *fastest*, *shortest*, *eco*, or *thrilling* routes based on distance, traffic conditions, and mode of transport used. The service also lets users plan future routes based on historical traffic conditions. Users can see the prediction of route durations for any given time. For more information, see [Get Route directions API].
1. In the `GetMap` function, inside the control's `ready` event handler, add the following to the JavaScript code.
This section shows you how to use the Azure Maps Route Directions API to get rou
var routeURL = new atlas.service.RouteURL(pipeline); ```
- * Use [MapControlCredential](/javascript/api/azure-maps-rest/atlas.service.mapcontrolcredential) to share authentication between a map control and the service module when creating a new [pipeline](/javascript/api/azure-maps-rest/atlas.service.pipeline) object.
+ * Use [MapControlCredential] to share authentication between a map control and the service module when creating a new [pipeline] object.
- * The [routeURL](/javascript/api/azure-maps-rest/atlas.service.routeurl) represents a URL to Azure Maps [Route](/rest/api/maps/route) operations.
+ * The [routeURL] represents a URL to Azure Maps [Route] operations.
2. After setting up credentials and the URL, append the following code at the end of the control's `ready` event handler.
This section shows you how to use the Azure Maps Route Directions API to get rou
:::image type="content" source="./media/tutorial-route-location/map-route.png" alt-text="A screenshot showing a map that demonstrates the Azure Map control and Route service.":::
-* For the completed code used in this tutorial, see the [route](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/Samples/Tutorials/Route) tutorial on GitHub.
-* To view this sample live, see [Route to a destination](https://samples.azuremaps.com/?sample=route-to-a-destination) on the **Azure Maps Code Samples** site.
+* For the completed code used in this tutorial, see the [route] tutorial on GitHub.
+* To view this sample live, see [Route to a destination] on the **Azure Maps Code Samples** site.
## Next steps
The next tutorial shows you how to create a route query with restrictions, like
> [!div class="nextstepaction"] > [Find routes for different modes of travel](./tutorial-prioritized-routes.md)+
+[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+[Route service API]: /rest/api/maps/route
+[Map control]: ./how-to-use-map-control.md
+[Symbol layers]: map-add-pin.md
+[Line layers]: map-add-line-layer.md
+[Get Route directions API]: /rest/api/maps/route/getroutedirections
+[route]: https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/Samples/Tutorials/Route
+[Route to a destination]: https://samples.azuremaps.com/?sample=route-to-a-destination
+[atlas]: /javascript/api/azure-maps-control/atlas
+[atlas.Map]: /javascript/api/azure-maps-control/atlas.map
+[Add a symbol layer to a map]: map-add-pin.md
+[Data-driven style expressions]: data-driven-style-expressions-web-sdk.md
+[GeoJSON Point objects]: https://en.wikipedia.org/wiki/GeoJSON
+[setCamera(CameraOptions | CameraBoundsOptions & AnimationOptions)]: /javascript/api/azure-maps-control/atlas.map#setcamera-cameraoptionscameraboundsoptionsanimationoptions-
+[MapControlCredential]: /javascript/api/azure-maps-rest/atlas.service.mapcontrolcredential
+[pipeline]: /javascript/api/azure-maps-rest/atlas.service.pipeline
+[routeURL]: /javascript/api/azure-maps-rest/atlas.service.routeurl
azure-maps Tutorial Search Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-search-location.md
This tutorial shows how to set up an account with Azure Maps, then use the Maps
<a id="createaccount"></a> <a id="getkey"></a>
-1. Sign in to the [Azure portal](https://portal.azure.com). If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
-2. [Make an Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account)
-3. [Obtain a primary subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account), also known as the primary key or the subscription key. For more information on authentication in Azure Maps, see [manage authentication in Azure Maps](how-to-manage-authentication.md).
+If you don't have an Azure subscription, create a [free account] before you begin.
+
+* An [Azure Maps account]
+* A [subscription key]
+
+> [!NOTE]
+> For more information on authentication in Azure Maps, see [manage authentication in Azure Maps].
<a id="createmap"></a>
The Map Control API is a convenient client library. This API allows you to easil
var map = new atlas.Map("myMap", { view: 'Auto',
- // Add your Azure Maps primary subscription key. https://aka.ms/am-primaryKey
+ // Add your Azure Maps subscription key. https://aka.ms/am-primaryKey
authOptions: { authType: 'subscriptionKey', subscriptionKey: '<Your Azure Maps Key>'
The next tutorial demonstrates how to display a route between two locations.
> [!div class="nextstepaction"] > [Route to a destination](./tutorial-route-location.md)+
+[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+[free account]: https://azure.microsoft.com/free/
+[manage authentication in Azure Maps]: how-to-manage-authentication.md
azure-maps Weather Service Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/weather-service-tutorial.md
In this tutorial, you will:
## Prerequisites
-To complete this tutorial, you first need to:
+If you don't have an Azure subscription, create a [free account] before you begin.
-1. Create an Azure Maps account subscription in the S0 pricing tier by following instructions in [Create an account](quick-demo-map-app.md#create-an-azure-maps-account).
-2. Get the primary subscription key for your account, follow the instructions in [get primary key](quick-demo-map-app.md#get-the-primary-key-for-your-account).
+* An [Azure Maps account]
+* A [subscription key]
-For more information on authentication in Azure Maps, see [manage authentication in Azure Maps](./how-to-manage-authentication.md).
+> [!NOTE]
+> For more information on authentication in Azure Maps, see [manage authentication in Azure Maps].
To get familiar with Azure notebooks and to know how to get started, follow the instructions in [Create an Azure Notebook](./tutorial-ev-routing.md#create-an-azure-notebooks-project).
-> [!Note]
+> [!NOTE]
> The Jupyter notebook file for this project can be downloaded from the [Weather Maps Jupyter Notebook repository](https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook/tree/master/AzureMapsJupyterSamples/Tutorials/Analyze%20Weather%20Data). ## Load the required modules and frameworks
To learn more about Azure Notebooks, see
> [!div class="nextstepaction"] > [Azure Notebooks](https://notebooks.azure.com)+
+[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+[free account]: https://azure.microsoft.com/free/
+[manage authentication in Azure Maps]: how-to-manage-authentication.md
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
Title: Application Insights overview description: Learn how Application Insights in Azure Monitor provides performance management and usage tracking of your live web application. Previously updated : 02/14/2023 Last updated : 03/22/2023 # Application Insights overview
azure-monitor Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md
description: Monitor ASP.NET Core web applications for availability, performance
ms.devlang: csharp Previously updated : 01/24/2023 Last updated : 03/22/2023 # Application Insights for ASP.NET Core applications
azure-monitor Asp Net Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-dependencies.md
Title: Dependency tracking in Application Insights | Microsoft Docs description: Monitor dependency calls from your on-premises or Azure web application with Application Insights. Previously updated : 01/09/2023 Last updated : 03/22/2023 ms.devlang: csharp
azure-monitor Asp Net Trace Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-trace-logs.md
description: Search logs generated by Trace, NLog, or Log4Net.
ms.devlang: csharp Previously updated : 11/15/2022 Last updated : 03/22/2023
azure-monitor Asp Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net.md
Title: Configure monitoring for ASP.NET with Azure Application Insights | Microsoft Docs description: Configure performance, availability, and user behavior analytics tools for your ASP.NET website hosted on-premises or in Azure. Previously updated : 02/14/2023 Last updated : 03/22/2023 ms.devlang: csharp
azure-monitor Availability Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-alerts.md
Title: Set up availability alerts with Application Insights description: Learn how to set up web tests in Application Insights. Get alerts if a website becomes unavailable or responds slowly. Previously updated : 12/20/2022 Last updated : 03/22/2023
azure-monitor Availability Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-azure-functions.md
Title: Create and run custom availability tests by using Azure Functions description: This article explains how to create an Azure function with TrackAvailability() that will run periodically according to the configuration given in a TimerTrigger function. Previously updated : 01/06/2023 Last updated : 03/22/2023 ms.devlang: csharp
azure-monitor Availability Private Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-private-test.md
Title: Private availability testing - Azure Monitor Application Insights description: Learn how to use availability tests on internal servers that run behind a firewall with private testing. Previously updated : 11/15/2022 Last updated : 03/22/2023
azure-monitor Azure Vm Vmss Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-vm-vmss-apps.md
Title: Monitor performance on Azure VMs - Azure Application Insights description: Application performance monitoring for Azure virtual machines and virtual machine scale sets. Previously updated : 01/11/2023 Last updated : 03/22/2023 ms.devlang: csharp, java, javascript, python
azure-monitor Azure Web Apps Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-java.md
Title: Monitor Azure app services performance Java | Microsoft Docs description: Application performance monitoring for Azure app services using Java. Chart load and response time, dependency information, and set alerts on performance. Previously updated : 02/14/2023 Last updated : 03/22/2023 ms.devlang: java
azure-monitor Azure Web Apps Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net-core.md
Title: Monitor Azure App Service performance in .NET Core | Microsoft Docs description: Application performance monitoring for Azure App Service using ASP.NET Core. Chart load and response time, dependency information, and set alerts on performance. Previously updated : 01/09/2023 Last updated : 03/22/2023 ms.devlang: csharp
azure-monitor Configuration With Applicationinsights Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/configuration-with-applicationinsights-config.md
Title: ApplicationInsights.config reference - Azure | Microsoft Docs description: Enable or disable data collection modules and add performance counters and other parameters. Previously updated : 05/22/2019 Last updated : 03/22/2023 ms.devlang: csharp
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
Title: Migrate an Application Insights classic resource to a workspace-based resource - Azure Monitor | Microsoft Docs description: Learn how to upgrade your Application Insights classic resource to the new workspace-based model. Previously updated : 02/14/2023 Last updated : 03/22/2023
Most of the columns have the same name with different capitalization. KQL is cas
#### AppAvailabilityResults
-Legacy table: availability
+Legacy table: availabilityResults
|ApplicationInsights|Type|LogAnalytics|Type| |:|:|:|:|
azure-monitor Correlation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/correlation.md
Title: Azure Application Insights telemetry correlation | Microsoft Docs description: This article explains Application Insights telemetry correlation. Previously updated : 06/07/2019 Last updated : 03/22/2023 ms.devlang: csharp, java, javascript, python
azure-monitor Data Retention Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-retention-privacy.md
Title: Data retention and storage in Application Insights | Microsoft Docs description: Retention and privacy policy statement for Application Insights. Previously updated : 06/30/2020 Last updated : 03/22/2023
azure-monitor Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-addresses.md
Title: IP addresses used by Azure Monitor | Microsoft Docs description: This article discusses server firewall exceptions that are required by Azure Monitor Previously updated : 01/10/2023 Last updated : 03/22/2023
azure-monitor Java Standalone Sampling Overrides https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-sampling-overrides.md
To begin, create a configuration file named *applicationinsights.json*. Save it
## How it works
-`telemetryType` must be one of `request`, `dependency`, `trace` (log), or `exception`.
+`telemetryType` (`telemetryKind` in Application Insights 3.4.0) must be one of `request`, `dependency`, `trace` (log), or `exception`.
When a span is started, the type of span and the attributes present on it at that time are used to check if any of the sampling overrides match.
azure-monitor Opencensus Python Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python-request.md
Title: Incoming Request Tracking in Azure Application Insights with OpenCensus Python | Microsoft Docs description: Monitor request calls for your Python apps via OpenCensus Python. Previously updated : 8/19/2022 Last updated : 03/22/2023 ms.devlang: python
azure-monitor Opencensus Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python.md
Title: Monitor Python applications with Azure Monitor | Microsoft Docs description: This article provides instructions on how to wire up OpenCensus Python with Azure Monitor. Previously updated : 03/04/2023 Last updated : 03/22/2023 ms.devlang: python
azure-monitor Overview Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/overview-dashboard.md
Title: Application Insights Overview dashboard | Microsoft Docs description: Monitor applications with Application Insights and Overview dashboard functionality. Previously updated : 11/15/2022 Last updated : 03/22/2023 # Application Insights Overview dashboard
azure-monitor Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md
Title: Telemetry sampling in Azure Application Insights | Microsoft Docs description: How to keep the volume of telemetry under control. Previously updated : 02/14/2023 Last updated : 03/22/2023
azure-monitor Prometheus Api Promql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-api-promql.md
+
+ Title: Query Prometheus metrics using the API and PromQL
+description: Describes how to use the API to Query metrics in an Azure Monitor workspace using PromQL.
+++ Last updated : 09/28/2022+++
+# Query Prometheus metrics using the API and PromQL
+
+Azure Monitor managed service for Prometheus (preview) collects metrics from Azure Kubernetes clusters and stores them in an Azure Monitor workspace. PromQL (Prometheus query language) is a functional query language that allows you to query and aggregate time series data. Use PromQL to query and aggregate metrics stored in an Azure Monitor workspace.
+
+This article describes how to query an Azure Monitor workspace using PromQL via the REST API.
+For more information on PromQL, see [Querying Prometheus](https://prometheus.io/docs/prometheus/latest/querying/basics/).
+
+## Prerequisites
+To query an Azure Monitor workspace using PromQL, you need the following prerequisites:
+* An Azure Kubernetes cluster or remote Kubernetes cluster.
+* Azure Monitor managed service for Prometheus (preview) scraping metrics from a Kubernetes cluster.
+* An Azure Monitor workspace where Prometheus metrics are being stored.
+## Authentication
+
+To query your Azure Monitor workspace, authenticate using Azure Active Directory.
+The API supports Azure Active Directory authentication using client credentials. Register a client app with Azure Active Directory and request a token.
+
+To set up Azure Active Directory authentication, follow the steps below:
+
+1. Register an app with Azure Active Directory.
+1. Grant access for the app to your Azure Monitor workspace.
+1. Request a token.
++
+### Register an app with Azure Active Directory
+
+To register an app, follow the steps in [Register an App to request authorization tokens and work with APIs](../logs/api/register-app-for-token.md?tabs=portal).
+
+### Allow your app access to your workspace
+Allow your app to query data from your Azure Monitor workspace.
+
+1. Open your Azure Monitor workspace in the Azure portal.
+
+1. On the **Overview** page, take note of your query endpoint for use in your REST request.
+
+1. Select **Access control (IAM)**.
+
+1. Select **Add**, then **Add role assignment** from the Access Control (IAM) page.
+
+ :::image type="content" source="./media/prometheus-api-promql/access-control.png" lightbox="./media/prometheus-api-promql/access-control.png" alt-text="A screenshot showing the Azure Monitor workspace overview page.":::
+
+1. On the **Add role assignment** page, search for *Monitoring*.
+
+1. Select **Monitoring Data Reader**, and then select the **Members** tab.
+
+ :::image type="content" source="./media/prometheus-api-promql/add-role-assignment.png" lightbox="./media/prometheus-api-promql/add-role-assignment.png" alt-text="A screenshot showing the Add role assignment page.":::
+
+1. Select **Select members**.
+
+1. Search for the app that you registered and select it.
+
+1. Choose **Select**.
+
+1. Select **Review + assign**.
+
+ :::image type="content" source="./media/prometheus-api-promql/select-members.png" lightbox="./media/prometheus-api-promql/select-members.png" alt-text="A screenshot showing the Add role assignment, select members page.":::
+
+You've created your App registration and have assigned it access to query data from your Azure Monitor workspace. You can now generate a token and use it in a query.
++
+### Request a token
+Send the following request in the command prompt or by using a client like Postman.
+
+```shell
+curl -X POST 'https://login.microsoftonline.com/<tenant ID>/oauth2/token' \
+-H 'Content-Type: application/x-www-form-urlencoded' \
+--data-urlencode 'grant_type=client_credentials' \
+--data-urlencode 'client_id=<your apps client ID>' \
+--data-urlencode 'client_secret=<your apps client secret>' \
+--data-urlencode 'resource=https://prometheus.monitor.azure.com'
+```
+
+Sample response body:
+
+```JSON
+{
+ "token_type": "Bearer",
+ "expires_in": "86399",
+ "ext_expires_in": "86399",
+ "expires_on": "1672826207",
+ "not_before": "1672739507",
+ "resource": "https:/prometheus.monitor.azure.com",
+ "access_token": "eyJ0eXAiOiJKV1Qi....gpHWoRzeDdVQd2OE3dNsLIvUIxQ"
+}
+```
+
+Save the access token from the response for use in the following HTTP requests.
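+
+If you're working from a shell, a small convenience sketch (assuming `curl` and `jq` are installed) captures the token into a variable for the requests that follow:
+
+```shell
+# Request a token and extract the access_token field for reuse.
+TOKEN=$(curl -s -X POST 'https://login.microsoftonline.com/<tenant ID>/oauth2/token' \
+-H 'Content-Type: application/x-www-form-urlencoded' \
+--data-urlencode 'grant_type=client_credentials' \
+--data-urlencode 'client_id=<your apps client ID>' \
+--data-urlencode 'client_secret=<your apps client secret>' \
+--data-urlencode 'resource=https://prometheus.monitor.azure.com' | jq -r '.access_token')
+```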
+
+## Query endpoint
+
+Find your workspace's query endpoint on the Azure Monitor workspace overview page.
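+
+The endpoint is a regional hostname that forms the base of the request paths in the following examples. For instance, a hypothetical workspace named *k8s02-workspace-abcd* in the East US region would have a query endpoint like:
+
+```
+https://k8s02-workspace-abcd.eastus.prometheus.monitor.azure.com
+```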
++
+## Supported APIs
+The following queries are supported:
+
+### Instant queries
+ For more information, see [Instant queries](https://prometheus.io/docs/prometheus/latest/querying/api/#instant-queries)
+
+Path: `/api/v1/query`
+Examples:
+```
+POST https://k8s-02-workspace-abcd.eastus.prometheus.monitor.azure.com/api/v1/query
+--header 'Authorization: Bearer <access token>'
+--header 'Content-Type: application/x-www-form-urlencoded'
+--data-urlencode 'query=sum( \
+ container_memory_working_set_bytes \
+ * on(namespace,pod) \
+ group_left(workload, workload_type) \
+ namespace_workload_pod:kube_pod_owner:relabel{ workload_type="deployment"}) by (pod)'
+
+```
+```
+GET 'https://k8s02-workspace-abcd.eastus.prometheus.monitor.azure.com/api/v1/query?query=container_memory_working_set_bytes'
+--header 'Authorization: Bearer <access token>'
+```
+### Range queries
+For more information, see [Range queries](https://prometheus.io/docs/prometheus/latest/querying/api/#range-queries)
+Path: `/api/v1/query_range`
+Examples:
+```
+GET 'https://k8s02-workspace-abcd.eastus.prometheus.monitor.azure.com/api/v1/query_range?query=container_memory_working_set_bytes&start=2023-03-01T00:00:00.000Z&end=2023-03-20T00:00:00.000Z&step=6h'
+--header 'Authorization: Bearer <access token>'
+```
+
+```
+POST 'https://k8s02-workspace-abcd.eastus.prometheus.monitor.azure.com/api/v1/query_range'
+--header 'Authorization: Bearer <access token>'
+--header 'Content-Type: application/x-www-form-urlencoded'
+--data-urlencode 'query=up'
+--data-urlencode 'start=2023-03-01T20:10:30.781Z'
+--data-urlencode 'end=2023-03-20T20:10:30.781Z'
+--data-urlencode 'step=6h'
+```
+### Series
+For more information, see [Series](https://prometheus.io/docs/prometheus/latest/querying/api/#finding-series-by-label-matchers)
+
+Path: `/api/v1/series`
+Examples:
+```
+POST 'https://k8s02-workspace-abcd.eastus.prometheus.monitor.azure.com/api/v1/series'
+--header 'Authorization: Bearer <access token>'
+--header 'Content-Type: application/x-www-form-urlencoded'
+--data-urlencode 'match[]=kube_pod_info{pod="bestapp-123abc456d-4nmfm"}'
+
+```
+```
+GET 'https://k8s02-workspace-abcd.eastus.prometheus.monitor.azure.com/api/v1/series?match[]=container_network_receive_bytes_total{namespace="default-1669648428598"}'
+```
+
+### Labels
+
+For more information, see [Labels](https://prometheus.io/docs/prometheus/latest/querying/api/#getting-label-names)
+Path: `/api/v1/labels`
+Examples:
+```
+GET 'https://k8s02-workspace-abcd.eastus.prometheus.monitor.azure.com/api/v1/labels'
+
+```
+```
+POST 'https://k8s02-workspace-abcd.eastus.prometheus.monitor.azure.com/api/v1/labels'
+```
+
+### Label values
+For more information, see [Label values](https://prometheus.io/docs/prometheus/latest/querying/api/#querying-label-values).
+Path: `/api/v1/label/__name__/values`
++
+> [!NOTE]
+> `__name__` is the only supported version of this API and returns all metric names. No other `/api/v1/label/<label_name>/values` queries are supported.
+
+Example:
+```
+GET 'https://k8s02-workspace-abcd.eastus.prometheus.monitor.azure.com/api/v1/label/__name__/values'
+```
+
+For the full specification of the OSS Prometheus APIs, see [Prometheus HTTP API](https://prometheus.io/docs/prometheus/latest/querying/api/#http-api).
+
+## API limitations
+The following limitations are in addition to those detailed in the Prometheus specification.
++ Query must be scoped to a metric
+  Any time series fetch queries (`/series`, `/query`, or `/query_range`) must contain a `__name__` label matcher. That is, each query must be scoped to a metric. There can only be one `__name__` label matcher in a query.
++ Supported time range
+  + The `/query_range` API supports a time range of 32 days. This is the maximum time range allowed, including any range selectors specified in the query itself. For example, the query `rate(http_requests_total[1h])` for the last 24 hours actually queries data for 25 hours: a 24-hour range plus the 1 hour specified in the query itself.
+  + The `/series` API fetches data for a maximum 12-hour time range. If `endTime` isn't provided, endTime = time.now(). If the time range is greater than 12 hours, the `startTime` is set to `endTime - 12h`.
++ Ignored time range
+  Start time and end time provided with `/labels` and `/label/__name__/values` are ignored, and all retained data in the Azure Monitor workspace is queried.
++ Experimental features
+  Experimental features such as exemplars aren't supported.
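+
+For example, a `curl` sketch against the hypothetical workspace endpoint used earlier (with an illustrative metric name) shows the metric-scoping rule in practice:
+
+```shell
+# Accepted: the query names one metric, so it carries an implicit __name__ matcher.
+curl -G 'https://k8s02-workspace-abcd.eastus.prometheus.monitor.azure.com/api/v1/query' \
+--header 'Authorization: Bearer <access token>' \
+--data-urlencode 'query=container_memory_working_set_bytes{namespace="default"}'
+
+# Not accepted: a matcher-only query has no __name__ matcher, so it isn't scoped to a metric.
+curl -G 'https://k8s02-workspace-abcd.eastus.prometheus.monitor.azure.com/api/v1/query' \
+--header 'Authorization: Bearer <access token>' \
+--data-urlencode 'query={namespace="default"}'
+```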
+
+For more information on Prometheus metrics limits, see [Prometheus metrics](../../azure-monitor/service-limits.md#prometheus-metrics)
+
+## Next steps
+
+[Azure Monitor workspace overview (preview)](./azure-monitor-workspace-overview.md)
+[Manage an Azure Monitor workspace (preview)](./azure-monitor-workspace-manage.md)
+[Overview of Azure Monitor Managed Service for Prometheus (preview)](./prometheus-metrics-overview.md)
+[Query Prometheus metrics using Azure workbooks (preview)](./prometheus-workbooks.md)
azure-monitor Prometheus Metrics Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-enable.md
Use any of the following methods to install the Azure Monitor agent on your AKS
1. Open the **Azure Monitor workspaces** menu in the Azure portal and select your cluster. 2. Select **Managed Prometheus** to display a list of AKS clusters.
-3. Click **Configure** next to the cluster you want to enable.
+3. Select **Configure** next to the cluster you want to enable.
:::image type="content" source="media/prometheus-metrics-enable/azure-monitor-workspace-configure-prometheus.png" lightbox="media/prometheus-metrics-enable/azure-monitor-workspace-configure-prometheus.png" alt-text="Screenshot of Azure Monitor workspace with Prometheus configuration.":::
Use `az aks update` with the `--enable-azuremonitormetrics` option to install the
**Create a new default Azure Monitor workspace.**<br>
-If no Azure Monitor Workspace is specified, then a default Azure Monitor Workspace will be created in the `DefaultRG-<cluster_region>` following the format `DefaultAzureMonitorWorkspace-<mapped_region>`.
+If no Azure Monitor Workspace is specified, a default Azure Monitor Workspace is created in the `DefaultRG-<cluster_region>` following the format `DefaultAzureMonitorWorkspace-<mapped_region>`.
This Azure Monitor Workspace is in the region specified in [Region mappings](#region-mappings). ```azurecli
The output for each command looks similar to the following:
Following are optional parameters that you can use with the previous commands. - `--ksm-metric-annotations-allow-list` is a comma-separated list of Kubernetes annotations keys that will be used in the resource's labels metric. By default the metric contains only name and namespace labels. To include more annotations provide a list of resource names in their plural form and Kubernetes annotation keys, you would like to allow for them. A single `*` can be provided per resource instead to allow any annotations, but that has severe performance implications.-- `--ksm-metric-labels-allow-list` is a comma-separated list of more Kubernetes label keys that will be used in the resource's labels metric. By default the metric contains only name and namespace labels. To include more labels provide a list of resource names in their plural form and Kubernetes label keys, you would like to allow for them. A single `*` can be provided per resource instead to allow any labels, but that has severe performance implications.
+- `--ksm-metric-labels-allow-list` is a comma-separated list of more Kubernetes label keys that are used in the resource's labels metric. By default the metric contains only name and namespace labels. To include more labels, provide a list of resource names in their plural form and the Kubernetes label keys that you would like to allow for them. A single `*` can be provided per resource instead to allow any labels, but that has severe performance implications.
+- `--enable-windows-recording-rules` lets you enable the recording rule groups required for proper functioning of the Windows dashboards.
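+
+For illustration, a sketch combining these options with `az aks update` (the resource, label, and annotation names here are hypothetical; the `pods=[app]` syntax mirrors the kube-state-metrics allow-list format):
+
+```azurecli
+az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group> \
+  --ksm-metric-labels-allow-list "pods=[app],namespaces=[team]" \
+  --ksm-metric-annotations-allow-list "pods=[kubernetes.io/created-by]"
+```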
**Use annotations and labels.**
The output is similar to the following:
### Prerequisites - Register the `AKS-PrometheusAddonPreview` feature flag in the Azure Kubernetes clusters subscription with the following command in Azure CLI: `az feature register --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview`.-- If the Azure Managed Grafana instance is in a subscription other than the Azure Monitor Workspaces subscription, then please register the Azure Monitor Workspace subscription with the `Microsoft.Dashboard` resource provider following this [documentation](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
+- If the Azure Managed Grafana instance is in a subscription other than the Azure Monitor Workspaces subscription, register the Azure Monitor Workspace subscription with the `Microsoft.Dashboard` resource provider following this [documentation](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
- The Azure Monitor workspace and Azure Managed Grafana workspace must already be created. - The template needs to be deployed in the same resource group as the Azure Managed Grafana workspace. - Users with 'User Access Administrator' role in the subscription of the AKS cluster can be able to enable 'Monitoring Data Reader' role directly by deploying the template.
The final `azureMonitorWorkspaceResourceId` entry is already in the template and
- Users with 'User Access Administrator' role in the subscription of the AKS cluster can be able to enable 'Monitoring Data Reader' role directly by deploying the template. ### Minor Limitation while deploying through bicep
-Currently in bicep, there is no way to explicitly "scope" the Monitoring Data Reader role assignment on a string parameter "resource id" for Azure Monitor Workspace (like in ARM template). Bicep expects a value of type "resource | tenant" and currently there is no rest api [spec](https://github.com/Azure/azure-rest-api-specs) for Azure Monitor Workspace. So, as a workaround, the default scoping for Monitoring Data Reader role is on the resource group and thus the role is applied on the same Azure monitor workspace (by inheritance) which is the expected behavior. Thus, after deploying this bicep template, the Grafana resource will get read permissions in all the Azure Monitor Workspaces under the subscription.
+Currently in Bicep, there's no way to explicitly "scope" the Monitoring Data Reader role assignment on a string parameter "resource ID" for Azure Monitor Workspace (like in an ARM template). Bicep expects a value of type "resource | tenant" and currently there's no REST API [spec](https://github.com/Azure/azure-rest-api-specs) for Azure Monitor Workspace. So, as a workaround, the default scoping for the Monitoring Data Reader role is on the resource group, and the role is applied to the same Azure Monitor workspace (by inheritance), which is the expected behavior. Thus, after deploying this Bicep template, the Grafana resource gets read permissions in all the Azure Monitor workspaces under the subscription.
### Retrieve required values for Grafana resource
If you're using an existing Azure Managed Grafana instance that already has been
2. Download the parameter file from [here](https://aka.ms/azureprometheus-enable-bicep-template-parameters) and save it as **FullAzureMonitorMetricsProfileParameters.json** in the same directory as the main bicep template. 3. Download the [nested_azuremonitormetrics_dcra_clusterResourceId.bicep](https://aka.ms/nested_azuremonitormetrics_dcra_clusterResourceId) and [nested_azuremonitormetrics_profile_clusterResourceId.bicep](https://aka.ms/nested_azuremonitormetrics_profile_clusterResourceId) files in the same directory as the main bicep template. 4. Edit the values in the parameter file.
-5. The main bicep template creates all the required resources and uses two modules for creating the dcra and monitormetrics profile resources from the other two bicep files.
+5. The main bicep template creates all the required resources and uses two modules for creating the dcra and monitor metrics profile resources from the other two bicep files.
| Parameter | Value | |:|:|
The final `azureMonitorWorkspaceResourceId` entry is already in the template and
1. Download the main Azure policy rules template from [here](https://aka.ms/AddonPolicyMetricsProfile) and save it as **AddonPolicyMetricsProfile.rules.json**. 2. Download the parameter file from [here](https://aka.ms/AddonPolicyMetricsProfile.parameters) and save it as **AddonPolicyMetricsProfile.parameters.json** in the same directory as the rules template.
-3. Create the policy definition using a command like : `az policy definition create --name "(Preview) Prometheus Metrics addon" --display-name "(Preview) Prometheus Metrics addon" --mode Indexed --metadata version=1.0.0 category=Kubernetes --rules .\AddonPolicyMetricsProfile.rules.json --params .\AddonPolicyMetricsProfile.parameters.json`
+3. Create the policy definition using a command like: `az policy definition create --name "(Preview) Prometheus Metrics addon" --display-name "(Preview) Prometheus Metrics addon" --mode Indexed --metadata version=1.0.0 category=Kubernetes --rules .\AddonPolicyMetricsProfile.rules.json --params .\AddonPolicyMetricsProfile.parameters.json`
4. After creating the policy definition, go to Azure portal -> Policy -> Definitions and select the Policy definition you created.
-5. Click on 'Assign' and then go to the 'Parameters' tab and fill in the details. Then click 'Review + Create'.
+5. Select 'Assign' and then go to the 'Parameters' tab and fill in the details. Then select 'Review + Create'.
6. Now that the policy is assigned to the subscription, whenever you create a new cluster that doesn't have Prometheus enabled, the policy runs and deploys the resources. If you want to apply the policy to an existing AKS cluster, create a 'Remediation task' for that AKS cluster resource from the 'Policy Assignment'. 7. Now you should see metrics flowing in the existing linked Grafana resource, which is linked with the corresponding Azure Monitor Workspace.
-In case you create a new Managed Grafana resource from Azure portal, please link it with the corresponding Azure Monitor Workspace from the 'Linked Grafana Workspaces' tab of the relevant Azure Monitor Workspace page. Please assign the role 'Monitoring Data Reader' to the Grafana MSI on the Azure Monitor Workspace resource so that it can read data for displaying the charts, using the instructions below.
+If you create a new Managed Grafana resource from the Azure portal, link it with the corresponding Azure Monitor Workspace from the 'Linked Grafana Workspaces' tab of the relevant Azure Monitor Workspace page. Assign the role 'Monitoring Data Reader' to the Grafana MSI on the Azure Monitor Workspace resource so that it can read data for displaying the charts, using the instructions below.
1. From the **Overview** page for the Azure Managed Grafana instance in the Azure portal, select **JSON view**.
In case you create a new Managed Grafana resource from Azure portal, please link
4. Select `Monitoring Data Reader`. 5. Select **Managed identity** and then **Select members**. 6. Select the **system-assigned managed identity** with the `principalId` from the Grafana resource.
-7. Click **Select** and then **Review+assign**.
+7. Select **Select** and then **Review+assign**.
### Deploy template
Deploy the template with the parameter file using any valid method for deploying
### Limitations -- Ensure that you update the `kube-state metrics` Annotations and Labels list with proper formatting. There's a limitation in the Resource Manager template deployments that require exact values in the `kube-state` metrics pods. If the kuberenetes pod has any issues with malformed parameters and isn't running, then the feature won't work as expected.
+- Ensure that you update the `kube-state-metrics` Annotations and Labels list with proper formatting. There's a limitation in the Resource Manager template deployments that requires exact values in the `kube-state-metrics` pods. If the Kubernetes pod has any issues with malformed parameters and isn't running, the feature won't work as expected.
- A data collection rule and data collection endpoint are created with the name `MSProm-\<short-cluster-region\>-\<cluster-name\>`. These names can't currently be modified.
- You must get the existing Azure Monitor workspace integrations for a Grafana workspace and update the Resource Manager template with them; otherwise the template overwrites and removes the existing integrations from the Grafana workspace.
+## Enable Windows metrics collection
+
+As of version 6.4.0-main-02-22-2023-3ee44b9e, Windows metric collection is enabled for AKS clusters. Onboarding to the Azure Monitor Metrics addon enables the Windows DaemonSet pods to start running on your node pools. Both Windows Server 2019 and Windows Server 2022 are supported. Follow these steps to enable the pods to collect metrics from your Windows node pools.
+
+1. Manually install the Windows exporter on AKS nodes to access Windows metrics.
+ Enable the following collectors:
+
+ * `[defaults]`
+ * `container`
+ * `memory`
+ * `process`
+ * `cpu_info`
+
+ Deploy the [windows-exporter-daemonset YAML](https://github.com/prometheus-community/windows_exporter/blob/master/kubernetes/windows-exporter-daemonset.yaml) file:
+ ```
+ kubectl apply -f windows-exporter-daemonset.yaml
+ ```
+2. Apply the [ama-metrics-settings-configmap](https://github.com/Azure/prometheus-collector/blob/main/otelcollector/configmaps/ama-metrics-settings-configmap.yaml) to your cluster, setting the `windowsexporter` and `windowskubeproxy` booleans to `true` (see the sketch after these steps). For more information, see [Metrics addon settings configmap](./prometheus-metrics-scrape-configuration.md#metrics-addon-settings-configmap).
+3. While onboarding, enable the recording rules required for the default dashboards.
+
+ * For CLI, include the option `--enable-windows-recording-rules`.
+ * For ARM template, Bicep, or Policy, set `enableWindowsRecordingRules` to `true` in the parameters file.
+
+ If the cluster is already onboarded to Azure Monitor Metrics, to enable Windows recording rule groups, use this [ARM template](https://github.com/Azure/prometheus-collector/blob/kaveesh/windows_recording_rules/AddonArmTemplate/WindowsRecordingRuleGroupTemplate/WindowsRecordingRules.json) and [parameters](https://github.com/Azure/prometheus-collector/blob/kaveesh/windows_recording_rules/AddonArmTemplate/WindowsRecordingRuleGroupTemplate/WindowsRecordingRulesParameters.json) file to create the rule groups.
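+
+A minimal sketch for step 2, assuming you edit the configmap locally before applying it; the raw URL is derived from the configmap link above:
+
+```
+curl -sL -o ama-metrics-settings-configmap.yaml https://raw.githubusercontent.com/Azure/prometheus-collector/main/otelcollector/configmaps/ama-metrics-settings-configmap.yaml
+# Edit the file so that windowsexporter and windowskubeproxy are set to true, then:
+kubectl apply -f ama-metrics-settings-configmap.yaml
+```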
+ ## Verify Deployment
-Run the following command to verify that the DaemonSet was deployed properly:
+Run the following command to verify that the DaemonSet was deployed properly on the Linux node pools:
```
kubectl get ds ama-metrics-node --namespace=kube-system

NAME               DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
ama-metrics-node   1         1         1       1            1           <none>          10h
```
+Run the following command to verify that the DaemonSet was deployed properly on the Windows node pools:
+
+```
+kubectl get ds ama-metrics-win-node --namespace=kube-system
+```
+
+The output should resemble the following:
+
+```
+User@aksuser:~$ kubectl get ds ama-metrics-win-node --namespace=kube-system
+NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
+ama-metrics-win-node 3 3 3 3 3 <none> 10h
+```
+ Run the following command to verify that the ReplicaSets were deployed properly: ```
ama-metrics-ksm-5fcf8dffcd 1 1 1 11h
## Feature Support - ARM64 and Mariner nodes are supported.-- HTTP Proxy is supported and will use the same settings as the HTTP Proxy settings for the AKS cluster configured with [these instructions](/articles/aks/http-proxy.md).
+- HTTP Proxy is supported and will use the same settings as the HTTP Proxy settings for the AKS cluster configured with [these instructions](../../../articles/aks/http-proxy.md).
## Limitations - CPU and Memory requests and limits can't be changed for Container insights metrics addon. If changed, they'll be reconciled and replaced by original values in a few seconds.-- Azure Monitor Private Link (AMPLS) is not currently supported.
+- Azure Monitor Private Link (AMPLS) isn't currently supported.
- Only public clouds are currently supported. ## Uninstall metrics addon Currently, Azure CLI is the only option to remove the metrics addon and stop sending Prometheus metrics to Azure Monitor managed service for Prometheus.
-If you don't already have it, install the aks-preview extension with the following command.
+Install the `aks-preview` extension using the following command:
-The `aks-preview` extension needs to be installed using the following command. For more information on how to install a CLI extension, see [Use and manage extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview).
+```azurecli
+az extension add --name aks-preview
+```
+For more information on installing a CLI extension, see [Use and manage extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview).
+
+> [!NOTE]
+> Upgrade your Azure CLI to the latest version and ensure that the aks-preview extension version you're using is at least `0.5.132`. Find your current versions by using the `az version` command.
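After the extension is installed, remove the addon with `az aks update`. The flag name below is an assumption for the preview CLI; confirm it with `az aks update --help`.

```azurecli
# Sketch: disable managed Prometheus metrics collection on the cluster.
# --disable-azure-monitor-metrics is assumed; verify against your aks-preview version.
az aks update --disable-azure-monitor-metrics --name <cluster-name> --resource-group <cluster-resource-group>
```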
When you allow a default Azure Monitor workspace to be created when you install
- [Customize Prometheus metric scraping for the cluster](./prometheus-metrics-scrape-configuration.md). - [Use Azure Monitor managed service for Prometheus (preview) as data source for Grafana](./prometheus-grafana.md) - [Configure self-hosted Grafana to use Azure Monitor managed service for Prometheus (preview)](./prometheus-self-managed-grafana-azure-active-directory.md)-
azure-monitor Prometheus Metrics Scrape Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-scrape-configuration.md
The following table has a list of all the default targets that the Azure Monitor
| Key | Type | Enabled | Description | |--||-|-|
-| kubelet | bool | `true` | Scrape kubelet in every node in the k8s cluster without any extra scrape config. |
-| cadvisor | bool | `true` | Scrape cAdvisor in every node in the k8s cluster without any extra scrape config.<br>Linux only. |
-| kubestate | bool | `true` | Scrape kube-state-metrics in the k8s cluster (installed as a part of the addon) without any extra scrape config. |
+| kubelet | bool | `true` | Scrape kubelet in every node in the K8s cluster without any extra scrape config. |
+| cadvisor | bool | `true` | Scrape cAdvisor in every node in the K8s cluster without any extra scrape config.<br>Linux only. |
+| kubestate | bool | `true` | Scrape kube-state-metrics in the K8s cluster (installed as a part of the addon) without any extra scrape config. |
| nodeexporter | bool | `true` | Scrape node metrics without any extra scrape config.<br>Linux only. |
-| coredns | bool | `false` | Scrape coredns service in the k8s cluster without any extra scrape config. |
-| kubeproxy | bool | `false` | Scrape kube-proxy in every linux node discovered in the k8s cluster without any extra scrape config.<br>Linux only. |
-| apiserver | bool | `false` | Scrape the kubernetes api server in the k8s cluster without any extra scrape config. |
-| prometheuscollectorhealth | bool | `false` | Scrape info about the prometheus-collector container such as the amount and size of timeseries scraped. |
+| coredns | bool | `false` | Scrape coredns service in the K8s cluster without any extra scrape config. |
+| kubeproxy | bool | `false` | Scrape kube-proxy in every Linux node discovered in the K8s cluster without any extra scrape config.<br>Linux only. |
+| apiserver | bool | `false` | Scrape the Kubernetes API server in the K8s cluster without any extra scrape config. |
+| windowsexporter | bool | `false` | Scrape the Windows exporter in every node in the K8s cluster without any extra scrape config.<br>Windows only. |
+| windowskubeproxy | bool | `false` | Scrape the Windows kube-proxy in every node in the K8s cluster without any extra scrape config.<br>Windows only. |
+| prometheuscollectorhealth | bool | `false` | Scrape info about the prometheus-collector container such as the amount and size of time series scraped. |
If you want to turn on the scraping of the default targets that aren't enabled by default, edit the [`ama-metrics-settings-configmap`](https://aka.ms/azureprometheus-addon-settings-configmap) to update the targets listed under `default-scrape-settings-enabled` to `true`, and apply the configmap to your cluster.
Follow the instructions to [create, validate, and apply the configmap](prometheu
### Advanced Setup: Configure custom Prometheus scrape jobs for the daemonset
-The `ama-metrics` replicaset pod consumes the custom Prometheus config and scrapes the specified targets. For a cluster with a large number of nodes and pods and a large volume of metrics to scrape, some of the applicable custom scrape targets can be off-loaded from the single `ama-metrics` replicaset pod to the `ama-metrics` daemonset pod. The [ama-metrics-prometheus-config-node configmap](https://aka.ms/azureprometheus-addon-ds-configmap), similar to the regular configmap, can be created to have static scrape configs on each node. The scrape config should only target a single node and shouldn't use service discovery. Otherwise each node will try to scrape all targets and will make many calls to the Kubernetes API server. The `node-exporter` config below is one of the default targets for the daemonset pods. It uses the `$NODE_IP` environment variable, which is already set for every ama-metrics addon container to target a specific port on the node:
+The `ama-metrics` replicaset pod consumes the custom Prometheus config and scrapes the specified targets. For a cluster with a large number of nodes and pods and a large volume of metrics to scrape, some of the applicable custom scrape targets can be offloaded from the single `ama-metrics` replicaset pod to the `ama-metrics` daemonset pod. The [ama-metrics-prometheus-config-node configmap](https://aka.ms/azureprometheus-addon-ds-configmap), similar to the regular configmap, can be created to have static scrape configs on each node. The scrape config should only target a single node and shouldn't use service discovery. Otherwise, each node tries to scrape all targets and makes many calls to the Kubernetes API server. The `node-exporter` config below is one of the default targets for the daemonset pods. It uses the `$NODE_IP` environment variable, which is already set for every ama-metrics addon container to target a specific port on the node:
```yaml - job_name: node
The `ama-metrics` replicaset pod consumes the custom Prometheus config and scrap
- targets: ['$NODE_IP:9100'] ```
-Custom scrape targets can follow the same format using `static_configs` with targets using the `$NODE_IP` environment variable and specifying the port to scrape. Each pod of the daemonset will take the config, scrape the metrics, and send them for that node.
+Custom scrape targets can follow the same format using `static_configs` with targets using the `$NODE_IP` environment variable and specifying the port to scrape. Each pod of the daemonset takes the config, scrapes the metrics, and sends them for that node.
## Prometheus configuration tips and examples
scrape_configs:
- <job-y> ```
-Any other unsupported sections need to be removed from the config before applying as a configmap. Otherwise the custom configuration will fail validation and won't be applied.
+Any other unsupported sections need to be removed from the config before applying as a configmap. Otherwise the custom configuration fails validation and won't be applied.
Refer to the [Apply config file](prometheus-metrics-scrape-validate.md#apply-config-file) section to create a configmap from the Prometheus config.
The scrape config below uses the `__meta_*` labels added from the `kubernetes_sd
To scrape certain pods, specify the port, path, and scheme through annotations for the pod and the below job will scrape only the address specified by the annotation: - `prometheus.io/scrape`: Enable scraping for this pod-- `prometheus.io/scheme`: If the metrics endpoint is secured, then you'll need to set scheme to `https` & most likely set the tls config.
+- `prometheus.io/scheme`: If the metrics endpoint is secured, then you'll need to set scheme to `https` & most likely set the TLS config.
- `prometheus.io/path`: If the metrics path isn't /metrics, define it with this annotation. - `prometheus.io/port`: Specify a single, desired port to scrape
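For example, the annotations could be set imperatively with kubectl; the pod name, port, and path here are placeholders:

```
kubectl annotate pod <my-pod> \
  prometheus.io/scrape=true \
  prometheus.io/port=8080 \
  prometheus.io/path=/metrics
```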
azure-monitor Prometheus Metrics Scrape Default https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-scrape-default.md
This article lists the default targets, dashboards, and recording rules when you
The default scrape frequency for all default targets and scrapes is **30 seconds**.
-## Targets scraped
+## Targets scraped
- `cadvisor` (`job=cadvisor`) - `nodeexporter` (`job=node`) - `kubelet` (`job=kubelet`) - `kube-state-metrics` (`job=kube-state-metrics`)
-
+ ## Metrics collected from default targets
-The following metrics are collected by default from each default target. All other metrics are dropped through relabeling rules.
+The following metrics are collected by default from each default target. All other metrics are dropped through relabeling rules.
**cadvisor (job=cadvisor)**<br> - `container_memory_rss`
The following metrics are collected by default from each default target. All oth
- `container_fs_reads_total` - `container_fs_writes_total` - `container_fs_reads_bytes_total`
- - `container_fs_writes_bytes_total|container_cpu_usage_seconds_total`
-
+ - `container_fs_writes_bytes_total`
+ - `container_cpu_usage_seconds_total`
+ **kubelet (job=kubelet)**<br> - `kubelet_node_name` - `kubelet_running_pods`
The following metrics are collected by default from each default target. All oth
- `process_cpu_seconds_total` - `go_goroutines` - `kubernetes_build_info`
-
+ **nodexporter (job=node)**<br> - `node_memory_MemTotal_bytes` - `node_cpu_seconds_total`
The following metrics are collected by default from each default target. All oth
- `node_disk_read_bytes_total` - `node_disk_written_bytes_total` - `node_uname_info`
-
+ **kube-state-metrics (job=kube-state-metrics)**<br> - `kube_node_status_allocatable` - `kube_pod_owner`
The following metrics are collected by default from each default target. All oth
- `kube_node_status_condition` - `kube_node_spec_taint`
+## Targets scraped for Windows
+
+There are two default jobs that can be run for Windows that scrape metrics required for the Windows-specific dashboards.
+
+> [!NOTE]
+> This requires an update to the `ama-metrics-settings-configmap` and installing the Windows exporter on all Windows node pools. For more information, see the [enablement document](./prometheus-metrics-enable.md#enable-prometheus-metric-collection).
+
+- `windows-exporter` (`job=windows-exporter`)
+- `kube-proxy-windows` (`job=kube-proxy-windows`)
+
+## Metrics scraped for Windows
+
+The following metrics are collected when the Windows exporter and Windows kube-proxy are enabled.
+
+**windows-exporter (job=windows-exporter)**<br>
+ - `windows_system_system_up_time`
+ - `windows_cpu_time_total`
+ - `windows_memory_available_bytes`
+ - `windows_os_visible_memory_bytes`
+ - `windows_memory_cache_bytes`
+ - `windows_memory_modified_page_list_bytes`
+ - `windows_memory_standby_cache_core_bytes`
+ - `windows_memory_standby_cache_normal_priority_bytes`
+ - `windows_memory_standby_cache_reserve_bytes`
+ - `windows_memory_swap_page_operations_total`
+ - `windows_logical_disk_read_seconds_total`
+ - `windows_logical_disk_write_seconds_total`
+ - `windows_logical_disk_size_bytes`
+ - `windows_logical_disk_free_bytes`
+ - `windows_net_bytes_total`
+ - `windows_net_packets_received_discarded_total`
+ - `windows_net_packets_outbound_discarded_total`
+ - `windows_container_available`
+ - `windows_container_cpu_usage_seconds_total`
+ - `windows_container_memory_usage_commit_bytes`
+ - `windows_container_memory_usage_private_working_set_bytes`
+ - `windows_container_network_receive_bytes_total`
+ - `windows_container_network_transmit_bytes_total`
+
+**kube-proxy-windows (job=kube-proxy-windows)**<br>
+ - `kubeproxy_sync_proxy_rules_duration_seconds`
+ - `kubeproxy_sync_proxy_rules_duration_seconds_bucket`
+ - `kubeproxy_sync_proxy_rules_duration_seconds_sum`
+ - `kubeproxy_sync_proxy_rules_duration_seconds_count`
+ - `rest_client_requests_total`
+ - `rest_client_request_duration_seconds`
+ - `rest_client_request_duration_seconds_bucket`
+ - `rest_client_request_duration_seconds_sum`
+ - `rest_client_request_duration_seconds_count`
+ - `process_resident_memory_bytes`
+ - `process_cpu_seconds_total`
+ - `go_goroutines`
+ ## Dashboards Following are the default dashboards that are automatically provisioned and configured by Azure Monitor managed service for Prometheus when you [link your Azure Monitor workspace to an Azure Managed Grafana instance](../essentials/azure-monitor-workspace-manage.md#link-a-grafana-workspace). Source code for these dashboards can be found in [GitHub](https://aka.ms/azureprometheus-mixins)
Following are the default dashboards that are automatically provisioned and conf
- Kubernetes / Kubelet - Node Exporter / USE Method / Node - Node Exporter / Nodes
+- Kubernetes / Compute Resources / Cluster (Windows)
+- Kubernetes / Compute Resources / Namespace (Windows)
+- Kubernetes / Compute Resources / Pod (Windows)
+- Kubernetes / USE Method / Cluster (Windows)
+- Kubernetes / USE Method / Node (Windows)
## Recording rules
Following are the default recording rules that are automatically configured by A
- `instance_device:node_disk_io_time_seconds:rate5m` - `instance_device:node_disk_io_time_weighted_seconds:rate5m` - `instance:node_num_cpu:sum`
+- `node:windows_node:sum`
+- `node:windows_node_num_cpu:sum`
+- `:windows_node_cpu_utilisation:avg5m`
+- `node:windows_node_cpu_utilisation:avg5m`
+- `:windows_node_memory_utilisation:`
+- `:windows_node_memory_MemFreeCached_bytes:sum`
+- `node:windows_node_memory_totalCached_bytes:sum`
+- `:windows_node_memory_MemTotal_bytes:sum`
+- `node:windows_node_memory_bytes_available:sum`
+- `node:windows_node_memory_bytes_total:sum`
+- `node:windows_node_memory_utilisation:ratio`
+- `node:windows_node_memory_utilisation:`
+- `node:windows_node_memory_swap_io_pages:irate`
+- `:windows_node_disk_utilisation:avg_irate`
+- `node:windows_node_disk_utilisation:avg_irate`
+- `node:windows_node_filesystem_usage:`
+- `node:windows_node_filesystem_avail:`
+- `:windows_node_net_utilisation:sum_irate`
+- `node:windows_node_net_utilisation:sum_irate`
+- `:windows_node_net_saturation:sum_irate`
+- `node:windows_node_net_saturation:sum_irate`
+- `windows_pod_container_available`
+- `windows_container_total_runtime`
+- `windows_container_memory_usage`
+- `windows_container_private_working_set_usage`
+- `windows_container_network_received_bytes_total`
+- `windows_container_network_transmitted_bytes_total`
+- `kube_pod_windows_container_resource_memory_request`
+- `kube_pod_windows_container_resource_memory_limit`
+- `kube_pod_windows_container_resource_cpu_cores_request`
+- `kube_pod_windows_container_resource_cpu_cores_limit`
+- `namespace_pod_container:windows_container_cpu_usage_seconds_total:sum_rate`
## Next steps
azure-monitor Resource Logs Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-schema.md
The schema for resource logs varies depending on the resource and log category.
| Azure Database for MySQL | [Azure Database for MySQL diagnostic logs](../../mysql/concepts-server-logs.md#diagnostic-logs) | | Azure Database for PostgreSQL | [Azure Database for PostgreSQL logs](../../postgresql/concepts-server-logs.md#resource-logs) | | Azure Databricks | [Diagnostic logging in Azure Databricks](/azure/databricks/administration-guide/account-settings/azure-diagnostic-logs) |
-| Azure DDoS Protection | [Logging for Azure DDoS Protection](../../ddos-protection/monitor-ddos-protection-reference.md#log-schemas) |
+| Azure DDoS Protection | [Logging for Azure DDoS Protection](../../ddos-protection/ddos-view-diagnostic-logs.md#example-log-queries) |
| Azure Digital Twins | [Set up Azure Digital Twins diagnostics](../../digital-twins/troubleshoot-diagnostics.md#log-schemas) | Azure Event Hubs |[Azure Event Hubs logs](../../event-hubs/event-hubs-diagnostic-logs.md) | | Azure ExpressRoute | [Monitoring Azure ExpressRoute](../../expressroute/monitor-expressroute.md#collection-and-routing) | | Azure Firewall | [Logging for Azure Firewall](../../firewall/logs-and-metrics.md#diagnostic-logs) | | Azure Front Door | [Logging for Azure Front Door](../../frontdoor/front-door-diagnostics.md) |
-| Azure Functions | [Monitoring Azure Functions Data Reference Resource Logs](https://learn.microsoft.com/azure/azure-functions/monitor-functions-reference#resource-logs) |
+| Azure Functions | [Monitoring Azure Functions Data Reference Resource Logs](../../azure-functions/monitor-functions-reference.md#resource-logs) |
| Azure IoT Hub | [IoT Hub operations](../../iot-hub/monitor-iot-hub-reference.md#resource-logs) | | Azure IoT Hub Device Provisioning Service| [Device Provisioning Service operations](../../iot-dps/monitor-iot-dps-reference.md#resource-logs) | | Azure Key Vault |[Azure Key Vault logging](../../key-vault/general/logging.md) |
azure-monitor Profiler Aspnetcore Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-aspnetcore-linux.md
In this guide, you'll:
- Install the [latest and greatest .NET Core SDK](https://dotnet.microsoft.com/download/dotnet). - Install Git by following the instructions at [Getting Started - Installing Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git).
+- Review the following samples for context:
+ - [Enable Service Profiler for containerized ASP.NET Core Application (.NET 6)](https://github.com/microsoft/ApplicationInsights-Profiler-AspNetCore/tree/main/examples/EnableServiceProfilerForContainerAppNet6)
+ - [Application Insights Profiler for Worker Service Example](https://github.com/microsoft/ApplicationInsights-Profiler-AspNetCore/tree/main/examples/ServiceProfilerInWorkerNet6)
## Set up the project locally
In this guide, you'll:
dotnet add package Microsoft.ApplicationInsights.Profiler.AspNetCore ```
-1. In your preferred code editor, enable Application Insights and Profiler in `Program.cs`:
+1. In your preferred code editor, enable Application Insights and Profiler in `Program.cs`. [Add custom Profiler settings, if applicable](https://github.com/microsoft/ApplicationInsights-Profiler-AspNetCore/blob/main/Configurations.md).
+
+ For `WebAPI`:
```csharp
- public void ConfigureServices(IServiceCollection services)
- {
- services.AddApplicationInsightsTelemetry(); // Add this line of code to enable Application Insights.
- services.AddServiceProfiler(); // Add this line of code to Enable Profiler
- services.AddControllersWithViews();
- }
+ // Add services to the container.
+ builder.Services.AddApplicationInsightsTelemetry();
+ builder.Services.AddServiceProfiler();
```
-1. Add a line of code in the **HomeController.cs** section to randomly delay a few seconds:
+ For `Worker`:
```csharp
- using System.Threading;
- ...
-
- public IActionResult About()
+ IHost host = Host.CreateDefaultBuilder(args)
+ .ConfigureServices(services =>
{
- Random r = new Random();
- int delay = r.Next(5000, 10000);
- Thread.Sleep(delay);
- return View();
- }
+ services.AddApplicationInsightsTelemetryWorkerService();
+ services.AddServiceProfiler();
+
+ // Assuming Worker is your background service class.
+ services.AddHostedService<Worker>();
+ })
+ .Build();
+
+ await host.RunAsync();
``` 1. Save and commit your changes to the local repository:
azure-monitor Snapshot Debugger Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-vm.md
reviewer: cweining Previously updated : 08/18/2022 Last updated : 03/21/2023
If your ASP.NET or ASP.NET Core application runs in App Service and requires a c
If your application runs in Azure Service Fabric, Cloud Service, Virtual Machines, or on-premises machines, you can skip enabling Snapshot Debugger on App Services and follow this guide instead.
+## Before you begin
+
+- [Enable Application Insights in your web app](../app/asp-net.md).
+
+- Include the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package version 1.3.5 or above in your app.
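+
+For example, you can add the package from the project directory (pin a specific version with `--version` if you need one):
+
+```
+dotnet add package Microsoft.ApplicationInsights.SnapshotCollector
+```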
+ ## Configure snapshot collection for ASP.NET applications
-### Prerequisite
-
-[Enable Application Insights in your web app](../app/asp-net.md).
-
-1. Include the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package in your app.
-
-1. If needed, customize the Snapshot Debugger configuration added to [ApplicationInsights.config](../app/configuration-with-applicationinsights-config.md).
-
- The default Snapshot Debugger configuration is mostly empty and all settings are optional. Here's an example showing a configuration equivalent to the default configuration:
-
- ```xml
- <TelemetryProcessors>
- <Add Type="Microsoft.ApplicationInsights.SnapshotCollector.SnapshotCollectorTelemetryProcessor, Microsoft.ApplicationInsights.SnapshotCollector">
- <!-- The default is true, but you can disable Snapshot Debugging by setting it to false -->
- <IsEnabled>true</IsEnabled>
- <!-- Snapshot Debugging is usually disabled in developer mode, but you can enable it by setting this to true. -->
- <!-- DeveloperMode is a property on the active TelemetryChannel. -->
- <IsEnabledInDeveloperMode>false</IsEnabledInDeveloperMode>
- <!-- How many times we need to see an exception before we ask for snapshots. -->
- <ThresholdForSnapshotting>1</ThresholdForSnapshotting>
- <!-- The maximum number of examples we create for a single problem. -->
- <MaximumSnapshotsRequired>3</MaximumSnapshotsRequired>
- <!-- The maximum number of problems that we can be tracking at any time. -->
- <MaximumCollectionPlanSize>50</MaximumCollectionPlanSize>
- <!-- How often we reconnect to the stamp. The default value is 15 minutes.-->
- <ReconnectInterval>00:15:00</ReconnectInterval>
- <!-- How often to reset problem counters. -->
- <ProblemCounterResetInterval>1.00:00:00</ProblemCounterResetInterval>
- <!-- The maximum number of snapshots allowed in ten minutes.The default value is 1. -->
- <SnapshotsPerTenMinutesLimit>3</SnapshotsPerTenMinutesLimit>
- <!-- The maximum number of snapshots allowed per day. -->
- <SnapshotsPerDayLimit>30</SnapshotsPerDayLimit>
- <!-- Whether or not to collect snapshot in low IO priority thread. The default value is true. -->
- <SnapshotInLowPriorityThread>true</SnapshotInLowPriorityThread>
- <!-- Agree to send anonymous data to Microsoft to make this product better. -->
- <ProvideAnonymousTelemetry>true</ProvideAnonymousTelemetry>
- <!-- The limit on the number of failed requests to request snapshots before the telemetry processor is disabled. -->
- <FailedRequestLimit>3</FailedRequestLimit>
- </Add>
- </TelemetryProcessors>
- ```
-
-1. Snapshots are collected only on exceptions that are reported to Application Insights. In some cases (for example, older versions of the .NET platform), you might need to [configure exception collection](../app/asp-net-exceptions.md#exceptions) to see exceptions with snapshots in the portal.
+The default Snapshot Debugger configuration is mostly empty and all settings are optional. You can customize the Snapshot Debugger configuration added to [ApplicationInsights.config](../app/configuration-with-applicationinsights-config.md).
+
+The following example shows a configuration equivalent to the default configuration:
+
+```xml
+<TelemetryProcessors>
+ <Add Type="Microsoft.ApplicationInsights.SnapshotCollector.SnapshotCollectorTelemetryProcessor, Microsoft.ApplicationInsights.SnapshotCollector">
+ <!-- The default is true, but you can disable Snapshot Debugging by setting it to false -->
+ <IsEnabled>true</IsEnabled>
+ <!-- Snapshot Debugging is usually disabled in developer mode, but you can enable it by setting this to true. -->
+ <!-- DeveloperMode is a property on the active TelemetryChannel. -->
+ <IsEnabledInDeveloperMode>false</IsEnabledInDeveloperMode>
+ <!-- How many times we need to see an exception before we ask for snapshots. -->
+ <ThresholdForSnapshotting>1</ThresholdForSnapshotting>
+ <!-- The maximum number of examples we create for a single problem. -->
+ <MaximumSnapshotsRequired>3</MaximumSnapshotsRequired>
+ <!-- The maximum number of problems that we can be tracking at any time. -->
+ <MaximumCollectionPlanSize>50</MaximumCollectionPlanSize>
+ <!-- How often we reconnect to the stamp. The default value is 15 minutes.-->
+ <ReconnectInterval>00:15:00</ReconnectInterval>
+ <!-- How often to reset problem counters. -->
+ <ProblemCounterResetInterval>1.00:00:00</ProblemCounterResetInterval>
+ <!-- The maximum number of snapshots allowed in ten minutes.The default value is 1. -->
+ <SnapshotsPerTenMinutesLimit>3</SnapshotsPerTenMinutesLimit>
+ <!-- The maximum number of snapshots allowed per day. -->
+ <SnapshotsPerDayLimit>30</SnapshotsPerDayLimit>
+ <!-- Whether or not to collect snapshot in low IO priority thread. The default value is true. -->
+ <SnapshotInLowPriorityThread>true</SnapshotInLowPriorityThread>
+ <!-- Agree to send anonymous data to Microsoft to make this product better. -->
+ <ProvideAnonymousTelemetry>true</ProvideAnonymousTelemetry>
+ <!-- The limit on the number of failed requests to request snapshots before the telemetry processor is disabled. -->
+ <FailedRequestLimit>3</FailedRequestLimit>
+ </Add>
+</TelemetryProcessors>
+```
+
+Snapshots are collected _only_ on exceptions reported to Application Insights. In some cases (for example, older versions of the .NET platform), you may need to [configure exception collection](../app/asp-net-exceptions.md#exceptions) to see exceptions with snapshots in the portal.
## Configure snapshot collection for applications using ASP.NET Core LTS or above ### Prerequisites
-[Enable Application Insights in your ASP.NET Core web app](../app/asp-net-core.md), if you haven't done it yet.
-> [!NOTE]
-> Be sure that your application references version 2.1.1, or newer, of the Microsoft.ApplicationInsights.AspNetCore package.
-
-1. Include the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package in your app.
-
-1. Modify your application's `Startup` class to add and configure the Snapshot Collector's telemetry processor.
- 1. If [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package version 1.3.5 or above is used, then add the following using statements to *Startup.cs*:
-
- ```csharp
- using Microsoft.ApplicationInsights.SnapshotCollector;
- ```
-
- Add the following at the end of the ConfigureServices method in the `Startup` class in *Startup.cs*:
-
- ```csharp
- services.AddSnapshotCollector((configuration) => Configuration.Bind(nameof(SnapshotCollectorConfiguration), configuration));
- ```
-
- 1. If [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package version 1.3.4 or below is used, then add the following using statements to *Startup.cs*.
-
- ```csharp
- using Microsoft.ApplicationInsights.SnapshotCollector;
- using Microsoft.Extensions.Options;
- using Microsoft.ApplicationInsights.AspNetCore;
- using Microsoft.ApplicationInsights.Extensibility;
- ```
-
- Add the following `SnapshotCollectorTelemetryProcessorFactory` class to `Startup` class:
-
- ```csharp
- class Startup
- {
- private class SnapshotCollectorTelemetryProcessorFactory : ITelemetryProcessorFactory
- {
- private readonly IServiceProvider _serviceProvider;
-
- public SnapshotCollectorTelemetryProcessorFactory(IServiceProvider serviceProvider) =>
- _serviceProvider = serviceProvider;
-
- public ITelemetryProcessor Create(ITelemetryProcessor next)
- {
- var snapshotConfigurationOptions = _serviceProvider.GetService<IOptions<SnapshotCollectorConfiguration>>();
- return new SnapshotCollectorTelemetryProcessor(next, configuration: snapshotConfigurationOptions.Value);
- }
- }
- ...
- ```
-
- Add the `SnapshotCollectorConfiguration` and `SnapshotCollectorTelemetryProcessorFactory` services to the startup pipeline:
-
- ```csharp
- // This method gets called by the runtime. Use this method to add services to the container.
- public void ConfigureServices(IServiceCollection services)
- {
- // Configure SnapshotCollector from application settings
- services.Configure<SnapshotCollectorConfiguration>(Configuration.GetSection(nameof(SnapshotCollectorConfiguration)));
-
- // Add SnapshotCollector telemetry processor.
- services.AddSingleton<ITelemetryProcessorFactory>(sp => new SnapshotCollectorTelemetryProcessorFactory(sp));
-
- // TODO: Add other services your application needs here.
- }
- }
- ```
-
-1. If needed, customize the Snapshot Debugger configuration by adding a `SnapshotCollectorConfiguration` section to *appsettings.json*.
-
- All settings in the Snapshot Debugger configuration are optional. Here's an example showing a configuration equivalent to the default configuration:
-
- ```json
- {
- "SnapshotCollectorConfiguration": {
- "IsEnabledInDeveloperMode": false,
- "ThresholdForSnapshotting": 1,
- "MaximumSnapshotsRequired": 3,
- "MaximumCollectionPlanSize": 50,
- "ReconnectInterval": "00:15:00",
- "ProblemCounterResetInterval":"1.00:00:00",
- "SnapshotsPerTenMinutesLimit": 1,
- "SnapshotsPerDayLimit": 30,
- "SnapshotInLowPriorityThread": true,
- "ProvideAnonymousTelemetry": true,
- "FailedRequestLimit": 3
- }
- }
- ```
+Create a new class called `SnapshotCollectorTelemetryProcessorFactory` to add and configure the Snapshot Collector's telemetry processor.
-## Configure snapshot collection for other .NET applications
+```csharp
+using Microsoft.ApplicationInsights.AspNetCore;
+using Microsoft.ApplicationInsights.Extensibility;
+using Microsoft.ApplicationInsights.SnapshotCollector;
+using Microsoft.Extensions.Options;
+
+internal class SnapshotCollectorTelemetryProcessorFactory : ITelemetryProcessorFactory
+{
+ private readonly IServiceProvider _serviceProvider;
-1. If your application isn't already instrumented with Application Insights, get started by [enabling Application Insights and setting the instrumentation key](https://github.com/Microsoft/appcenter).
+ public SnapshotCollectorTelemetryProcessorFactory(IServiceProvider serviceProvider) =>
+ _serviceProvider = serviceProvider;
-1. Add the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package in your app.
+ public ITelemetryProcessor Create(ITelemetryProcessor next)
+ {
+ IOptions<SnapshotCollectorConfiguration> snapshotConfigurationOptions = _serviceProvider.GetRequiredService<IOptions<SnapshotCollectorConfiguration>>();
+ return new SnapshotCollectorTelemetryProcessor(next, configuration: snapshotConfigurationOptions.Value);
+ }
+}
+```
+
+Add the `SnapshotCollectorConfiguration` and `SnapshotCollectorTelemetryProcessorFactory` services to `Program.cs`:
+
+```csharp
+using Microsoft.ApplicationInsights.AspNetCore;
+using Microsoft.ApplicationInsights.SnapshotCollector;
+
+var builder = WebApplication.CreateBuilder(args);
+
+builder.Services.AddApplicationInsightsTelemetry();
+builder.Services.AddSnapshotCollector(config => builder.Configuration.Bind(nameof(SnapshotCollectorConfiguration), config));
+builder.Services.AddSingleton<ITelemetryProcessorFactory>(sp => new SnapshotCollectorTelemetryProcessorFactory(sp));
+```
+
+If needed, customize the Snapshot Debugger configuration by adding a `SnapshotCollectorConfiguration` section to *appsettings.json*. The following example shows a configuration equivalent to the default configuration:
+
+```json
+{
+ "SnapshotCollectorConfiguration": {
+ "IsEnabledInDeveloperMode": false,
+ "ThresholdForSnapshotting": 1,
+ "MaximumSnapshotsRequired": 3,
+ "MaximumCollectionPlanSize": 50,
+ "ReconnectInterval": "00:15:00",
+ "ProblemCounterResetInterval":"1.00:00:00",
+ "SnapshotsPerTenMinutesLimit": 1,
+ "SnapshotsPerDayLimit": 30,
+ "SnapshotInLowPriorityThread": true,
+ "ProvideAnonymousTelemetry": true,
+ "FailedRequestLimit": 3
+ }
+}
+```
-1. Snapshots are collected only on exceptions that are reported to Application Insights. You may need to modify your code to report them. The exception handling code depends on the structure of your application, but an example is below:
+## Configure snapshot collection for other .NET applications
- ```csharp
- TelemetryClient _telemetryClient = new TelemetryClient();
+Snapshots are collected only on exceptions that are reported to Application Insights. You may need to modify your code to report them. The exception handling code depends on the structure of your application; here's an example:
- void ExampleRequest()
+```csharp
+TelemetryClient _telemetryClient = new TelemetryClient();
+void ExampleRequest()
+{
+ try
+ {
+ // TODO: Handle the request.
+ }
+ catch (Exception ex)
{
- try
- {
- // TODO: Handle the request.
- }
- catch (Exception ex)
- {
- // Report the exception to Application Insights.
- _telemetryClient.TrackException(ex);
-
- // TODO: Rethrow the exception if desired.
- }
+ // Report the exception to Application Insights.
+ _telemetryClient.TrackException(ex);
+ // TODO: Rethrow the exception if desired.
}
- ```
+}
+```
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
azure-netapp-files Configure Virtual Wan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-virtual-wan.md
Last updated 01/05/2023
-# Configure Virtual WAN for Azure NetApp Files (preview)
+# Configure Virtual WAN for Azure NetApp Files
You can configure Azure NetApp Files volumes with Standard network features in one or more Virtual WAN spoke virtual networks (VNets). Virtual WAN spoke VNets allow access to the file storage service globally across your Virtual WAN environment.
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
Azure NetApp Files is updated regularly. This article provides a summary about t
## March 2023
+* [Azure Virtual WAN](configure-virtual-wan.md) is now generally available in [all regions](azure-netapp-files-network-topologies.md#supported-regions) that support standard network features
+ * [Disable showmount](disable-showmount.md) (Preview): By default, Azure NetApp Files enables [showmount functionality](/windows-server/administration/windows-commands/showmount) to show NFS exported paths. The setting allows NFS clients to use the `showmount -e` command to see a list of exports available on the Azure NetApp Files NFS-enabled storage endpoint. This functionality might cause security scanners to flag the Azure NetApp Files NFS service as having a vulnerability because these scanners often use showmount to see what is being returned. In those scenarios, you might want to disable showmount on Azure NetApp Files. This setting allows you to enable or disable showmount for your NFS-enabled storage endpoints.
azure-resource-manager Publish Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-notifications.md
The following examples show how to add a notification endpoint URI using the por
### Azure portal
-To get started, see [Publish a service catalog application through Azure portal](./publish-portal.md).
+To get started, see [Quickstart: Create and publish an Azure Managed Application definition](./publish-service-catalog-app.md).
:::image type="content" source="./media/publish-notifications/service-catalog-notifications.png" alt-text="Screenshot of the Azure portal that shows a service catalog managed application definition and the notification endpoint.":::
azure-resource-manager Publish Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-portal.md
- Title: Publish managed apps through portal
-description: Shows how to use the Azure portal to create an Azure managed application that is intended for members of your organization.
--- Previously updated : 11/02/2017--
-# Publish a service catalog application through Azure portal
-
-You can use the Azure portal to publish [managed applications](overview.md) that are intended for members of your organization. For example, an IT department can publish managed applications that ensure compliance with organizational standards. These managed applications are available through the service catalog, not the Azure marketplace.
-
-## Prerequisites
-
-When publishing a managed application, you specify an identity to manage the resources. We recommend you specify an Azure Active Directory user group. To create an Azure Active Directory user group, see [Create a group and add members in Azure Active Directory](../../active-directory/fundamentals/active-directory-groups-create-azure-portal.md).
-
-The .zip file that contains the managed application definition must be available through a URI. We recommend that you upload your .zip file to a storage blob.
-
-## Create managed application with portal
-
-1. In the upper left corner, select **+ New**.
-
- ![New service](./media/publish-portal/new.png)
-
-1. Search for **service catalog**.
-
-1. In the results, scroll until you find **Service Catalog Managed Application Definition**. Select it.
-
- ![Search for managed application definitions](./media/publish-portal/select-managed-apps-definition.png)
-
-1. Select **Create** to start the process of creating the managed application definition.
-
- ![Create managed application definition](./media/publish-portal/create-definition.png)
-
-1. Provide values for name, display name, description, location, subscription, and resource group. For package file URI, provide the path to the zip file you created.
-
- ![Provide values](./media/publish-portal/fill-application-values.png)
-
-1. When you get to the Authentication and Lock Level section, select **Add Authorization**.
-
- ![Add authorization](./media/publish-portal/add-authorization.png)
-
-1. Select an Azure Active Directory group to manage the resources, and select **OK**.
-
- ![Add authorization group](./media/publish-portal/add-auth-group.png)
-
-1. When you have provided all the values, select **Create**.
-
- ![Create managed application](./media/publish-portal/create-app.png)
-
-## Next steps
-
-* For an introduction to managed applications, see [Managed application overview](overview.md).
-* For managed application examples, see [Sample projects for Azure managed applications](sample-projects.md).
-* To learn how to create a UI definition file for a managed application, see [Get started with CreateUiDefinition](create-uidefinition-overview.md).
azure-resource-manager Publish Service Catalog App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-service-catalog-app.md
Previously updated : 03/17/2023 Last updated : 03/21/2023 # Quickstart: Create and publish an Azure Managed Application definition
To learn more, see [Get started with CreateUiDefinition](create-uidefinition-ove
Add the two files to a package file named _app.zip_. The two files must be at the root level of the _.zip_ file. If the files are in a folder, when you create the managed application definition, you receive an error that states the required files aren't present.
-Upload _app.zip_ to an Azure storage account so you can use it when you deploy the managed application's definition. The storage account name must be globally unique across Azure and the length must be 3-24 characters with only lowercase letters and numbers. In the `Name` parameter, replace the placeholder `demostorageaccount` with your unique storage account name.
+Upload _app.zip_ to an Azure storage account so you can use it when you deploy the managed application's definition. The storage account name must be globally unique across Azure and the length must be 3-24 characters with only lowercase letters and numbers. In the command, replace the placeholder `<demostorageaccount>` including the angle brackets (`<>`), with your unique storage account name.
# [PowerShell](#tab/azure-powershell)
New-AzResourceGroup -Name packageStorageGroup -Location westus3
$storageAccount = New-AzStorageAccount ` -ResourceGroupName packageStorageGroup `
- -Name "demostorageaccount" `
+ -Name "<demostorageaccount>" `
-Location westus3 ` -SkuName Standard_LRS ` -Kind StorageV2
The command opens your default browser and prompts you to sign in to Azure. For
az group create --name packageStorageGroup --location westus3 az storage account create \
- --name demostorageaccount \
+ --name <demostorageaccount> \
--resource-group packageStorageGroup \ --location westus3 \ --sku Standard_LRS \
After you add the role to the storage account, it takes a few minutes to become
```azurecli az storage container create \
- --account-name demostorageaccount \
+ --account-name <demostorageaccount> \
--name appcontainer \ --auth-mode login \ --public-access blob az storage blob upload \
- --account-name demostorageaccount \
+ --account-name <demostorageaccount> \
--container-name appcontainer \ --auth-mode login \ --name "app.zip" \
The next step is to select a user, security group, or application for managing t
# [PowerShell](#tab/azure-powershell)
-This example uses a security group, and your Azure Active Directory account should be a member of the group. To get the group's object ID, replace the placeholder `managedAppDemo` with your group's name. You use this variable's value when you deploy the managed application definition.
+This example uses a security group, and your Azure Active Directory account should be a member of the group. To get the group's object ID, replace the placeholder `<managedAppDemo>` including the angle brackets (`<>`), with your group's name. You use this variable's value when you deploy the managed application definition.
To create a new Azure Active Directory group, go to [Manage Azure Active Directory groups and group membership](../../active-directory/fundamentals/how-to-manage-groups.md). ```azurepowershell
-$principalid=(Get-AzADGroup -DisplayName managedAppDemo).Id
+$principalid=(Get-AzADGroup -DisplayName <managedAppDemo>).Id
``` Next, get the role definition ID of the Azure built-in role you want to grant access to the user, group, or application. You use this variable's value when you deploy the managed application definition.
$roleid=(Get-AzRoleDefinition -Name Owner).Id
# [Azure CLI](#tab/azure-cli)
-This example uses a security group, and your Azure Active Directory account should be a member of the group. To get the group's object ID, replace the placeholder `managedAppDemo` with your group's name. You use this variable's value when you deploy the managed application definition.
+This example uses a security group, and your Azure Active Directory account should be a member of the group. To get the group's object ID, replace the placeholder `<managedAppDemo>` including the angle brackets (`<>`), with your group's name. You use this variable's value when you deploy the managed application definition.
To create a new Azure Active Directory group, go to [Manage Azure Active Directory groups and group membership](../../active-directory/fundamentals/how-to-manage-groups.md). ```azurecli
-principalid=$(az ad group show --group managedAppDemo --query id --output tsv)
+principalid=$(az ad group show --group <managedAppDemo> --query id --output tsv)
``` Next, get the role definition ID of the Azure built-in role you want to grant access to the user, group, or application. You use this variable's value when you deploy the managed application definition.
Create a resource group for your managed application definition.
az group create --name appDefinitionGroup --location westus3 ```
-In the `blob` command's `account-name` parameter, replace the placeholder `demostorageaccount` with your unique storage account name. The `blob` command creates a variable to store the URL for the package _.zip_ file. That variable is used in the command that creates the managed application definition.
+In the `blob` command's `account-name` parameter, replace the placeholder `<demostorageaccount>` including the angle brackets (`<>`), with your unique storage account name. The `blob` command creates a variable to store the URL for the package _.zip_ file. That variable is used in the command that creates the managed application definition.
```azurecli blob=$(az storage blob url \
- --account-name demostorageaccount \
+ --account-name <demostorageaccount> \
--container-name appcontainer \ --auth-mode login \ --name app.zip --output tsv)
azure-resource-manager Publish Service Catalog Bring Your Own Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-service-catalog-bring-your-own-storage.md
Previously updated : 03/01/2023 Last updated : 03/21/2023 # Quickstart: Bring your own storage to create and publish an Azure Managed Application definition
To learn more, go to [Get started with CreateUiDefinition](create-uidefinition-o
Add the two files to a package file named _app.zip_. The two files must be at the root level of the _.zip_ file. If the files are in a folder, when you create the managed application definition, you receive an error that states the required files aren't present.
-Upload _app.zip_ to an Azure storage account so you can use it when you deploy the managed application's definition. The storage account name must be globally unique across Azure and the length must be 3-24 characters with only lowercase letters and numbers. In the `Name` parameter, replace the placeholder `demostorageaccount` with your unique storage account name.
+Upload _app.zip_ to an Azure storage account so you can use it when you deploy the managed application's definition. The storage account name must be globally unique across Azure and the length must be 3-24 characters with only lowercase letters and numbers. In the command, replace the placeholder `<demostorageaccount>` including the angle brackets (`<>`), with your unique storage account name.
# [PowerShell](#tab/azure-powershell)
New-AzResourceGroup -Name packageStorageGroup -Location westus3
$storageAccount = New-AzStorageAccount ` -ResourceGroupName packageStorageGroup `
- -Name "demostorageaccount" `
+ -Name "<demostorageaccount>" `
-Location westus3 ` -SkuName Standard_LRS ` -Kind StorageV2
$packageuri=(Get-AzStorageBlob -Container appcontainer -Blob app.zip -Context $c
az group create --name packageStorageGroup --location westus3 az storage account create \
- --name demostorageaccount \
+ --name <demostorageaccount> \
--resource-group packageStorageGroup \ --location westus3 \ --sku Standard_LRS \
After you add the role to the storage account, it takes a few minutes to become
```azurecli az storage container create \
- --account-name demostorageaccount \
+ --account-name <demostorageaccount> \
--name appcontainer \ --auth-mode login \ --public-access blob az storage blob upload \
- --account-name demostorageaccount \
+ --account-name <demostorageaccount> \
--container-name appcontainer \ --auth-mode login \ --name "app.zip" \
Use the following command to store the package file's URI in a variable named `p
```azurecli packageuri=$(az storage blob url \
- --account-name demostorageaccount \
+ --account-name <demostorageaccount> \
--container-name appcontainer \ --auth-mode login \ --name app.zip --output tsv)
You store your managed application definition in your own storage account so tha
Create the storage account for your managed application definition. The storage account name must be globally unique across Azure and the length must be 3-24 characters with only lowercase letters and numbers.
-This example creates a new resource group named `byosDefinitionStorageGroup`. In the `Name` parameter, replace the placeholder `definitionstorage` with your unique storage account name.
+This example creates a new resource group named `byosDefinitionStorageGroup`. In the command, replace the placeholder `<definitionstorage>` including the angle brackets (`<>`), with your unique storage account name.
# [PowerShell](#tab/azure-powershell)
New-AzResourceGroup -Name byosDefinitionStorageGroup -Location westus3
New-AzStorageAccount ` -ResourceGroupName byosDefinitionStorageGroup `
- -Name "definitionstorage" `
+ -Name "<definitionstorage>" `
-Location westus3 ` -SkuName Standard_LRS ` -Kind StorageV2
New-AzStorageAccount `
Use the following command to store the storage account's resource ID in a variable named `storageid`. You use the variable's value when you deploy the managed application definition. ```azurepowershell
-$storageid = (Get-AzStorageAccount -ResourceGroupName byosDefinitionStorageGroup -Name definitionstorage).Id
+$storageid = (Get-AzStorageAccount -ResourceGroupName byosDefinitionStorageGroup -Name <definitionstorage>).Id
``` # [Azure CLI](#tab/azure-cli)
$storageid = (Get-AzStorageAccount -ResourceGroupName byosDefinitionStorageGroup
az group create --name byosDefinitionStorageGroup --location westus3 az storage account create \
- --name definitionstorage \
+ --name <definitionstorage> \
--resource-group byosDefinitionStorageGroup \ --location westus3 \ --sku Standard_LRS \
az storage account create \
Use the following command to store the storage account's resource ID in a variable named `storageid`. You use the variable's value to set up the storage account's role assignment and when you deploy the managed application definition. ```azurecli
-storageid=$(az storage account show --resource-group byosDefinitionStorageGroup --name definitionstorage --query id --output tsv)
+storageid=$(az storage account show --resource-group byosDefinitionStorageGroup --name <definitionstorage> --query id --output tsv)
```
The _Appliance Resource Provider_ is a service principal in your Azure Active Di
The next step is to select a user, security group, or application for managing the resources for the customer. This identity has permissions on the managed resource group according to the assigned role. The role can be any Azure built-in role like Owner or Contributor.
-This example uses a security group, and your Azure Active Directory account should be a member of the group. To get the group's object ID, replace the placeholder `managedAppDemo` with your group's name. You use the variable's value when you deploy the managed application definition.
+This example uses a security group, and your Azure Active Directory account should be a member of the group. To get the group's object ID, replace the placeholder `<managedAppDemo>` including the angle brackets (`<>`), with your group's name. You use the variable's value when you deploy the managed application definition.
To create a new Azure Active Directory group, go to [Manage Azure Active Directory groups and group membership](../../active-directory/fundamentals/how-to-manage-groups.md). # [PowerShell](#tab/azure-powershell) ```azurepowershell
-$principalid=(Get-AzADGroup -DisplayName managedAppDemo).Id
+$principalid=(Get-AzADGroup -DisplayName <managedAppDemo>).Id
``` # [Azure CLI](#tab/azure-cli) ```azurecli
-principalid=$(az ad group show --group managedAppDemo --query id --output tsv)
+principalid=$(az ad group show --group <managedAppDemo> --query id --output tsv)
```
The managed application definition's deployment template needs input for several
In Visual Studio Code, create a new file named _deployDefinition.parameters.json_ and save it.
-Add the following to your parameter file and save it. Then, replace the `{placeholder values}` including the curly braces, with your values.
+Add the following to your parameter file and save it. Then, replace the `<placeholder values>` including the angle brackets (`<>`), with your values.
```json {
Add the following to your parameter file and save it. Then, replace the `{placeh
"contentVersion": "1.0.0.0", "parameters": { "managedApplicationDefinitionName": {
- "value": "{placeholder for managed application name}"
+ "value": "<placeholder for managed application name>"
}, "definitionStorageResourceID": {
- "value": "{placeholder for you storage account ID}"
+ "value": "<placeholder for you storage account ID>"
}, "packageFileUri": {
- "value": "{placeholder for the packageFileUri}"
+ "value": "<placeholder for the packageFileUri>"
}, "principalId": {
- "value": "{placeholder for principalid value}"
+ "value": "<placeholder for principalid value>"
}, "roleId": {
- "value": "{placeholder for roleid value}"
+ "value": "<placeholder for roleid value>"
} } }
The following table describes the parameter values for the managed application d
| `principalId` | The publisher's Principal ID that needs permissions to manage resources in the managed resource group. Use your `principalid` variable's value. | | `roleId` | Role ID for permissions to the managed resource group. For example, Owner, Contributor, or Reader. Use your `roleid` variable's value. |
-To get your variable values from the command prompt:
-- Azure PowerShell: type `$variableName` to display the value.-- Azure CLI: type `echo $variableName` to display the value.
+To get your variable values:
+- Azure PowerShell: In PowerShell, type `$variableName` to display a variable's value.
+- Azure CLI: In Bash, type `echo $variableName` to display a variable's value.
## Deploy the definition
az deployment group create \
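For example, assuming the definition template is saved as _deployDefinition.json_ next to the parameter file (both file names are placeholders here), the Azure CLI deployment might look like the following sketch:

```azurecli
# Deploy the managed application definition using the parameter file created earlier.
az deployment group create \
  --resource-group <resourceGroupName> \
  --template-file deployDefinition.json \
  --parameters @deployDefinition.parameters.json
```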
During deployment, the template's `storageAccountId` property uses your storage account's resource ID and creates a new container with the case-sensitive name `applicationdefinitions`. The files from the _.zip_ package you specified during the deployment are stored in the new container.
-You can use the following commands to verify that the managed application definition files are saved in your storage account's container. In the `Name` parameter, replace the placeholder `definitionstorage` with your unique storage account name.
+You can use the following commands to verify that the managed application definition files are saved in your storage account's container. In the command, replace the placeholder `<definitionstorage>`, including the angle brackets (`<>`), with your unique storage account name.
# [PowerShell](#tab/azure-powershell) ```azurepowershell
-Get-AzStorageAccount -ResourceGroupName byosDefinitionStorageGroup -Name definitionstorage |
+Get-AzStorageAccount -ResourceGroupName byosDefinitionStorageGroup -Name <definitionstorage> |
Get-AzStorageContainer -Name applicationdefinitions | Get-AzStorageBlob | Select-Object -Property Name | Format-List ```
Get-AzStorageBlob | Select-Object -Property Name | Format-List
```azurecli az storage blob list \ --container-name applicationdefinitions \
- --account-name definitionstorage \
+ --account-name <definitionstorage> \
--query "[].{Name:name}" ```
azure-signalr Signalr Howto Work With App Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-work-with-app-gateway.md
Now the web app is deployed, let's go to the portal for **_WA1_** and make the f
* **HTTPS Only**: **Off**. To simplify the demo, we used the HTTP frontend protocol on Application Gateway. Therefore, we need to turn off this option to avoid changing the HTTP URL to HTTPS automatically. * Go to the **Overview** tab and get the URL of **_WA1_**.
-* Get the URL, and replace scheme https with http, for example, http://wa1.azurewebsites.net, open the URL in the browser, now you can start chatting! Use F12 to open network traces, and you can see the SignalR connection is established through **_AG1_**.
+* Get the URL and replace the scheme https with http, for example, `http://wa1.azurewebsites.net`. Open the URL in the browser, and you can start chatting! Use F12 to open network traces, and you can see that the SignalR connection is established through **_AG1_**.
> [!NOTE] > > Sometimes you need to disable the browser's automatic HTTPS redirection and browser cache to prevent the URL from redirecting to HTTPS automatically.
azure-video-indexer Customize Speech Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-speech-model-overview.md
+
+ Title: Customize a speech model in Azure Video Indexer
+description: This article gives an overview of what is a speech model in Azure Video Indexer.
+ Last updated : 03/06/2023++
+# Customize a speech model
++
+Through Azure Video Indexer integration with [Azure speech services](../cognitive-services/speech-service/captioning-concepts.md), a Universal Language Model is utilized as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. The base model is pretrained with dialects and phonetics representing various common domains. The base model works well in most speech recognition scenarios.
+
+However, sometimes the base model's transcription doesn't accurately handle some content. In these situations, a customized speech model can be used to improve recognition of domain-specific vocabulary or pronunciation that is specific to your content by providing text data to train the model. Through the process of creating and adapting speech customization models, your content can be properly transcribed. There's no additional charge for using Video Indexer's speech customization.
+
+## When to use a customized speech model?
+
+If your content contains industry-specific terminology, or you notice inaccuracies when reviewing Video Indexer transcription results, you can create and train a custom speech model to recognize the terms and improve the transcription quality. It may only be worthwhile to create a custom model if the relevant words and names are expected to appear repeatedly in the content you plan to index. Training a model is sometimes an iterative process; you might find that, after the initial training, results could still use improvement and would benefit from additional training. For guidance, see the [How to improve your custom models](#how-to-improve-your-custom-models) section.
+
+However, if you notice a few words or names transcribed incorrectly in the transcript, a custom speech model might not be needed, especially if the words or names aren't expected to be commonly used in content you plan on indexing in the future. You can just edit and correct the transcript in the Video Indexer website (see [View and update transcriptions in Azure Video Indexer website](edit-transcript-lines-portal.md)) and don't have to address it through a custom speech model.
+
+For a list of languages that support custom models and pronunciation, see the Customization and Pronunciation columns of the language support table in [Language support in Azure Video Indexer](language-support.md).
+
+## Train datasets
+
+When indexing a video, you can use a customized speech model to improve the transcription. Models are trained by loading them with [datasets](../cognitive-services/speech-service/how-to-custom-speech-test-and-train.md) that can include plain text data and pronunciation data.
+
+Text used to test and train a custom model should include samples from a diverse set of content and scenarios that you want your model to recognize. Consider the following factors when creating and training your datasets:
+
+- Include text that covers the kinds of verbal statements that your users make when they're interacting with your model. For example, if your content is primarily related to a sport, train the model with content containing terminology and subject matter related to the sport.
+- Include all speech variances that you want your model to recognize. Many factors can vary speech, including accents, dialects, and language-mixing.
+- Only include data that is relevant to content you're planning to transcribe. Including other data can harm recognition quality overall.
+
+### Dataset types
+
+There are two dataset types that you can use for customization. To help determine which dataset to use to address your problems, refer to the following table:
+
+|Use case|Data type|
+|||
+|Improve recognition accuracy on industry-specific vocabulary and grammar, such as medical terminology or IT jargon. |Plain text|
+|Define the phonetic and displayed form of a word or term that has nonstandard pronunciation, such as product names or acronyms. |Pronunciation data |
+
+### Plain-text data for training
+
+A dataset including plain text sentences of related text can be used to improve the recognition of domain-specific words and phrases. Related text sentences can reduce substitution errors related to misrecognition of common words and domain-specific words by showing them in context. Domain-specific words can be uncommon or made-up words, but their pronunciation must be straightforward to be recognized.
+
+### Best practices for plain text datasets
+
+- Provide domain-related sentences in a single text file. Instead of using full sentences, you can upload a list of words. However, while this adds them to the vocabulary, it doesn't teach the system how the words are ordinarily used. By providing full or partial utterances (sentences or phrases of things that users are likely to say), the language model can learn the new words and how they're used. The custom language model is good not only for adding new words to the system, but also for adjusting the likelihood of known words for your application. Providing full utterances helps the system learn better.
+- Use text data that's close to the expected spoken utterances. Utterances don't need to be complete or grammatically correct, but they must accurately reflect the spoken input that you expect the model to recognize.
+- Try to have each sentence or keyword on a separate line.
+- To increase the weight of a term such as product names, add several sentences that include the term.
+- For common phrases that are used in your content, providing many examples is useful because it tells the system to listen for these terms.
+- Avoid including uncommon symbols (~, #, @, %, &) as they'll get discarded. The sentences in which they appear will also get discarded.
+- Avoid overly large inputs, such as hundreds of thousands of sentences, because doing so will dilute the effect of boosting.
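+For example, a few lines of a plain text dataset for basketball content (illustrative only) might look like the following:
+
+```
+The point guard called a pick and roll at the top of the key.
+They switched to a full court press in the fourth quarter.
+He was fouled on a three point attempt and stepped to the line.
+```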
+
+Use this table to ensure that your plain text dataset file is formatted correctly:
+
+|Property|Value|
+|||
+|Text encoding |UTF-8 BOM|
+|Number of utterances per line |1 |
+|Maximum file size |200 MB |
+
+Try to follow these guidelines in your plain text files:
+
+- Avoid repeating characters, words, or groups of words more than three times, such as "yeah yeah yeah yeah" as the service might drop lines with too many repetitions.
+- Don't use special characters or UTF-8 characters above U+00A1.
+- URIs are rejected.
+- For some languages such as Japanese or Korean, importing large amounts of text data can take a long time or can time out. Consider dividing the dataset into multiple text files with up to 20,000 lines in each.
+
+## Pronunciation data for training
+
+You can add a custom pronunciation dataset to your custom speech model to improve recognition of mispronounced words, phrases, or names.
+
+Pronunciation datasets need to include the spoken form of a word or phrase as well as the recognized displayed form. The spoken form is the phonetic sequence spelled out, such as "Triple A". It can be composed of letters, words, syllables, or a combination of all three. The recognized displayed form is how you would like the word or phrase to appear in the transcription. This table includes some examples:
+
+|Recognized displayed form |Spoken form |
+|||
+|3CPO |three c p o |
+|CNTK |c n t k |
+|AAA |Triple A |
+
+You provide pronunciation datasets in a single text file. Include the spoken utterance and a custom pronunciation for each. Each row in the file should begin with the recognized form, then a tab character, and then the space-delimited phonetic sequence.
+
+```
+3CPO three c p o
+CNTK c n t k
+IEEE i triple e
+```
+
+Consider the following when creating and training pronunciation datasets:
+
+It's not recommended to use custom pronunciation files to alter the pronunciation of common words.
+
+If there are a few variations of how a word or name is incorrectly transcribed, consider using some or all of them when training the pronunciation dataset. For example, suppose Robert is mentioned five times in the video and transcribed as Robort, Ropert, and Robbers. You can try including all variations in the file, as in the following example. Be cautious when training with actual words like Robbers: if Robbers is mentioned in the video, it's transcribed as Robert.
+
+`Robert Roport`
+`Robert Ropert`
+`Robert Robbers`
+
+A pronunciation model isn't meant to address acronyms. For example, if you want Doctor to be transcribed as Dr., that can't be achieved through a pronunciation model.
+
+Refer to the following table to ensure that your pronunciation dataset files are valid and correctly formatted.
+
+|Property |Value |
+|||
+|Text encoding |UTF-8 BOM (ANSI is also supported for English) |
+|Number of pronunciations per line |1 |
+|Maximum file size |1 MB (1 KB for free tier) |
+
+## How to improve your custom models
+
+Training a pronunciation model can be an iterative process, as you might gain more knowledge on the pronunciation of the subject after initial training and evaluation of your model's results. Since existing models can't be edited or modified, training a model iteratively requires the creation and uploading of datasets with additional information as well as training new custom models based on the new datasets. You would then reindex the media files with the new custom speech model.
+
+Example:
+
+Let's say you plan on indexing sports content and anticipate transcript accuracy issues with specific sports terminology as well as in the names of players and coaches. Before indexing, you've created a speech model with a plain text dataset with content containing relevant sports terminology and a pronunciation dataset with some of the player and coaches' names. You index a few videos using the custom speech model and, when reviewing the generated transcript, find that while the terminology is transcribed correctly, many names aren't. You can take the following steps to improve performance in the future:
+
+1. Review the transcript and note all the incorrectly transcribed names. They could fall into two groups:
+
+   - Group A: Names not in the pronunciation file.
+   - Group B: Names in the pronunciation file that are still incorrectly transcribed.
+2. Create a new dataset file. Either download the pronunciation dataset file or modify your locally saved original. For group A, add the new names to the file with how they were incorrectly transcribed (Michael Mikel). For group B, add additional lines, each with the correct name and a unique example of how it was incorrectly transcribed. For example:
+
+   `Stephen Steven`
+   `Stephen Steafan`
+   `Stephen Steevan`
+3. Upload this file as a new dataset file.
+4. Create a new speech model and add the original plain text dataset and the new pronunciation dataset file.
+5. Reindex the video with the new speech model.
+6. If needed, repeat steps 1-5 until the results are satisfactory.
+
+## Next steps
+
+To get started with speech customization, see:
+
+- [Customize a speech model using the API](customize-speech-model-with-api.md)
+- [Customize a speech model using the website](customize-speech-model-with-website.md)
azure-video-indexer Customize Speech Model With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-speech-model-with-api.md
+
+ Title: Customize a speech model with the Azure Video Indexer API
+description: Learn how to customize a speech model with the Azure Video Indexer API.
+ Last updated : 03/06/2023++
+# Customize a speech model with the API
++
+Azure Video Indexer lets you create custom speech models to customize speech recognition by uploading adaptation text, namely text from the domain whose vocabulary you'd like the engine to adapt to, or by aligning word or name pronunciation with how it should be written.
+
+For a detailed overview and best practices for custom speech models, see [Customize a speech model with Azure Video Indexer](customize-speech-model-overview.md).
+
+You can use the Azure Video Indexer APIs to create and edit custom speech models in your account. You can also use the website, as described in [Customize speech model using the Azure Video Indexer website](customize-speech-model-with-website.md).
+
+The following are descriptions of some of the parameters:
+
+|Name|Type|Description|
+||||
+|`displayName` |string|The desired name of the dataset/model.|
+|`locale` |string|The language code of the dataset/model. For the full list, see [Language support](language-support.md).|
+|`kind` |integer|0 for a plain text dataset, 1 for a pronunciation dataset.|
+|`description` |string|Optional description of the dataset/model.|
+|`contentUrl` |uri |URL of the source file used in creation of the dataset.|
+|`customProperties` |object|Optional properties of the dataset/model.|
+
+## Create a speech dataset
+
+The [create speech dataset](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Speech-Dataset) API creates a dataset for training a speech model. You upload a file that is used to create a dataset with this call. The content of a dataset can't be modified after it's created.
+To upload a file to a dataset, you must update parameters in the Body, including a URL to the text file to be uploaded. The description and custom properties fields are optional. The following is a sample of the body:
+
+```json
+{
+ "displayName": "Pronunciation Dataset",
+ "locale": "en-US",
+ "kind": "Pronunciation",
+ "description": "This is a pronunciation dataset.",
+ "contentUrl": "https://contoso.com/location",
+ "customProperties": {
+ "tag": "Pronunciation Dataset Example"
+ }
+}
+```
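+As a sketch, a request might look like the following. The URL shape, location, and access token parameter are assumptions based on other Video Indexer operations; take the exact request details from the API portal page linked above.
+
+```bash
+# Sketch only: confirm the exact path and parameters in the API portal.
+curl -v -X POST "https://api.videoindexer.ai/<location>/Accounts/<account-id>/Customization/Speech/datasets?accessToken=<access-token>" \
+  -H "Content-Type: application/json" \
+  --data-ascii '{
+    "displayName": "Pronunciation Dataset",
+    "locale": "en-US",
+    "kind": "Pronunciation",
+    "contentUrl": "https://contoso.com/location"
+  }'
+```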
+
+### Response
+
+The response provides metadata on the newly created dataset following the format of this example JSON output:
+
+```json
+{
+ "id": "000000-0000-0000-0000-f58ac7002ae9",
+ "properties": {
+ "acceptedLineCount": 0,
+ "rejectedLineCount": 0,
+ "duration": null,
+ "error": null
+ },
+ "displayName": "Contoso plain text",
+ "description": "AVI dataset",
+ "locale": "en-US",
+ "kind": "Language",
+ "status": "Waiting",
+ "lastActionDateTime": "2023-02-28T13:24:27Z",
+ "createdDateTime": "2023-02-28T13:24:27Z",
+ "customProperties": null
+}
+```
+
+## Create a speech model
+
+The [create a speech model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Speech-Model) API creates and trains a custom speech model that can then be used to improve the transcription accuracy of your videos. It must contain at least one plain text dataset and can optionally have pronunciation datasets. Create it with all of the relevant dataset files, because a model's datasets can't be added or updated after its creation.
+
+When creating a speech model, you must update parameters in the Body, including a list of strings where each string is the ID of a dataset the model will include. The description and custom properties fields are optional. The following is a sample of the body:
+
+```json
+{
+ "displayName": "Contoso Speech Model",
+ "locale": "en-US",
+ "datasets": ["ff3d2bc4-ab5a-4522-b599-b3d5ba768c75", "87c8962d-1d3c-44e5-a2b2-c696fddb9bae"],
+ "description": "Contoso ads example model",
+ "customProperties": {
+ "tag": "Example Model"
+ }
+}
+```
+
+### Response
+
+The response provides metadata on the newly created model following the format of this example JSON output:
+
+```json
+{
+ "id": "00000000-0000-0000-0000-85be4454cf",
+ "properties": {
+ "deprecationDates": {
+ "adaptationDateTime": null,
+ "transcriptionDateTime": "2025-04-15T00:00:00Z"
+ },
+ "error": null
+ },
+ "displayName": "Contoso speech model",
+ "description": "Contoso speech model for video indexer",
+ "locale": "en-US",
+ "datasets": ["00000000-0000-0000-0000-f58ac7002ae9"],
+ "status": "Processing",
+ "lastActionDateTime": "2023-02-28T13:36:28Z",
+ "createdDateTime": "2023-02-28T13:36:28Z",
+ "customProperties": null
+}
+```
+
+## Get speech dataset
+
+The [get speech dataset](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Speech-Dataset) API returns information on the specified dataset.
+
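+As with dataset creation, a sketch of the request might look like the following; the URL shape is an assumption, so take the exact details from the API portal page linked above.
+
+```bash
+# Sketch only: confirm the exact path and parameters in the API portal.
+curl -v "https://api.videoindexer.ai/<location>/Accounts/<account-id>/Customization/Speech/datasets/<dataset-id>?accessToken=<access-token>"
+```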
+### Response
+
+The response provides metadata on the specified dataset following the format of this example JSON output:
+
+```json
+{
+ "id": "00000000-0000-0000-0000-f58002ae9",
+ "properties": {
+ "acceptedLineCount": 41,
+ "rejectedLineCount": 0,
+ "duration": null,
+ "error": null
+ },
+ "displayName": "Contoso plain text",
+ "description": "AVI dataset",
+ "locale": "en-US",
+ "kind": "Language",
+ "status": "Complete",
+ "lastActionDateTime": "2023-02-28T13:24:43Z",
+ "createdDateTime": "2023-02-28T13:24:27Z",
+ "customProperties": null
+}
+```
+
+## Get speech dataset files
+
+The [get speech dataset files](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Speech-Dataset-Files) API returns the files and metadata of the specified dataset.
+
+### Response
+
+The response provides a URL with the dataset files and metadata following the format of this example JSON output:
+
+```json
+[{
+ "datasetId": "00000000-0000-0000-0000-f58ac72a",
+ "fileId": "00000000-0000-0000-0000-cb190769c",
+ "name": "languagedata",
+ "contentUrl": "",
+ "kind": "LanguageData",
+ "createdDateTime": "2023-02-28T13:24:43Z",
+ "properties": {
+ "size": 1517
+ }
+}, {
+ "datasetId": "00000000-0000-0000-0000-f58ac72",
+ "fileId": "00000000-0000-0000-0000-2369192e",
+ "name": "normalized.txt",
+ "contentUrl": "",
+ "kind": "LanguageData",
+ "createdDateTime": "2023-02-28T13:24:43Z",
+ "properties": {
+ "size": 1517
+ }
+}, {
+ "datasetId": "00000000-0000-0000-0000-f58ac7",
+ "fileId": "00000000-0000-0000-0000-05f1e306",
+ "name": "report.json",
+ "contentUrl": "",
+ "kind": "DatasetReport",
+ "createdDateTime": "2023-02-28T13:24:43Z",
+ "properties": {
+ "size": 78
+ }
+}]
+```
+
+## Get the specified account datasets
+
+The [get speech datasets](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Speech-Datasets) API returns information on all of the specified account's datasets.
+
+### Response
+
+The response provides metadata on the datasets in the specified account following the format of this example JSON output:
+
+```json
+[{
+ "id": "00000000-0000-0000-abf5-4dad0f",
+ "properties": {
+ "acceptedLineCount": 41,
+ "rejectedLineCount": 0,
+ "duration": null,
+ "error": null
+ },
+ "displayName": "test",
+ "description": "string",
+ "locale": "en-US",
+ "kind": "Language",
+ "status": "Complete",
+ "lastActionDateTime": "2023-02-27T08:42:02Z",
+ "createdDateTime": "2023-02-27T08:41:39Z",
+ "customProperties": null
+}]
+```
+
+## Get the specified speech model
+
+The [get speech model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Speech-Model) API returns information on the specified model.
+
+### Response
+
+The response provides metadata on the specified model following the format of this example JSON output:
+
+```json
+{
+ "id": "00000000-0000-0000-0000-5685be445",
+ "properties": {
+ "deprecationDates": {
+ "adaptationDateTime": null,
+ "transcriptionDateTime": "2025-04-15T00:00:00Z"
+ },
+ "error": null
+ },
+ "displayName": "Contoso speech model",
+ "description": "Contoso speech model for video indexer",
+ "locale": "en-US",
+ "datasets": ["00000000-0000-0000-0000-f58ac7002"],
+ "status": "Complete",
+ "lastActionDateTime": "2023-02-28T13:36:38Z",
+ "createdDateTime": "2023-02-28T13:36:28Z",
+ "customProperties": null
+}
+```
+
+## Get the specified account speech models
+
+The [get speech models](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Speech-Models) API returns information on all of the models in the specified account.
+
+### Response
+
+The response provides metadata on all of the speech models in the specified account following the format of this example JSON output:
+
+```json
+[{
+ "id": "00000000-0000-0000-0000-5685be445",
+ "properties": {
+ "deprecationDates": {
+ "adaptationDateTime": null,
+ "transcriptionDateTime": "2025-04-15T00:00:00Z"
+ },
+ "error": null
+ },
+ "displayName": "Contoso speech model",
+ "description": "Contoso speech model for video indexer",
+ "locale": "en-US",
+ "datasets": ["00000000-0000-0000-0000-f58ac7002a"],
+ "status": "Complete",
+ "lastActionDateTime": "2023-02-28T13:36:38Z",
+ "createdDateTime": "2023-02-28T13:36:28Z",
+ "customProperties": null
+}]
+```
+
+## Delete speech dataset
+
+The [delete speech dataset](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Delete-Speech-Dataset) API deletes the specified dataset. Any model that was trained with the deleted dataset continues to be available until the model is deleted. You cannot delete a dataset while it is in use for indexing or training.
+
+### Response
+
+There's no returned content when the dataset is deleted successfully.
+
+## Delete a speech model
+
+The [delete speech model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Delete-Speech-Model) API deletes the specified speech model. You cannot delete a model while it is in use for indexing or training.
+
+### Response
+
+There's no returned content when the speech model is deleted successfully.
+
+## Next steps
+
+[Customize a speech model using the website](customize-speech-model-with-website.md)
+
azure-video-indexer Customize Speech Model With Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-speech-model-with-website.md
+
+ Title: Customize a speech model with Azure Video Indexer website
+description: Learn how to customize a speech model with the Azure Video Indexer website.
+ Last updated : 03/06/2023++
+# Customize a speech model in the website
+
+
+Azure Video Indexer lets you create custom speech models to customize speech recognition by uploading datasets that are used to create a speech model. This article goes through the steps to do so through the Video Indexer website. You can also use the API, as described in [Customize speech model using API](customize-speech-model-with-api.md).
+
+For a detailed overview and best practices for custom speech models, see [Customize a speech model with Azure Video Indexer](customize-speech-model-overview.md).
+
+## Create a dataset
+
+As all custom models must contain a dataset, we'll start with how to create and manage datasets.
+
+1. Go to the [Azure Video Indexer website](https://www.videoindexer.ai/) and sign in.
+1. Select the Model customization button on the left of the page.
+1. Select the Speech (new) tab. Here you'll begin the process of uploading datasets that are used to train the speech models.
+
+ > [!div class="mx-imgBorder"]
+ > :::image type="content" source="./media/customize-speech-model/speech-model.png" alt-text="Screenshot of uploading datasets which are used to train the speech models.":::
+1. Select Upload dataset.
+1. Select either Plain text or Pronunciation from the Dataset type dropdown menu. Every speech model must have a plain text dataset and can optionally have a pronunciation dataset. To learn more about each type, see [Customize a speech model with Azure Video Indexer](customize-speech-model-overview.md).
+1. Select Browse, which opens File Explorer. You can only use one file in each dataset. Choose the relevant text file.
+1. Select a Language for the model. Choose the language that is spoken in the media files you plan on indexing with this model.
+1. The Dataset name is pre-populated with the name of the file but you can modify the name.
+1. You can optionally add a description of the dataset. This could be helpful to distinguish each dataset if you expect to have multiple datasets.
+
+ > [!div class="mx-imgBorder"]
+ > :::image type="content" source="./media/customize-speech-model/dataset-type.png" alt-text="Screenshot of multiple datasets.":::
+1. Once you're ready, select Upload. You'll then see a list of all of your datasets and their properties, including the type, language, status, number of lines, and creation date. Once the status is Complete, the dataset can be used in the training and creation of new models.
+
+ > [!div class="mx-imgBorder"]
+ > :::image type="content" source="./media/customize-speech-model/datasets.png" alt-text="Screenshot of a new model.":::
+
+## Review and update a dataset
+
+Once a Dataset has been uploaded, you might need to review it or perform any number of updates to it. This section covers how to view, download, troubleshoot, and delete a dataset.
+
+**View dataset**: You can view a dataset and its properties either by clicking on the dataset name, or by hovering over the dataset, clicking on the ellipsis, and selecting **View Dataset**.
+
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="./media/customize-speech-model/view-dataset.png" alt-text="Screenshot of how to view dataset.":::
+
+You'll then view the name, description, language and status of the dataset plus the following properties:
+
+**Number of lines**: indicates the number of lines successfully loaded out of the total number of lines in the file. If the entire file is loaded successfully, the numbers will match (for example, 10 of 10 normalized). If the numbers don't match (for example, 7 of 10 normalized), only some of the lines loaded successfully and the rest had errors. Common causes of errors are formatting issues with a line, such as a missing tab between the recognized form and the spoken form in a pronunciation file. Reviewing the plain text and pronunciation data for training guidance should be helpful in finding the issue. To troubleshoot the cause, review the error details, which are contained in the report. Select **View report** to view the error details regarding the lines that didn't load successfully (errorKind). This can also be viewed by selecting the **Report** tab.
+
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="./media/customize-speech-model/report-tab.png" alt-text="Screenshot of how to view by selecting report tab.":::
+
+**Dataset ID**: Each dataset has a unique GUID, which is needed when using the API for operations that reference the dataset.
+
+**Plain text (normalized)**: This contains the normalized text of the loaded dataset file. Normalized text is the recognized text in plain form without formatting.
+
+**Edit Details**: To edit a dataset's name or description, when hovering over the dataset, click on the ellipsis and then select Edit details. You're then able to edit the dataset name and description.
+
+> [!Note]
+> The data in a dataset can't be edited or updated once the dataset has been uploaded. If you need to edit or update the data in a dataset, download the dataset, perform the edits, save the file, and upload the new dataset file.
+
+**Download**: To download a dataset file, when hovering over the dataset, click on the ellipsis and then select Download. Alternatively, when viewing the dataset, you can select Download and then have the option of downloading the dataset file or the upload report in JSON form.
+
+**Delete**: To delete a dataset, when hovering over the dataset, click on the ellipsis and then select Delete.
+
+## Create a custom speech model
+
+Datasets are used in the creation and training of models. Once you've created a plain text dataset, you can create and start using a custom speech model.
+
+Keep in mind the following when creating and using custom speech models:
+
+* A new model must include at least one plain text dataset and can have multiple plain text datasets.
+* Including a pronunciation dataset is optional, and no more than one can be included.
+* Once a model is created, you can't add additional datasets to it or perform any modifications to its datasets. If you need to add or modify datasets, create a new model.
+* If you have indexed a video using a custom speech model and then delete the model, the transcript is not impacted unless you perform a re-index.
+* If you delete a dataset that was used to train a custom model, the speech model continues to use it until the model is deleted, because the model was already trained with the dataset.
+* If you delete a custom model, it has no impact on the transcription of videos that were already indexed using the model.
++
+**The following are instructions to create and manage custom speech models. There are two ways to train a model: through the Datasets tab and through the Models tab.**
+
+## Train a model through the Datasets tab
+
+1. When viewing the list of datasets, if you select a plain text dataset by clicking on the circle to the left of a plain text dataset's name, the Train new model icon above the datasets will now turn from greyed out to blue and can be selected. Select Train new model.
+
+ > [!div class="mx-imgBorder"]
+ > :::image type="content" source="./media/customize-speech-model/train-model.png" alt-text="Screenshot of how to train new model.":::
+1. In the Train a new model popup, enter a name for the model, a language, and optionally add a description. A model can only contain datasets of the same language.
+1. Select the Datasets tab and then select, from the list of your datasets, the datasets you would like to include in the model. Once a model is created, datasets can't be added.
+1. Select Create and train.
+
+## Train a model through the Models tab
+
+1. Select the Models tab and then the Train new model icon. If no plain text datasets have been uploaded, the icon is greyed out. Select all the datasets that you want to be part of the model by clicking on the circle to the left of a plain text dataset's name.
+1. In the Train a new model pop-up, enter a name for the model, a language, and optionally add a description. A model can only contain datasets of the same language.
+1. Select the Datasets tab and then select, from the list of your datasets, the datasets you would like to include in the model. Once a model is created, datasets can't be added.
+1. Select Create and train.
+
+## Model review and update
+
+Once a Model has been created, you might need to review its datasets, edit its name, or delete it.
+
+**View Model**: You can view a model and its properties either by clicking on the model's name, or by hovering over the model, clicking on the ellipsis, and then selecting View Model.
+
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="./media/customize-speech-model/view-model.png" alt-text="Screenshot of how to review and update a model.":::
+
+You'll then see in the Details tab the name, description, language and status of the model plus the following properties:
+
+**Model ID**: Each model has a unique GUID, which is needed when using the API for operations that reference the model.
+
+**Created on**: The date the model was created.
+
+**Edit Details**: To edit a model's name or description, when hovering over the model, click on the ellipsis and then select Edit details. You're then able to edit the model's name and description.
+
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="./media/customize-speech-model/create-model.png" alt-text="Screenshot of how to hover over the model.":::
+
+> [!Note]
+> Only the model's name and description can be edited. If you want to make any changes to its datasets or add datasets, a new model must be created.
+
+**Delete**: To delete a model, when hovering over the model, click on the ellipsis and then select Delete.
+
+**Included datasets**: Click on the Included datasets tab to view the model's datasets.
+
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="./media/customize-speech-model/included-datasets.png" alt-text="Screenshot of how to delete the model.":::
+
+## How to use a custom language model when indexing a video
+
+A custom language model isn't used by default for indexing jobs and must be selected during the index upload process. To learn how to index a video, see Upload and index videos with Azure Video Indexer.
+
+During the upload process, you can select the source language of the video. In the Video source language drop-down menu, you'll see your custom model among the language list. The model is listed as the language of your language model followed by the name you gave it in parentheses. For example:
+
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="./media/customize-speech-model/contoso-model.png" alt-text="Screenshot of indexing a video.":::
+
+Select the Upload option at the bottom of the page, and your new video will be indexed using your language model. The same steps apply when you want to re-index a video with a custom model.
+
+## Next steps
+
+[Customize a speech model using the API](customize-speech-model-with-api.md)
azure-video-indexer Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/language-support.md
Previously updated : 02/21/2023 Last updated : 03/10/2023 # Language support in Azure Video Indexer
This section explains the Video Indexer language options and has a table of the
### Column explanations - **Supported source language** – The language spoken in the media file supported for transcription, translation, and search.-- **Language identification** - Whether the language can be automatically detected by Video Indexer when language identification is used for indexing. To learn more, see [Use Azure Video Indexer to auto identify spoken languages](language-identification-model.md) and the **Language Identification** section.-- **Customization (language model)** - Whether the language can be used when customizing language models in Video Indexer. To learn more, see [Customize a Language model in Azure Video Indexer](customize-language-model-overview.md)
+- **Language identification** - Whether the language can be automatically detected by Video Indexer when language identification is used for indexing. To learn more, see [Use Azure Video Indexer to auto identify spoken languages](language-identification-model.md) and the **Language Identification** section.
+- **Customization (language model)** - Whether the language can be used when customizing language models in Video Indexer. To learn more, see [Customize a language model in Azure Video Indexer](customize-language-model-overview.md).
+- **Pronunciation (language model)** - Whether the language can be used to create a pronunciation dataset as part of a custom speech model. To learn more, see [Customize a speech model with Azure Video Indexer](customize-speech-model-overview.md).
- **Website Translation** – Whether the language is supported for translation when using the [Azure Video Indexer website](https://aka.ms/vi-portal-link). Select the translated language in the language drop-down menu.
- :::image type="content" source="media/language-support/website-translation.png" alt-text="Screenshow showing a menu with download, English and views as menu items. A tooltip is shown as mouseover on the English item and says Translation is set to English." lightbox="media/language-support/website-translation.png":::
+ :::image type="content" source="media/language-support/website-translation.png" alt-text="Screenshot showing a menu with download, English and views as menu items. A tooltip is shown as mouseover on the English item and says Translation is set to English." lightbox="media/language-support/website-translation.png":::
The following insights are translated:
This section explains the Video Indexer language options and has a table of the
- Frame patterns (Only to Hebrew as of now) All other insights appear in English when using translation.- - **Website Language** - Whether the language can be selected for use on the [Azure Video Indexer website](https://aka.ms/vi-portal-link). Select the **Settings icon** then select the language in the **Language settings** dropdown.
- :::image type="content" source="media/language-support/website-language.jpg" alt-text="Screenshow showing a menu with user settings show them all toggled to on." lightbox="media/language-support/website-language.jpg":::
-
-| **Language** | **Code** | **Supported source language** | **Language identification** | **Customization (language model)** | **Website Translation** | **Website Language** |
-|:-|:-:|:--:|:-:|:-:|:--:|:--:|
-| Afrikaans | af-ZA | | | | ✔ | |
-| Arabic (Israel) | ar-IL | ✔ | | ✔ | | |
-| Arabic (Iraq) | ar-IQ | ✔ | ✔ | | | |
-| Arabic (Jordan) | ar-JO | ✔ | ✔ | ✔ | | |
-| Arabic (Kuwait) | ar-KW | ✔ | ✔ | ✔ | | |
-| Arabic (Lebanon) | ar-LB | ✔ | | ✔ | | |
-| Arabic (Oman) | ar-OM | ✔ | ✔ | ✔ | | |
-| Arabic (Palestinian Authority) | ar-PS | ✔ | | ✔ | | |
-| Arabic (Qatar) | ar-QA | ✔ | ✔ | ✔ | | |
-| Arabic (Saudi Arabia) | ar-SA | ✔ | ✔ | ✔ | | |
-| Arabic (United Arab Emirates) | ar-AE | ✔ | ✔ | ✔ | | |
-| Arabic Egypt | ar-EG | ✔ | ✔ | ✔ | ✔ | |
-| Arabic Modern Standard (Bahrain) | ar-BH | ✔ | ✔ | ✔ | | |
-| Arabic Syrian Arab Republic | ar-SY | ✔ | ✔ | ✔ | | |
-| Armenian | hy-AM | ✔ | | | | |
-| Bangla | bn-BD | | | | ✔ | |
-| Bosnian | bs-Latn | | | | ✔ | |
-| Bulgarian | bg-BG | ✔ | ✔ | | ✔ | |
-| Catalan | ca-ES | ✔ | ✔ | | ✔ | |
-| Chinese (Cantonese Traditional) | zh-HK | ✔ | ✔ | ✔ | ✔ | |
-| Chinese (Simplified) | zh-Hans | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Chinese (Simplified) | zh-CK | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Chinese (Traditional) | zh-Hant | | | | ✔ | |
-| Croatian | hr-HR | ✔ | ✔ | | ✔ | |
-| Czech | cs-CZ | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Danish | da-DK | ✔ | ✔ | ✔ | ✔ | |
-| Dutch | nl-NL | ✔ | ✔ | ✔ | ✔ | ✔ |
-| English Australia | en-AU | ✔ | ✔ | ✔ | ✔ | |
-| English United Kingdom | en-GB | ✔ | ✔ | ✔ | ✔ | |
-| English United States | en-US | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Estonian | et-EE | ✔ | ✔ | | ✔ | |
-| Fijian | en-FJ | | | | ✔ | |
-| Filipino | fil-PH | | | | ✔ | |
-| Finnish | fi-FI | ✔ | ✔ | ✔ | ✔ | |
-| French | fr-FR | ✔ | ✔ | ✔ | ✔ | |
-| French (Canada) | fr-CA | ✔ | ✔ | ✔ | ✔ | ✔ |
-| German | de-DE | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Greek | el-GR | ✔ | ✔ | | ✔ | |
-| Gujarati | gu-IN | ✔ | ✔ | | ✔ | |
-| Haitian | fr-HT | | | | ✔ | |
-| Hebrew | he-IL | ✔ | ✔ | ✔ | ✔ | |
-| Hindi | hi-IN | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Hungarian | hu-HU | | ✔ | | ✔ | ✔ |
-| Icelandic | is-IS | ✔ | | | | |
-| Indonesian | id-ID | | | | ✔ | |
-| Irish | ga-IE | ✔ | ✔ | | | |
-| Italian | it-IT | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Japanese | ja-JP | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Kannada | kn-IN | ✔ | ✔ | | | |
-| Kiswahili | sw-KE | | | | ✔ | |
-| Korean | ko-KR | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Latvian | lv-LV | ✔ | ✔ | | ✔ | |
-| Lithuanian | lt-LT | | | | ✔ | |
-| Malagasy | mg-MG | | | | ✔ | |
-| Malay | ms-MY | ✔ | | | ✔ | |
-| Malayalam | ml-IN | ✔ | ✔ | | | |
-| Maltese | mt-MT | | | | ✔ | |
-| Norwegian | nb-NO | ✔ | ✔ | ✔ | ✔ | |
-| Persian | fa-IR | ✔ | | ✔ | ✔ | |
-| Polish | pl-PL | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Portuguese | pt-BR | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Portuguese (Portugal) | pt-PT | ✔ | ✔ | ✔ | ✔ | |
-| Romanian | ro-RO | ✔ | ✔ | | ✔ | |
-| Russian | ru-RU | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Samoan | en-WS | | | | | |
-| Serbian (Cyrillic) | sr-Cyrl-RS | | | | ✔ | |
-| Serbian (Latin) | sr-Latn-RS | | | | ✔ | |
-| Slovak | sk-SK | ✔ | ✔ | | ✔ | |
-| Slovenian | sl-SI | ✔ | ✔ | | ✔ | |
-| Spanish | es-ES | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Spanish (Mexico) | es-MX | ✔ | ✔ | ✔ | ✔ | |
-| Swedish | sv-SE | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Tamil | ta-IN | ✔ | ✔ | | ✔ | |
-| Telugu | te-IN | ✔ | ✔ | | | |
-| Thai | th-TH | ✔ | ✔ | ✔ | ✔ | |
-| Tongan | to-TO | | | | ✔ | |
-| Turkish | tr-TR | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Ukrainian | uk-UA | ✔ | ✔ | | ✔ | |
-| Urdu | ur-PK | | | | ✔ | |
-| Vietnamese | vi-VN | ✔ | ✔ | | ✔ | |
+ :::image type="content" source="media/language-support/website-language.jpg" alt-text="Screenshot showing a menu with user settings, all toggled to on." lightbox="media/language-support/website-language.jpg":::
+
+| **Language** | **Code** | **Supported<br/>source language** | **Language<br/>identification** | **Customization<br/>(language model)** | **Pronunciation<br/>(language model)** | **Website<br/>Translation** | **Website<br/>Language** |
+|||||||||
+| Afrikaans | af-ZA | | | | | ✔ | |
+| Arabic (Israel) | ar-IL | ✔ | | ✔ | | | |
+| Arabic (Iraq) | ar-IQ | ✔ | ✔ | | | | |
+| Arabic (Jordan) | ar-JO | ✔ | ✔ | ✔ | | | |
+| Arabic (Kuwait) | ar-KW | ✔ | ✔ | ✔ | | | |
+| Arabic (Lebanon) | ar-LB | ✔ | | ✔ | | | |
+| Arabic (Oman) | ar-OM | ✔ | ✔ | ✔ | | | |
+| Arabic (Palestinian Authority) | ar-PS | ✔ | | ✔ | | | |
+| Arabic (Qatar) | ar-QA | ✔ | ✔ | ✔ | | | |
+| Arabic (Saudi Arabia) | ar-SA | ✔ | ✔ | ✔ | | | |
+| Arabic (United Arab Emirates) | ar-AE | ✔ | ✔ | ✔ | | | |
+| Arabic Egypt | ar-EG | ✔ | ✔ | ✔ | | ✔ | |
+| Arabic Modern Standard (Bahrain) | ar-BH | ✔ | ✔ | ✔ | | | |
+| Arabic Syrian Arab Republic | ar-SY | ✔ | ✔ | ✔ | | | |
+| Armenian | hy-AM | ✔ | | | | | |
+| Bangla | bn-BD | | | | | ✔ | |
+| Bosnian | bs-Latn | | | | | ✔ | |
+| Bulgarian | bg-BG | ✔ | ✔ | | | ✔ | |
+| Catalan | ca-ES | ✔ | ✔ | ✔ | ✔ | ✔ | |
+| Chinese (Cantonese Traditional) | zh-HK | ✔ | ✔ | ✔ | ✔ | ✔ | |
+| Chinese (Simplified) | zh-Hans | ✔ | ✔ | | | ✔ | ✔ |
+| Chinese (Simplified) | zh-CK | ✔ | ✔ | | | ✔ | ✔ |
+| Chinese (Traditional) | zh-Hant | | | | | ✔ | |
+| Croatian | hr-HR | ✔ | ✔ | | ✔ | ✔ | |
+| Czech | cs-CZ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Danish | da-DK | ✔ | ✔ | ✔ | ✔ | ✔ | |
+| Dutch | nl-NL | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
+| English Australia | en-AU | ✔ | ✔ | ✔ | ✔ | ✔ | |
+| English United Kingdom | en-GB | ✔ | ✔ | ✔ | ✔ | ✔ | |
+| English United States | en-US | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Estonian | et-EE | ✔ | ✔ | ✔ | ✔ | ✔ | |
+| Fijian | en-FJ | | | | | ✔ | |
+| Filipino | fil-PH | | | | | ✔ | |
+| Finnish | fi-FI | ✔ | ✔ | ✔ | ✔ | ✔ | |
+| French | fr-FR | ✔ | ✔ | ✔ | ✔ | ✔ | |
+| French (Canada) | fr-CA | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
+| German | de-DE | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Greek | el-GR | ✔ | ✔ | | | ✔ | |
+| Gujarati | gu-IN | ✔ | ✔ | | | ✔ | |
+| Haitian | fr-HT | | | | | ✔ | |
+| Hebrew | he-IL | ✔ | ✔ | ✔ | | ✔ | |
+| Hindi | hi-IN | ✔ | ✔ | ✔ | | ✔ | ✔ |
+| Hungarian | hu-HU | | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Icelandic | is-IS | ✔ | | | | | |
+| Indonesian | id-ID | | | ✔ | ✔ | ✔ | |
+| Irish | ga-IE | ✔ | ✔ | ✔ | ✔ | | |
+| Italian | it-IT | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Japanese | ja-JP | ✔ | ✔ | ✔ | | ✔ | ✔ |
+| Kannada | kn-IN | ✔ | ✔ | | | | |
+| Kiswahili | sw-KE | | | | | ✔ | |
+| Korean | ko-KR | ✔ | ✔ | ✔ | | ✔ | ✔ |
+| Latvian | lv-LV | ✔ | ✔ | ✔ | ✔ | ✔ | |
+| Lithuanian | lt-LT | | | ✔ | ✔ | ✔ | |
+| Malagasy | mg-MG | | | | | ✔ | |
+| Malay | ms-MY | ✔ | | | | ✔ | |
+| Malayalam | ml-IN | ✔ | ✔ | | | | |
+| Maltese | mt-MT | | | | | ✔ | |
+| Norwegian | nb-NO | ✔ | ✔ | ✔ | | ✔ | |
+| Persian | fa-IR | ✔ | | ✔ | | ✔ | |
+| Polish | pl-PL | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Portuguese | pt-BR | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Portuguese (Portugal) | pt-PT | ✔ | ✔ | ✔ | ✔ | ✔ | |
+| Romanian | ro-RO | ✔ | ✔ | ✔ | ✔ | ✔ | |
+| Russian | ru-RU | ✔ | ✔ | ✔ | | ✔ | ✔ |
+| Samoan | en-WS | | | | | | |
+| Serbian (Cyrillic) | sr-Cyrl-RS | | | | | ✔ | |
+| Serbian (Latin) | sr-Latn-RS | | | | | ✔ | |
+| Slovak | sk-SK | ✔ | ✔ | ✔ | ✔ | ✔ | |
+| Slovenian | sl-SI | ✔ | ✔ | ✔ | ✔ | ✔ | |
+| Spanish | es-ES | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Spanish (Mexico) | es-MX | ✔ | ✔ | ✔ | ✔ | ✔ | |
+| Swedish | sv-SE | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Tamil | ta-IN | ✔ | ✔ | | | ✔ | |
+| Telugu | te-IN | ✔ | ✔ | | | | |
+| Thai | th-TH | ✔ | ✔ | ✔ | | ✔ | |
+| Tongan | to-TO | | | | | ✔ | |
+| Turkish | tr-TR | ✔ | ✔ | ✔ | | ✔ | ✔ |
+| Ukrainian | uk-UA | ✔ | ✔ | | | ✔ | |
+| Urdu | ur-PK | | | | | ✔ | |
+| Vietnamese | vi-VN | ✔ | ✔ | | | ✔ | |
## Get supported languages through the API
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
Title: Azure Video Indexer release notes | Microsoft Docs
description: To stay up-to-date with the most recent developments, this article provides you with the latest updates on Azure Video Indexer. Previously updated : 02/07/2023 Last updated : 03/22/2023
To stay up-to-date with the most recent Azure Video Indexer developments, this a
[!INCLUDE [announcement](./includes/deprecation-announcement.md)]
+## March 2023
+
+### Support for storage behind firewall
+
+It is good practice to lock storage accounts and disable public access to enhance or comply with enterprise security policy. Video Indexer can now access storage accounts that aren't publicly accessible by using the [Azure Trusted Service](https://learn.microsoft.com/azure/storage/common/storage-network-security?tabs=azure-portal#trusted-access-based-on-a-managed-identity) exception with Managed Identities. You can read more about how to set it up in our [how-to](storage-behind-firewall.md).
+
+### New custom speech and pronunciation training
+
+Azure Video Indexer has added a new custom speech model experience. The experience includes the ability to use custom pronunciation datasets to improve recognition of mispronounced words, phrases, or names. The custom models can be used to improve the transcription quality of content with industry-specific terminology. To learn more, see [Customize speech model overview](customize-speech-model-overview.md).
+
+### Observed people quality improvements
+
+Observed people now supports people who are sitting, in addition to the existing support for people who are standing or walking. This improvement makes the observed people model more versatile and suitable for a wider range of use cases. We have also improved the model's re-identification and grouping algorithms by 50%. The model can now more accurately track and group people across multiple camera views.
+
+### Observed people indexing duration optimization
+
+We have optimized the memory usage of the observed people model, resulting in a 60% reduction in indexing duration when using the advanced video analysis preset. You can now process your video footage more efficiently and get results faster.
+ ## February 2023 ### Pricing
Starting February 1st, we're excited to announce a 40% price reduction on the
This change will be implemented automatically, and customers who already have Azure discounts will continue to receive them in addition to the new pricing.
-| | **Basic Audio Analysis** | **Standard Audio Analysis** | **Advanced Audio Analysis** | **Standard Video Analysis** | **Advanced Video Analysis** |
-|-- | | | | | |
-| Per input minute | $0.0126 | $0.024 | $0.04 | $0.09 | $0.15 |
+|**Charge** | **Basic Audio Analysis** | **Standard Audio Analysis** | **Advanced Audio Analysis** | **Standard Video Analysis** | **Advanced Video Analysis** |
+| | | - | | | |
+| Per input minute | $0.0126 | $0.024 | $0.04 | $0.09 | $0.15 |
### Network Service Tag
azure-video-indexer Storage Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/storage-behind-firewall.md
When you create a Video Indexer account, you must associate it with a Media Serv
If you want to use a firewall to secure your storage account and enable trusted storage, the preferred option is [Managed Identities](/azure/media-services/latest/concept-managed-identities) authentication, which allows Video Indexer access through the firewall. It allows Video Indexer and Media Services to access the storage account that has been configured without needing public access for [trusted storage access.](../storage/common/storage-network-security.md?tabs=azure-portal#grant-access-to-trusted-azure-services)
-[!IMPORTANT] When you lock your storage accounts without public access be aware that the client device you're using to download the video source file using the Video Indexer portal will be the source ip that the storage account will see and allow/deny depending on the network configuration of your storage account. For instance, if I'm accessing the Video Indexer portal from my home network and I want to download the video source file a sas url to the storage account is created, my device will initiate the request and as a consequence the storage account will see my home ip as source ip. If you did not add exception for this ip you will not be able to access the SAS url to the source video. Work with your network/storage administrator on a network strategy i.e. use your corporate network, VPN or Private Link.
+> [!IMPORTANT]
+> When you lock your storage accounts without public access, be aware that the client device you're using to download the video source file through the Video Indexer portal is the source IP that the storage account sees and allows or denies, depending on the network configuration of your storage account. For instance, if you access the Video Indexer portal from your home network and want to download the video source file, a SAS URL to the storage account is created and your device initiates the request. As a consequence, the storage account sees your home IP as the source IP. If you didn't add an exception for this IP, you can't access the SAS URL to the source video. Work with your network or storage administrator on a network strategy, for example, using your corporate network, VPN, or Private Link.
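+For reference, denying public network access while keeping the trusted services exception might look like the following Azure CLI sketch; the account and resource group names are placeholders:
+
+```azurecli
+# Deny public network access but allow trusted Azure services through the firewall.
+az storage account update \
+  --name <storage-account-name> \
+  --resource-group <resource-group-name> \
+  --default-action Deny \
+  --bypass AzureServices
+```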
Follow these steps to enable Managed Identity for Media Services and Storage and then lock your storage account. It's assumed that you already created a Video Indexer account and associated with a Media Services and Storage account.
backup Blob Backup Configure Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-backup-configure-manage.md
For more information about the supported scenarios, limitations, and availabilit
- Vaulted backup of blobs is a managed offsite backup solution that transfers data to the backup vault and retains it as per the retention configured in the backup policy. You can retain data for a maximum of *10 years*. - Currently, you can use the vaulted backup solution to restore data to a different storage account only. While performing restores, ensure that the target storage account doesn't contain any *containers* with the same name as those backed up in a recovery point. If any conflicts arise due to containers having the same name, the restore operation fails.
+- Ensure the storage accounts that need to be backed up have cross-tenant replication enabled. To check, navigate to the storage account > Object replication > Advanced settings, and make sure that the checkbox is selected. You can also check the setting from the command line, as shown in the sketch below.
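+
+  As a sketch, the same check from the Azure CLI (the account and resource group names are placeholders):
+
+  ```azurecli
+  # Returns true when cross-tenant replication is enabled on the account.
+  az storage account show \
+    --name <storage-account-name> \
+    --resource-group <resource-group-name> \
+    --query allowCrossTenantReplication
+  ```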
For more information about the supported scenarios, limitations, and availability, See the [support matrix](blob-backup-support-matrix.md).
backup Blob Backup Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-backup-overview.md
For information about the limitations of the current solution, see the [support
Vaulted backup (preview) uses the platform capability of object replication to copy data to the Backup vault. Object replication asynchronously copies block blobs between a source storage account and a destination storage account. The contents of the blob, any versions associated with the blob, and the blob's metadata and properties are all copied from the source container to the destination container.
-When you configure protection, Azure Backup allocates a destination storage account (Backup vault's storage account managed by Azure Backup) and enables object replication policy at container level on both destination and source storage account. When a backup job is triggered, the Azure Backup service creates a recovery point marker on the source storage account and polls the destination account for the recovery point marker replication. When the data transfer completes, the recovery point marker is replicated. Once the replication point marker is present on the destination, a recovery point is created.
+When you configure protection, Azure Backup allocates a destination storage account (Backup vault's storage account managed by Azure Backup) and enables object replication policy at container level on both destination and source storage account. When a backup job is triggered, the Azure Backup service creates a recovery point marker on the source storage account and polls the destination account for the recovery point marker replication. Once the replication point marker is present on the destination, a recovery point is created.
For information about the limitations of the current solution, see the [support matrix](blob-backup-support-matrix.md).
You won't incur backup storage charges or instance fees during the preview. Howe
## Next steps -- [Configure and manage Azure Blobs backup](blob-backup-configure-manage.md)
+- [Configure and manage Azure Blobs backup](blob-backup-configure-manage.md)
cdn Cdn Custom Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-custom-ssl.md
Register Azure CDN as an app in your Azure Active Directory via PowerShell.
`New-AzADServicePrincipal -ApplicationId "205478c0-bd83-4e1b-a9d6-db63a3e1e1c8" -Role Contributor` > [!NOTE]
- > * **205478c0-bd83-4e1b-a9d6-db63a3e1e1c8** is the service principal for **Microsoft.AzureFrontDoor-Cdn**.
+ > * `205478c0-bd83-4e1b-a9d6-db63a3e1e1c8` is the service principal for `Microsoft.AzureFrontDoor-Cdn`.
> * You need to have the **Global Administrator** role to run this command.
+ > * The service principal name was changed from `Microsoft.Azure.Cdn` to `Microsoft.AzureFrontDoor-Cdn`.
```bash New-AzADServicePrincipal -ApplicationId "205478c0-bd83-4e1b-a9d6-db63a3e1e1c8" -Role Contributor
cognitive-services Image Retrieval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/image-retrieval.md
The Image Retrieval APIs enable the _vectorization_ of images and text queries.
You can try out the Image Retrieval feature quickly and easily in your browser using Vision Studio.
+> [!IMPORTANT]
+> The Vision Studio experience is limited to 500 images. To use a larger image set, create your own search application using the APIs in this guide.
+ > [!div class="nextstepaction"] > [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
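+
+For orientation, a vectorization call follows the same REST pattern as the other Computer Vision APIs. The sketch below assumes the preview `retrieval:vectorizeImage` route and API version; verify both against the current API reference before relying on them:
+
+```bash
+curl.exe -v -X POST "https://<endpoint>/computervision/retrieval:vectorizeImage?api-version=2023-02-01-preview" \
+  -H "Content-Type: application/json" \
+  -H "Ocp-Apim-Subscription-Key: <subscription-key>" \
+  --data-ascii "{'url':'<image-url>'}"
+```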
cognitive-services Model Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/model-customization.md
The `datasets/<dataset-name>` API lets you create a new dataset object that refe
1. Replace `<endpoint>` with your Computer Vision endpoint. 1. Replace `<dataset-name>` with a name for your dataset. 1. Replace `<subscription-key>` with your Computer Vision key.
-1. In the request body, set `"annotationKind"` to either `"MultiClassClassification"` or `"ObjectDetection"`, depending on your project.
+1. In the request body, set `"annotationKind"` to either `"imageClassification"` or `"imageObjectDetection"`, depending on your project.
1. In the request body, set the `"annotationFileUris"` array to an array of string(s) that show the URI location(s) of your COCO file(s) in blob storage. ```bash curl.exe -v -X PUT "https://<endpoint>/computervision/datasets/<dataset-name>?api-version=2023-02-01-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-ascii " {
-'annotationKind':'MultiClassClassification',
+'annotationKind':'imageClassification',
'annotationFileUris':['<URI>'] }" ```
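To confirm the dataset was registered, you can issue a GET against the same route. This mirrors the PUT above and is a sketch only; check the API reference for the exact response shape:

```bash
curl.exe -v -X GET "https://<endpoint>/computervision/datasets/<dataset-name>?api-version=2023-02-01-preview" \
  -H "Ocp-Apim-Subscription-Key: <subscription-key>"
```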
cognitive-services Authoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/authoring.md
curl -X POST -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: applica
}' -i 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/:import?api-version=2021-10-01&format=tsv' ```
-A successful call to deploy a project results in an `Operation-Location` header being returned, which can be used to check the status of the import job. In many of our examples, we haven't needed to look at the response headers and thus haven't been displaying them. To retrieve the response headers our curl command uses `-i`. Without this additional parameter prior to the endpoint address, the response to this command would appear empty as if no response occurred.
+A successful call to import a project results in an `Operation-Location` header being returned, which can be used to check the status of the import job. In many of our examples, we haven't needed to look at the response headers and thus haven't been displaying them. To retrieve the response headers our curl command uses `-i`. Without this additional parameter prior to the endpoint address, the response to this command would appear empty as if no response occurred.
### Example response
apim-request-id: 92225e03-e83f-4c7f-b35a-223b1b0f29dd
strict-transport-security: max-age=31536000; includeSubDomains; preload x-content-type-options: nosniff date: Wed, 24 Nov 2021 04:02:56 GMT
-```
+```
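+
+To poll the import job, issue a GET against the URL returned in the `Operation-Location` header. The job path below is illustrative; use the header value verbatim:
+
+```bash
+curl -X GET -H "Ocp-Apim-Subscription-Key: {API-KEY}" \
+  -i 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/import/jobs/{JOB-ID}?api-version=2021-10-01'
+```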
communication-services Bring Your Own Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/bring-your-own-storage.md
- Title: Azure Communication Services BYOS overview-
-description: Learn about the Azure Communication Services BYOS.
----- Previously updated : 02/25/2021----
-# Bring your own storage (BYOS) overview
--
-In many applications, end-users may want to store their Call Recording files long-term. Some of the common scenarios are compliance, quality assurance, assessment, post-call analysis, training, and coaching. Now with the BYOS (bring your own storage) being available, end-users will have an option to store their files long-term and manage the files in a way they need. The end-user will be responsible for legal regulations about storing the data. BYOS simplifies downloading of the files from Azure Communication Services and minimizes the number of support requests if the customer was unable to download the recording in 48 hours. Data will be transferred securely from Microsoft Azure blob storage to a customer Azure blob storage.
-Here are a few examples:
-- Contact Center Recording-- Compliance Recording Scenario-- Healthcare Virtual Visits Scenario-- Conference/meeting recordings and so on-
-BYOS can be easily integrated into any application regardless of the programming language. When creating a call recording resource in Azure Portal, enable the BYOS option and provide the URL to the storage. This simple experience allows developers to meet their needs, scale, and avoid investing time and resources into designing and maintaining a custom solution.
-
-![Bring your own storage concept diagram](../media/byos-diagramm.png)
-
-1. Contoso enables MI (managed identities) on an Azure Storage account.
-2. Contoso creates Azure Communication Services resource.
-
-![Bring your own storage resource page](../media/byos-link-storage.png)
-
-3. Contoso enables BYOS on the Azure Communication Services resource and specifies the URL to link with the storage.
-4. After the resource has been created Contoso will see linked storage and will be able to change settings later in time
-
-![Bring your own storage add storage page](../media/byos-add-storage.png)
-
-5. If Contoso has built an application with Call Recording, they can record a meeting. Once the recording file is available, Contoso will receive an event from Azure Communication Services that a file is copied over to their storage.
-6. After the notification has been received Contoso will see the file
-6. After the notification has been received Contoso will see the file located in the storage they have specified.
-7. Contoso has successfully linked their storage with Azure Communication Services!
-
-![Bring your own storage success page](../media/byos-storage-created.png)
-
-## Feature highlights
--- HIPAA complaint-
-## Next steps
-- TBD
communication-services Bring Your Own Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/call-recording/bring-your-own-storage.md
+
+ Title: Azure Communication Services BYOS overview
+
+description: Learn about the Azure Communication Services BYOS.
++++ Last updated : 03/16/2023++++
+# Bring your own storage (BYOS) overview
+++
+Bring Your Own Storage (BYOS) for Call Recording allows you to specify an Azure blob storage account for storing call recording files. BYOS enables businesses to store their data in a way that meets their compliance requirements and business needs. For example, end-users could customize their own rules and access to the data, enabling them to store or delete content whenever they need it. BYOS provides a simple and straightforward solution that eliminates the need for developers to invest time and resources in downloading and exporting files.
+
+The same Azure Communication Services Call Recording APIs are used to export recordings to your Azure Blob Storage container. When starting recording for a call, specify the container path where the recording needs to be exported. Upon recording completion, Azure Communication Services automatically fetches and uploads your recording to your storage.
+
+![Diagram showing a call recording being automatically exported to storage container](../media/byos-concept.png)
+
+## Azure Managed Identities
+
+BYOS uses [Azure Managed Identities](../../../../active-directory/managed-identities-azure-resources/overview.md) to access user-owned resources securely. Azure Managed Identities provides an identity for the application to use when it needs to access Azure resources, eliminating the need for developers to manage credentials.
++
+## Known issues
+
+- Azure Communication Services also stores your files in built-in storage for 48 hours, even if the export with BYOS is successful.
+- Occasionally, recording files are duplicated during the export process when using BYOS. Make sure you delete the duplicated files to avoid extra storage costs in your storage account.
++
+## Next steps
+For more information, see the following articles:
+- To learn more about BYOS, check out the [BYOS Quickstart](../../../quickstarts/call-automation/call-recording/bring-your-own-storage.md).
+- To learn more about Call Recording, check out the [Call Recording Quickstart](../../../quickstarts/voice-video-calling/get-started-call-recording.md).
+- Learn more about [Call Automation](../../../quickstarts/call-automation/callflows-for-customer-interactions.md).
+
communication-services Sub Eligibility Number Capability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/sub-eligibility-number-capability.md
The following tables summarize current availability:
| USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* | | USA & Puerto Rico | Local | - | - | General Availability | General Availability\* | | USA | Short-Codes | General Availability | General Availability | - | - |
+| Canada | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
+| Canada | Local | - | - | General Availability | General Availability\* |
+| UK | Toll-Free | - | - | General Availability | General Availability\* |
+| UK | Local | - | - | General Availability | General Availability\* |
\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
The following tables summarize current availability:
| : | :-- | : | : | :- | : | | Italy | Toll-Free** | - | - | General Availability | General Availability\* | | Italy | Local** | - | - | General Availability | General Availability\* |
+| USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
+| USA & Puerto Rico | Local | - | - | General Availability | General Availability\* |
+| Canada | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
+| Canada | Local | - | - | General Availability | General Availability\* |
+| UK | Toll-Free | - | - | General Availability | General Availability\* |
+| UK | Local | - | - | General Availability | General Availability\* |
\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
The following tables summarize current availability:
| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls | | :- | :-- | :- | :- | :- | : |
+| Germany | Toll-Free | - | - | Public Preview | Public Preview\* |
| Germany | Local | - | - | Public Preview | Public Preview\* | \* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
communication-services Pstn Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pstn-pricing.md
All prices shown below are in USD.
|Number type |Monthly fee | |--|--| |Geographic |USD 0.80/mo |
+|Toll-Free |USD 6.00/mo |
### Usage charges |Number type |To make calls* |To receive calls| |--|--|| |Geographic |Starting at USD 0.0150/min |USD 0.0100/min |
+|Toll-free |Starting at USD 0.0150/min | USD 0.1750/min |
\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
communication-services Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/ui-library-sdk/troubleshooting.md
When troubleshooting happens for voice or video calls, you may be asked to provi
[!INCLUDE [Troubloshooting over the iOS UI library](./includes/troubleshooting/ios.md)] ::: zone-end
-User may find Call ID via the action bar on the bottom of the call screen. See more [Troubleshooting guide](../../concepts/ui-library/ui-library-use-cases.md?branch=pr-en-us-217148&pivots=platform-mobile#troubleshooting-guide)
+Users can find the Call ID via the action bar at the bottom of the call screen. For more information, see the [Troubleshooting guide](../../concepts/ui-library/ui-library-use-cases.md?&pivots=platform-mobile#troubleshooting-guide)
## Next steps
communication-services Bring Your Own Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/call-automation/call-recording/bring-your-own-storage.md
++
+ Title: Azure Communication Services Call Recording Bring Your Own Storage
+
+description: Private Preview quickstart for Bring your own storage
++ Last updated : 03/17/2023+++
+zone_pivot_groups: acs-csharp-java
++
+# Call recording: Bring your own storage quickstart
++
+This quickstart gets you started with BYOS (Bring your own storage) for Call Recording. To start using BYOS, make sure you're familiar with the [Call Recording APIs](../../voice-video-calling/get-started-call-recording.md).
+
+## Prerequisite: Setting up Managed Identity and RBAC role assignments
+
+### 1. Enable system assigned managed identity for Azure Communication Services
+
+![Diagram showing a communication service resource with managed identity disabled](../media/byos-managed-identity-1.png)
+
+1. Open your Azure Communication Services resource. Navigate to *Identity* on the left.
+2. System Assigned Managed Identity is disabled by default. Enable it and click *Save*.
+3. Once completed, you're able to see the Object principal ID of the newly created identity.
+
+![Diagram showing a communication service resource with managed identity enabled](../media/byos-managed-identity-2.png)
+
+4. Now that the identity has been successfully created, click on *Azure role assignments* to start adding role assignments.
+
+### 2. Add role assignment
+
+1. Click on *"Add role assignment"*
+
+![Diagram showing a communication service resource managed identity adding role assignment](../media/role-assignment-1.png)
+
+2. On the *"Add role assignment"* panel, select the following values
+ 1. Scope: **Storage**
+ 2. Subscription: **Choose your subscription**
+ 3. Resource: **Choose your storage account**
+ 4. Role: **Azure Communication Services needs *"Storage Blob Data Contributor"* to be able to write to your storage account.**
+
+![Diagram showing a communication service resource managed identity adding role assignment details](../media/role-assignment-2.png)
+
+3. Click on *"Save"*.
+4. Once completed, you see the newly added role assignment in the *"Azure role assignment"* window. You can also create the same role assignment from the command line, as shown after these steps.
+
+![Diagram showing a communication service resource managed identity role assignment success](../media/role-assignment-3.png)
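+
+If you prefer scripting the assignment, the equivalent Azure CLI call is sketched below with placeholder IDs; the principal ID is the one shown on the *Identity* page of your Communication Services resource:
+
+```bash
+# Grant the Communication Services managed identity write access to the storage account.
+az role assignment create \
+  --assignee <acs-managed-identity-principal-id> \
+  --role "Storage Blob Data Contributor" \
+  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
+```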
+
+## Start recording session with external storage specified
+
+Use the server call ID received during initiation of the call.
+++
+### Notification on successful export
+
+Use an [Azure Event Grid](../../../../event-grid/overview.md) web hook, or other triggered action, to notify your services when the recorded media is ready and has been exported to the external storage location.
+
+Refer to this example of the event schema.
+
+```JSON
+{
+ "id": "string", // Unique guid for event
+ "topic": "string", // /subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}
+ "subject": "string", // /recording/call/{call-id}/serverCallId/{serverCallId}
+ "data": {
+ "storageType": "string", // acsstorage, blobstorage etc.
+ "recordingId": "string", // unique id for recording
+ "recordingStorageInfo": {
+ "recordingChunks": [
+ {
+ "documentId": "string", // Document id for for the recording chunk
+ "contentLocation": "string", //Azure Communication Services URL where the content is located
+ "metadataLocation": "string", // Azure Communication Services URL where the metadata for this chunk is located
+ "deleteLocation": "string", // Azure Communication Services URL to use to delete all content, including recording and metadata.
+ "index": "int", // Index providing ordering for this chunk in the entire recording
+ "endReason": "string", // Reason for chunk ending: "SessionEnded",ΓÇ»"ChunkMaximumSizeExceededΓÇ¥, etc.
+ }
+ ]
+ },
+ "recordingStartTime": "string", // ISO 8601 date time for the start of the recording
+ "recordingDurationMs": "int", // Duration of recording in milliseconds
+ "sessionEndReason": "string" // Reason for call ending: "CallEnded",ΓÇ»"InitiatorLeftΓÇ¥, etc.
+ },
+ "eventType": "string", // "Microsoft.Communication.RecordingFileStatusUpdated"
+ "dataVersion": "string", // "1.0"
+ "metadataVersion": "string", // "1"
+ "eventTime": "string" // ISO 8601 date time for when the event was created
+}
+```
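+
+As one way to wire up the notification, you might create an Event Grid subscription for the `Microsoft.Communication.RecordingFileStatusUpdated` event with the Azure CLI. This is a sketch; the subscription name, resource ID, and webhook endpoint are placeholders:
+
+```bash
+az eventgrid event-subscription create \
+  --name recording-exported \
+  --source-resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Communication/communicationServices/<acs-resource-name>" \
+  --endpoint "https://<your-webhook-host>/api/recording-events" \
+  --included-event-types Microsoft.Communication.RecordingFileStatusUpdated
+```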
+
+## Next steps
+
+For more information, see the following articles:
+
+- Download our [Java](https://github.com/Azure-Samples/communication-services-java-quickstarts/tree/main/ServerRecording) and [.NET](https://github.com/Azure-Samples/communication-services-dotnet-quickstarts/tree/main/ServerRecording) call recording sample apps
+- Learn more about [Call Recording](../../../concepts/voice-video-calling/call-recording.md)
+- Learn more about [Call Automation](../../../concepts/call-automation/call-automation.md)
+
container-apps Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/managed-identity.md
For more code examples of the Azure Identity client library for Java, see [Azure
# [PowerShell](#tab/powershell)
-Use the following script to retrieve a token from the local endpoint by specifying a resource URI of an Azure service. Replace the place holder with the resource URI to obtain the token.
+Use the following script to retrieve a token from the local endpoint by specifying a resource URI of an Azure service. Replace the placeholder with the resource URI to obtain the token.
```powershell $resourceURI = "https://<AAD-resource-URI>"
$accessToken = $tokenResponse.access_token
A raw HTTP GET request looks like the following example.
-X-IDENTITY-HEADER contains the GUID that is stored in the IDENTITY_HEADER environment variable.
+Obtain the token endpoint URL from the `IDENTITY_ENDPOINT` environment variable. `x-identity-header` contains the GUID that is stored in the `IDENTITY_HEADER` environment variable.
```http GET http://localhost:42356/msi/token?resource=https://vault.azure.net&api-version=2019-08-01 HTTP/1.1
-X-IDENTITY-HEADER: 853b9a84-5bfa-4b22-a3f3-0b9a43d9ad8a
+x-identity-header: 853b9a84-5bfa-4b22-a3f3-0b9a43d9ad8a
``` A response might look like this example:
This response is the same as the [response for the Azure AD service-to-service a
### REST endpoint reference
-> [!NOTE]
-> An older version of this endpoint, using the "2017-09-01" API version, used the `secret` header instead of `X-IDENTITY-HEADER` and only accepted the `clientid` property for user-assigned. It also returned the `expires_on` in a timestamp format. `MSI_ENDPOINT` can be used as an alias for `IDENTITY_ENDPOINT`, and `MSI_SECRET` can be used as an alias for `IDENTITY_HEADER`. This version of the protocol is currently required for Linux Consumption hosting plans.
- A container app with a managed identity exposes the identity endpoint by defining two environment variables: -- IDENTITY_ENDPOINT - local URL from which your container app can request tokens.-- IDENTITY_HEADER - a header used to help mitigate server-side request forgery (SSRF) attacks. The value is rotated by the platform.
+- `IDENTITY_ENDPOINT` - local URL from which your container app can request tokens.
+- `IDENTITY_HEADER` - a header used to help mitigate server-side request forgery (SSRF) attacks. The value is rotated by the platform.
To get a token for a resource, make an HTTP GET request to the endpoint, including the following parameters:
To get a token for a resource, make an HTTP GET request to the endpoint, includi
> [!IMPORTANT] > If you are attempting to obtain tokens for user-assigned identities, you must include one of the optional properties. Otherwise the token service will attempt to obtain a token for a system-assigned identity, which may or may not exist.
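From inside the container app, the same request shown in the raw HTTP example above can be made with curl, since both values are exposed as environment variables:

```bash
# Request a token for Azure Key Vault using the platform-injected variables.
curl -s -H "x-identity-header: $IDENTITY_HEADER" \
  "$IDENTITY_ENDPOINT?resource=https://vault.azure.net&api-version=2019-08-01"
```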
-For more information on the REST endpoint, see [REST endpoint reference](#rest-endpoint-reference).
- ## View managed identities
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-introduction.md
Analytical store partitioning is completely independent of partitioning in
* **Data encryption with customer-managed keys** - You can seamlessly encrypt the data across transactional and analytical stores using the same customer-managed keys in an automatic and transparent manner. Azure Synapse Link only supports configuring customer-managed keys using your Azure Cosmos DB account's managed identity. You must configure your account's managed identity in your Azure Key Vault access policy before [enabling Azure Synapse Link](configure-synapse-link.md#enable-synapse-link) on your account. To learn more, see how to [Configure customer-managed keys using Azure Cosmos DB accounts' managed identities](how-to-setup-cmk.md#using-managed-identity) article.
+> [!NOTE]
+> If you change your database account from First Party to System or User Assigned Identity, and enable Azure Synapse Link in your database account, you won't be able to return to First Party identity, since you can't disable Synapse Link from your database account.
+ ## Support for multiple Azure Synapse Analytics runtimes The analytical store is optimized to provide scalability, elasticity, and performance for analytical workloads without any dependency on the compute run-times. The storage technology is self-managed to optimize your analytics workloads without manual efforts.
cosmos-db How To Configure Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-configure-capabilities.md
Capabilities are features that can be added or removed to your API for MongoDB a
az cosmosdb update \ --resource-group <azure_resource_group> \ --name <azure_cosmos_db_account_name> \
- --capabilities EnableMongo, DisableRateLimitingResponses
+ --capabilities EnableMongo DisableRateLimitingResponses
```
+
+ > [!IMPORTANT]
+ > The list of capabilities must always specify every capability you wish to enable, including any capabilities already enabled for the account. In this example, the `EnableMongo` capability was already enabled, so both the `EnableMongo` and `DisableRateLimitingResponses` capabilities must be specified.
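+
+To see which capabilities are currently enabled on the account before you run the update, you can list them from the account's `capabilities` property:
+
+```bash
+az cosmosdb show \
+  --resource-group <azure_resource_group> \
+  --name <azure_cosmos_db_account_name> \
+  --query "capabilities[].name" \
+  --output tsv
+```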
> [!TIP] > If you're using PowerShell and receive an error using the command above, try using a PowerShell array instead to list the capabilities:
cosmos-db Linq To Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/linq-to-sql.md
Title: LINQ to SQL translation in Azure Cosmos DB
+ Title: LINQ to SQL translation
+ description: Learn the LINQ operators supported and how the LINQ queries are mapped to SQL queries in Azure Cosmos DB. ++ - Previously updated : 08/06/2021-- Last updated : 03/22/2023+
-# LINQ to SQL translation
+
+# LINQ to SQL translation in Azure Cosmos DB for NoSQL
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
-The Azure Cosmos DB query provider performs a best effort mapping from a LINQ query into an Azure Cosmos DB SQL query. If you want to get the SQL query that is translated from LINQ, use the `ToString()` method on the generated `IQueryable`object. The following description assumes a basic familiarity with [LINQ](/dotnet/csharp/programming-guide/concepts/linq/introduction-to-linq-queries). In addition to LINQ, Azure Cosmos DB also supports [Entity Framework Core](/ef/core/providers/cosmos/?tabs=dotnet-core-cli) which works with API for NoSQL.
+The Azure Cosmos DB query provider performs a best effort mapping from a LINQ query into an Azure Cosmos DB SQL query. If you want to get the SQL query that is translated from LINQ, use the `ToString()` method on the generated `IQueryable` object. The following description assumes a basic familiarity with [LINQ](/dotnet/csharp/programming-guide/concepts/linq/introduction-to-linq-queries). In addition to LINQ, Azure Cosmos DB also supports [Entity Framework Core](/ef/core/providers/cosmos/?tabs=dotnet-core-cli), which works with API for NoSQL.
> [!NOTE]
-> We recommend using the latest [.NET SDK version](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/)
+> We recommend using the latest [.NET SDK (`Microsoft.Azure.Cosmos`) version](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/)
-The query provider type system supports only the JSON primitive types: numeric, Boolean, string, and null.
+The query provider type system supports only the JSON primitive types: `numeric`, `Boolean`, `string`, and `null`.
The query provider supports the following scalar expressions:
The query provider supports the following scalar expressions:
- Property/array index expressions that refer to the property of an object or an array element. For example:
- ```
+ ```csharp
family.Id; family.children[0].familyName; family.children[0].grade;
- family.children[n].grade; //n is an int variable
- ```
+ ```
+
+ ```csharp
+ int n = 1;
+
+ family.children[n].grade;
+ ```
- Arithmetic expressions, including common arithmetic expressions on numerical and Boolean values. For the complete list, see the [Azure Cosmos DB SQL specification](aggregate-functions.md).
- ```
+ ```csharp
2 * family.children[0].grade; x + y;
- ```
+ ```
- String comparison expressions, which include comparing a string value to some constant string value.
- ```
- mother.familyName == "Wakefield";
- child.givenName == s; //s is a string variable
- ```
+ ```csharp
+ mother.familyName.StringEquals("Wakefield");
+ ```
+
+ ```csharp
+ string s = "Rob";
+ string e = "in";
+ string c = "obi";
+
+ child.givenName.StartsWith(s);
+ child.givenName.EndsWith(e);
+ child.givenName.Contains(c);
+ ```
- Object/array creation expressions, which return an object of compound value type or anonymous type, or an array of such objects. You can nest these values.
- ```
+ ```csharp
new Parent { familyName = "Wakefield", givenName = "Robin" }; new { first = 1, second = 2 }; //an anonymous type with two fields new int[] { 3, child.grade, 5 };
- ```
+ ```
## Using LINQ You can create a LINQ query with `GetItemLinqQueryable`. This example shows LINQ query generation and asynchronous execution with a `FeedIterator`: ```csharp
-using (FeedIterator<Book> setIterator = container.GetItemLinqQueryable<Book>()
- .Where(b => b.Title == "War and Peace")
- .ToFeedIterator<Book>())
- {
- //Asynchronous query execution
- while (setIterator.HasMoreResults)
- {
- foreach(var item in await setIterator.ReadNextAsync()){
- {
- Console.WriteLine(item.cost);
- }
- }
- }
- }
+using FeedIterator<Book> setIterator = container.GetItemLinqQueryable<Book>()
+    .Where(b => b.Title == "War and Peace")
+    .ToFeedIterator<Book>();
+
+//Asynchronous query execution
+while (setIterator.HasMoreResults)
+{
+    foreach (var item in await setIterator.ReadNextAsync())
+    {
+        Console.WriteLine(item.cost);
+    }
+}
```
-## <a id="SupportedLinqOperators"></a>Supported LINQ operators
+## Supported LINQ operators
The LINQ provider included with the SQL .NET SDK supports the following operators:
The LINQ provider included with the SQL .NET SDK supports the following operator
- **SelectMany**: Allows unwinding of arrays to the [JOIN](join.md) clause. Use to chain or nest expressions to filter on array elements. - **OrderBy** and **OrderByDescending**: Translate to [ORDER BY](order-by.md) with ASC or DESC. - **Count**, **Sum**, **Min**, **Max**, and **Average** operators for [aggregation](aggregate-functions.md), and their async equivalents **CountAsync**, **SumAsync**, **MinAsync**, **MaxAsync**, and **AverageAsync**.-- **CompareTo**: Translates to range comparisons. Commonly used for strings, since they're not comparable in .NET.
+- **CompareTo**: Translates to range comparisons. This operator is commonly used for strings, since they're not comparable in .NET.
- **Skip** and **Take**: Translates to [OFFSET and LIMIT](offset-limit.md) for limiting results from a query and doing pagination. - **Math functions**: Supports translation from .NET `Abs`, `Acos`, `Asin`, `Atan`, `Ceiling`, `Cos`, `Exp`, `Floor`, `Log`, `Log10`, `Pow`, `Round`, `Sign`, `Sin`, `Sqrt`, `Tan`, and `Truncate` to the equivalent [built-in mathematical functions](mathematical-functions.md). - **String functions**: Supports translation from .NET `Concat`, `Contains`, `Count`, `EndsWith`,`IndexOf`, `Replace`, `Reverse`, `StartsWith`, `SubString`, `ToLower`, `ToUpper`, `TrimEnd`, and `TrimStart` to the equivalent [built-in string functions](string-functions.md).
The syntax is `input.Select(x => f(x))`, where `f` is a scalar expression. The `
- **LINQ lambda expression**
- ```csharp
- input.Select(family => family.parents[0].familyName);
- ```
+ ```csharp
+ input.Select(family => family.parents[0].familyName);
+ ```
- **SQL**
- ```sql
- SELECT VALUE f.parents[0].familyName
- FROM Families f
+ ```sql
+ SELECT VALUE f.parents[0].familyName
+ FROM Families f
``` **Select operator, example 2:** - **LINQ lambda expression**
- ```csharp
- input.Select(family => family.children[0].grade + c); // c is an int variable
- ```
+ ```csharp
+ input.Select(family => family.children[0].grade + c); // c is an int variable
+ ```
- **SQL**
- ```sql
- SELECT VALUE f.children[0].grade + c
- FROM Families f
- ```
+ ```sql
+ SELECT VALUE f.children[0].grade + c
+ FROM Families f
+ ```
**Select operator, example 3:** - **LINQ lambda expression**
- ```csharp
+ ```csharp
input.Select(family => new { name = family.children[0].familyName, grade = family.children[0].grade + 3 });
- ```
+ ```
- **SQL**
- ```sql
- SELECT VALUE {"name":f.children[0].familyName,
- "grade": f.children[0].grade + 3 }
- FROM Families f
- ```
+ ```sql
+ SELECT VALUE {
+ "name":f.children[0].familyName,
+ "grade": f.children[0].grade + 3
+ }
+ FROM Families f
+ ```
### SelectMany operator
The syntax is `input.SelectMany(x => f(x))`, where `f` is a scalar expression th
- **LINQ lambda expression**
- ```csharp
- input.SelectMany(family => family.children);
- ```
+ ```csharp
+ input.SelectMany(family => family.children);
+ ```
- **SQL**
- ```sql
- SELECT VALUE child
- FROM child IN Families.children
- ```
+ ```sql
+ SELECT VALUE child
+ FROM child IN Families.children
+ ```
### Where operator
The syntax is `input.Where(x => f(x))`, where `f` is a scalar expression, which
- **LINQ lambda expression**
- ```csharp
- input.Where(family=> family.parents[0].familyName == "Wakefield");
- ```
+ ```csharp
+ input.Where(family=> family.parents[0].familyName == "Wakefield");
+ ```
- **SQL**
- ```sql
- SELECT *
- FROM Families f
- WHERE f.parents[0].familyName = "Wakefield"
- ```
+ ```sql
+ SELECT *
+ FROM Families f
+ WHERE f.parents[0].familyName = "Wakefield"
+ ```
**Where operator, example 2:** - **LINQ lambda expression**
- ```csharp
- input.Where(
- family => family.parents[0].familyName == "Wakefield" &&
- family.children[0].grade < 3);
- ```
+ ```csharp
+ input.Where(
+ family => family.parents[0].familyName == "Wakefield" &&
+ family.children[0].grade < 3);
+ ```
- **SQL**
- ```sql
- SELECT *
- FROM Families f
- WHERE f.parents[0].familyName = "Wakefield"
- AND f.children[0].grade < 3
- ```
+ ```sql
+ SELECT *
+ FROM Families f
+ WHERE f.parents[0].familyName = "Wakefield"
+ AND f.children[0].grade < 3
+ ```
## Composite SQL queries
The syntax is `input(.|.SelectMany())(.Select()|.Where())*`. A concatenated quer
- **LINQ lambda expression**
- ```csharp
- input.Select(family => family.parents[0])
- .Where(parent => parent.familyName == "Wakefield");
- ```
+ ```csharp
+ input.Select(family => family.parents[0])
+ .Where(parent => parent.familyName == "Wakefield");
+ ```
- **SQL**
- ```sql
- SELECT *
- FROM Families f
- WHERE f.parents[0].familyName = "Wakefield"
- ```
+ ```sql
+ SELECT *
+ FROM Families f
+ WHERE f.parents[0].familyName = "Wakefield"
+ ```
**Concatenation, example 2:** - **LINQ lambda expression**
- ```csharp
- input.Where(family => family.children[0].grade > 3)
- .Select(family => family.parents[0].familyName);
- ```
+ ```csharp
+ input.Where(family => family.children[0].grade > 3)
+ .Select(family => family.parents[0].familyName);
+ ```
- **SQL**
- ```sql
- SELECT VALUE f.parents[0].familyName
- FROM Families f
- WHERE f.children[0].grade > 3
- ```
+ ```sql
+ SELECT VALUE f.parents[0].familyName
+ FROM Families f
+ WHERE f.children[0].grade > 3
+ ```
**Concatenation, example 3:** - **LINQ lambda expression**
- ```csharp
- input.Select(family => new { grade=family.children[0].grade}).
- Where(anon=> anon.grade < 3);
- ```
+ ```csharp
+ input.Select(family => new { grade=family.children[0].grade}).
+ Where(anon=> anon.grade < 3);
+ ```
- **SQL**
- ```sql
- SELECT *
- FROM Families f
- WHERE ({grade: f.children[0].grade}.grade > 3)
- ```
+ ```sql
+ SELECT *
+ FROM Families f
+    WHERE ({grade: f.children[0].grade}.grade < 3)
+ ```
**Concatenation, example 4:** - **LINQ lambda expression**
- ```csharp
- input.SelectMany(family => family.parents)
- .Where(parent => parents.familyName == "Wakefield");
- ```
+ ```csharp
+ input.SelectMany(family => family.parents)
+        .Where(parent => parent.familyName == "Wakefield");
+ ```
- **SQL**
- ```sql
- SELECT *
- FROM p IN Families.parents
- WHERE p.familyName = "Wakefield"
- ```
+ ```sql
+ SELECT *
+ FROM p IN Families.parents
+ WHERE p.familyName = "Wakefield"
+ ```
### Nesting
A nested query applies the inner query to each element of the outer container. O
- **LINQ lambda expression**
- ```csharp
- input.SelectMany(family=>
- family.parents.Select(p => p.familyName));
- ```
+ ```csharp
+ input.SelectMany(family=>
+ family.parents.Select(p => p.familyName));
+ ```
- **SQL**
- ```sql
- SELECT VALUE p.familyName
- FROM Families f
- JOIN p IN f.parents
- ```
+ ```sql
+ SELECT VALUE p.familyName
+ FROM Families f
+ JOIN p IN f.parents
+ ```
**Nesting, example 2:** - **LINQ lambda expression**
- ```csharp
- input.SelectMany(family =>
- family.children.Where(child => child.familyName == "Jeff"));
- ```
+ ```csharp
+ input.SelectMany(family =>
+ family.children.Where(child => child.familyName == "Jeff"));
+ ```
- **SQL**
- ```sql
- SELECT *
- FROM Families f
- JOIN c IN f.children
- WHERE c.familyName = "Jeff"
- ```
+ ```sql
+ SELECT *
+ FROM Families f
+ JOIN c IN f.children
+ WHERE c.familyName = "Jeff"
+ ```
**Nesting, example 3:** - **LINQ lambda expression**
- ```csharp
- input.SelectMany(family => family.children.Where(
- child => child.familyName == family.parents[0].familyName));
- ```
+ ```csharp
+ input.SelectMany(family => family.children.Where(
+ child => child.familyName == family.parents[0].familyName));
+ ```
- **SQL**
- ```sql
- SELECT *
- FROM Families f
- JOIN c IN f.children
- WHERE c.familyName = f.parents[0].familyName
- ```
+ ```sql
+ SELECT *
+ FROM Families f
+ JOIN c IN f.children
+ WHERE c.familyName = f.parents[0].familyName
+ ```
## Next steps -- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)
+- [Azure Cosmos DB for NoSQL .NET SDK developer guide](../how-to-dotnet-get-started.md)
- [Model document data](../../modeling-data.md)
cosmos-db Tutorial Log Transformation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/tutorial-log-transformation.md
+
+ Title: |
+ Tutorial: Add a transformation for workspace data
+
+description: In this tutorial, add a custom transformation to data flowing through Azure Monitor Logs from Azure Cosmos DB by using the Azure portal.
+++++ Last updated : 03/22/2023++
+# Tutorial: Add a transformation for Azure Cosmos DB workspace data by using the Azure portal
+
+This tutorial walks you through configuration of a sample [transformation in a workspace data collection rule (DCR)](../azure-monitor/essentials/data-collection-transformations.md) by using the Azure portal.
+
+> [!NOTE]
+> To help reduce costs for enabling Log Analytics, we now support adding Data Collection Rules and transformations on your Log Analytics resources to filter out columns, reduce the number of results returned, and create new columns before the data is sent to the destination.
+
+Workspace transformations are stored together in a single [DCR](../azure-monitor/essentials/data-collection-rule-overview.md) for the workspace, which is called the workspace DCR. Each transformation is associated with a particular table. The transformation is applied to all data sent to this table from any workflow not using a DCR.
+
+> [!NOTE]
+> This tutorial uses the Azure portal to configure a workspace transformation. For the same tutorial using Azure Resource Manager templates and REST API, see [Tutorial: Add transformation in workspace data collection rule to Azure Monitor using resource manager templates](../azure-monitor/logs/tutorial-workspace-transformations-api.md).
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+>
+> - Configure a [workspace transformation](../azure-monitor/essentials/data-collection-transformations.md#workspace-transformation-dcr) for a table in a Log Analytics workspace.
+> - Write a log query for a workspace transformation.
+>
+
+## Prerequisites
+
+To complete this tutorial, you need:
+
+- A Log Analytics workspace where you have at least [contributor rights](../azure-monitor/logs/manage-access.md#azure-rbac).
+- [Permissions to create DCR objects](../azure-monitor/essentials/data-collection-rule-overview.md#permissions) in the workspace.
+- A table that already has some data.
+- The table can't be linked to the [workspace transformation DCR](../azure-monitor/essentials/data-collection-transformations.md#workspace-transformation-dcr).
+
+## Overview of the tutorial
+
+In this tutorial, you reduce the storage requirement for the `CDBDataPlaneRequests` table by filtering out certain records. You also remove the contents of a column while parsing the column data to store a piece of data in a custom column. The [CDBDataPlaneRequests table](monitor-resource-logs.md) is created when you enable [log analytics](monitor-resource-logs.md) in a workspace.
+
+This tutorial uses the Azure portal, which provides a wizard to walk you through the process of creating an ingestion-time transformation. After you finish the steps, you'll see that the wizard:
+
+- Updates the table schema with any other columns from the query.
+- Creates a `WorkspaceTransformation` DCR and links it to the workspace if a default DCR isn't already linked to the workspace.
+- Creates an ingestion-time transformation and adds it to the DCR.
+
+## Enable query audit logs
+
+You need to enable [log analytics](monitor-resource-logs.md) for your workspace to create the `CDBDataPlaneRequests` table that you're working with. This step isn't required for all ingestion-time transformations. It's just to generate the sample data that we're working with.
+
+## Add a transformation to the table
+
+Now that the table's created, you can create the transformation for it.
+
+1. On the **Log Analytics workspaces** menu in the Azure portal, select **Tables**. Locate the `CDBDataPlaneRequests` table and select **Create transformation**.
+
+ :::image type="content" source="media/tutorial-log-transformation/create-transformation.png" lightbox="media/tutorial-log-transformation/create-transformation.png" alt-text="Screenshot that shows creating a new transformation.":::
+
+1. Because this transformation is the first one in the workspace, you must create a [workspace transformation DCR](../azure-monitor/essentials/data-collection-transformations.md#workspace-transformation-dcr). If you create transformations for other tables in the same workspace, they're stored in this same DCR. Select **Create a new data collection rule**. The **Subscription** and **Resource group** are already populated for the workspace. Enter a name for the DCR and select **Done**.
+
+1. Select **Next** to view sample data from the table. As you define the transformation, the result is applied to the sample data. For this reason, you can evaluate the results before you apply it to actual data. Select **Transformation editor** to define the transformation.
+
+ :::image type="content" source="media/tutorial-log-transformation/transformation-query-results.png" lightbox="media/tutorial-log-transformation/transformation-query-results.png" alt-text="Screenshot that shows sample data from the log table.":::
+
+1. In the transformation editor, you can see the transformation that is applied to the data prior to its ingestion into the table. A virtual table named `source` represents the incoming data, which has the same set of columns as the destination table itself. The transformation initially contains a simple query that returns the `source` table with no changes.
+
+1. Modify the query to the following example:
+
+ ``` kusto
+ source
+ | where StatusCode != 200 // searching for requests that are not successful
+ | project-away Type, TenantId
+ ```
+
+ The modification makes the following changes:
+
+    - Rows for successful requests (`StatusCode == 200`) were dropped to save space because these log entries aren't useful for this analysis.
+ - Data from the `TenantId` and `Type` columns were removed to save space.
+ - Transformations also support adding columns using the `extend` operator in your query.
+
+ > [!Note]
+ > Using the Azure portal, the output of the transformation will initiate changes to the table schema if required. Columns will be added to match the transformation output if they don't already exist. Make sure that your output doesn't contain any columns that you don't want added to the table. If the output doesn't include columns that are already in the table, those columns won't be removed, but data won't be added.
+ >
+ > Any custom columns added to a built-in table must end in `_CF`. Columns added to a custom table don't need to have this suffix. A custom table has a name that ends in `_CL`.
+
+1. Copy the query into the transformation editor and select **Run** to view results from the sample data. You can verify that the filtered rows and removed columns no longer appear in the results.
+
+ :::image type="content" source="media/tutorial-log-transformation/select-transformation-editor.png" lightbox="media/tutorial-log-transformation/select-transformation-editor.png" alt-text="Screenshot that shows the transformation editor.":::
+
+1. Select **Apply** to save the transformation and then select **Next** to review the configuration. Select **Create** to update the DCR with the new transformation.
+
+ :::image type="content" source="media/tutorial-log-transformation/transformation-configuration-created.png" lightbox="media/tutorial-log-transformation/transformation-configuration-created.png" alt-text="Screenshot that shows saving the transformation.":::
+
+## Test the transformation
+
+Allow about 30 minutes for the transformation to take effect and then test it by running a query against the table. This transformation affects only data sent to the table after the transformation was applied.
+
+For this tutorial, run some sample queries to send data to the `CDBDataPlaneRequests` table. Include some queries against `CDBDataPlaneRequests` so that you can verify that the transformation filters these records.
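+
+As one way to spot-check the result, you can run a query from the Azure CLI (assuming the `log-analytics` extension is available) and confirm that no successful (`200`) requests appear among records ingested after the transformation took effect:
+
+```bash
+az monitor log-analytics query \
+  --workspace <workspace-customer-id> \
+  --analytics-query 'CDBDataPlaneRequests | summarize count() by StatusCode' \
+  --output table
+```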
+
+## Troubleshooting
+
+This section describes different error conditions you might receive and how to correct them.
+
+### IntelliSense in Log Analytics not recognizing new columns in the table
+
+The cache that drives IntelliSense might take up to 24 hours to update.
+
+### Transformation on a dynamic column isn't working
+
+A known issue currently affects dynamic columns. A temporary workaround is to explicitly parse dynamic column data by using `parse_json()` prior to performing any operations against them.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Data collection transformations](../azure-monitor/essentials/data-collection-transformations.md)
cost-management-billing View All Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/view-all-accounts.md
tags: billing
Previously updated : 11/17/2022 Last updated : 03/22/2023
Azure portal supports the following types of billing accounts:
- **Microsoft Online Services Program**: A billing account for a Microsoft Online Services Program is created when you sign up for Azure through the Azure website. For example, when you sign up for an [Azure Free Account](https://azure.microsoft.com/offers/ms-azr-0044p/), [account with pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/) or as a [Visual Studio subscriber](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers/). - A new billing account for a Microsoft Online Services Program can have a maximum of 5 subscriptions. However, subscriptions transferred to the new billing account don't count against the limit. - The ability to create other Microsoft Online Services Program subscriptions is determined on an individual basis according to your history with Azure.
+ - *If you have difficulty finding a new subscription* after you create it, you might need to change the global subscription filter. For more information about changing the global subscription filter, see [Can't view subscription](create-subscription.md#cant-view-subscription).
- **Enterprise Agreement**: A billing account for an Enterprise Agreement (EA) is created when your organization signs an [Enterprise Agreement](https://azure.microsoft.com/pricing/enterprise-agreement/) to use Azure. An EA enrollment can contain an unlimited number of EA accounts. - An EA account has a subscription limit of 5000. *Regardless of a subscription's state, it's included in the limit. So, deleted and disabled subscriptions are included in the limit*. If you need more subscriptions than the limit, create more EA accounts. Generally speaking, a subscription is a billing container.
data-factory Self Hosted Integration Runtime Proxy Ssis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/self-hosted-integration-runtime-proxy-ssis.md
If you haven't already done so, create an Azure Blob Storage linked service in t
- For **Connect via integration runtime**, select **AutoResolveIntegrationRuntime** (not your self-hosted IR), so we can ignore it and use your Azure-SSIS IR instead to fetch access credentials for your Azure Blob Storage. - For **Authentication method**, select **Account key**, **SAS URI**, **Service Principal**, **Managed Identity**, or **User-Assigned Managed Identity**.
+>[!TIP]
+>If your data factory instance is Git-enabled, a linked service without key authentication won't be immediately published, which means you can't save the integration runtime that depends on the linked service in your feature branch. Authenticating with an account key or SAS URI will immediately publish the linked service.
+ >[!TIP] >If you select the **Service Principal** method, grant your service principal at least a *Storage Blob Data Contributor* role. For more information, see [Azure Blob Storage connector](connector-azure-blob-storage.md#linked-service-properties). If you select the **Managed Identity**/**User-Assigned Managed Identity** method, grant the specified system/user-assigned managed identity for your ADF a proper role to access Azure Blob Storage. For more information, see [Access Azure Blob Storage using Azure Active Directory (Azure AD) authentication with the specified system/user-assigned managed identity for your ADF](/sql/integration-services/connection-manager/azure-storage-connection-manager#managed-identities-for-azure-resources-authentication).
ddos-protection Ddos View Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-view-diagnostic-logs.md
+
+ Title: 'View Azure DDoS Protection logs in Log Analytics workspace'
+description: Learn how to view DDoS protection diagnostic logs in Log Analytics workspace.
+++++ Last updated : 03/22/2023+++
+# View Azure DDoS Protection logs in Log Analytics workspace
+
+In this guide, you'll learn how to view Azure DDoS Protection diagnostic logs, including notifications, mitigation reports, and mitigation flow logs.
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- [DDoS Network Protection](manage-ddos-protection.md) must be enabled on a virtual network or [DDoS IP Protection](manage-ddos-protection-powershell-ip.md) must be enabled on a public IP address.
+- Configure DDoS Protection diagnostic logs. To learn more, see [Configure diagnostic logs](diagnostic-logging.md).
+- Simulate an attack using one of our simulation partners. To learn more, see [Test with simulation partners](test-through-simulations.md).
+
+## View logs in Log Analytics workspace
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. In the search box at the top of the portal, enter **Log Analytics workspace**. Select **Log Analytics workspace** in the search results.
+1. Under the **Log Analytics workspaces** blade, select your workspace.
+1. On the left-side tab, select **Logs**. Here you'll see the query explorer. Close the *Queries* pane to use the *Logs* page.
+
+ :::image type="content" source="./media/ddos-view-diagnostic-logs/ddos-select-logs-in-workspace.png" alt-text="Screenshot of viewing a log analytics workspace.":::
+
+1. In the *Logs* page, enter your query, then select *Run* to view the results.
+
+ :::image type="content" source="./media/ddos-view-diagnostic-logs/ddos-notification-logs.png" alt-text="Screenshot of viewing DDoS Protection notification logs in log analytics workspace.":::
+
+## Example log queries
+
+### DDoS Protection Notifications
+
+Notifications alert you anytime a public IP resource is under attack, and when attack mitigation is over.
+
+```kusto
+ AzureDiagnostics
+ | where Category == "DDoSProtectionNotifications"
+```
++
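+You can also run the same query from the Azure CLI, assuming the `log-analytics` extension is installed; the workspace value is your Log Analytics workspace's customer (GUID) ID:
+
+```bash
+az monitor log-analytics query \
+  --workspace <workspace-customer-id> \
+  --analytics-query 'AzureDiagnostics | where Category == "DDoSProtectionNotifications"' \
+  --output table
+```
+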
+The following table lists the field names and descriptions:
+
+| Field name | Description |
+| | |
+| **TimeGenerated** | The date and time in UTC when the notification was created. |
+| **ResourceId** | The resource ID of your public IP address. |
+| **Category** | For notifications, this will be `DDoSProtectionNotifications`.|
+| **ResourceGroup** | The resource group that contains your public IP address and virtual network. |
+| **SubscriptionId** | Your DDoS protection plan subscription ID. |
+| **Resource** | The name of your public IP address. |
+| **ResourceType** | This will always be `PUBLICIPADDRESS`. |
+| **OperationName** | For notifications, this will be `DDoSProtectionNotifications`. |
+| **Message** | Details of the attack. |
+| **Type** | Type of notification. Possible values include `MitigationStarted` and `MitigationStopped`. |
+| **PublicIpAddress** | Your public IP address. |
+
+### DDoS Mitigation FlowLogs
+
+Attack mitigation flow logs allow you to review dropped traffic, forwarded traffic, and other interesting data points during an active DDoS attack in near-real time. You can ingest the constant stream of this data into Microsoft Sentinel or your third-party SIEM systems via event hub for near-real-time monitoring, to take potential actions and address the needs of your defense operations.
+
+```kusto
+ AzureDiagnostics
+ | where Category == "DDoSMitigationFlowLogs"
+```
+
+| Field name | Description |
+| | |
+| **TimeGenerated** | The date and time in UTC when the flow log was created. |
+| **ResourceId** | The resource ID of your public IP address. |
+| **Category** | For flow logs, this will be `DDoSMitigationFlowLogs`.|
+| **ResourceGroup** | The resource group that contains your public IP address and virtual network. |
+| **SubscriptionId** | Your DDoS protection plan subscription ID. |
+| **Resource** | The name of your public IP address. |
+| **ResourceType** | This will always be `PUBLICIPADDRESS`. |
+| **OperationName** | For flow logs, this will be `DDoSMitigationFlowLogs`. |
+| **Message** | Details of the attack. |
+| **SourcePublicIpAddress** | The public IP address of the client generating traffic to your public IP address. |
+| **SourcePort** | Port number ranging from 0 to 65535. |
+| **DestPublicIpAddress** | Your public IP address. |
+| **DestPort** | Port number ranging from 0 to 65535. |
+| **Protocol** | Type of protocol. Possible values include `tcp`, `udp`, `other`.|
+
+### DDoS Mitigation Reports
+
+Attack mitigation reports use the Netflow protocol data, which is aggregated to provide detailed information about the attack on your resource. Anytime a public IP resource is under attack, report generation starts as soon as the mitigation starts. An incremental report is generated every 5 minutes, and a post-mitigation report is generated for the whole mitigation period. This ensures that if the DDoS attack continues for a longer duration of time, you'll be able to view the most current snapshot of the mitigation report every 5 minutes and a complete summary once the attack mitigation is over.
+
+```kusto
+ AzureDiagnostics
+ | where Category == "DDoSMitigationReports"
+```
+
+| Field name | Description |
+| | |
+| **TimeGenerated** | The date and time in UTC when the report was created. |
+| **ResourceId** | The resource ID of your public IP address. |
+| **Category** | For mitigation reports, this will be `DDoSMitigationReports`.|
+| **ResourceGroup** | The resource group that contains your public IP address and virtual network. |
+| **SubscriptionId** | Your DDoS protection plan subscription ID. |
+| **Resource** | The name of your public IP address. |
+| **ResourceType** | This will always be `PUBLICIPADDRESS`. |
+| **OperationName** | For mitigation reports, this will be `DDoSMitigationReports`. |
+| **Message** | Details of the attack. |
+| **Type** | Type of notification. Possible values include `MitigationStarted`. `MitigationStopped`. |
+| **PublicIpAddress** | Your public IP address. |
++
+## Next steps
+
+* [Engage DDoS Rapid Response](ddos-rapid-response.md)
ddos-protection Diagnostic Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/diagnostic-logging.md
In this guide, you'll learn how to configure Azure DDoS Protection diagnostic lo
### Query Azure DDOS Protection logs in log analytics workspace
-For more information on log schemas, see [Monitoring Azure DDoS Protection](monitor-ddos-protection-reference.md#diagnostic-logs).
+For more information on log schemas, see [View diagnostic logs](ddos-view-diagnostic-logs.md#example-log-queries).
#### DDoSProtectionNotifications logs

1. Under the **Log analytics workspaces** blade, select your log analytics workspace.
ddos-protection Monitor Ddos Protection Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/monitor-ddos-protection-reference.md
Previously updated : 12/19/2022 Last updated : 03/22/2023 # Monitoring Azure DDoS Protection
-See [Tutorial: View and configure Azure DDoS protection telemetry](telemetry.md) for details on collecting, analyzing, and monitoring DDoS Protection.
+The following section outlines the metrics of the Azure DDoS Protection service.
## Metrics
The following [Azure Monitor metrics](../azure-monitor/essentials/metrics-suppor
| UDPPacketsDroppedDDoS | Inbound UDP packets dropped DDoS | CountPerSecond | Maximum | Inbound UDP packets dropped DDoS |
| UDPPacketsForwardedDDoS | Inbound UDP packets forwarded DDoS | CountPerSecond | Maximum | Inbound UDP packets forwarded DDoS |
| UDPPacketsInDDoS | Inbound UDP packets DDoS | CountPerSecond | Maximum | Inbound UDP packets DDoS |
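
If you route these platform metrics to a Log Analytics workspace through a diagnostic setting, you can query them alongside the logs. The following is a minimal sketch, assuming the metrics land in the standard `AzureMetrics` table; the metric name comes from the table above, and the 5-minute bin size is an illustrative choice.

```kusto
// Sketch: peak rate of inbound UDP packets dropped by DDoS mitigation, per 5-minute bin.
AzureMetrics
| where MetricName == "UDPPacketsDroppedDDoS"
| summarize PeakDropped = max(Maximum) by bin(TimeGenerated, 5m)
| order by TimeGenerated asc
```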
-## Diagnostic logs
-
-See [Tutorial: View and configure Azure DDoS Protection diagnostic logging](diagnostic-logging.md) for details on attack insights and visualization with DDoS Attack Analytics.
-
-The following diagnostic logs are available for Azure DDoS Protection:
-
-- **DDoSProtectionNotifications**: Notifications will notify you anytime a public IP resource is under attack, and when attack mitigation is over.
-- **DDoSMitigationFlowLogs**: Attack mitigation flow logs allow you to review the dropped traffic, forwarded traffic and other interesting data-points during an active DDoS attack in near-real time. You can ingest the constant stream of this data into Microsoft Sentinel or to your third-party SIEM systems via event hub for near-real time monitoring, take potential actions and address the need of your defense operations.
-- **DDoSMitigationReports**: Attack mitigation reports use the Netflow protocol data, which is aggregated to provide detailed information about the attack on your resource. Anytime a public IP resource is under attack, the report generation will start as soon as the mitigation starts. There will be an incremental report generated every 5 mins and a post-mitigation report for the whole mitigation period. This is to ensure that in an event the DDoS attack continues for a longer duration of time, you'll be able to view the most current snapshot of mitigation report every 5 minutes and a complete summary once the attack mitigation is over.
-- **AllMetrics**: Provides all possible metrics available during the duration of a DDoS attack.
-
-## Log schemas
-
-The following table lists the field names and descriptions:
-
-# [DDoSProtectionNotifications](#tab/DDoSProtectionNotifications)
-
-| Field name | Description |
-| | |
-| **TimeGenerated** | The date and time in UTC when the notification was created. |
-| **ResourceId** | The resource ID of your public IP address. |
-| **Category** | For notifications, this will be `DDoSProtectionNotifications`.|
-| **ResourceGroup** | The resource group that contains your public IP address and virtual network. |
-| **SubscriptionId** | Your DDoS protection plan subscription ID. |
-| **Resource** | The name of your public IP address. |
-| **ResourceType** | This will always be `PUBLICIPADDRESS`. |
-| **OperationName** | For notifications, this will be `DDoSProtectionNotifications`. |
-| **Message** | Details of the attack. |
-| **Type** | Type of notification. Possible values include `MitigationStarted`. `MitigationStopped`. |
-| **PublicIpAddress** | Your public IP address. |
-
-# [DDoSMitigationFlowLogs](#tab/DDoSMitigationFlowLogs)
-
-| Field name | Description |
-| | |
-| **TimeGenerated** | The date and time in UTC when the flow log was created. |
-| **ResourceId** | The resource ID of your public IP address. |
-| **Category** | For flow logs, this will be `DDoSMitigationFlowLogs`.|
-| **ResourceGroup** | The resource group that contains your public IP address and virtual network. |
-| **SubscriptionId** | Your DDoS protection plan subscription ID. |
-| **Resource** | The name of your public IP address. |
-| **ResourceType** | This will always be `PUBLICIPADDRESS`. |
-| **OperationName** | For flow logs, this will be `DDoSMitigationFlowLogs`. |
-| **Message** | Details of the attack. |
-| **SourcePublicIpAddress** | The public IP address of the client generating traffic to your public IP address. |
-| **SourcePort** | Port number ranging from 0 to 65535. |
-| **DestPublicIpAddress** | Your public IP address. |
-| **DestPort** | Port number ranging from 0 to 65535. |
-| **Protocol** | Type of protocol. Possible values include `tcp`, `udp`, `other`.|
-
-# [DDoSMitigationReports](#tab/DDoSMitigationReports)
-
-| Field name | Description |
-| | |
-| **TimeGenerated** | The date and time in UTC when the report was created. |
-| **ResourceId** | The resource ID of your public IP address. |
-| **Category** | For notifications, this will be `DDoSMitigationReports`.|
-| **ResourceGroup** | The resource group that contains your public IP address and virtual network. |
-| **SubscriptionId** | Your DDoS protection plan subscription ID. |
-| **Resource** | The name of your public IP address. |
-| **ResourceType** | This will always be `PUBLICIPADDRESS`. |
-| **OperationName** | For mitigation reports, this will be `DDoSMitigationReports`. |
-| **ReportType** | Possible values include `Incremental`, `PostMitigation`.|
-| **MitigationPeriodStart** | The date and time in UTC when the mitigation started. |
-| **MitigationPeriodEnd** | The date and time in UTC when the mitigation ended. |
-| **IPAddress** | Your public IP address. |
-| **AttackVectors** | Breakdown of attack types. Keys include `TCP SYN flood`, `TCP flood`, `UDP flood`, `UDP reflection`, `Other packet flood`.|
-| **TrafficOverview** | Breakdown of attack traffic. Keys include `Total packets`, `Total packets dropped`, `Total TCP packets`, `Total TCP packets dropped`, `Total UDP packets`, `Total UDP packets dropped`, `Total Other packets`, `Total Other packets dropped`. |
-| **Protocols** | Breakdown of protocols involved. Keys include `TCP`, `UDP`, `Other`. |
-| **DropReasons** | Breakdown of reasons for dropped packets. Keys include `Protocol violation invalid TCP syn`, `Protocol violation invalid TCP`, `Protocol violation invalid UDP`, `UDP reflection`, `TCP rate limit exceeded`, `UDP rate limit exceeded`, `Destination limit exceeded`, `Other packet flood`, `Rate limit exceeded`, `Packet was forwarded to service`. |
-| **TopSourceCountries** | Breakdown of top 10 source countries/regions of incoming traffic. |
-| **TopSourceCountriesForDroppedPackets** | Breakdown of top 10 source countries/regions of attack traffic that is/was mitigated. |
-| **TopSourceASNs** | Breakdown of top 10 source autonomous system numbers (ASN) of the incoming traffic. |
-| **SourceContinents** | Breakdown of the source continents of incoming traffic. |
-***
## Next steps
-> [!div class="nextstepaction"]
-> [View and configure DDoS diagnostic logging](diagnostic-logging.md)
->
-> [Test with simulation partners](test-through-simulations.md)
+* [Configure DDoS Alerts](alerts.md)
+* [View alerts in Microsoft Defender for Cloud](ddos-view-alerts-defender-for-cloud.md)
+* [Test with simulation partners](test-through-simulations.md)
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 02/20/2023 Last updated : 03/20/2023 # What's new in Microsoft Defender for Cloud?
To learn about *planned* changes that are coming soon to Defender for Cloud, see
## March 2023

Updates in March include:
+
+- [Some regulatory compliance standards are now available in government clouds](#some-regulatory-compliance-standards-are-now-available-in-government-clouds)
- [New preview recommendation for Azure SQL Servers](#new-preview-recommendation-for-azure-sql-servers)
- [New alert in Defender for Key Vault](#new-alert-in-defender-for-key-vault)
+### Some regulatory compliance standards are now available in government clouds
+
+We're announcing that the following regulatory standards have been updated to their latest versions and are available for customers in Azure Government and Azure China 21Vianet.
+
+**Azure Government**:
+- [PCI DSS v4](/azure/compliance/offerings/offering-pci-dss)
+- [SOC 2 Type 2](/azure/compliance/offerings/offering-soc-2)
+- [ISO 27001:2013](/azure/compliance/offerings/offering-iso-27001)
+
+**Azure China 21Vianet**:
+- [SOC 2 Type 2](/azure/compliance/offerings/offering-soc-2)
+- [ISO 27001:2013](/azure/compliance/offerings/offering-iso-27001)
+
+Learn how to [Customize the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md).
+
### New preview recommendation for Azure SQL Servers

We have added a new recommendation for Azure SQL Servers, `Azure SQL Server authentication mode should be Azure Active Directory Only (Preview)`.
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important changes coming to Microsoft Defender for Cloud description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 03/05/2023 Last updated : 03/20/2023 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you'll find them in the [What's
| Planned change | Estimated date for change |
|--|--|
| [Changes in the recommendation "Machines should be configured securely"](#changes-in-the-recommendation-machines-should-be-configured-securely) | March 2023 |
-| [Three alerts in the Defender for Resource Manager plan will be deprecated](#three-alerts-in-the-defender-for-resource-manager-plan-will-be-deprecated) | March 2023 |
+| [Three alerts in the Defender for Azure Resource Manager plan will be deprecated](#three-alerts-in-the-defender-for-resource-manager-plan-will-be-deprecated) | March 2023 |
| [Alerts automatic export to Log Analytics workspace will be deprecated](#alerts-automatic-export-to-log-analytics-workspace-will-be-deprecated) | March 2023 |
| [Deprecation and improvement of selected alerts for Windows and Linux Servers](#deprecation-and-improvement-of-selected-alerts-for-windows-and-linux-servers) | April 2023 |
| [Deprecation of App Service language monitoring policies](#deprecation-of-app-service-language-monitoring-policies) | April 2023 |
-| [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | May 2023 |
+| [Deprecation of legacy compliance standards across cloud environments](#deprecation-of-legacy-compliance-standards-across-cloud-environments) | April 2023 |
+| [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | April 2023 |
### Changes in the recommendation "Machines should be configured securely"
Customers can use alternative built-in policies to monitor any specified languag
These will no longer be in Defender for Cloud's built-in recommendations. You can add them as custom recommendations to have Defender for Cloud monitor them.
+### Deprecation of legacy compliance standards across cloud environments
+
+**Estimated date for change: April 2023**
+
+We're announcing the full deprecation of support for the [`PCI DSS`](/azure/compliance/offerings/offering-pci-dss) standard/initiative in Azure China 21Vianet.
+
+Legacy PCI DSS v3.2.1 and legacy SOC TSP are set to be fully deprecated and replaced by the [SOC 2 Type 2](/azure/compliance/offerings/offering-soc-2) and [PCI DSS v4](/azure/compliance/offerings/offering-pci-dss) initiatives.
+Learn how to [Customize the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md).
+
### Multiple changes to identity recommendations

**Estimated date for change: May 2023**
We recommend updating custom scripts, workflows, and governance rules to corresp
We've improved the coverage of the V2 identity recommendations by scanning all Azure resources (rather than just subscriptions) which allows security administrators to view role assignments per account. These changes may result in changes to your Secure Score throughout the GA process.
+
## Next steps

For all recent changes to Defender for Cloud, see [What's new in Microsoft Defender for Cloud?](release-notes.md).
defender-for-cloud Update Regulatory Compliance Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/update-regulatory-compliance-packages.md
Title: The regulatory compliance dashboard in Microsoft Defender for Cloud description: Learn how to add and remove regulatory standards from the regulatory compliance dashboard in Defender for Cloud Previously updated : 02/07/2023 Last updated : 03/20/2023
To see compliance data mapped as assessments in your dashboard, add a compliance
When you've assigned a standard or benchmark to your selected scope, the standard appears in your regulatory compliance dashboard with all associated compliance data mapped as assessments. You can also download summary reports for any of the standards that have been assigned.
-Microsoft tracks the regulatory standards themselves and automatically improves its coverage in some of the packages over time. When Microsoft releases new content for the initiative, it will appear automatically in your dashboard as new policies mapped to controls in the standard.
+Microsoft tracks the regulatory standards themselves and automatically improves its coverage in some of the packages over time. When Microsoft releases new content for the initiative, it appears automatically in your dashboard as new policies mapped to controls in the standard.
## What regulatory compliance standards are available in Defender for Cloud?

By default, every Azure subscription has the Microsoft cloud security benchmark assigned. This is the Microsoft-authored, cloud-specific set of guidelines for security and compliance best practices based on common compliance frameworks. [Learn more about Microsoft cloud security benchmark](/security/benchmark/azure/introduction).
-Available regulatory standards:
+**Available regulatory standards**:
-- PCI-DSS v3.2.1
+- PCI-DSS v3.2.1 **(deprecated)**
- PCI DSS v4
- SOC TSP
+- SOC 2 Type 2
- ISO 27001:2013
- Azure CIS 1.1.0
- Azure CIS 1.3.0
Available regulatory standards:
Users that have one Defender bundle enabled can enable other standards.
-Available AWS regulatory standards:
+**Available AWS regulatory standards**:
- CIS 1.2.0 - CIS 1.5.0
To add regulatory compliance standards on AWS accounts:
:::image type="content" source="media/update-regulatory-compliance-packages/Add-aws-regulatory-compliance.png" alt-text="Screenshot of adding regulatory compliance standard to AWS account." lightbox="media/update-regulatory-compliance-packages/Add-aws-regulatory-compliance.png":::
-More standards will be added to the dashboard and included in the information on [Customize the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md).
-
-**GCP**: When users onboard, every GCP project has the "GCP Default" standard assigned and can be viewed under Recommendations.
+**GCP**: When users onboard, every GCP project has the "GCP Default" standard assigned.
Users that have one Defender bundle enabled can enable other standards.
-Available GCP regulatory standards:
+**Available GCP regulatory standards**:
- CIS 1.1.0, 1.2.0
- PCI DSS 3.2.1
To remove a standard:
:::image type="content" source="./media/update-regulatory-compliance-packages/remove-standard-confirm.png" alt-text="Screenshot showing to confirm that you really want to remove the regulatory standard you selected." lightbox="media/update-regulatory-compliance-packages/remove-standard-confirm.png":::
-1. Select **Yes**. The standard will be removed.
+1. Select **Yes**.
## Next steps
defender-for-iot Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/billing.md
If you would like to evaluate Defender for IoT, you can use a trial commitment f
- **For OT networks**, use a trial to deploy one or more Defender for IoT sensors on your network to monitor traffic, analyze data, generate alerts, learn about network risks and vulnerabilities, and more. An OT trial supports 1,000 [committed devices](#defender-for-iot-committed-devices), which are the number of devices you want to monitor in your network.
+ The trial for OT networks is free of charge for the first 30 days. Any usage beyond 30 days incurs a charge based on the monthly plan for 1,000 devices. For more information, see [the Microsoft Defender for IoT pricing page](https://azure.microsoft.com/pricing/details/iot-defender/).
+
- **For Enterprise IoT networks**, use a trial to view alerts, recommendations, and vulnerabilities in Microsoft 365. An Enterprise IoT trial is not limited to a specific number of devices.

## Defender for IoT committed devices
You're billed based on the number of committed devices associated with each subs
[!INCLUDE [devices-inventoried](includes/devices-inventoried.md)]
+[Configure Windows Endpoint monitoring](configure-windows-endpoint-monitoring.md)
+[Configure DNS servers for reverse lookup resolution for OT monitoring](configure-reverse-dns-lookup.md)
+
## Billing cycles and changes in your plans

Billing cycles for Microsoft Defender for IoT follow each calendar month. Changes you make to Defender for IoT plans are implemented one hour after you confirm the update, and are reflected in your monthly bill.
defender-for-iot Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/getting-started.md
This procedure describes how to add a trial Defender for IoT plan for OT network
> If your subscription isn't listed, check your account details and confirm your permissions with the subscription owner. Also make sure that you have the right subscriptions selected in your Azure settings > **Directories + subscriptions** page.

- **Price plan**: For the sake of this quickstart, select **Trial - 30 days - 1000 assets limit**.
+
+ Microsoft Defender for IoT provides a 30-day free trial for the first 1,000 committed devices for evaluation purposes. Any usage beyond 30 days incurs a charge based on the monthly plan for 1,000 devices. For more information, see [the Microsoft Defender for IoT pricing page](https://azure.microsoft.com/pricing/details/iot-defender/).
For example:
defender-for-iot How To Manage Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-subscriptions.md
This procedure describes how to add a Defender for IoT plan for OT networks to a
- **Price plan**. Select a monthly or annual commitment, or a [trial](billing.md#free-trial).
- Microsoft Defender for IoT provides a 30-day free trial for the first 1,000 committed devices for evaluation purposes.
-
- For more information, see the [Microsoft Defender for IoT pricing page](https://azure.microsoft.com/pricing/details/iot-defender/).
+ Microsoft Defender for IoT provides a 30-day free trial for the first 1,000 committed devices for evaluation purposes. Any usage beyond 30 days incurs a charge based on the monthly plan for 1,000 devices. For more information, see [the Microsoft Defender for IoT pricing page](https://azure.microsoft.com/pricing/details/iot-defender/).
- **Committed sites**. Relevant for annual commitments only. Enter the number of committed sites.
defender-for-iot Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/overview.md
For more information, see the [Microsoft Defender for IoT for device builders do
Defender for IoT routes all traffic from all European regions to the *West Europe* regional datacenter. It routes traffic from all remaining regions to the *East US* regional datacenter.
+## Compliance scope
+
+Defender for IoT cloud services (formerly *Azure Defender for IoT* or *Azure Security for IoT*) are based on Microsoft Azure's infrastructure, which meets demanding US government and international compliance requirements that produce formal authorizations.
+
+Specifically:
+- Defender for IoT is in scope for the following provisional authorizations in Azure and Azure Government: [FedRAMP High](/azure/compliance/offerings/offering-fedramp) and [DoD IL2](/azure/compliance/offerings/offering-dod-il2). Moreover, Defender for IoT maintains extra [DoD IL4](/azure/compliance/offerings/offering-dod-il4) and [DoD IL5](/azure/compliance/offerings/offering-dod-il5) provisional authorizations in Azure Government. For more information, see [Azure and other Microsoft cloud services compliance scope](/azure/azure-government/compliance/azure-services-in-fedramp-auditscope#azure-public-services-by-audit-scope).
+- Defender for IoT is committed to developing technology that empowers everyone, including people with disabilities, and helps customers address global accessibility requirements. For more information, search for *Azure Security for IoT* in [Accessibility Conformance Reports | Microsoft Accessibility](https://www.microsoft.com/accessibility/conformance-reports?rtc=1).
+- Defender for IoT helps customers meet their compliance obligations across regulated industries and markets worldwide. For more information, see [Azure and other Microsoft cloud services compliance offerings](/azure/compliance/offerings/).
+
## Next steps

> [!div class="nextstepaction"]
digital-twins How To Integrate Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-integrate-maps.md
Replace the function code with the following code. It will filter out only updat
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/updateMaps.cs":::
-You'll need to set two environment variables in your function app. One is your [Azure Maps primary subscription key](../azure-maps/quick-demo-map-app.md#get-the-primary-key-for-your-account), and one is your [Azure Maps stateset ID](../azure-maps/tutorial-creator-feature-stateset.md).
+You'll need to set two environment variables in your function app. One is your [Azure Maps primary subscription key](../azure-maps/quick-demo-map-app.md#get-the-subscription-key-for-your-account), and one is your [Azure Maps stateset ID](../azure-maps/tutorial-creator-feature-stateset.md).
```azurecli-interactive
az functionapp config appsettings set --name <your-function-app-name> --resource-group <your-resource-group> --settings "subscription-key=<your-Azure-Maps-primary-subscription-key>"
```
digital-twins How To Use Power Platform Logic Apps Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-power-platform-logic-apps-connector.md
description: Learn how to connect Power Platform and Logic Apps to Azure Digital Twins using the connector Previously updated : 01/03/2023 Last updated : 03/22/2023
For an introduction to the connector, including a quick demo, watch the followin
<iframe src="https://aka.ms/docs/player?id=d6c200c2-f622-4254-b61f-d5db613bbd11" width="1080" height="530"></iframe>
-For more information about the connector, including a complete list of the connector's actions and their parameters, see the [Azure Digital Twins connector reference documentation](/connectors/azuredigitaltwins).
+You can also complete a basic walkthrough in the blog post [Simplify building automated workflows and apps powered by Azure Digital Twins](https://techcommunity.microsoft.com/t5/internet-of-things-blog/simplify-building-automated-workflows-and-apps-powered-by-azure/ba-p/3763051). For more information about the connector, including a complete list of the connector's actions and their parameters, see the [Azure Digital Twins connector reference documentation](/connectors/azuredigitaltwins).
## Prerequisites
dms Dms Tools Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/dms-tools-matrix.md
description: Learn about the services and tools available to migrate databases a
Previously updated : 03/03/2020 Last updated : 03/21/2020
The following tables identify the services and tools you can use to plan for dat
| Source | Target | Discover /<br/>Inventory | Target and SKU<br/>recommendation | TCO/ROI and<br/>Business case |
| | | | | |
-| SQL Server | Azure SQL DB | [MAP Toolkit](/previous-versions//bb977556(v=technet.10))<br/>[Azure Migrate](https://azure.microsoft.com/services/azure-migrate/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [DMA](/sql/dma/dma-overview)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
- SQL Server | Azure SQL DB MI | [MAP Toolkit](/previous-versions//bb977556(v=technet.10))<br/>[Azure Migrate](https://azure.microsoft.com/services/azure-migrate/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [DMA](/sql/dma/dma-overview)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
-| SQL Server | Azure SQL VM | [MAP Toolkit](/previous-versions//bb977556(v=technet.10))<br/>[Azure Migrate](https://azure.microsoft.com/services/azure-migrate/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
+| SQL Server | Azure SQL DB | [MAP Toolkit](/previous-versions//bb977556(v=technet.10))<br/>[Azure Migrate](https://azure.microsoft.com/services/azure-migrate/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/> [DMA](/sql/dma/dma-overview)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
+ SQL Server | Azure SQL MI | [MAP Toolkit](/previous-versions//bb977556(v=technet.10))<br/>[Azure Migrate](https://azure.microsoft.com/services/azure-migrate/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/> [DMA](/sql/dma/dma-overview)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
+| SQL Server | Azure SQL VM | [MAP Toolkit](/previous-versions//bb977556(v=technet.10))<br/>[Azure Migrate](https://azure.microsoft.com/services/azure-migrate/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/> [DMA](/sql/dma/dma-overview)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
| SQL Server | Azure Synapse Analytics | [MAP Toolkit](/previous-versions//bb977556(v=technet.10))<br/>[Azure Migrate](https://azure.microsoft.com/services/azure-migrate/)<br/>[Cloudamize*](https://www.cloudamize.com/) | | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
-| RDS SQL | Azure SQL DB, MI, VM | | [DMA](/sql/dma/dma-overview) | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
+| Amazon RDS for SQL Server | Azure SQL DB, MI, VM | | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/> [DMA](/sql/dma/dma-overview) | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
| Oracle | Azure SQL DB, MI, VM | [MAP Toolkit](/previous-versions//bb977556(v=technet.10))<br/>[Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[MigVisor*](https://www.migvisor.com/) | |
| Oracle | Azure Synapse Analytics | [MAP Toolkit](/previous-versions//bb977556(v=technet.10))<br/>[Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
| Oracle | Azure DB for PostgreSQL -<br/>Single server | [MAP Toolkit](/previous-versions//bb977556(v=technet.10))<br/>[Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | | |
The following tables identify the services and tools you can use to plan for dat
| Cassandra | Azure Cosmos DB | | | |
| MySQL | Azure SQL DB, MI, VM | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/) | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
| MySQL | Azure DB for MySQL | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
-| RDS MySQL | Azure DB for MySQL | | | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
+| Amazon RDS for MySQL | Azure DB for MySQL | | | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
| PostgreSQL | Azure DB for PostgreSQL -<br/>Single server | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
-| RDS PostgreSQL | Azure DB for PostgreSQL -<br/>Single server | | | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
+| Amazon RDS for PostgreSQL | Azure DB for PostgreSQL -<br/>Single server | | | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
| DB2 | Azure SQL DB, MI, VM | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
| Access | Azure SQL DB, MI, VM | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
| Sybase - SAP ASE | Azure SQL DB, MI, VM | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
The following tables identify the services and tools you can use to plan for dat
| Source | Target | App Data Access<br/>Layer Assessment | Database<br/>Assessment | Performance<br/>Assessment |
| | | | | |
| SQL Server | Azure SQL DB | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dm)<br/>[DMA](/sql/dma/dma-overview)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090)<br/>[Cloudamize*](https://www.cloudamize.com/) |
-| SQL Server | Azure SQL DB MI | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dm)<br/>[DMA](/sql/dma/dma-overview)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090)<br/>[Cloudamize*](https://www.cloudamize.com/) |
+| SQL Server | Azure SQL MI | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dm)<br/>[DMA](/sql/dma/dma-overview)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090)<br/>[Cloudamize*](https://www.cloudamize.com/) |
| SQL Server | Azure SQL VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dm)<br/>[DMA](/sql/dma/dma-overview)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090)<br/>[Cloudamize*](https://www.cloudamize.com/) |
| SQL Server | Azure Synapse Analytics | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) | | |
-| RDS SQL | Azure SQL DB, MI, VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dm)<br/>[DMA](/sql/dma/dma-overview) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090) |
+| Amazon RDS for SQL | Azure SQL DB, MI, VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dm)<br/>[DMA](/sql/dma/dma-overview) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090) |
| Oracle | Azure SQL DB, MI, VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
| Oracle | Azure Synapse Analytics | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
| Oracle | Azure DB for PostgreSQL -<br/>Single server | | [Ora2Pg*](http://ora2pg.darold.net/start.html) | |
The following tables identify the services and tools you can use to plan for dat
| Cassandra | Azure Cosmos DB | | | |
| MySQL | Azure SQL DB, MI, VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/) | |
| MySQL | Azure DB for MySQL | | | |
-| RDS MySQL | Azure DB for MySQL | | | |
+| Amazon RDS for MySQL | Azure DB for MySQL | | | |
| PostgreSQL | Azure DB for PostgreSQL -<br/>Single server | | | |
| PostgreSQL | Azure DB for PostgreSQL -<br/>Flexible server | | | |
-| RDS PostgreSQL | Azure DB for PostgreSQL -<br/>Single server | | | |
+| Amazon RDS for PostgreSQL | Azure DB for PostgreSQL -<br/>Single server | | | |
| DB2 | Azure SQL DB, MI, VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
| Access | Azure SQL DB, MI, VM | | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
| Sybase - SAP ASE | Azure SQL DB, MI, VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
The following tables identify the services and tools you can use to plan for dat
| Source | Target | Schema | Data<br/>(Offline) | Data<br/>(Online) |
| | | | | |
-| SQL Server | Azure SQL DB | [SQL Server dacpac extension](/sql/azure-data-studio/extensions/sql-server-dacpac-extension)<br/>[DMA](/sql/dm)<br/>[DMA](/sql/dma/dma-overview)<br/>[DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Cloudamize*](https://www.cloudamize.com/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| SQL Server | Azure SQL DB MI | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| SQL Server | Azure SQL VM | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[DMA](/sql/dm)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| SQL Server | Azure SQL DB | [SQL Database Projects extension](/sql/azure-data-studio/extensions/sql-database-project-extension)<br/>[DMA](/sql/dm)<br/>[DMA](/sql/dma/dma-overview)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Cloudamize*](https://www.cloudamize.com/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| SQL Server | Azure SQL MI | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| SQL Server | Azure SQL VM | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[DMA](/sql/dm)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
| SQL Server | Azure Synapse Analytics | | | |
-| RDS SQL | Azure SQL DB | [SQL Server dacpac extension](/sql/azure-data-studio/extensions/sql-server-dacpac-extension)<br/>[DMA](/sql/dm)<br/>[DMA](/sql/dma/dma-overview)<br/>[DMS](https://azure.microsoft.com/services/database-migration/) | [Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| RDS SQL | Azure SQL DB MI | [DMS](https://azure.microsoft.com/services/database-migration/) | [DMS](https://azure.microsoft.com/services/database-migration/) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| RDS SQL | Azure SQL VM | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[DMA](/sql/dm)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| Oracle | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| Oracle | Azure Synapse Analytics | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| Oracle | Azure DB for PostgreSQL -<br/>Single server | [Ora2Pg*](http://ora2pg.darold.net/start.html)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [Ora2Pg*](http://ora2pg.darold.net/start.html)<br/>[Ispirer*](https://www.ispirer.com/solutions) | <br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| Oracle | Azure DB for PostgreSQL -<br/>Flexible server | [Ora2Pg*](http://ora2pg.darold.net/start.html)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [Ora2Pg*](http://ora2pg.darold.net/start.html)<br/>[Ispirer*](https://www.ispirer.com/solutions) | <br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| Amazon RDS for SQL Server| Azure SQL DB | [SQL Database Projects extension](/sql/azure-data-studio/extensions/sql-database-project-extension)<br/>[DMA](/sql/dm)<br/>[DMA](/sql/dma/dma-overview)| [Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| Amazon RDS for SQL | Azure SQL MI |[Azure SQL Migration extension](./migration-using-azure-data-studio.md) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| Amazon RDS for SQL Server | Azure SQL VM | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[DMA](/sql/dm)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| Oracle | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| Oracle | Azure Synapse Analytics | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| Oracle | Azure DB for PostgreSQL -<br/>Single server | [Ora2Pg*](http://ora2pg.darold.net/start.html)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Ora2Pg*](http://ora2pg.darold.net/start.html)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | <br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| Oracle | Azure DB for PostgreSQL -<br/>Flexible server | [Ora2Pg*](http://ora2pg.darold.net/start.html)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Ora2Pg*](http://ora2pg.darold.net/start.html)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | <br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
| MongoDB | Azure Cosmos DB | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
| Cassandra | Azure Cosmos DB | [Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) |
-| MySQL | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| MySQL | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
| MySQL | Azure DB for MySQL | [MySQL dump*](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html) | [DMS](https://azure.microsoft.com/services/database-migration/) | [MyDumper/MyLoader*](https://centminmod.com/mydumper.html) with [data-in replication](../mysql/concepts-data-in-replication.md)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| RDS MySQL | Azure DB for MySQL | [MySQL dump*](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html) | [DMS](https://azure.microsoft.com/services/database-migration/) | [MyDumper/MyLoader*](https://centminmod.com/mydumper.html) with [data-in replication](../mysql/concepts-data-in-replication.md)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| Amazon RDS for MySQL | Azure DB for MySQL | [MySQL dump*](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html) | [DMS](https://azure.microsoft.com/services/database-migration/) | [MyDumper/MyLoader*](https://centminmod.com/mydumper.html) with [data-in replication](../mysql/concepts-data-in-replication.md)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
| PostgreSQL | Azure DB for PostgreSQL -<br/>Single server | [PG dump*](https://www.postgresql.org/docs/current/static/app-pgdump.html) | [PG dump*](https://www.postgresql.org/docs/current/static/app-pgdump.html) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| RDS PostgreSQL | Azure DB for PostgreSQL -<br/>Single server | [PG dump*](https://www.postgresql.org/docs/current/static/app-pgdump.html) | [PG dump*](https://www.postgresql.org/docs/current/static/app-pgdump.html) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| DB2 | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| Amazon RDS for PostgreSQL | Azure DB for PostgreSQL -<br/>Single server | [PG dump*](https://www.postgresql.org/docs/current/static/app-pgdump.html) | [PG dump*](https://www.postgresql.org/docs/current/static/app-pgdump.html) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| DB2 | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
| Access | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant) |
-| Sybase - SAP ASE | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| Sybase - SAP IQ | Azure SQL DB, MI, VM | [Ispirer*](https://www.ispirer.com/solutions) | [Ispirer*](https://www.ispirer.com/solutions) | |
+| Sybase - SAP ASE | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| Sybase - SAP IQ | Azure SQL DB, MI, VM | [Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | |
| | | | | | ## Post-migration phase
The following tables identify the services and tools you can use to plan for dat
| Source | Target | Optimize |
| | | |
| SQL Server | Azure SQL DB | [Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) |
-| SQL Server | Azure SQL DB MI | [Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) |
+| SQL Server | Azure SQL MI | [Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) |
| SQL Server | Azure SQL VM | [Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) |
| SQL Server | Azure Synapse Analytics | |
| RDS SQL | Azure SQL DB, MI, VM | |
The following tables identify the services and tools you can use to plan for dat
| Cassandra | Azure Cosmos DB | |
| MySQL | Azure SQL DB, MI, VM | |
| MySQL | Azure DB for MySQL | |
-| RDS MySQL | Azure DB for MySQL | |
+| Amazon RDS for MySQL | Azure DB for MySQL | |
| PostgreSQL | Azure DB for PostgreSQL -<br/>Single server | |
| PostgreSQL | Azure DB for PostgreSQL -<br/>Flexible server | |
-| RDS PostgreSQL | Azure DB for PostgreSQL -<br/>Single server | |
+| Amazon RDS for PostgreSQL | Azure DB for PostgreSQL -<br/>Single server | |
| DB2 | Azure SQL DB, MI, VM | |
| Access | Azure SQL DB, MI, VM | |
| Sybase - SAP ASE | Azure SQL DB, MI, VM | |
dms Known Issues Azure Sql Migration Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-sql-migration-azure-data-studio.md
description: Known issues, limitations and troubleshooting guide for Azure SQL Migration extension for Azure Data Studio Previously updated : 01/05/2023 Last updated : 03/14/2023
Known issues and troubleshooting steps associated with the Azure SQL Migration e
- **Message**: `Migration for Database 'DatabaseName' failed with error cannot find server certificate with thumbprint.`
-- **Cause**: The source database is protected with Transparent Data Encryption (TDE). You need to migrate the Database Encryption Key (DEK) to the target Azure SQL Managed Instance or SQL Server on Azure Virtual Machine before starting the migration.
+- **Cause**: Before migrating data, you need to migrate the certificate of the source SQL Server instance from a database that is protected by Transparent Data Encryption (TDE) to the target Azure SQL Managed Instance or SQL Server on Azure Virtual Machine.
-- **Recommendation**: Migrate the TDE certificate to the target instance and retry the process. For more information about migrating TDE-enabled databases, see [Tutorial: Migrate TDE-enabled databases (preview) to Azure SQL in Azure Data Studio](./tutorial-transparent-data-encryption-migration-ads.md).
+- **Recommendation**: Migrate the TDE certificate to the target instance and retry the process. For more information about migrating TDE-enabled databases, see [Tutorial: Migrate TDE-enabled databases (preview) to Azure SQL in Azure Data Studio](/azure/dms/tutorial-transparent-data-encryption-migration-ads).
- **Message**: `Migration for Database <DatabaseName> failed with error 'Non retriable error occurred while restoring backup with index 1 - 3169 The database was backed up on a server running version %ls. That version is incompatible with this server, which is running version %ls. Either restore the database on a server that supports the backup, or use a backup that is compatible with this server.`
Known issues and troubleshooting steps associated with the Azure SQL Migration e
- **Recommendation**: If migrating multiple databases to **Azure SQL Managed Instance** using the same Azure Blob Storage container, you must place backup files for different databases in separate folders inside the container. For more information about LRS, see [Migrate databases from SQL Server to SQL Managed Instance by using Log Replay Service (Preview)](/azure/azure-sql/managed-instance/log-replay-service-migrate#limitations).
+- **Message**: `Migration for Database <Database Name> failed with error 'Non retriable error occurred while restoring backup with index 1 - 12824 The sp_configure value 'contained database authentication' must be set to 1 in order to restore a contained database. You may need to use RECONFIGURE to set the value_in_use.
+RESTORE DATABASE is terminating abnormally.`
- > [!NOTE]
- > For more information on general troubleshooting steps for Azure SQL Managed Instance errors, see [Known issues with Azure SQL Managed Instance](/azure/azure-sql/managed-instance/doc-changes-updates-known-issues)
+- **Cause**: The source database is a contained database. A specific configuration is needed to enable restoring a contained database. For more information about contained databases, see [Contained Database Users](/sql/relational-databases/security/contained-database-users-making-your-database-portable).
+
+- **Recommendation**: Before starting the migration, run the following query against the source SQL Server instance, in the context of the specific database. Then, attempt the migration of the contained database again.
+```sql
+-- Enable "contained database authentication"
+EXEC sp_configure 'contained database authentication', 1;
+RECONFIGURE;
+```
+
+> [!NOTE]
+> For more information on general troubleshooting steps for Azure SQL Managed Instance errors, see [Known issues with Azure SQL Managed Instance](/azure/azure-sql/managed-instance/doc-changes-updates-known-issues)
## Error code: 2012 - TestConnectionFailed - **Message**: `Failed to test connections using provided Integration Runtime. Error details: 'Remote name could not be resolved.'` -- **Cause**: DMS cannot connect to Self-Hosted Integration Runtime (SHIR) due to network settings in the firewall.
+- **Cause**: Firewall network settings are preventing the Self-Hosted Integration Runtime from connecting to the service back end.
- **Recommendation**: There's a Domain Name System (DNS) issue. Contact your network team to fix the issue. For more information, see [Troubleshoot Self-Hosted Integration Runtime](../data-factory/self-hosted-integration-runtime-troubleshoot-guide.md).
Known issues and troubleshooting steps associated with the Azure SQL Migration e
- **Recommendation**: If desired, the target database can be returned to its original state by running the first query and all of the returned queries, then running the second query and doing the same.
-```
+```sql
SELECT [ROLLBACK] FROM [dbo].[__migration_status] WHERE STEP in (3,4,6);
SELECT [ROLLBACK] FROM [dbo].[__migration_status]
WHERE STEP in (5,7,8) ORDER BY STEP DESC; ``` + ## Error code: 2042 - PreCopyStepsCompletedDuringCancel - **Message**: `Pre Copy steps finished successfully before canceling completed. Target database Foreign keys and temporal tables have been altered. Schema migration may be required again for future migrations. Target server: <Target Server>, Target database: <Target Database>.`
WHERE STEP in (5,7,8) ORDER BY STEP DESC;
- **Recommendation**: If desired, the target database can be returned to its original state by running the following query and all of the returned queries.
-```
+```sql
SELECT [ROLLBACK] FROM [dbo].[__migration_status] WHERE STEP in (3,4,6); ```
WHERE STEP in (3,4,6);
- **Recommendation**: For more troubleshooting steps, see [Troubleshoot Azure Data Factory and Synapse pipelines](../data-factory/data-factory-troubleshoot-guide.md#error-code-2108). - ## Error code: 2056 - SqlInfoValidationFailed - **Message**: CollationMismatch: `Source database collation <CollationOptionSource> is not the same as the target database <CollationOptionTarget>. Source database: <SourceDatabaseName> Target database: <TargetDatabaseName>.`
WHERE STEP in (3,4,6);
- **Recommendation**: Check if the selected tables exist in the target Azure SQL Database. If this migration is called from a PowerShell script, check if the table list parameter includes the correct table names and is passed into the migration. - ## Error code: Ext_RestoreSettingsError -- **Message**: Unable to read blobs in storage container, exception: The remote server returned an error: (403) Forbidden. The remote server returned an error: (403) Forbidden
+- **Message**: Unable to read blobs in storage container, exception: The remote server returned an error: (403) Forbidden.;The remote server returned an error: (403) Forbidden
-- **Cause**: Target is unable to connect to blob storage.
+- **Cause**: The Azure SQL target is unable to connect to blob storage.
-- **Recommendation**: Confirm that target network settings allow access to blob storage. For example, if migrating to SQL VM, ensure that outbound connections on VM aren't being blocked.
+- **Recommendation**: Confirm that target network settings allow access to blob storage. For example, if you're migrating to a SQL Server on Azure VM target, ensure that outbound connections on the Virtual Machine aren't being blocked.
-- **Message**: Failed to create restore job. Unable to read blobs in storage container, exception: The remote name could not be resolved.
+- **Message**: Failed to create restore job. Unable to read blobs in storage container, exception: The remote name couldn't be resolved.
-- **Cause**: Target is unable to connect to blob storage.
+- **Cause**: The Azure SQL target is unable to connect to blob storage.
-- **Recommendation**: Confirm that target network settings allow access to blob storage. For example, if migrating to SQL VM, ensure that outbound connections on VM are not being blocked.
+- **Recommendation**: Confirm that target network settings allow access to blob storage. For example, if migrating to SQL VM, ensure that outbound connections on VM aren't being blocked.
- **Message**: `Migration for Database <Database Name> failed with error 'Migration cannot be completed because provided backup file name <Backup File Name> should be the last restore backup file <Last Restore Backup File Name>'`. -- **Cause**: Most recent backup was not specified in backup settings.
+- **Cause**: The most recent backup wasn't specified in the backup settings.
+
+- **Recommendation**: Specify the most recent backup file name in backup settings and retry the operation.
++
+- **Message**: `Operation failed: errorCode: Ext_RestoreSettingsError, message: RestoreId: 1111111-aaaa-bbbb-cccc-dddddddd, OperationId: 2222222-aaaa-bbbb-cccc-dddddddd, Detail: Unable to read blobs in storage container, exception: Unable to connect to the remote server;Unable to connect to the remote server;A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 11.111.11.111:443.`
-- **Recommendation**: Specify most recent backup file name in backup settings and retry operation.
+- **Cause**: This error can occur with storage accounts that use either a public network or a private endpoint configuration. It's also possible that an on-premises DNS server controls routing and DHCP for your hybrid network. Unless you allow the Azure IP addresses in your DNS server configuration, the SQL Server on Azure VM target can't resolve the remote storage blob endpoint.
+- **Recommendation**: To debug this issue, try pinging your Azure Blob Storage URL from the SQL Server on Azure VM target to confirm whether you have a connectivity problem. To solve the issue, allow the Azure IP addresses in your DNS server configuration. For more information, see [Troubleshoot Azure Private Endpoint connectivity problems](/azure/private-link/troubleshoot-private-endpoint-connectivity).
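+
+As a quick check, the following sketch tests name resolution and connectivity to the blob endpoint from the SQL Server on Azure VM target; the storage account name is a placeholder:
+
+```powershell
+# Hypothetical storage account; replace with your blob endpoint.
+Resolve-DnsName -Name "mystorageaccount.blob.core.windows.net"
+Test-NetConnection -ComputerName "mystorageaccount.blob.core.windows.net" -Port 443
+```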
## Azure SQL Database limitations
dms Tutorial Sql Server Azure Sql Database Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-azure-sql-database-offline-ads.md
Before you begin the tutorial:
- If you're using Azure Database Migration Service for the first time, make sure that the Microsoft.DataMigration [resource provider is registered in your subscription](quickstart-create-data-migration-service-portal.md#register-the-resource-provider).
+> [!NOTE]
+> Make sure to migrate the database schema from source to target by using the [SQL Server dacpac extension](/sql/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/sql/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio before selecting the list of tables to migrate.
+>
+> If no tables exist on the Azure SQL Database target, or if no tables are selected before starting the migration, the **Next** button isn't available to select to initiate the migration task.
+>
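+
+As an illustration, migrating the schema with SqlPackage might look like the following sketch; the server names, database name, and `.dacpac` path are placeholders, and authentication arguments are omitted:
+
+```powershell
+# Extract a .dacpac from the source, then publish it to the Azure SQL Database target.
+& SqlPackage /Action:Extract /SourceServerName:"source-sql-server" `
+    /SourceDatabaseName:"MyDatabase" /TargetFile:"C:\temp\MyDatabase.dacpac"
+& SqlPackage /Action:Publish /SourceFile:"C:\temp\MyDatabase.dacpac" `
+    /TargetServerName:"target-server.database.windows.net" /TargetDatabaseName:"MyDatabase"
+```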
+ ## Open the Migrate to Azure SQL wizard in Azure Data Studio To open the Migrate to Azure SQL wizard:
To open the Migrate to Azure SQL wizard:
> [!NOTE] > If no tables are selected or if a username and password aren't entered, the **Next** button isn't available to select.
+>
+> Make sure to migrate the database schema from source to target by using the [SQL Server dacpac extension](/sql/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/sql/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio before selecting the list of tables to migrate.
## Create a Database Migration Service instance
event-hubs Apache Kafka Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/apache-kafka-troubleshooting-guide.md
You may receive Server Busy exception because of Kafka throttling. With AMQP cli
If the traffic is excessive, the service has the following behavior: -- If produce request's delay exceeds request timeout, Event Hubs returns **Policy Violation** error code.
+- If a produce request's delay exceeds the request timeout (*request.timeout.ms*), Event Hubs returns a **Policy Violation** error code.
- If fetch request's delay exceeds request timeout, Event Hubs logs the request as throttled and responds with empty set of records and no error code. [Dedicated clusters](event-hubs-dedicated-overview.md) don't have throttling mechanisms. You're free to consume all of your cluster resources.
expressroute Expressroute Howto Reset Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-reset-peering.md
There are a two scenarios where you may find it helpful to reset your ExpressRou
AllowClassicOperations : False GatewayManagerEtag : ```
-6. Run the following commands to change the state of the peering.
+6. Run the following commands to change the peering state to disabled.
```azurepowershell-interactive $ckt.Peerings[0].State = "Disabled" Set-AzExpressRouteCircuit -ExpressRouteCircuit $ckt ```
- The peering should be in a state you set.
+ The peering should now be in a disabled state.
+7. Run the following commands to change the peering state back to enabled.
+
+ ```azurepowershell-interactive
+ $ckt.Peerings[0].State = "Enabled"
+ Set-AzExpressRouteCircuit -ExpressRouteCircuit $ckt
+ ```
+ The peering should now be in an enabled state.
+
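+   To confirm the state, fetch the circuit again and inspect the peering; the circuit and resource group names below are placeholders:
+
+   ```azurepowershell-interactive
+   $ckt = Get-AzExpressRouteCircuit -Name "MyCircuit" -ResourceGroupName "MyResourceGroup"
+   $ckt.Peerings[0].State
+   ```
+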
## Next steps If you need help with troubleshooting an ExpressRoute problem, see the following articles: * [Verifying ExpressRoute connectivity](expressroute-troubleshooting-expressroute-overview.md)
frontdoor How To Enable Private Link Internal Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-enable-private-link-internal-load-balancer.md
In this section, you'll map the Private Link service to a private endpoint creat
| Resource | If you select **In my directory**, specify the Private Link Service resource for the ILB in your subscription. | | ID/alias | If you select **By ID or alias**, specify the resource ID of the Private Link Service resource you want to enable private link to. | | Region | Select the region that is the same or closest to your origin. |
- | Request message | Customize message or choose the default. |
+ | Request message | A custom message that's shown to the resource owner while approving the Private Endpoint connection. |
1. Then select **Add** and then **Update** to save the origin group settings.
iot-edge Module Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/module-development.md
description: Develop custom modules for Azure IoT Edge that can communicate with
Previously updated : 09/03/2021 Last updated : 3/17/2023
To deploy your program on an IoT Edge device, it must first be containerized and
## Using the IoT Edge hub
-The IoT Edge hub provides two main functionalities: proxy to IoT Hub, and local communications.
+The IoT Edge hub provides two main functionalities: a proxy to IoT Hub and local communications.
### Connecting to IoT Edge hub from a module
To use IoT Edge routing over AMQP, you can use the ModuleClient from the Azure I
### IoT Hub primitives
-IoT Hub sees a module instance analogously to a device, in the sense that:
+IoT Hub sees a module instance as similar to a device. A module instance can:
-* it can send [device-to-cloud messages](../iot-hub/iot-hub-devguide-messaging.md);
-* it can receive [direct methods](../iot-hub/iot-hub-devguide-direct-methods.md) targeted specifically at its identity.
-* it has a module twin that is distinct and isolated from the [device twin](../iot-hub/iot-hub-devguide-device-twins.md) and the other module twins of that device;
+* Send [device-to-cloud messages](../iot-hub/iot-hub-devguide-messaging.md)
+* Receive [direct methods](../iot-hub/iot-hub-devguide-direct-methods.md) targeted specifically at its identity
+* Have a module twin that's distinct and isolated from the [device twin](../iot-hub/iot-hub-devguide-device-twins.md) and the other module twins of that device
Currently, modules can't receive cloud-to-device messages or use the file upload feature.
-When writing a module, you can connect to the IoT Edge hub and use IoT Hub primitives as you would when using IoT Hub with a device application. The only difference between IoT Edge modules and IoT device applications is that you have to refer to the module identity instead of the device identity.
+When writing a module, you can connect to the IoT Edge hub and use IoT Hub primitives as you would when using IoT Hub with a device application. The only difference between IoT Edge modules and IoT device applications is that with modules you have to refer to the module identity instead of the device identity.
#### Device-to-cloud messages
-An IoT Edge module can send messages to the cloud via the IoT Edge hub that acts as a local broker and propagates messages to the cloud. To enable complex processing of device-to-cloud messages, an IoT Edge module can also intercept and process messages sent by other modules or devices to its local IoT Edge hub and send new messages with processed data. Chains of IoT Edge modules can thus be created to build local processing pipelines.
+An IoT Edge module can send messages to the cloud via the IoT Edge hub that acts as a local broker and propagates messages to the cloud. To enable complex processing of device-to-cloud messages, an IoT Edge module can intercept and process messages sent by other modules or devices to its local IoT Edge hub. The IoT Edge module will then send new messages with processed data. Chains of IoT Edge modules can thus be created to build local processing pipelines.
-To send device-to-cloud telemetry messages using routing, use the ModuleClient of the Azure IoT SDK. With the Azure IoT SDK, each module has the concept of module *input* and *output* endpoints. Use the `ModuleClient.sendMessageAsync` method and it will send messages on the output endpoint of your module. Then configure a route in edgeHub to send this output endpoint to IoT Hub.
+To send device-to-cloud telemetry messages using routes:
-To process messages using routing, first set up a route to send messages coming from another endpoint (module or device) to the input endpoint of your module, then listen for messages on the input endpoint of your module. Each time a new message comes back, a callback function is triggered by the Azure IoT SDK. Process your message with this callback function and optionally send new messages on your module endpoint queue.
+* Use the Module Client class of the [Azure IoT SDK](https://github.com/Azure/azure-iot-sdks). Each module has *input* and *output* endpoints.
+* Use a send message method from your Module Client class to send messages on the output endpoint of your module.
+* Set up a route in the edgeHub module of your device to send this output endpoint to IoT Hub.
+
+To process messages using routes:
+
+* Set up a route to send messages coming from another endpoint (module or device) to the input endpoint of your module.
+* Listen for messages on the input endpoint of your module. Each time a new message comes back, a callback function is triggered by the Azure IoT SDK.
+* Process your message with this callback function and (optionally) send new messages in your module endpoint queue.
+
+>[!NOTE]
+> To learn more about declaring a route, see [Learn how to deploy modules and establish routes in IoT Edge](module-composition.md#declare-routes)
#### Twins Twins are one of the primitives provided by IoT Hub. They're JSON documents that store state information including metadata, configurations, and conditions. Each module or device has its own twin.
-To get a module twin with the Azure IoT SDK, call the `ModuleClient.getTwin` method.
+* To get a module twin with the [Azure IoT SDK](https://github.com/Azure/azure-iot-sdks), call the `ModuleClient.getTwin` method.
-To receive a module twin patch with the Azure IoT SDK, implement a callback function and register it with the `ModuleClient.moduleTwinCallback` method from the Azure IoT SDK so that your callback function is triggered each time that a twin patch comes in.
+* To receive a module twin patch with the Azure IoT SDK, implement a callback function and register it with the `ModuleClient.moduleTwinCallback` method from the Azure IoT SDK so that your callback function is triggered each time a twin patch comes in.
#### Receive direct methods
-To receive a direct method with the Azure IoT SDK, implement a callback function and register it with the `ModuleClient.methodCallback` method from the Azure IoT SDK so that your callback function is triggered each time that a direct method comes in.
+To receive a direct method with the [Azure IoT SDK](https://github.com/Azure/azure-iot-sdks), implement a callback function and register it with the `ModuleClient.methodCallback` method from the Azure IoT SDK so that your callback function is triggered each time that a direct method comes in.
## Language and architecture support
-IoT Edge supports multiple operating systems, device architectures, and development languages so that you can build the scenario that matches your needs. Use this section to understand your options for developing custom IoT Edge modules. You can learn more about tooling support and requirements for each language in [Prepare your development and test environment for IoT Edge](development-environment.md).
+IoT Edge supports multiple operating systems, device architectures, and development languages so you can build the scenario that matches your needs. Use this section to understand your options for developing custom IoT Edge modules. You can learn more about tooling support and requirements for each language in [Prepare your development and test environment for IoT Edge](development-environment.md).
### Linux
-For all languages in the following table, IoT Edge supports development for AMD64 and ARM32 Linux containers.
+For all languages in the following table, IoT Edge [supports](support.md) development for AMD64 and most ARM64 Linux containers. Debian 11 ARM32 containers are supported as well.
| Development language | Development tools | | -- | -- |
-| C | Visual Studio Code<br>Visual Studio 2017/2019 |
-| C# | Visual Studio Code<br>Visual Studio 2017/2019 |
+| C | Visual Studio Code<br>Visual Studio 2019/2022 |
+| C# | Visual Studio Code<br>Visual Studio 2019/2022 |
| Java | Visual Studio Code | | Node.js | Visual Studio Code | | Python | Visual Studio Code | >[!NOTE]
->For cross-platform compilation, like compiling an ARM32 IoT Edge module on an AMD64 development machine, you need to configure the development machine to compile code on target device architecture matching the IoT Edge module. For more information, see [Build and debug IoT Edge modules on your remote device](https://devblogs.microsoft.com/iotdev/easily-build-and-debug-iot-edge-modules-on-your-remote-device-with-azure-iot-edge-for-vs-code-1-9-0/) to configure the development machine to compile code on target device architecture matching the IoT Edge module.
+>For cross-platform compilation, like compiling an ARM32 IoT Edge module on an AMD64 development machine, you need to configure the development machine to compile code on target device architecture matching the IoT Edge module. For more information, see [Use Visual Studio Code to develop and debug modules for Azure IoT Edge](how-to-vs-code-develop-module.md).
>
->In addition, support for ARM64 Linux containers is in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). For more information, see [Develop and debug ARM64 IoT Edge modules in Visual Studio Code (preview)](https://devblogs.microsoft.com/iotdev/develop-and-debug-arm64-iot-edge-modules-in-visual-studio-code-preview).
+> For more information about ARM64 Linux containers, see [Use Visual Studio Code to develop and debug modules for Azure IoT Edge](how-to-vs-code-develop-module.md).
### Windows
-IoT Edge 1.1 LTS is the last release channel that supports Windows containers. Starting with version 1.2, Windows containers are not supported.
+We no longer support Windows containers. [IoT Edge for Linux on Windows](iot-edge-for-linux-on-windows.md) is the recommended way to run IoT Edge on Windows devices.
## Module security
To help improve module security, IoT Edge disables some container features by de
### Allow elevated Docker permissions
-In the config file on an IoT Edge device, there's a parameter called `allow_elevated_docker_permissions`. When set to **true**, this flag allows the `--privileged` flag as well as any additional capabilities that you define in the `CapAdd` field of the Docker HostConfig in the [container create options](how-to-use-create-options.md).
+In the config file on an IoT Edge device, there's a parameter called `allow_elevated_docker_permissions`. When set to **true**, this flag allows the `--privileged` flag and any additional capabilities that you define in the `CapAdd` field of the Docker HostConfig in the [container create options](how-to-use-create-options.md).
->[!NOTE]
->Currently, this flag is **true** by default, which allows deployments to grant privileged permissions to modules. We recommend that you set this flag to false to improve device security. In the future, this flag will be set to **false** by default.
+> [!NOTE]
+> Currently, this flag is true by default, which allows deployments to grant privileged permissions to modules. We recommend that you set this flag to false to improve device security.
### Enable CAP_CHOWN and CAP_SETUID
iot-edge Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/support.md
Title: IoT Edge supported platforms
description: Azure IoT Edge supported operating systems, runtimes, and container engines. Previously updated : 1/31/2023 Last updated : 3/17/2023
If you experience problems while using the Azure IoT Edge service, there are sev
Azure IoT Edge modules are implemented as containers, so IoT Edge needs a container engine to launch them. Microsoft provides a container engine, moby-engine, to fulfill this requirement. This container engine is based on the Moby open-source project. Docker CE and Docker EE are other popular container engines. They're also based on the Moby open-source project and are compatible with Azure IoT Edge. Microsoft provides best effort support for systems using those container engines; however, Microsoft can't ship fixes for issues in them. For this reason, Microsoft recommends using moby-engine on production systems.
-<br>
-<center>
-
-![The Moby engine as container runtime](./media/support/only-moby-for-production.png)
-</center>
## Operating systems
Azure IoT Edge runs on most operating systems that can run containers; however,
The systems listed in the following tables are supported by Microsoft, either generally available or in public preview, and are tested with each new release.
-Azure IoT Edge version 1.2 and later only supports modules built as Linux containers. [IoT Edge for Linux on Windows](iot-edge-for-linux-on-windows.md) is the recommended way to run IoT Edge on Windows devices.
-- #### Linux containers Modules built as Linux containers can be deployed to either Linux or Windows devices. For Linux devices, the IoT Edge runtime is installed directly on the host device. For Windows devices, a Linux virtual machine prebuilt with the IoT Edge runtime runs on the host device.
Modules built as Linux containers can be deployed to either Linux or Windows dev
All Windows operating systems must be minimum build 17763 with all current cumulative updates installed.
->[!NOTE]
->Ubuntu Server 16.04 support ended with the release of IoT Edge version 1.1.
- #### Windows containers
-IoT Edge 1.1 LTS is the last release channel that supports Windows containers. Starting with version 1.2, Windows containers aren't supported.
+We no longer support Windows containers. [IoT Edge for Linux on Windows](iot-edge-for-linux-on-windows.md) is the recommended way to run IoT Edge on Windows devices.
### Tier 2
The following table lists the currently supported releases. IoT Edge release ass
| Release notes and assets | Type | Release Date | End of Support Date | | | - | | - | | [1.4](https://github.com/Azure/azure-iotedge/releases/tag/1.4.0) | Long-term support (LTS) | August 2022 | November 12, 2024 |
-| [1.1](https://github.com/Azure/azure-iotedge/releases/tag/1.1.0) | Long-term support (LTS) | February 2021 | December 13, 2022 |
For more information on IoT Edge version history, see, [Version history](version-history.md#version-history).
-IoT Edge 1.1 is the first long-term support (LTS) release channel. This version introduced no new features, but will receive security updates and fixes to regressions. IoT Edge 1.1 LTS uses .NET Core 3.1, and will be supported until December 13, 2022 to match the [.NET Core and .NET 5 release lifecycle](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
- > [!IMPORTANT] > * Every Microsoft product has a lifecycle. The lifecycle begins when a product is released and ends when it's no longer supported. Knowing key dates in this lifecycle helps you make informed decisions about when to upgrade or make other changes to your software. IoT Edge is governed by Microsoft's [Modern Lifecycle Policy](/lifecycle/policies/modern).
-> * With the release of a long-term support channel, we recommend that all current customers running 1.0.x upgrade their devices to 1.1.x to receive ongoing support.
IoT Edge uses the Microsoft.Azure.Devices.Client SDK. For more information, see the [Azure IoT C# SDK GitHub repo](https://github.com/Azure/azure-iot-sdk-csharp) or the [Azure SDK for .NET reference content](/dotnet/api/overview/azure/iot/client). The following list shows the version of the client SDK that each release is tested against:
IoT Edge uses the Microsoft.Azure.Devices.Client SDK. For more information, see
Azure IoT Edge can be run in virtual machines, such as an [Azure Virtual Machine](../virtual-machines/index.yml). Using a virtual machine as an IoT Edge device is common when customers want to augment existing infrastructure with edge intelligence. The family of the host VM OS must match the family of the guest OS used inside a module's container. This requirement is the same as when Azure IoT Edge is run directly on a device. Azure IoT Edge is agnostic of the underlying virtualization technology and works in VMs powered by platforms like Hyper-V and vSphere.
-<center>
-
-![Azure IoT Edge in a VM](./media/support/edge-on-vm-linux.png)
-
-</center>
## Minimum system requirements
machine-learning Resource Curated Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/resource-curated-environments.md
This article lists the curated environments with latest framework versions in Az
### Azure Container for PyTorch (ACPT)
-**Name**: AzureML-ACPT-pytorch-1.12-py39-cuda11.6-gpu
-**Description**: The Azure Curated Environment for PyTorch is our latest PyTorch curated environment. It's optimized for large, distributed deep learning workloads and comes prepackaged with the best of Microsoft technologies for accelerated training, for example, OnnxRuntime Training (ORT), DeepSpeed, MSCCL, etc.
+**Description**: Recommended environment for deep learning with PyTorch on Azure. It contains the Azure Machine Learning SDK with the latest compatible versions of Ubuntu, Python, PyTorch, CUDA/ROCm, and NebulaML, combined with optimizers like ORT Training, DeepSpeed, MSCCL, and ORT MoE, plus checkpointing using NebulaML, and more.
To learn more, see [Azure Container for PyTorch (ACPT)](resource-azure-container-for-pytorch.md).
managed-grafana Known Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/known-limitations.md
Azure Managed Grafana has the following known limitations:
* Reporting is currently not supported.
-* Unified alerting isn't activated by default. Activation is done manually by the Azure Managed Grafana team. For activation, [contact us](mailto:ad4g@microsoft.com).
+* Unified alerting is enabled by default for all instances created after December 2022. For instances created before this date, unified alerting must be enabled manually by the Azure Managed Grafana team. For activation, [contact us](mailto:ad4g@microsoft.com).
## Next steps
network-watcher Network Watcher Connectivity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-connectivity-overview.md
Title: Introduction to connection troubleshoot
+ Title: Connection troubleshoot overview
-description: This page provides an overview of Azure Network Watcher connection troubleshoot capability.
+description: Learn about Azure Network Watcher connection troubleshoot capability.
Previously updated : 11/10/2022 Last updated : 03/22/2023 +
-# Introduction to Azure Network Watcher connection troubleshoot in Azure Network Watcher
+# Connection troubleshoot overview
-The connection troubleshoot feature of Network Watcher provides the capability to check a direct TCP connection from a virtual machine to a virtual machine (VM), fully qualified domain name (FQDN), URI, or IPv4 address. Network scenarios are complex, they're implemented using network security groups, firewalls, user-defined routes, and resources provided by Azure. Complex configurations make troubleshooting connectivity issues challenging. Network Watcher helps reduce the amount of time to find and detect connectivity issues. The results returned can provide insights into whether a connectivity issue is due to a platform or a user configuration issue. Connectivity can be checked with [PowerShell](network-watcher-connectivity-powershell.md), [Azure CLI](network-watcher-connectivity-cli.md), and [REST API](network-watcher-connectivity-rest.md).
+With the increase of sophisticated and high-performance workloads in Azure, there's a critical need for increased visibility and control over the operational state of complex networks running these workloads. Such complex networks are implemented using network security groups, firewalls, user-defined routes, and resources provided by Azure. Complex configurations make troubleshooting connectivity issues challenging.
+
+The connection troubleshoot feature of Azure Network Watcher helps reduce the amount of time to diagnose and troubleshoot network connectivity issues. The results returned can provide insights about the root cause of the connectivity problem and whether it's due to a platform or user configuration issue.
+
+Connection troubleshoot reduces the Mean Time To Resolution (MTTR) by providing a comprehensive method of performing all major connection checks to detect issues pertaining to network security groups, user-defined routes, and blocked ports. It provides the following results with actionable insights, where a step-by-step guide or corresponding documentation is provided for faster resolution:
+
+- Connectivity test with different destination types (VM, URI, FQDN, or IP Address)
+- Configuration issues that impact reachability
+- All possible hop-by-hop paths from the source to the destination
+- Hop-by-hop latency
+- Latency (minimum, maximum, and average between source and destination)
+- Graphical topology view from source to destination
+- Number of probes failed during the connection troubleshoot check
+
+## Supported source and destination types
+
+Connection troubleshoot provides the capability to check TCP or ICMP connections from any of these Azure resources:
+
+- Virtual machines
+- Azure Bastion instances
+- Application gateways (except v1)
> [!IMPORTANT]
-> Connection troubleshoot requires that the VM you troubleshoot from has the `AzureNetworkWatcherExtension` VM extension installed. For installing the extension on a Windows VM visit [Azure Network Watcher Agent virtual machine extension for Windows](../virtual-machines/extensions/network-watcher-windows.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json) and for Linux VM visit [Azure Network Watcher Agent virtual machine extension for Linux](../virtual-machines/extensions/network-watcher-linux.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json). The extension is not required on the destination endpoint.
+> Connection troubleshoot requires that the virtual machine you troubleshoot from has the `AzureNetworkWatcherExtension` extension installed. The extension is not required on the destination virtual machine.
+> - To install the extension on a Windows VM, see [Azure Network Watcher Agent virtual machine extension for Windows](../virtual-machines/extensions/network-watcher-windows.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json).
+> - To install the extension on a Linux VM, see [Azure Network Watcher Agent virtual machine extension for Linux](../virtual-machines/extensions/network-watcher-linux.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json).
+
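+As a sketch, installing the extension on a Windows VM with Azure PowerShell might look like the following; the resource names, location, and handler version are assumptions:
+
+```azurepowershell-interactive
+# Hypothetical names and version; adjust to your environment.
+Set-AzVMExtension -ResourceGroupName "myResourceGroup" -VMName "VM1" -Location "eastus" `
+    -Name "AzureNetworkWatcherExtension" -Publisher "Microsoft.Azure.NetworkWatcher" `
+    -ExtensionType "NetworkWatcherAgentWindows" -TypeHandlerVersion "1.4"
+```
+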
+Connection troubleshoot can test connections to any of these destinations:
+
+- Virtual machines
+- Fully qualified domain names (FQDNs)
+- Uniform resource identifiers (URIs)
+- IP addresses
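+
+For example, a minimal Azure PowerShell sketch that checks a TCP connection from a VM to an FQDN; the resource names and location are placeholders:
+
+```azurepowershell-interactive
+# Hypothetical names; replace with your Network Watcher region and VM.
+$nw = Get-AzNetworkWatcher -Location "eastus"
+$vm = Get-AzVM -ResourceGroupName "myResourceGroup" -Name "VM1"
+Test-AzNetworkWatcherConnectivity -NetworkWatcher $nw -SourceId $vm.Id `
+    -DestinationAddress "www.bing.com" -DestinationPort 443
+```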
-## Supported source types
+## Issues detected by connection troubleshoot
-The following sources are supported by Network Watcher:
+Connection troubleshoot can detect the following types of issues that can impact connectivity:
-- Virtual Machines-- Bastion-- Application Gateways (except v1)
+- High VM CPU utilization
+- High VM memory utilization
+- Virtual machine (guest) firewall rules blocking traffic
+- DNS resolution failures
+- Misconfigured or missing routes
+- Network security group (NSG) rules that are blocking traffic
+- Inability to open a socket at the specified source port
+- Missing Address Resolution Protocol (ARP) entries for Azure ExpressRoute circuits
+- Servers not listening on designated destination ports
## Response
-The following table shows the properties returned when connection troubleshoot has finished running.
+The following table shows the properties returned after running connection troubleshoot.
|**Property** |**Description** | |||
Connection troubleshoot returns fault types about the connection. The following
||| |CPU | High CPU utilization. | |Memory | High Memory utilization. |
-|GuestFirewall | Traffic is blocked due to a virtual machine firewall configuration. <br><br> Note that a TCP ping is a unique use case in which, if there's no allowed rule, the firewall itself responds to the client's TCP ping request even though the TCP ping doesn't reach the target IP address/FQDN. This event isn't logged. If there's a network rule that allows access to the target IP address/FQDN, the ping request reaches the target server and its response is relayed back to the client. This event is logged in the Network rules log. |
+|GuestFirewall | Traffic is blocked due to a virtual machine firewall configuration. <br><br> A TCP ping is a unique use case in which, if there's no allowed rule, the firewall itself responds to the client's TCP ping request even though the TCP ping doesn't reach the target IP address/FQDN. This event isn't logged. If there's a network rule that allows access to the target IP address/FQDN, the ping request reaches the target server and its response is relayed back to the client. This event is logged in the Network rules log. |
|DNSResolution | DNS resolution failed for the destination address. |
-|NetworkSecurityRule | Traffic is blocked by an NSG Rule (Rule is returned) |
+|NetworkSecurityRule | Traffic is blocked by a network security group rule (security rule is returned) |
|UserDefinedRoute|Traffic is dropped due to a user defined or system route. | ### Next steps
-Learn how to troubleshoot connections using the [Azure portal](network-watcher-connectivity-portal.md), [PowerShell](network-watcher-connectivity-powershell.md), the [Azure CLI](network-watcher-connectivity-cli.md), or [REST API](network-watcher-connectivity-rest.md).
+- To learn how to use connection troubleshoot to test and troubleshoot connections, see [Troubleshoot connections with Azure Network Watcher using the Azure portal](network-watcher-connectivity-portal.md).
+- To learn more about Network Watcher and its other capabilities, see [What is Azure Network Watcher?](network-watcher-monitoring-overview.md).
network-watcher Network Watcher Connectivity Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-connectivity-portal.md
Previously updated : 01/04/2021 Last updated : 03/22/2023 + # Troubleshoot connections with Azure Network Watcher using the Azure portal
-> [!div class="op_single_selector"]
-> - [Portal](network-watcher-connectivity-portal.md)
-> - [PowerShell](network-watcher-connectivity-powershell.md)
-> - [Azure CLI](network-watcher-connectivity-cli.md)
-> - [Azure REST API](network-watcher-connectivity-rest.md)
+In this article, you learn how to use [Azure Network Watcher connection troubleshoot](network-watcher-connectivity-overview.md) to diagnose and troubleshoot connectivity issues.
-Learn how to use connection troubleshoot to verify whether a direct TCP connection from a virtual machine to a given endpoint can be established.
+## Prerequisites
-## Before you begin
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Virtual machines (VMs) to troubleshoot connections with.
-This article assumes you have the following resources:
+> [!IMPORTANT]
+> Connection troubleshoot requires that the virtual machine you troubleshoot from has the `AzureNetworkWatcherExtension` extension installed. The extension is not required on the destination virtual machine.
+> - To install the extension on a Windows VM, see [Azure Network Watcher Agent virtual machine extension for Windows](../virtual-machines/extensions/network-watcher-windows.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json).
+> - To install the extension on a Linux VM, see [Azure Network Watcher Agent virtual machine extension for Linux](../virtual-machines/extensions/network-watcher-linux.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json).
-* An instance of Network Watcher in the region you want to troubleshoot a connection.
-* Virtual machines to troubleshoot connections with.
+## Test connectivity between two connected virtual machines
-> [!IMPORTANT]
-> Connection troubleshoot requires that the VM you troubleshoot from has the `AzureNetworkWatcherExtension` VM extension installed. For installing the extension on a Windows VM visit [Azure Network Watcher Agent virtual machine extension for Windows](../virtual-machines/extensions/network-watcher-windows.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json) and for Linux VM visit [Azure Network Watcher Agent virtual machine extension for Linux](../virtual-machines/extensions/network-watcher-linux.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json). The extension is not required on the destination endpoint.
+In this section, you test connectivity between two connected virtual machines.
-## Check connectivity to a virtual machine
+1. Sign in to the [Azure portal](https://portal.azure.com).
-This example checks connectivity to a destination virtual machine over port 80.
+1. In the search box at the top of the portal, enter *network watcher*. Select **Network Watcher** in the search results.
-Navigate to your Network Watcher and click **Connection troubleshoot**. Select the virtual machine to check connectivity from. In the **Destination** section choose **Select a virtual machine** and choose the correct virtual machine and port to test.
+1. Under **Network diagnostic tools**, select **Connection troubleshoot**. Enter or select the following information:
-Once you click **Check**, connectivity between the virtual machines on the port specified is checked. In the example, the destination VM is unreachable, a listing of hops are shown.
+ | Setting | Value |
+ | - | |
+ | **Source** | |
+ | Subscription | Select your Azure subscription. |
+ | Resource group | Select **myResourceGroup**. |
+ | Source type | Select **Virtual machine**. |
+ | Virtual machine | Select **VM1**. |
+ | **Destination** | |
+ | Destination type | Select **Select a virtual machine**. |
+ | Resource group | Select **myResourceGroup**. |
+ | Virtual machine | Select **VM2**. |
+ | **Probe Settings** | |
+ | Preferred IP version | Select **IPv4**. |
+ | Protocol | Select **TCP**. |
+ | Destination port | Enter *80*. |
+ | **Connection Diagnostics** | |
+ | Diagnostics tests | Select **Select all**. |
-![Check connectivity results for a virtual machine][1]
+ :::image type="content" source="./media/network-watcher-connectivity-portal/test-virtual-machines-connected.png" alt-text="Screenshot of Network Watcher connection troubleshoot in Azure portal to test the connection between two connected virtual machines.":::
-## Check remote endpoint connectivity
+1. Select **Test connection**.
-To check the connectivity and latency to a remote endpoint, choose the **Specify manually** radio button in the **Destination** section, input the url and the port and click **Check**. This is used for remote endpoints like websites and storage endpoints.
+ The test results show that the two virtual machines are communicating with no issues:
-![Check connectivity results for a web site][2]
+ - Network security group rules allow traffic between the two virtual machines.
+ - The two virtual machines are directly connected (VM2 is the next hop of VM1).
+ - Azure default system route is used to route traffic between the two virtual machines (Route table ID: System route).
+ - 66 probes were successfully sent with an average latency of 2 ms.
-## Next steps
+ :::image type="content" source="./media/network-watcher-connectivity-portal/virtual-machine-connected-test-result.png" alt-text="Screenshot of connection troubleshoot results after testing the connection between two connected virtual machines.":::
+
+## Troubleshoot connectivity issue between two virtual machines
+
+In this section, you test connectivity between two virtual machines that have a connectivity issue.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. In the search box at the top of the portal, enter *network watcher*. Select **Network Watcher** in the search results.
+
+1. Under **Network diagnostic tools**, select **Connection troubleshoot**. Enter or select the following information:
+
+ | Setting | Value |
+ | - | |
+ | **Source** | |
+ | Subscription | Select your Azure subscription. |
+ | Resource group | Select **myResourceGroup**. |
+ | Source type | Select **Virtual machine**. |
+ | Virtual machine | Select **VM1**. |
+ | **Destination** | |
+ | Destination type | Select **Select a virtual machine**. |
+ | Resource group | Select **myResourceGroup**. |
+ | Virtual machine | Select **VM3**. |
+ | **Probe Settings** | |
+ | Preferred IP version | Select **IPv4**. |
+ | Protocol | Select **TCP**. |
+ | Destination port | Enter *80*. |
+ | **Connection Diagnostics** | |
+ | Diagnostics tests | Select **Select all**. |
+
+ :::image type="content" source="./media/network-watcher-connectivity-portal/test-two-virtual-machines.png" alt-text="Screenshot of Network Watcher connection troubleshoot in Azure portal to test the connection between two virtual machines.":::
+
+1. Select **Test connection**.
+
+ The test results show that the two virtual machines aren't communicating:
+
+ - The two virtual machines aren't connected (no probes were sent from VM1 to VM3).
+ - There's no route between the two virtual machines (Next hop type: None).
+ - Azure default system route is the route table used (Route table ID: System route).
+ - Network security group rules allow traffic between the two virtual machines.
-Learn how to automate packet captures with Virtual machine alerts by viewing [Create an alert triggered packet capture](network-watcher-alert-triggered-packet-capture.md)
+ :::image type="content" source="./media/network-watcher-connectivity-portal/virtual-machines-test-result.png" alt-text="Screenshot of connection troubleshoot results after testing the connection between two virtual machines that aren't communicating.":::
-Find if certain traffic is allowed in or out of your VM by visiting [Check IP flow verify](diagnose-vm-network-traffic-filtering-problem.md)
+## Test connectivity with `www.bing.com`
+
+In this section, you test connectivity between a virtual machine and `www.bing.com`.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. In the search box at the top of the portal, enter *network watcher*. Select **Network Watcher** in the search results.
+
+1. Under **Network diagnostic tools**, select **Connection troubleshoot**. Enter or select the following information:
+
+ | Setting | Value |
+ | - | |
+ | **Source** | |
+ | Subscription | Select your Azure subscription. |
+ | Resource group | Select **myResourceGroup**. |
+ | Source type | Select **Virtual machine**. |
+ | Virtual machine | Select **VM1**. |
+ | **Destination** | |
+ | Destination type | Select **Specify manually**. |
+ | URI, FQDN, or IP address | Enter *www\.bing.com*. |
+ | **Probe Settings** | |
+ | Preferred IP version | Select **IPv4**. |
+ | Protocol | Select **TCP**. |
+ | Destination port | Enter *443*. |
+ | **Connection Diagnostics** | |
+ | Diagnostics tests | Select **Connectivity**. |
+
+ :::image type="content" source="./media/network-watcher-connectivity-portal/test-bing.png" alt-text="Screenshot of Network Watcher connection troubleshoot in Azure portal to test the connection between a virtual machine and Microsoft Bing search engine.":::
+
+1. Select **Test connection**.
+
+ The test results show that `www.bing.com` is reachable from the **VM1** virtual machine:
+
+ - Connectivity test is successful with 66 probes sent with an average latency of 3 ms.
+
+ :::image type="content" source="./media/network-watcher-connectivity-portal/bing-test-result.png" alt-text="Screenshot of connection troubleshoot results after testing the connection with Microsoft Bing search engine.":::
+
+## Next steps
-[1]: ./media/network-watcher-connectivity-portal/figure1.png
-[2]: ./media/network-watcher-connectivity-portal/figure2.png
+Learn how to [automate virtual machine packet captures](network-watcher-alert-triggered-packet-capture.md).
network-watcher Network Watcher Packet Capture Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-overview.md
Title: Introduction to packet capture in Azure Network Watcher
-description: Learn about the Network Watcher packet capture capability.
+ Title: Packet capture overview
+
+description: Learn about Azure Network Watcher packet capture capability.
Previously updated : 06/07/2022 Last updated : 03/22/2023 -+
-# Introduction to packet capture in Azure Network Watcher
-
-> [!Important]
-> Packet capture is now also available for **virtual machine scale sets**. To check it out, visit [Manage packet captures in virtual machine scale sets with Azure Network Watcher using the Azure portal](network-watcher-packet-capture-manage-portal-vmss.md).
+# Packet capture overview
-Network Watcher packet capture allows you to create packet capture sessions to track traffic to and from a virtual machine. Packet capture helps to diagnose network anomalies both reactively and proactively. Other uses include gathering network statistics, gaining information on network intrusions, to debug client-server communications and much more.
+Azure Network Watcher packet capture allows you to create packet capture sessions to track traffic to and from a virtual machine (VM) or a scale set. Packet capture helps to diagnose network anomalies both reactively and proactively. Other uses include gathering network statistics, gaining information on network intrusions, to debug client-server communications and much more.
-Packet capture is an extension that is remotely started through Network Watcher. This capability eases the burden of running a packet capture manually on the desired virtual machine or virtual machine scale set instance(s), which saves valuable time. Packet capture can be triggered through the portal, PowerShell, CLI, or REST API. One example of how packet capture can be triggered is with Virtual Machine alerts. Filters are provided for the capture session to ensure you capture traffic you want to monitor. Filters are based on 5-tuple (protocol, local IP address, remote IP address, local port, and remote port) information. The captured data is stored in the local disk or a storage blob.
+Packet capture is an extension that is remotely started through Network Watcher. This capability eases the burden of running a packet capture manually on the desired virtual machine or virtual machine scale set instance(s), which saves valuable time. Packet capture can be triggered through the portal, PowerShell, Azure CLI, or REST API. One example of how packet capture can be triggered is with virtual machine alerts. Filters are provided for the capture session to ensure you capture traffic you want to monitor. Filters are based on 5-tuple (protocol, local IP address, remote IP address, local port, and remote port) information. The captured data can be stored in the local disk or a storage blob.
> [!IMPORTANT]
-> Packet capture requires a virtual machine extension `AzureNetworkWatcherExtension`. For installing the extension on a Windows VM visit [Azure Network Watcher Agent virtual machine extension for Windows](../virtual-machines/extensions/network-watcher-windows.md) and for Linux VM visit [Azure Network Watcher Agent virtual machine extension for Linux](../virtual-machines/extensions/network-watcher-linux.md).
+> Packet capture requires a virtual machine extension `AzureNetworkWatcherExtension`.
+> - To install the extension on a Windows virtual machine, see [Network Watcher Agent VM extension for Windows](../virtual-machines/extensions/network-watcher-windows.md).
+> - To install the extension on a Linux virtual machine, see [Network Watcher Agent VM extension for Linux](../virtual-machines/extensions/network-watcher-linux.md).
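+
+For instance, a sketch of starting a capture of TCP port 80 traffic on a VM with Azure PowerShell; the resource names are placeholders:
+
+```azurepowershell-interactive
+# Hypothetical names; replace with your own resources.
+$nw = Get-AzNetworkWatcher -Location "eastus"
+$vm = Get-AzVM -ResourceGroupName "myResourceGroup" -Name "VM1"
+$sa = Get-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mystorageaccount"
+$filter = New-AzPacketCaptureFilterConfig -Protocol TCP -LocalPort "80"
+New-AzNetworkWatcherPacketCapture -NetworkWatcher $nw -TargetVirtualMachineId $vm.Id `
+    -PacketCaptureName "PacketCapture1" -StorageAccountId $sa.Id `
+    -TimeLimitInSeconds 18000 -Filter $filter
+```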
-To reduce the information in order to capture only required information, following options are available for a packet capture session:
+To control the size of captured data and only capture required information, use the following options:
-**Capture configuration**
+#### Capture configuration
|Property|Description| |||
-|**Maximum bytes per packet (bytes)** | The number of bytes from each packet that are captured, all bytes are captured if left blank. If you need only the IPv4 header ΓÇô indicate 34 here |
+|**Maximum bytes per packet (bytes)** | The number of bytes captured from each packet. All bytes are captured if left blank. Enter 34 if you only need to capture the IPv4 header.|
|**Maximum bytes per session (bytes)** | Total number of bytes that are captured; once the value is reached, the session ends.|
-|**Time limit (seconds)** | Sets a time constraint on the packet capture session. The default value is 18000 seconds or 5 hours.|
+|**Time limit (seconds)** | Packet capture session time limit; once the value is reached, the session ends. The default value is 18000 seconds (5 hours).|
-**Filtering (optional)**
+#### Filtering (optional)
|Property|Description| |||
To reduce the information in order to capture only required information, followi
## Considerations+ There's a limit of 10,000 parallel packet capture sessions per region per subscription. This limit applies only to the sessions and doesn't apply to the saved packet capture files either locally on the VM or in a storage account. See the [Network Watcher service limits page](../azure-resource-manager/management/azure-subscription-service-limits.md#network-watcher-limits) for a full list of limits.
-### Next steps
+## Next steps
-Learn how you can manage packet captures through the portal by visiting [Manage packet capture in the Azure portal for VM](network-watcher-packet-capture-manage-portal.md)and [Manage packet capture in the Azure portal for Virtual Machine Scale Sets](network-watcher-packet-capture-manage-portal-vmss.md) or with PowerShell by visiting [Manage Packet Capture with PowerShell for VM](network-watcher-packet-capture-manage-powershell.md)and [Manage Packet Capture with PowerShell for Virtual Machine Scale Sets](network-watcher-packet-capture-manage-powershell-vmss.md)
+- To learn how to manage packet captures using the Azure portal, see [Manage packet captures in virtual machines using the Azure portal](network-watcher-packet-capture-manage-portal.md) and [Manage packet captures in Virtual Machine Scale Sets using the Azure portal](network-watcher-packet-capture-manage-portal-vmss.md).
+- To learn how to manage packet captures using Azure PowerShell, see [Manage packet captures in virtual machines using PowerShell](network-watcher-packet-capture-manage-powershell.md) and [Manage packet captures in Virtual Machine Scale Sets using PowerShell](network-watcher-packet-capture-manage-powershell-vmss.md).
+- To learn how to create proactive packet captures based on virtual machine alerts, see [Create an alert triggered packet capture](network-watcher-alert-triggered-packet-capture.md).
-Learn how to create proactive packet captures based on virtual machine alerts by visiting [Create an alert triggered packet capture](network-watcher-alert-triggered-packet-capture.md)
-<!--Image references-->
-[1]: ./media/network-watcher-packet-capture-overview/figure1.png
networking Troubleshoot Failed State https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/troubleshoot-failed-state.md
Title: 'Troubleshoot Azure Microsoft.Network failed Provisioning State'
-description: Learn how to troubleshoot Azure Microsoft.Network failed Provisioning State.
+description: Learn about the meaning of various provisioning states and how to troubleshoot Azure Microsoft.Network failed Provisioning State.
- Previously updated : 04/08/2022 Last updated : 03/21/2023 # Troubleshoot Azure Microsoft.Network failed provisioning state
-This article helps understand the meaning of various provisioning states for Microsoft.Network resources and how to effectively troubleshoot situations when the state is **Failed**.
+This article helps you understand the meaning of various provisioning states for Microsoft.Network resources. You can effectively troubleshoot situations when the state is **Failed**.
[!INCLUDE [support-disclaimer](../../includes/support-disclaimer.md)] ## Provisioning states
-The provisioning state is the status of a user-initiated, control-plane operation on an Azure Resource Manager resource.
+The provisioning state is the status of a user-initiated, control-plane operation on an Azure Resource Manager resource.
| Provisioning state | Description | ||| | Updating | Resource is being created or updated. |
-| Failed | Last operation on the resource was not successful. |
-| Succeeded | Last operation on the resource was successful. |
-| Deleting | Resource is being deleted. |
-| Migrating | Seen when migrating from Azure Service Manager to Azure Resource Manager. |
+| Failed | Last operation on the resource wasn't successful. |
+| Succeeded | Last operation on the resource was successful. |
+| Deleting | Resource is being deleted. |
+| Migrating | Seen when migrating from Azure Service Manager to Azure Resource Manager. |
-These states are just metadata properties of the resource and are independent from the functionality of the resource itself.
-Being in a failed state does not necessarily mean that the resource is not functional, in fact in most cases it can continue operating and servicing traffic without issues.
+These states are metadata properties of the resource. They're independent from the functionality of the resource itself. Being in the failed state doesn't necessarily mean that the resource isn't functional. In most cases, it can continue operating and serving traffic without issues.
-However in several scenarios further operations on the resource or on other resources that depend on it may fail if the resource is in failed state, so the state needs to be reverted back to succeeded before executing other operations.
+In several scenarios, if the resource is in the failed state, further operations on the resource or on other resources that depend on it might fail. You need to revert the state back to succeeded before running other operations.
-For example, you cannot execute an operation on a VirtualNetworkGateway if it has a dependent VirtualNetworkGatewayConnection object in failed state and viceversa.
+For example, you can't run an operation on a `VirtualNetworkGateway` if it has a dependent `VirtualNetworkGatewayConnection` object in failed state.
## Restoring succeeded state through a PUT operation
-The correct way to restore succeeded state is to execute another write (PUT) operation on the resource.
+To restore succeeded state, run another write (`PUT`) operation on the resource.
-Most times, the issue that caused the previous operation might no longer be current, hence the newer write operation should be successful and restore the provisioning state.
+The issue that caused the previous operation to fail might no longer be current. The newer write operation should be successful and restore the provisioning state.
-The easiest way to achieve this task is to use Azure PowerShell. You will need to issue a resource-specific "Get" command that fetches all the current configuration for the impacted resource as it is deployed. Next, you can execute a "Set" command (or equivalent) to commit to Azure a write operation containing all the resource properties as they are currently configured.
+The easiest way to achieve this task is to use Azure PowerShell. Issue a resource-specific *Get* command that fetches all the current configuration for the resource. Next, run a *Set* command, or equivalent, to commit to Azure a write operation that contains all the resource properties as currently configured.
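+
+The same GET-then-PUT pattern can also be sketched at the SDK level. The following is a minimal illustration only, not part of the cmdlet reference that follows: it assumes the Azure SDK for Python (`azure-identity` and `azure-mgmt-network` packages) and uses a virtual network as the example resource. Substitute the operations group for your own resource type.
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.network import NetworkManagementClient
+
+client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
+
+# GET the resource's full current configuration first...
+vnet = client.virtual_networks.get("your_resource_group_name", "your_resource_name")
+
+# ...then PUT it back unchanged. A successful write resets the
+# provisioning state to Succeeded without changing the configuration.
+client.virtual_networks.begin_create_or_update(
+    "your_resource_group_name", "your_resource_name", vnet
+).result()
+```
+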
> [!IMPORTANT]
-> 1. Executing a "Set" command on the resource without running a "Get" first will result in overwriting the resource with default settings which might be different from those you currently have configured. Do not just run a "Set" command unless resetting settings is intentional.
-> 2. Executing a "Get" and "Set" operation using third party software or otherwise any tool using older API version may also result in loss of some settings, as those may not be present in the API version with which you have executed the command.
+>
+> - Running a `Set` command on the resource without first running a `Get` results in overwriting the resource with default settings. Those settings might be different from the ones you currently have configured. Don't just run a `Set` command unless you intend to reset to default.
+> - Running a `Get` and `Set` operation using third-party software, or any tool that uses an older API version, might also result in loss of some settings. Those settings might not be present in the API version with which you run the command.
>

## Azure PowerShell cmdlets to restore succeeded provisioning state
The easiest way to achieve this task is to use Azure PowerShell. You will need t
### Preliminary operations
-1. Install the latest version of the Azure Resource Manager PowerShell cmdlets. For more information, see [Install and configure Azure PowerShell](/powershell/azure/install-az-ps).
+1. Install the latest version of the Azure Resource Manager PowerShell cmdlets. For more information, see [Install the Azure Az PowerShell module](/powershell/azure/install-az-ps).
2. Open your PowerShell console with elevated privileges, and connect to your account. Use the following example to help you connect:

   ```azurepowershell-interactive
   Connect-AzAccount
   ```

3. If you have multiple Azure subscriptions, check the subscriptions for the account.

   ```azurepowershell-interactive
   Get-AzSubscription
   ```

4. Specify the subscription that you want to use.

   ```azurepowershell-interactive
   Select-AzSubscription -SubscriptionName "Replace_with_your_subscription_name"
   ```
-5. Run the resource-specific commands listed below to reset the provisioning state to succeeded.
-
+
+5. Run the resource-specific commands in the following sections to reset the provisioning state.
+ > [!NOTE]
->Every sample command in this article uses "your_resource_name" for the name of the Resource and "your_resource_group_name" for the name of the Resource Group. Make sure to replace these strings with the appropriate Resource and Resource Group names according to your deployment.
+> Every sample command in this article uses `your_resource_name` for the name of the resource and `your_resource_group_name` for the name of the resource group. Make sure to replace these strings with the appropriate resource and resource group names for your deployment.
### Microsoft.Network/applicationGateways
Get-AzApplicationGateway -Name "your_resource_name" -ResourceGroupName "your_res
```azurepowershell-interactive
Get-AzApplicationGatewayFirewallPolicy -Name "your_resource_name" -ResourceGroupName "your_resource_group_name" | Set-AzApplicationGatewayFirewallPolicy
```

### Microsoft.Network/azureFirewalls

```azurepowershell-interactive
Get-AzFirewall -Name "your_resource_name" -ResourceGroupName "your_resource_group_name" | Set-AzFirewall
```

### Microsoft.Network/bastionHosts

```azurepowershell-interactive
Get-AzExpressRouteGateway -Name "your_resource_name" -ResourceGroupName "your_re
```

> [!NOTE]
-> **Microsoft.Network/expressRouteGateways** are those gateways deployed within a Virtual WAN. If you have a standalone gateway of ExpressRoute type in your Virtual Network you need to execute the commands related to [Microsoft.Network/virtualNetworkGateways](#microsoftnetworkvirtualnetworkgateways).
+> `Microsoft.Network/expressRouteGateways` are deployed within a Virtual WAN. If you have a standalone ExpressRoute gateway in your virtual network, run the commands related to [Microsoft.Network/virtualNetworkGateways](#microsoftnetworkvirtualnetworkgateways).
### Microsoft.Network/expressRoutePorts
Get-AzNetworkSecurityGroup -Name "your_resource_name" -ResourceGroupName "your_r
```azurepowershell-interactive
Get-AzNetworkVirtualAppliance -Name "your_resource_name" -ResourceGroupName "your_resource_group_name" | Update-AzNetworkVirtualAppliance
```

> [!NOTE]
-> Most Virtual WAN related resources such as networkVirtualAppliances leverage the "Update" cmdlet and not the "Set" for write operations.
->
+> Most Virtual WAN-related resources, such as networkVirtualAppliances, use the `Update` cmdlet, not the `Set` cmdlet, for write operations.
+ ### Microsoft.Network/privateDnsZones ```azurepowershell-interactive
Get-AzRouteTable -Name "your_resource_name" -ResourceGroupName "your_resource_gr
```azurepowershell-interactive
Get-AzVirtualHub -Name "your_resource_name" -ResourceGroupName "your_resource_group_name" | Update-AzVirtualHub
```

> [!NOTE]
-> Most Virtual WAN related resources such as virtualHubs leverage the "Update" cmdlet and not the "Set" for write operations.
->
+> Most Virtual WAN-related resources, such as virtualHubs, use the `Update` cmdlet, not the `Set` cmdlet, for write operations.
+ ### Microsoft.Network/virtualNetworkGateways ```azurepowershell-interactive
Get-AzVirtualNetwork -Name "your_resource_name" -ResourceGroupName "your_resourc
```azurepowershell-interactive
Get-AzVirtualWan -Name "your_resource_name" -ResourceGroupName "your_resource_group_name" | Update-AzVirtualWan
```

> [!NOTE]
-> Most Virtual WAN related resources such as virtualWans leverage the "Update" cmdlet and not the "Set" for write operations.
+> Most Virtual WAN-related resources, such as virtualWans, use the `Update` cmdlet, not the `Set` cmdlet, for write operations.
### Microsoft.Network/vpnGateways

```azurepowershell-interactive
Get-AzVpnGateway -Name "your_resource_name" -ResourceGroupName "your_resource_group_name" | Update-AzVpnGateway
```

> [!NOTE]
-> 1. **Microsoft.Network/vpnGateways** are those gateways deployed within a Virtual WAN. If you have a standalone gateway of VPN type in your Virtual Network you need to execute the commands related to [Microsoft.Network/virtualNetworkGateways](#microsoftnetworkvirtualnetworkgateways).
-> 2. Most Virtual WAN related resources such as vpnGateways leverage the "Update" cmdlet and not the "Set" for write operations.
+>
+> - `Microsoft.Network/vpnGateways` are deployed within a Virtual WAN. If you have a standalone VPN gateway in your virtual network, run the commands related to [Microsoft.Network/virtualNetworkGateways](#microsoftnetworkvirtualnetworkgateways).
+> - Most Virtual WAN-related resources, such as vpnGateways, use the `Update` cmdlet, not the `Set` cmdlet, for write operations.
### Microsoft.Network/vpnSites

```azurepowershell-interactive
Get-AzVpnSite -Name "your_resource_name" -ResourceGroupName "your_resource_group_name" | Update-AzVpnSite
```
-> [!NOTE]
-> Most Virtual WAN related resources such as vpnSites leverage the "Update" cmdlet and not the "Set" for write operations.
-
+> [!NOTE]
+> Most Virtual WAN-related resources, such as vpnSites, use the `Update` cmdlet, not the `Set` cmdlet, for write operations.
## Next steps
-If the command executed didn't fix the failed state, it should return an error code for you.
+If the command that you ran didn't resolve the failed state, it should return an error code.
Most error codes contain a detailed description of what the problem might be and offer hints on how to solve it.
-Open a support ticket with [Microsoft support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) if you're still experiencing issues.
-Make sure you specify to the Support Agent both the error code you received in the latest operation, as well as the timestamp of when the operation was executed.
+If you're still experiencing issues, open a support ticket with [Microsoft support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade). Specify to the support agent both the error code that you received in the latest operation and the timestamp when you ran the operation.
postgresql Concepts Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-connectivity.md
+
+ Title: Handle transient connectivity errors - Azure Database for PostgreSQL - Flexible Server
+description: Learn how to handle transient connectivity errors for Azure Database for PostgreSQL - Flexible Server.
+++++ Last updated : 03/22/2023++
+# Handling transient connectivity errors for Azure Database for PostgreSQL - Flexible Server
++
+This article describes how to handle transient errors connecting to Azure Database for PostgreSQL.
+
+## Transient errors
+
+A transient error, also known as a transient fault, is an error that resolves itself. Most typically, these errors manifest as a dropped connection to the database server, or as an inability to open new connections to the server. Transient errors can occur, for example, during a hardware or network failure, or when a new version of a PaaS service is being rolled out. Most of these events are automatically mitigated by the system in less than 60 seconds. A best practice for designing and developing applications in the cloud is to expect transient errors: assume they can happen in any component at any time, and have the appropriate logic in place to handle them.
+
+## Handling transient errors
+
+Transient errors should be handled using retry logic. Situations that must be considered:
+
+* An error occurs when you try to open a connection.
+* An idle connection is dropped on the server side. When you try to issue a command, it can't be executed.
+* An active connection that is currently executing a command is dropped.
+
+The first and second cases are fairly straightforward to handle: try to open the connection again. When you succeed, the transient error has been mitigated by the system, and you can use your Azure Database for PostgreSQL again. We recommend waiting before you retry the connection, and backing off if the initial retries fail. This way, the system can use all resources available to overcome the error situation. A good pattern to follow (shown in the sketch after this list) is:
+
+* Wait for 5 seconds before your first retry.
+* For each following retry, increase the wait exponentially, up to 60 seconds.
+* Set a max number of retries at which point your application considers the operation failed.
+
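+The following is a minimal sketch of this backoff pattern, shown in Python with the `psycopg2` driver purely as an illustration (the server name and credentials are placeholders):
+
+```python
+import time
+
+import psycopg2
+
+def connect_with_retry(conn_params, max_retries=5):
+    """Open a connection, backing off exponentially on transient failures."""
+    delay = 5  # seconds to wait before the first retry
+    for attempt in range(1, max_retries + 1):
+        try:
+            return psycopg2.connect(**conn_params)
+        except psycopg2.OperationalError:
+            if attempt == max_retries:
+                raise  # retries exhausted: treat the operation as failed
+            time.sleep(delay)
+            delay = min(delay * 2, 60)  # double the wait, capped at 60 seconds
+
+conn = connect_with_retry({
+    "host": "mydemoserver-pg.postgres.database.azure.com",
+    "user": "myadmin",
+    "password": "<your-password>",
+    "dbname": "postgres",
+    "sslmode": "require",
+})
+```
+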
+When a connection with an active transaction fails, it's more difficult to handle the recovery correctly. There are two cases: if the transaction was read-only in nature, it's safe to reopen the connection and retry the transaction. If, however, the transaction was also writing to the database, you must determine whether the transaction was rolled back or whether it succeeded before the transient error happened. In the latter case, you might just not have received the commit acknowledgment from the database server.
+
+One way of doing this is to generate a unique ID on the client that is used for all the retries. You pass this unique ID as part of the transaction to the server and store it in a column with a unique constraint. This way, you can safely retry the transaction: it succeeds if the previous transaction was rolled back and the client-generated unique ID doesn't yet exist in the system, and it fails with a duplicate key violation if the unique ID was previously stored because the previous transaction completed successfully. A sketch of this pattern follows.
+
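+The following sketch illustrates the idea in Python with `psycopg2`. The `orders` table and its `client_request_id` column (which carries the unique constraint) are hypothetical names used only for this example:
+
+```python
+import uuid
+
+import psycopg2
+from psycopg2 import errors
+
+def insert_order_idempotent(conn_params, amount, max_retries=5):
+    # Generated once on the client and reused for every retry.
+    request_id = str(uuid.uuid4())
+    for _ in range(max_retries):
+        try:
+            conn = psycopg2.connect(**conn_params)
+            try:
+                with conn, conn.cursor() as cur:  # commits on success, rolls back on error
+                    cur.execute(
+                        "INSERT INTO orders (client_request_id, amount) VALUES (%s, %s)",
+                        (request_id, amount),
+                    )
+                return  # commit acknowledged: the write definitely succeeded
+            finally:
+                conn.close()
+        except errors.UniqueViolation:
+            return  # an earlier attempt committed before the error: nothing to redo
+        except psycopg2.OperationalError:
+            continue  # transient error: safe to retry with the same request_id
+    raise RuntimeError("Transaction failed after retries.")
+```
+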
+When your program communicates with Azure Database for PostgreSQL through third-party middleware, ask the vendor whether the middleware contains retry logic for transient errors.
+
+Make sure to test your retry logic. For example, try to execute your code while scaling up or down the compute resources of your Azure Database for PostgreSQL server. Your application should handle the brief downtime that is encountered during this operation without any problems.
postgresql Concepts Data Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-data-encryption.md
Some of the reasons why server state can become *Inaccessible* are:
- If you set up overly restrictive Azure Key Vault firewall rules that prevent Azure Database for PostgreSQL - Flexible Server from communicating with Azure Key Vault to retrieve keys. If you enable [Key Vault firewall](../../key-vault/general/overview-vnet-service-endpoints.md#trusted-services), make sure you check the option to *Allow Trusted Microsoft Services to bypass this firewall*.
+## Using Data Encryption with Customer Managed Key (CMK) and Geo-redundant Business Continuity features, such as Replicas and Geo-redundant backup
+
+Azure Database for PostgreSQL - Flexible Server supports advanced [Data Recovery (DR)](../flexible-server/concepts-business-continuity.md) features, such as [Replicas](../../postgresql/flexible-server/concepts-read-replicas.md) and [geo-redundant backup](../flexible-server/concepts-backup-restore.md). The following requirements for setting up data encryption with CMK apply in addition to the [basic requirements for data encryption with CMK](#requirements-for-configuring-data-encryption-for-azure-database-for-postgresql-flexible-server):
+
+* The geo-redundant backup encryption key needs to be created in an Azure Key Vault (AKV) in the region where the geo-redundant backup is stored.
+* The [Azure Resource Manager (ARM) REST API](../../azure-resource-manager/management/overview.md) version that supports geo-redundant backup enabled CMK servers is `2022-11-01-preview`. Therefore, when you use [ARM templates](../../azure-resource-manager/templates/overview.md) to automate the creation of servers that use both encryption with CMK and geo-redundant backup, use this ARM API version.
+* The same [user managed identity](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) can't be used to authenticate against both the primary database's Azure Key Vault (AKV) and the Azure Key Vault (AKV) that holds the encryption key for geo-redundant backup. To maintain regional resiliency, we recommend creating the user managed identity in the same region as the geo-backups.
+* Because support for geo-redundant backup with data encryption using CMK is currently in preview, there's no Azure CLI support yet for server creation with both of these features enabled.
+* If a [read replica database](../flexible-server/concepts-read-replicas.md) is set up to be encrypted with CMK during creation, its encryption key needs to reside in an Azure Key Vault (AKV) in the region where the read replica database resides. The [user assigned identity](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) used to authenticate against this Azure Key Vault (AKV) needs to be created in the same region.
++ > [!NOTE] > The CLI examples below are based on version 2.45.0 of the Azure Database for PostgreSQL - Flexible Server CLI libraries.
postgresql Quickstart Create Server Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/quickstart-create-server-bicep.md
New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile "./mai
You'll be prompted to enter these values: -- **serverName**: enter a name for the PostgreSQL server.-- **administratorLogin**: enter the Azure Database for PostgreSQL server's administrator account name.-- **administratorLoginPassword**: enter the administrator password.
+- **serverName**: enter a unique name that identifies your Azure Database for PostgreSQL server. For example, `mydemoserver-pg`. The domain name `postgres.database.azure.com` is appended to the server name you provide. The server name can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain between 3 and 63 characters.
+- **administratorLogin**: enter your own login account to use when you connect to the server. For example, `myadmin`. The admin login name can't be `azure_superuser`, `azure_pg_admin`, `admin`, `administrator`, `root`, `guest`, or `public`. It can't start with `pg_`.
+- **administratorLoginPassword**: enter a new password for the server admin account. It must contain between 8 and 128 characters. Your password must contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers (0 through 9), and non-alphanumeric characters (!, $, #, %, etc.).
## Review deployed resources
postgresql Quickstart Create Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/quickstart-create-server-portal.md
To create an Azure Database for PostgreSQL server, take the following steps:
1. Select **Create a resource** (+) in the upper-left corner of the portal.
-2. Select **Databases** > **Azure Database for PostgreSQL**.
+1. Select **Databases** > **Azure Database for PostgreSQL**.
:::image type="content" source="./media/quickstart-create-database-portal/1-create-database.png" alt-text="The Azure Database for PostgreSQL in menu":::
-3. Select the **Flexible server** deployment option.
+1. Select the **Flexible server** deployment option.
:::image type="content" source="./media/quickstart-create-database-portal/2-select-deployment-option.png" alt-text="Select Azure Database for PostgreSQL - Flexible server deployment option":::
-4. Fill out the **Basics** form with the following information:
+1. Fill out the **Basics** form with the following information:
- :::image type="content" source="./media/quickstart-create-database-portal/3-create-basics.png" alt-text="Create a server":::
+ :::image type="content" source="./media/quickstart-create-database-portal/3-create-basics.png" alt-text="Create a server.":::
Setting|Suggested Value|Description ||
To create an Azure Database for PostgreSQL server, take the following steps:
Version|The latest major version| The latest PostgreSQL major version, unless you have specific requirements otherwise.
Compute + storage | **General Purpose**, **4 vCores**, **512 GB**, **7 days** | The compute, storage, and backup configurations for your new server. Select **Configure server**. *General Purpose*, *4 vCores*, *512 GB*, and *7 days* are the default values for **Compute tier**, **vCore**, **Storage**, and **Backup Retention Period**. You can leave those sliders as is or adjust them. <br> <br> To configure your server with **Geo-redundant Backup** to protect from region-level failures, you can check the box ON. Note that the Geo-redundant backup can be configured only at the time of server creation. To save this pricing tier selection, select **OK**. The next screenshot captures these selections.
- :::image type="content" source="./media/quickstart-create-database-portal/4-pricing-tier-geo-backup.png" alt-text="The Pricing tier pane":::
+ :::image type="content" source="./media/quickstart-create-database-portal/4-pricing-tier-geo-backup.png" alt-text="The Pricing tier pane.":::
-
-5. Configure Networking options
-6.
- On the **Networking** tab, you can choose how your server is reachable. Azure Database for PostgreSQL Flexible Server provides two ways to connect to your server:
+1. Configure Networking options
+
+1. On the **Networking** tab, you can choose how your server is reachable. Azure Database for PostgreSQL Flexible Server provides two ways to connect to your server:
- Public access (allowed IP addresses) - Private access (VNet Integration)
To create an Azure Database for PostgreSQL server, take the following steps:
> [!NOTE] > You can't change the connectivity method after you create the server. For example, if you select **Public access (allowed IP addresses)** when you create the server, you can't change to **Private access (VNet Integration)** after the server is created. We highly recommend that you create your server with private access to help secure access to your server via VNet Integration. [Learn more about private access in the concepts article.](./concepts-networking.md)
+ :::image type="content" source="./media/quickstart-create-database-portal/5-networking.png" alt-text="The Networking pane.":::
- :::image type="content" source="./media/quickstart-create-database-portal/5-networking.png" alt-text="The Networking pane":::
-
-
-
-6. Select **Review + create** to review your selections. Select **Create** to provision the server. This operation may take a few minutes.
+1. Select **Review + create** to review your selections. Select **Create** to provision the server. This operation may take a few minutes.
-7. On the toolbar, select the **Notifications** icon (a bell) to monitor the deployment process. Once the deployment is done, you can select **Pin to dashboard**, which creates a tile for this server on your Azure portal dashboard as a shortcut to the server's **Overview** page. Selecting **Go to resource** opens the server's **Overview** page.
+1. On the toolbar, select the **Notifications** icon (a bell) to monitor the deployment process. Once the deployment is done, you can select **Pin to dashboard**, which creates a tile for this server on your Azure portal dashboard as a shortcut to the server's **Overview** page. Selecting **Go to resource** opens the server's **Overview** page.
- :::image type="content" source="./media/quickstart-create-database-portal/7-notifications.png" alt-text="The Notifications pane":::
+ :::image type="content" source="./media/quickstart-create-database-portal/7-notifications.png" alt-text="The Notifications pane.":::
By default, a **postgres** database is created under your server. The [postgres](https://www.postgresql.org/docs/current/static/app-initdb.html) database is a default database that's meant for use by users, utilities, and third-party applications. (The other default database is **azure_maintenance**. Its function is to separate the managed service processes from user actions. You cannot access this database.)
When you create your Azure Database for PostgreSQL server, a default database na
Open your server's **Overview** page. Make a note of the **Server name** and the **Server admin login name**. Hover your cursor over each field, and the copy symbol appears to the right of the text. Select the copy symbol as needed to copy the values.
- :::image type="content" source="./media/quickstart-create-database-portal/8-server-name.png" alt-text="The server Overview page":::
+ :::image type="content" source="./media/quickstart-create-database-portal/8-server-name.png" alt-text="The server Overview page.":::
## Connect to the PostgreSQL database using psql
There are a number of applications you can use to connect to your Azure Database
```

For example, the following command connects to the default database called **postgres** on your PostgreSQL server **mydemoserver-pg.postgres.database.azure.com** using access credentials. Enter the `<server_admin_password>` you chose when prompted for the password.
-
+ ```bash
+ psql --host=mydemoserver-pg.postgres.database.azure.com --port=5432 --username=myadmin --dbname=postgres
+ ```
There are a number of applications you can use to connect to your Azure Database
> > Confirm your client's IP is allowed in the firewall rules step above.
-2. Create a blank database called "mypgsqldb" at the prompt by typing the following command:
+1. Create a blank database called "mypgsqldb" at the prompt by typing the following command:
```bash
CREATE DATABASE mypgsqldb;
```
-3. At the prompt, execute the following command to switch connections to the newly created database **mypgsqldb**:
+1. At the prompt, execute the following command to switch connections to the newly created database **mypgsqldb**:
```bash
\c mypgsqldb
```
-4. Type `\q`, and then select the Enter key to quit psql.
+1. Type `\q`, and then select the Enter key to quit psql.
You connected to the Azure Database for PostgreSQL server via psql, and you created a blank user database.
To delete the entire resource group, including the newly created server:
1. Locate your resource group in the portal. On the menu on the left, select **Resource groups**. Then select the name of your resource group, such as the example, **myresourcegroup**.
-2. On your resource group page, select **Delete**. Enter the name of your resource group, such as the example, **myresourcegroup**, in the text box to confirm deletion. Select **Delete**.
+1. On your resource group page, select **Delete**. Enter the name of your resource group, such as the example, **myresourcegroup**, in the text box to confirm deletion. Select **Delete**.
To delete only the newly created server: 1. Locate your server in the portal, if you don't have it open. On the menu on the left, select **All resources**. Then search for the server you created.
-2. On the **Overview** page, select **Delete**.
+1. On the **Overview** page, select **Delete**.
:::image type="content" source="./media/quickstart-create-database-portal/9-delete.png" alt-text="The Delete button":::
-3. Confirm the name of the server you want to delete, and view the databases under it that are affected. Enter your server name in the text box, such as the example, **mydemoserver**. Select **Delete**.
+1. Confirm the name of the server you want to delete, and view the databases under it that are affected. Enter your server name in the text box, such as the example, **mydemoserver**. Select **Delete**.
## Next steps

> [!div class="nextstepaction"]
> [Deploy a Django app with App Service and PostgreSQL](tutorial-django-app-service-postgres.md)
private-5g-core Collect Required Information For A Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-a-site.md
You can use this information to create a site in an existing private mobile netw
- You must have completed the steps in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md). - If you want to give Azure role-based access control (Azure RBAC) to storage accounts, you must have the relevant permissions on your account.
+- Make a note of the resource group containing your private mobile network, which you collected in [Collect the required information to deploy a private mobile network](collect-required-information-for-private-mobile-network.md). We recommend that the mobile network site resource you create in this procedure belongs to the same resource group.
## Collect mobile network site resource values
Collect all the values in the following table for the mobile network site resour
|The Azure resource group in which to create the mobile network site resource. We recommend that you use the same resource group that already contains your private mobile network. |**Project details: Resource group**|
|The name for the site. |**Instance details: Name**|
|The region in which you deployed the private mobile network. |**Instance details: Region**|
+ |The packet core in which to create the mobile network site resource. |**Instance details: Packet core name**|
|The [region code name](region-code-names.md) of the region in which you deployed the private mobile network. For the East US region, this is *eastus*; for West Europe, this is *westeurope*. </br></br>You only need to collect this value if you're going to create your site using an ARM template. |Not applicable.|
|The mobile network resource representing the private mobile network to which you're adding the site. </br></br>You only need to collect this value if you're going to create your site using an ARM template. |Not applicable.|
- |The billing plan for the site that you are creating. The available plans have the following throughput, activated SIMs and radio access network (RAN) allowances:</br></br>G0 - 100 Mbps per site, 20 activated SIMs per network and 2 RAN connections. </br> G1 - 1 Gbps per site, 100 activated SIMs per network and 5 RAN connections. </br> G2 - 2 Gbps per site, 200 activated SIMs per network and 10 RAN connections. </br> G3 - 3 Gbps per site, 300 activated SIMs per network and unlimited RAN connections. </br> G4 - 4 Gbps per site, 400 activated SIMs per network and unlimited RAN connections. </br> G5 - 5 Gbps per site, 500 activated SIMs per network and unlimited RAN connections. </br> G10 - 10 Gbps per site, 1000 activated SIMs per network and unlimited RAN connections.|**Instance details: Site plan**|
+ |The billing plan for the site that you are creating. The available plans have the following throughput, activated SIMs and radio access network (RAN) allowances:</br></br>G0 - 100 Mbps per site, 20 activated SIMs per network and 2 RAN connections. </br> G1 - 1 Gbps per site, 100 activated SIMs per network and 5 RAN connections. </br> G2 - 2 Gbps per site, 200 activated SIMs per network and 10 RAN connections. </br> G5 - 5 Gbps per site, 500 activated SIMs per network and unlimited RAN connections. </br> G10 - 10 Gbps per site, 1000 activated SIMs per network and unlimited RAN connections.|**Instance details: Service plan**|
## Collect packet core configuration values
private-5g-core Collect Required Information For Private Mobile Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-private-mobile-network.md
Collect all of the following values for the mobile network resource that will re
|Value |Field name in Azure portal |
| --- | --- |
|The Azure subscription to use to deploy the mobile network resource. You must use the same subscription for all resources in your private mobile network deployment. You identified this subscription in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md). |**Project details: Subscription**
- |The Azure resource group to use to deploy the mobile network resource. You should use a new resource group for this resource. It's useful to include the purpose of this resource group in its name for future identification (for example, *contoso-pmn-rg*). |**Project details: Resource group**|
+ |The Azure resource group to use to deploy the mobile network resource. You should use a new resource group for this resource. It's useful to include the purpose of this resource group in its name for future identification (for example, *contoso-pmn-rg*). </br></br> Note: We recommend that this resource group is also used when [Collecting the required information for a site](collect-required-information-for-a-site.md). |**Project details: Resource group**|
|The name for the private mobile network. |**Instance details: Mobile network name**|
|The region in which you're deploying the private mobile network. This can be the East US or the West Europe region. |**Instance details: Region**|
|The mobile country code for the private mobile network. |**Network configuration: Mobile country code (MCC)**|
private-5g-core Commission Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/commission-cluster.md
The packet core instances in the Azure Private 5G Core service run on an Arc-ena
- [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md). - You will need Owner permission on the resource group for your Azure Stack Edge resource.
+ > [!NOTE]
+ > Make a note of the Azure Stack Edge's resource group. The AKS cluster and custom location, created in this procedure, must belong to this resource group.
## Enter a minishell session
private-5g-core Create A Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-a-site.md
In this step, you'll create the mobile network site resource representing the ph
- Use the information you collected in [Collect packet core configuration values](collect-required-information-for-a-site.md#collect-packet-core-configuration-values) to fill out the **Technology type**, **Azure Stack Edge device**, and **Custom location** fields. - Select the recommended packet core version in the **Version** field.
- > [!NOTE]
- > If a warning appears about an incompatibility between the selected packet core version and the current Azure Stack Edge version, you'll need to update ASE first. Select **Upgrade ASE** from the warning prompt and follow the instructions in [Update your Azure Stack Edge Pro GPU](../databox-online/azure-stack-edge-gpu-install-update.md). Once you've finished updating your ASE, go back to the beginning of this step to create the site resource.
+ > [!NOTE]
+ > If a warning appears about an incompatibility between the selected packet core version and the current Azure Stack Edge version, you'll need to update ASE first. Select **Upgrade ASE** from the warning prompt and follow the instructions in [Update your Azure Stack Edge Pro GPU](../databox-online/azure-stack-edge-gpu-install-update.md). Once you've finished updating your ASE, go back to the beginning of this step to create the site resource.
- Ensure **AKS-HCI** is selected in the **Platform** field.
-1. Use the information you collected in [Collect access network values](collect-required-information-for-a-site.md#collect-access-network-values) to fill out the fields in the **Access network** section. Note:
+1. Use the information you collected in [Collect access network values](collect-required-information-for-a-site.md#collect-access-network-values) to fill out the fields in the **Access network** section.
- - **ASE N2 virtual subnet** and **ASE N3 virtual subnet** (if this site will support 5G UEs) or **ASE S1-MME virtual subnet** and **ASE S1-U virtual subnet** (if this site will support 4G UEs) must match the corresponding virtual network names on port 5 on your Azure Stack Edge Pro device.
+ > [!NOTE]
+ > **ASE N2 virtual subnet** and **ASE N3 virtual subnet** (if this site will support 5G UEs) or **ASE S1-MME virtual subnet** and **ASE S1-U virtual subnet** (if this site will support 4G UEs) must match the corresponding virtual network names on port 5 on your Azure Stack Edge Pro device.
1. In the **Attached data networks** section, select **Attach data network**. Choose whether you want to use an existing data network or create a new one, then use the information you collected in [Collect data network values](collect-required-information-for-a-site.md#collect-data-network-values) to fill out the fields. Note the following: - **ASE N6 virtual subnet** (if this site will support 5G UEs) or **ASE SGi virtual subnet** (if this site will support 4G UEs) must match the corresponding virtual network name on port 6 on your Azure Stack Edge Pro device.
If you decided not to configure diagnostics packet collection or use a user assi
:::image type="content" source="media/create-a-site/create-site-local-access-tab.png" alt-text="Screenshot of the Azure portal showing the Local access configuration tab for a site resource."::: - Under **Authentication type**, select the authentication method you decided to use in [Choose the authentication method for local monitoring tools](collect-required-information-for-a-site.md#choose-the-authentication-method-for-local-monitoring-tools).
- - under **Provide custom HTTPS certificate?**, select **Yes** or **No** based on whether you decided to provide a custom HTTPS certificate in [Collect local monitoring values](collect-required-information-for-a-site.md#collect-local-monitoring-values). If you selected **Yes**, use the information you collected in [Collect local monitoring values](collect-required-information-for-a-site.md#collect-local-monitoring-values) to select a certificate.
+ - Under **Provide custom HTTPS certificate?**, select **Yes** or **No** based on whether you decided to provide a custom HTTPS certificate in [Collect local monitoring values](collect-required-information-for-a-site.md#collect-local-monitoring-values). If you selected **Yes**, use the information you collected in [Collect local monitoring values](collect-required-information-for-a-site.md#collect-local-monitoring-values) to select a certificate.
1. Select **Review + create**.
1. Azure will now validate the configuration values you've entered. You should see a message indicating that your values have passed validation.
If you decided not to configure diagnostics packet collection or use a user assi
:::image type="content" source="media/create-a-site/site-related-resources.png" alt-text="Screenshot of the Azure portal showing a resource group containing a site and its related resources." lightbox="media/create-a-site/site-related-resources.png":::
+1. If you want to assign additional packet cores to the site, see [Create additional Packet Core instances for a site using the Azure portal](create-additional-packet-core.md) for each new packet core resource.
+ ## Next steps If you decided to set up Azure AD for local monitoring access, follow the steps in [Enable Azure Active Directory (Azure AD) for local monitoring tools](enable-azure-active-directory.md).
private-5g-core Create Additional Packet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-additional-packet-core.md
+
+ Title: Create additional Packet Core instances for a site - Azure portal
+
+description: This how-to guide shows how to create additional packet core instances for a site in your private mobile network.
++++ Last updated : 03/21/2023+++
+# Create additional Packet Core instances for a site using the Azure portal
+
+Azure Private 5G Core private mobile networks include one or more sites. Once deployed, each site can have multiple packet core instances for redundancy. In this how-to guide, you'll learn how to add additional packet core instances to a site in your private mobile network using the Azure portal.
+
+## Prerequisites
+
+- You must already have a site deployed in your private mobile network.
+- Collect all of the information in [Collect required information for a site](collect-required-information-for-a-site.md) that you used for the site.
+- Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have the built-in Contributor or Owner role at the subscription scope.
+
+## Create the packet core instance
+
+In this step, you'll create an additional packet core instance for a site in your private mobile network.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Search for and select the **Mobile Network** resource representing the private mobile network containing the site that you want to add a packet core instance to.
+1. Select the **Sites** blade on the resource menu.
+1. Select the **Site** resource that you want to add a packet core instance to.
+1. Select **Add packet core**.
+1. Specify a **Packet core name** and select **Next : Packet core >**.
+1. You'll now see the **Packet core** configuration tab.
+1. In the **Packet core** section, set the fields as follows:
+
+ - Use the information you collected in [Collect packet core configuration values](collect-required-information-for-a-site.md#collect-packet-core-configuration-values) to fill out the **Technology type**, **Azure Stack Edge device**, and **Custom location** fields.
+ - Select the recommended packet core version in the **Version** field.
+
+ > [!NOTE]
+ > If a warning appears about an incompatibility between the selected packet core version and the current Azure Stack Edge version, you'll need to update ASE first. Select **Upgrade ASE** from the warning prompt and follow the instructions in [Update your Azure Stack Edge Pro GPU](../databox-online/azure-stack-edge-gpu-install-update.md). Once you've finished updating your ASE, go back to the beginning of this step to create the packet core resource.
+
+ - Ensure **AKS-HCI** is selected in the **Platform** field.
+
+1. Use the information you collected in [Collect access network values](collect-required-information-for-a-site.md#collect-access-network-values) for the site to fill out the fields in the **Access network** section.
+ > [!NOTE]
+ > **ASE N2 virtual subnet** and **ASE N3 virtual subnet** (if this site supports 5G UEs) or **ASE S1-MME virtual subnet** and **ASE S1-U virtual subnet** (if this site supports 4G UEs) must match the corresponding virtual network names on port 5 on your Azure Stack Edge Pro device.
+
+1. In the **Attached data networks** section, select **Attach data network**. Select the existing data network you used for the site, then use the information you collected in [Collect data network values](collect-required-information-for-a-site.md#collect-data-network-values) to fill out the fields. Note the following:
+ - **ASE N6 virtual subnet** (if this site supports 5G UEs) or **ASE SGi virtual subnet** (if this site supports 4G UEs) must match the corresponding virtual network name on port 6 on your Azure Stack Edge Pro device.
+ - If you decided not to configure a DNS server, clear the **Specify DNS addresses for UEs?** checkbox.
+ - If you decided to keep NAPT disabled, ensure you configure your data network router with static routes to the UE IP pools via the appropriate user plane data IP address for the corresponding attached data network.
+
+ Once you've finished filling out the fields, select **Attach**.
+
+1. Repeat the previous step for each additional data network configured on the site.
+1. If you decided to configure diagnostics packet collection or use a user assigned managed identity for HTTPS certificate for this site, select **Next : Identity >**.
+If you decided not to configure diagnostics packet collection or use a user assigned managed identity for HTTPS certificates for this site, you can skip this step.
+ 1. Select **+ Add** to configure a user assigned managed identity.
+ 1. In the **Select Managed Identity** side panel:
+ - Select the **Subscription** from the dropdown.
+ - Select the **Managed identity** from the dropdown.
+1. If you decided you want to provide a custom HTTPS certificate in [Collect local monitoring values](collect-required-information-for-a-site.md#collect-local-monitoring-values), select **Next : Local access >**. If you decided not to provide a custom HTTPS certificate for monitoring this site, you can skip this step.
+ 1. Under **Provide custom HTTPS certificate?**, select **Yes**.
+ 1. Use the information you collected in [Collect local monitoring values](collect-required-information-for-a-site.md#collect-local-monitoring-values) to select a certificate.
+1. In the **Local access** section, set the fields as follows:
+ - Under **Authentication type**, select the authentication method you decided to use in [Choose the authentication method for local monitoring tools](collect-required-information-for-a-site.md#choose-the-authentication-method-for-local-monitoring-tools).
+ - Under **Provide custom HTTPS certificate?**, select **Yes** or **No** based on whether you decided to provide a custom HTTPS certificate in [Collect local monitoring values](collect-required-information-for-a-site.md#collect-local-monitoring-values). If you selected **Yes**, use the information you collected in [Collect local monitoring values](collect-required-information-for-a-site.md#collect-local-monitoring-values) to select a certificate.
+
+1. Select **Review + create**.
+1. Azure will now validate the configuration values you've entered. You should see a message indicating that your values have passed validation.
+
+ If the validation fails, you'll see an error message and the **Configuration** tab(s) containing the invalid configuration will be flagged with red dots. Select the flagged tab(s) and use the error messages to correct invalid configuration before returning to the **Review + create** tab.
+
+1. Once your configuration has been validated, you can select **Create** to create the packet core instance. The Azure portal will display a confirmation screen when the packet core instance has been created.
+
+1. Return to the **Site** overview, and confirm that it contains the new packet core instance.
+
+## Next steps
+
+If you decided to set up Azure AD for local monitoring access, follow the steps in [Enable Azure Active Directory (Azure AD) for local monitoring tools](enable-azure-active-directory.md).
+
+If you haven't already done so, you should now design the policy control configuration for your private mobile network. This allows you to customize how your packet core instances apply quality of service (QoS) characteristics to traffic. You can also block or limit certain flows. See [Policy control](policy-control.md) to learn more about designing the policy control configuration for your private mobile network.
private-5g-core Gather Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/gather-diagnostics.md
You must already have an AP5GC site deployed to collect diagnostics.
1. Select **Diagnostics collection**.
1. AP5GC online service will generate a package and upload it to the provided storage account URL. Once AP5GC reports that the upload has succeeded, report the SAS URL to Azure support.
1. Generate a SAS URL by selecting **Generate SAS** on the blob details blade.
- 1. Copy the contents of the **Blob SAS URL** field and share the URL with your support representative.
+ 1. Copy the contents of the **Blob SAS URL** field and share the URL with your support representative via a [support request ticket](/azure/azure-portal/supportability/how-to-create-azure-support-request).
+ > [!IMPORTANT]
+ > You must always set **Service type** to **Azure Private 5G Core** when raising a support request for any issues related to AP5GC.
1. Azure support will access the diagnostics using the provided SAS URL and provide support based on the information. ## Troubleshooting
You must already have an AP5GC site deployed to collect diagnostics.
- If an invalid container URL was passed, the request will be rejected and report **400 Bad Request**. Repeat the process with the correct container URL. - If the asynchronous part of the operation fails, the asynchronous operation resource is set to **Failed** and reports a failure reason. - Additionally, check that the same user-assigned identity was added to both the site and storage account.-- If this does not resolve the issue, share the correlation ID of the failed request with AP5GC support for investigation.
+- If this does not resolve the issue, share the correlation ID of the failed request with AP5GC support for investigation via a [support request ticket](/azure/azure-portal/supportability/how-to-create-azure-support-request).
+ > [!IMPORTANT]
+ > You must always set **Service type** to **Azure Private 5G Core** when raising a support request for any issues related to AP5GC.
## Next steps
private-5g-core Modify Service Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/modify-service-plan.md
+
+ Title: Modify a service plan
+
+description: In this how-to guide, you'll learn how to modify a service plan using the Azure portal.
++++ Last updated : 10/13/2022+++
+# Modify a service plan
+
+The *service plan* determines an allowance for the throughput and the number of radio access network (RAN) connections for each site, as well as the number of devices that each network supports. The plan you selected when creating the site can be updated to support your deployment requirements as they change. In this how-to guide, you'll learn how to modify a service plan using the Azure portal.
+
+## Prerequisites
+
+- Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have the built-in Contributor or Owner role at the subscription scope.
+- Verify pricing and charges associated with the service plan to which you want to move. See the [Azure Private 5G Core Pricing page](https://azure.microsoft.com/pricing/details/private-5g-core/) for pricing information.
+
+## Choose the new service plan
+
+Use the following table to choose the new service plan that will best fit your requirements.
+
+| Service Plan | Licensed Throughput | Licensed Activated SIMs | Licensed RANs |
+| -- | -- | -- | -- |
+| G0 | 100 Mbps | 20 | 2 |
+| G1 | 1 Gbps | 100 | 5 |
+| G2 | 2 Gbps | 200 | 10 |
+| G5 | 5 Gbps | 500 | Unlimited |
+| G10 | 10 Gbps | 1000 | Unlimited |
+
+## View the current service plan
+
+You can view your current service plan in the Azure portal.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Search for and select the **Mobile Network** resource representing the private mobile network.
+
+ :::image type="content" source="media/mobile-network-search.png" alt-text="Screenshot of the Azure portal. It shows the results of a search for a Mobile Network resource.":::
+
+1. Select the **Sites** page, then select the site you're interested in.
+
+ :::image type="content" source="media/mobile-network-sites.png" alt-text="Screenshot of the Azure portal showing the Sites view in the Mobile Network resource.":::
+
+1. Under the **Network Functions** group, select the **Packet Core** resource.
+
+ :::image type="content" source="media/modify-service-plan/select-packet-core.png" alt-text="Screenshot of the Azure portal. It shows a Mobile Network Site with a Packet Core resource highlighted.":::
+
+1. Check the **Service Plan** field under the **Essentials** heading to view the current service plan.
+
+ :::image type="content" source="media/modify-service-plan/service-plan.png" alt-text="Screenshot of the Azure portal showing a packet core control plane resource. The Service Plan field is highlighted.":::
+
+## Modify the service plan
+
+To modify your service plan:
+
+1. If you haven't already, navigate to the service plan that you're interested in modifying as described in [View the current service plan](#view-the-current-service-plan).
+2. Select **Change plan**.
+
+ :::image type="content" source="media/modify-service-plan/service-plan.png" alt-text="Screenshot of the Azure portal showing a packet core control plane resource. The Service Plan field is highlighted.":::
+
+3. In **Service Plan** on the right, select the new service plan you collected in [Choose the new service plan](#choose-the-new-service-plan). Save your change with **Select**.
+
+ :::image type="content" source="media/modify-service-plan/service-plan-selection-tab.png" alt-text="Screenshot of the Azure portal showing the Service Plan screen.":::
+
+4. Wait while the Azure portal applies the new service plan configuration to your site. You'll see a confirmation screen when the deployment is complete.
+5. Navigate to the **Mobile Network Site** resource as described in [View the current service plan](#view-the-current-service-plan). Check that the field under **Service Plan** contains the updated information.
+
+## Next steps
+
+Use [Azure Monitor](monitor-private-5g-core-with-log-analytics.md) or the [packet core dashboards](packet-core-dashboards.md) to confirm your packet core instance is operating normally after you modify the service plan.
private-5g-core Modify Site Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/modify-site-plan.md
- Title: Modify a site plan-
-description: In this how-to guide, you'll learn how to modify a site plan using the Azure portal.
---- Previously updated : 10/13/2022---
-# Modify a site plan
-
-The *site plan* determines an allowance for the throughput and number of radio access network (RAN) connections for each site, as well as the number of devices that each network supports. The plan you selected when creating the site can be updated to support your deployment requirements as they change. In this how-to guide, you'll learn how to modify a site plan using the Azure portal.
-
-## Prerequisites
--- Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have the built-in Contributor or Owner role at the subscription scope.-- Verify pricing and charges associated with the site plan to which you want to move. See the [Azure Private 5G Core Pricing page](https://azure.microsoft.com/pricing/details/private-5g-core/) for pricing information.-
-## Choose the new site plan
-
-Use the following table to choose the new site plan that will best fit your requirements.
-
-| Site Plan | Licensed Throughput | Licensed Activated SIMs | Licensed RANs |
-|||||
-| G0 | 100 Mbps | 20 | 2 |
-| G1 | 1 Gbps | 100 | 5 |
-| G2 | 2 Gbps | 200 | 10 |
-| G3 | 3 Gbps | 300 | Unlimited |
-| G4 | 4 Gbps | 400 | Unlimited |
-| G5 | 5 Gbps | 500 | Unlimited |
-| G10 | 10 Gbps | 1000 | Unlimited |
-
-## View the current site plan
-
-You can view your current site plan in the Azure portal.
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Search for and select the **Mobile Network** resource representing the private mobile network.
-
- :::image type="content" source="media/mobile-network-search.png" alt-text="Screenshot of the Azure portal. It shows the results of a search for a Mobile Network resource.":::
-
-1. Select the **Sites** page, then select the site you're interested in.
-
- :::image type="content" source="media/mobile-network-sites.png" alt-text="Screenshot of the Azure portal showing the Sites view in the Mobile Network resource.":::
-
-1. Check the **Site Plan** field under the **Essentials** heading to view the current site plan.
-
- :::image type="content" source="media/modify-site-plan/view-site-plan.png" alt-text="Screenshot of the Azure portal showing a site resource. The Site Plan field is highlighted.":::
-
-## Modify the site plan
-
-To modify your site plan:
-
-1. If you haven't already, navigate to the site that you're interested in modifying as described in [View the current site plan](#view-the-current-site-plan).
-2. Select **Change plan**.
-
- :::image type="content" source="media/modify-site-plan/change-site-plan.png" alt-text="Screenshot of the Azure portal showing the Change plan option.":::
-
-3. In **Site Plan** on the right, select the new site plan you collected in [Choose the new site plan](#choose-the-new-site-plan). Save your change with **Select**.
-
- :::image type="content" source="media/modify-site-plan/site-plan-selection-tab.png" alt-text="Screenshot of the Azure portal showing the Site Plan screen.":::
-
-4. Wait while the Azure portal applies the new site plan configuration to your site. You'll see a confirmation screen when the deployment is complete.
-5. Navigate to the **Mobile Network Site** resource as described in [View the current site plan](#view-the-current-site-plan). Check that the field under **Site Plan** contains the updated information.
-
-## Next steps
-
-Use [Azure Monitor](monitor-private-5g-core-with-log-analytics.md) or the [packet core dashboards](packet-core-dashboards.md) to confirm your packet core instance is operating normally after you modify the site plan.
public-multi-access-edge-compute-mec Key Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/public-multi-access-edge-compute-mec/key-concepts.md
Every Azure public MEC site is associated with a parent Azure region. This regio
| -- | -- | -- | -- |
| AT&T | ATT Atlanta A | attatlanta1 | East US 2 |
| AT&T | ATT Dallas A | attdallas1 | South Central US |
+| AT&T | ATT Detroit A | attdetroit1 | Central US |
## Azure services
public-multi-access-edge-compute-mec Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/public-multi-access-edge-compute-mec/overview.md
The following diagram shows how services are deployed at the Azure public MEC lo
Azure public MEC solutions are available in partnership with mobile network operators. The current operator partnerships are as follows:
-- AT&T: Atlanta, Dallas
+- AT&T: Atlanta, Dallas, Detroit
## Next steps
public-multi-access-edge-compute-mec Partner Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/public-multi-access-edge-compute-mec/partner-solutions.md
The table in this article provides information on Partner solutions that can be
| **Couchbase** | [Server](https://www.couchbase.com/products/server), [Sync-Gateway](https://www.couchbase.com/products/sync-gateway) | [Couchbase Server Enterprise](https://azuremarketplace.microsoft.com/en/marketplace/apps/couchbase.couchbase-enterprise?tab=Overview) [Couchbase Sync Gateway Enterprise](https://azuremarketplace.microsoft.com/en/marketplace/apps/couchbase.couchbase-sync-gateway-enterprise?tab=Overview) |
| **Fortinet** | [FortiGate](https://www.fortinet.com/products/private-cloud-security/fortigate-virtual-appliances) | [FortiGate](https://azuremarketplace.microsoft.com/marketplace/apps/fortinet.fortinet-fortigate?tab=Overview) |
| **Fortinet** | [FortiWeb](https://www.fortinet.com/products/web-application-firewall/fortiweb?tab=saas) | [FortiWeb](https://azuremarketplace.microsoft.com/marketplace/apps/fortinet.fortinet_waas?tab=Overview) |
+| **Multicasting.io** | [Multicasting](https://multicasting.io/) | |
| **Net Foundry** | [Zero Trust Edge Fabric for Azure MEC](https://netfoundry.io/zero-trust-edge-fabric-azure-public-mec/) | [NetFoundry Edge Router](https://azuremarketplace.microsoft.com/marketplace/apps/netfoundryinc.ziti-edge-router?tab=Overview) |
+| **Palo Alto Networks** | [VM-Series](https://docs.paloaltonetworks.com/vm-series/9-1/vm-series-performance-capacity/vm-series-performance-capacity/vm-series-on-azure-models-and-vms) | [VM-Series Next-Generation Firewall](https://ms.portal.azure.com/#view/Microsoft_Azure_Marketplace/GalleryItemDetailsBladeNopdl/id/paloaltonetworks.vmseries-ngfw/product/%7B%22displayName%22%3A%22VM-Series%20Next-Generation%20Firewall%20from%20Palo%20Alto%20Networks%22%2C%22itemDisplayName%22%3A%22VM-Series%20Next-Generation%20Firewall%20from%20Palo%20Alto%20Networks%22%2C%22id%22%3A%22paloaltonetworks.vmseries-ngfw%22%2C%22bigId%22%3A%22DZH318Z0BP7N%22%2C%22legacyId%22%3A%22paloaltonetworks.vmseries-ngfw%22%2C%22offerId%22%3A%22vmseries-ngfw%22%2C%22publisherId%22%3A%22paloaltonetworks%22%2C%22publisherDisplayName%22%3A%22Palo%20Alto%20Networks%2C%20Inc.%22%2C%22summary%22%3A%22Looking%20to%20secure%20your%20applications%20in%20Azure%2C%20protect%20against%20threats%20and%20prevent%20data%20exfiltration%3F%22%2C%22longSummary%22%3A%22VM-Series%20next-generation%20firewall%20is%20for%20developers%2C%20architects%2C%20and%20security%20teams.%20Enable%20firewall%2C%20inline%20threat%2C%20and%20data%20theft%20preventions%20into%20your%20application%20development%20workflows%20using%20native%20Azure%20services%20and%20VM-Series%20automation%20features.%22%2C%22description%22%3A%22%3Cp%3EThe%20VM-Series%20virtualized%20next-generation%20firewall%20allows%20developers%2C%20and%20cloud%20security%20architects%20to%20automate%20and%20deploy%20inline%20firewall%20and%20threat%20prevention%20along%20with%20their%20application%20deployment%20workflows.%20Users%20can%20achieve%20%E2%80%98touchless%E2%80%99%20deployment%20of%20advanced%20firewall%2C%20threat%20prevention%20capabilities%20using%20ARM%20templates%2C%20native%20Azure%20services%2C%20and%20VM-Series%20firewall%20automation%20features%20such%20as%20bootstrapping.%20Auto-scaling%20using%20Azure%20VMSS%20and%20tag-based%20dynamic%20security%20policies%20are%20supported%20using%20the%20Panorama%20Plugin%20for%20Azure.%3C%2Fp%3E%20%5Cn%5Cn%3Cp%3EProtect%20your%20applications%20and%20data%20with%20whitelisting%20and%20segmentation%20policies.%20Policies%20update%20dynamically%20based%20on%20Azure%20tags%20assigned%20to%20application%20VMs%2C%20allowing%20you%20to%20re) |
| **Spirent** | [Spirent](https://www.spirent.com/solutions/edge-computing-validating-services) | [Spirent for Azure public MEC](https://azuremarketplace.microsoft.com/marketplace/apps/spirentcommunications1641943316121.umetrix-mec?tab=Overview) |
| **Summit Tech** | [Odience](https://odience.com/interactions) | |
| **Veeam** | [Veeam Backup & Replication](https://www.veeam.com/kb4375) | [Veeam Backup & Replication](https://azuremarketplace.microsoft.com/marketplace/apps/veeam.veeam-backup-replication?tab=Overview) |
Currently, the solutions can be deployed at the following locations:
| | |
| AT&T Atlanta | attatlanta1 |
| AT&T Dallas | attdallas1 |
-
+| AT&T Detroit | attdetroit1 |
## Next steps

* For more information about Public MEC, see the [Overview](Overview.md).
purview Concept Scans And Ingestion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-scans-and-ingestion.md
When scanning a source, you have a choice to scan the entire data source or choo
For example, when [creating and running a scan for an Azure SQL Database](register-scan-azure-sql-database.md#create-the-scan), you can choose which tables to scan, or select the entire database.
+For each entity (folder/table), there are three selection states: fully selected, partially selected, and not selected. In the example below, if you select "Department 1" in the folder hierarchy, "Department 1" is considered fully selected. The parents of "Department 1", such as "Company" and "example", are considered partially selected, because other entities under the same parent, for example "Department 2", haven't been selected. Different icons are used in the UI for entities with different selection states.
++
+After you run the scan, it's likely that new assets will be added in the source system. By default, future assets under a given parent are automatically selected if the parent is fully or partially selected when you run the scan again. In the example above, after you select "Department 1" and run the scan, any new assets under the folder "Department 1", or under "Company" and "example", are included when you run the scan again.
+
+A toggle button is being introduced to let you control the automatic inclusion of new assets under a partially selected parent. By default, the toggle is turned off and automatic inclusion for partially selected parents is disabled. In the same example with the toggle turned off, any new assets under partially selected parents like "Company" and "example" aren't included when you run the scan again; only new assets under "Department 1" are included in future scans.
++
+If the toggle button is turned on, new assets under a given parent are automatically selected if the parent is fully or partially selected when you run the scan again. The inclusion behavior is then the same as before the toggle button was introduced.
++
+> [!NOTE]
+> * The availability of the toggle button will depend on the data source type. It will be available in public preview for sources including Azure Blob Storage, Azure Data Lake Storage Gen 1, Azure Data Lake Storage Gen 2, Azure Files and Azure Dedicated SQL pool (formerly SQL DW).
+> * For any scans created or scheduled before the toggle button is introduced, the toggle state is set to on and can't be changed. For any scans created or scheduled after the toggle button is introduced, the toggle state can't be changed after the scan is saved. You need to create a new scan to change the toggle state.
+> * When the toggle button is turned off, for sources of storage type like Azure Data Lake Storage Gen 2 it may take up to 4 hours before the [browse by source type](how-to-browse-catalog.md#browse-by-source-type) experience becomes fully available after your scan job is completed.
+
+### Known limitations
+When the toggle button is turned off:
+* The file entities under a partially selected parent will not be scanned.
+* If all existing entities under a parent are explicitly selected, the parent will be considered as fully selected and any new assets under the parent will be included when you run the scan again.
+
### Scan rule set

A scan rule set determines the kinds of information a scan will look for when it's running against one of your sources. Available rules depend on the kind of source you're scanning, but include things like the [file types](sources-and-scans.md#file-types-supported-for-scanning) you should scan, and the kinds of [classifications](supported-classifications.md) you need.
purview Deployment Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/deployment-best-practices.md
Previously updated : 08/04/2022 Last updated : 03/21/2023

# Microsoft Purview (formerly Azure Purview) deployment best practices
In the Microsoft Purview Data Map, there are several areas where the Catalog Adm
## Moving tenants
-If your Azure Subscription moves tenants while you have a Microsoft Purview account, there are some steps you should follow after the move.
+If your Azure Subscription moves tenants while you have a Microsoft Purview account, you will need to create a new Microsoft Purview account and re-register and scan your sources.
-Currently your Microsoft Purview account's system assigned and user assigned managed identities will be cleared during the move to the new tenant. This is because your Azure tenant houses all authentication information, so these need to be updated for your Microsoft Purview account in the new tenant.
-
-After the move, follow the below steps to clear the old identities, and create new ones:
-
-1. If you're running locally, sign in to Azure through the Azure CLI.
-
- ```azurecli-interactive
- az login
- ```
- Alternatively, you can use the [Azure Cloud Shell](../cloud-shell/overview.md) in the Azure portal.
- Direct browser link: [https://shell.azure.com](https://shell.azure.com).
-
-1. Obtain an access token by using [az account get-access-token](/cli/azure/account#az-account-get-access-token).
- ```azurecli-interactive
- az account get-access-token
- ```
-
-1. Run the following bash command to disable all managed identities (user and system assigned managed identities):
-
- > [!IMPORTANT]
- > Be sure to replace these values in the below commands:
- > - \<Subscription_Id>: Your Azure Subscription ID
- > - \<Resource_Group_Name>: Name of the resource group where your Microsoft Purview account is housed.
- > - \<Account_Name>: Your Microsoft Purview account name
- > - \<Access_Token>: The token from the first two steps.
-
- ```bash
- curl 'https://management.azure.com/subscriptions/<Subscription_Id>/resourceGroups/<Resource_Group_Name>/providers/Microsoft.Purview/accounts/<Account_Name>?api-version=2021-07-01' -X PATCH -d'{"identity":{"type":"None"}}' -H "Content-Type: application/json" -H "Authorization:Bearer <Access_Token>"
- ```
-
-1. To enable your new system managed assigned identity (SAMI), run the following bash command:
-
- ```bash
- curl 'https://management.azure.com/subscriptions/<Subscription_Id>/resourceGroups/<Resource_Group_Name>/providers/Microsoft.Purview/accounts/<Account_Name>?api-version=2021-07-01' -X PATCH -d '{"identity":{"type":"SystemAssigned"}}' -H "Content-Type: application/json" -H "Authorization:Bearer <Access_Token>"
- ```
-
-1. If you had a user assigned managed identity (UAMI), to enable one on your new tenant, register your UAMI in Microsoft Purview as you did originally by following [the steps from the manage credentials article](manage-credentials.md#create-a-user-assigned-managed-identity).
+Moving tenants is not currently supported for Microsoft Purview.
## Next steps
remote-rendering View Remote Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/tutorials/unity/view-remote-models/view-remote-models.md
In this example, we'll assume the project is being created in a folder called **
Follow the instructions on how to [add the Azure Remote Rendering and OpenXR packages](../../../how-tos/unity/install-remote-rendering-unity-package.md) to your Unity Project.
+> [!NOTE]
+> If Unity displays a warning dialog after importing the OpenXR package asking whether to enable the native platform backends for the new input system, click **No** for now. You will enable it in a later step.
+
## Configure the camera

1. Select the **Main Camera** node.
public async void InitializeSessionService()
} catch (ArgumentException argumentException) {
- NotificationBar.Message("InitializeSessionService failed: SessionConfiguration is invalid.");
Debug.LogError(argumentException.Message);
CurrentCoordinatorState = RemoteRenderingState.NotAuthorized;
return;
sentinel Normalization Develop Parsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-develop-parsers.md
To understand how parsers fit within the ASIM architecture, refer to the [ASIM a
> ASIM is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-## Custom parser development process
+## Custom ASIM parser development process
The following workflow describes the high-level steps in developing a custom, source-specific ASIM parser:
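For orientation, here's a minimal sketch of what the body of such a source-specific parser might look like, loosely modeled on the Infoblox DNS example referenced later in this article. The parse pattern, constant values, and renamed columns are illustrative assumptions, not a template mandated by ASIM.

``` KQL
// Illustrative sketch of a source-specific ASIM DNS parser body (details are assumptions).
Syslog
| where ProcessName == "named"                          // select only this source's events
| parse SyslogMessage with * "query: " DnsQuery " " *   // hypothetical field extraction
| extend
    EventVendor  = "Infoblox",                          // constants identifying the source
    EventProduct = "NIOS",
    EventSchema  = "Dns"
| project-rename Dvc = Computer                         // map source columns to ASIM names
```

A filtering version of the same parser would additionally accept parameters such as `starttime` and `endtime` and apply them as early as possible for performance.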
You may want to contribute the parser to the primary ASIM distribution. If accep
To contribute your parsers:
-| Step | Description |
-| - | -- |
-| Develop the parsers | - Develop both a filtering parser and a parameter-less parser.<br>- Create a YAML file for the parser as described in [Deploying Parsers](#deploy-parsers) above.|
-| Test the parsers | - Make sure that your parsers pass all [testings](#test-parsers) with no errors.<br>- If any warnings are left, document them in the parser YAML file as described below. |
-| Contribute | - Create a pull request against the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel)<br>- Add to the PR your parsers YAML files to the ASIM parser folders (`/Parsers/ASim<schema>/Parsers`)<br>- Adds representative sample data to the sample data folder (`/Sample Data`) |
+- Develop both a filtering parser and a parameter-less parser.
+- Create a YAML file for the parser as described in [Deploying Parsers](#deploy-parsers) above.
+- Make sure that your parsers pass all [tests](#test-parsers) with no errors. If any warnings are left, [document them](#documenting-accepted-warnings) in the parser YAML file.
+- Create a pull request against the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel), including:
+ - Your parsers YAML files in the ASIM parser folders (`/Parsers/ASim<schema>/Parsers`)
+ - Representative sample data according to the [samples submission guidelines](#samples-submission-guidelines).
+ - Test results according to the [test results submission guidelines](#test-results-submission-guidelines).
### Documenting accepted warnings
Exceptions:
The warning specified in the YAML file should be a short form of the warning message that uniquely identifies it. The value is used to match warning messages when performing automated testing and to ignore them.
+### Samples submission guidelines
+
+Sample data is needed for troubleshooting parser issues and for ensuring that future updates to the parser conform to older samples. The samples you submit should include any event variant that the parser supports. Make sure that the sample events include all possible event types, event formats, and variations such as events representing successful and failed activity. Also make sure that variations in value formats are represented. For example, if a hostname can be represented as an FQDN or a simple hostname, the sample events should include both formats.
+
+To submit the event samples, use the following steps:
+
+- In the `Logs` screen, run a query that will extract from the source table only the events selected by the parser. For example, for the [Infoblox DNS parser](https://github.com/Azure/Azure-Sentinel/blob/master/Parsers/ASimDns/Parsers/ASimDnsInfobloxNIOS.yaml), use the following query:
+
+``` KQL
+ Syslog
+ | where ProcessName == "named"
+```
+
+- Export the results using the **Export to CSV** option to a file named `<EventVendor>_<EventProduct>_<EventSchema>_IngestedLogs.csv`, where `EventVendor`, `EventProduct`, and `EventSchema` are the values assigned by the parser to those fields.
+
+- In the `Logs` screen, run a query that outputs the schema of the parser's input table. For example, for the same Infoblox DNS parser, the query is:
+
+``` KQL
+ Syslog
+ | getschema
+```
+
+- Export the results using the **Export to CSV** option to a file named `<TableName>_schema.csv`, where `TableName` is the name of the source table the parser uses.
+
+- Include both files in your PR in the folder `/Sample Data/ASIM`. If the file already exists, add your GitHub handle to the name, for example: `<EventVendor>_<EventProduct>_<EventSchema>_SchemaTest_<GitHubHandle>.csv`
+
+### Test results submission guidelines
+
+Test results are important for verifying the correctness of the parser and for understanding any reported exceptions.
+
+To submit your test results, use the following steps:
+
+- Run the parser tests as described in the [testing](#test-parsers) section.
+
+- Export the test results using the **Export to CSV** option to files named `<EventVendor>_<EventProduct>_<EventSchema>_SchemaTest.csv` and `<EventVendor>_<EventProduct>_<EventSchema>_DataTest.csv`, respectively.
+
+- Include both files in your PR in the folder `/Parsers/ASim<schema>/Tests`.
++
## Next steps

This article discusses developing ASIM parsers.
sentinel Normalization Parsers List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-parsers-list.md
To use ASIM authentication parsers, deploy the parsers from the [Microsoft Senti
  - reported by Microsoft 365 Defender for Endpoint, collected using the Microsoft 365 Defender connector.
- **Linux sign-ins**
  - reported by Microsoft 365 Defender for Endpoint, collected using the Microsoft 365 Defender connector.
+ - `su`, `sudo`, and `sshd` activity reported using Syslog (see the usage sketch after this list).
  - reported by Microsoft Defender to IoT Endpoint.
- **Azure Active Directory sign-ins**, collected using the Azure Active Directory connector. Separate parsers are provided for regular, Non-Interactive, Managed Identities, and Service Principals Sign-ins.
- **AWS sign-ins**, collected using the AWS CloudTrail connector.
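To illustrate how these parsers are consumed once deployed (the usage sketch referenced in the list above), here's a minimal hedged query against the workspace-deployed unifying authentication parser. The parser name `imAuthentication`, its time-range parameters, and the filter values are assumptions based on common ASIM conventions; verify them against your deployment.

``` KQL
// Sketch: querying the unifying authentication parser for Linux sign-in activity.
imAuthentication(starttime=ago(1d), endtime=now())   // assumed workspace-deployed parser name
| where EventProduct == "sshd"                       // hypothetical product filter
| summarize FailedLogons = countif(EventResult == "Failure") by TargetUsername, SrcIpAddr
| top 10 by FailedLogons
```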
ASIM Web Session parsers are available in every workspace. Microsoft Sentinel pr
| **Source** | **Notes** | **Parser** |
| | | |
+| **Palo Alto PanOS threat logs** | Collected using CEF. | `_Im_WebSession_PaloAltoCEF` |
| **Squid Proxy** | | `_Im_WebSession_SquidProxyVxx` |
| **Vectra AI Streams** | Supports the [pack](normalization-about-parsers.md#the-pack-parameter) parameter. | `_Im_WebSession_VectraAIVxx` |
-| **Zscaler ZIA** | Collected using CEF | `_Im_WebSessionZscalerZIAVxx` |
+| **Zscaler ZIA** | Collected using CEF. | `_Im_WebSessionZscalerZIAVxx` |
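As a usage illustration, the following minimal sketch invokes the built-in unifying Web Session parser with filtering parameters. The `url_has_any` parameter and the projected field names follow ASIM Web Session conventions, but treat the exact parameter list as an assumption to verify against the schema reference.

``` KQL
// Sketch: filtering web sessions for a domain of interest via the unifying parser.
_Im_WebSession(starttime=ago(1h), endtime=now(), url_has_any=dynamic(["contoso.com"]))
| where EventResult == "Failure"
| project TimeGenerated, SrcIpAddr, Url, EventResultDetails
```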
Deploy the workspace deployed parsers version from the [Microsoft Sentinel GitHub repository](https://aka.ms/DeployASIM).
sentinel Normalization Schema Web https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-schema-web.md
The following are additional fields that are specific to web sessions:
| Field | Class | Type | Description |
| | | | |
-| <a name="url"></a>**Url** | Mandatory | String | The HTTP request URL, including parameters. For `HTTPSession` events, the URL should include the schema and server parts. For `WebServerSession` and for `ApiRequest` the URL would typically not include the schema and server, which can be found in the `NetworkApplicationProtocol` and `DstFQDN` fields respectively. <br><br>Example: `https://contoso.com/fo/?k=v&amp;q=u#f` |
+| <a name="url"></a>**Url** | Mandatory | String | The HTTP request URL, including parameters. For `HTTPSession` events, the URL may include the schema and should include the server name. For `WebServerSession` and for `ApiRequest` the URL would typically not include the schema and server, which can be found in the `NetworkApplicationProtocol` and `DstFQDN` fields respectively. <br><br>Example: `https://contoso.com/fo/?k=v&amp;q=u#f` |
| **UrlCategory** | Optional | String | The defined grouping of a URL or the domain part of the URL. The category is commonly provided by web security gateways and is based on the content of the site the URL points to.<br><br>Example: search engines, adult, news, advertising, and parked domains. |
| **UrlOriginal** | Optional | String | The original value of the URL, when the URL was modified by the reporting device and both values are provided. |
| **HttpVersion** | Optional | String | The HTTP Request Version.<br><br>Example: `2.0` |
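Because `HTTPSession` events may carry the schema and server in the URL while `WebServerSession` and `ApiRequest` events typically don't, a normalizing query sometimes needs to split a full URL into its parts. A minimal sketch using KQL's built-in `parse_url()` function, with the sample URL from the field description above:

``` KQL
// Sketch: splitting a full URL into schema, server, and path parts.
print Url = "https://contoso.com/fo/?k=v&q=u#f"
| extend Parts = parse_url(Url)
| project
    Schema = tostring(Parts.Scheme),   // "https"
    Server = tostring(Parts.Host),     // "contoso.com"
    Path   = tostring(Parts.Path)      // "/fo/"
```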
sentinel Sap Solution Security Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-solution-security-content.md
These watchlists provide the configuration for the Microsoft Sentinel solution f
| <a name="modules"></a>**SAP - Obsolete Function Modules** | Obsolete function modules, whose execution should be governed. <br><br>- **FunctionModule**: ABAP Function Module, such as TH_SAPREL <br>- **Description**: A meaningful function module description | | <a name="programs"></a>**SAP - Obsolete Programs** | Obsolete ABAP programs (reports), whose execution should be governed. <br><br>- **ABAPProgram**:ABAP Program, such as TH_ RSPFLDOC <br>- **Description**: A meaningful ABAP program description | | <a name="transactions"></a>**SAP - Transactions for ABAP Generations** | Transactions for ABAP generations whose execution should be governed. <br><br>- **TransactionCode**:Transaction Code, such as SE11. <br>- **Description**: A meaningful Transaction Code description |
-| <a name="servers"></a>**SAP - FTP Servers** | FTP Servers for identification of unauthorized connections. <br><br>- **Client**:such as 100. <br>- **FTP_Server_Name**: FTP server name, such as http://contoso.com/ <br>-**FTP_Server_Port**:FTP server port, such as 22. <br>- **Description**A meaningful FTP Server description |
+| <a name="servers"></a>**SAP - FTP Servers** | FTP Servers for identification of unauthorized connections. <br><br>- **Client**:such as 100. <br>- **FTP_Server_Name**: FTP server name, such as `http://contoso.com/` <br>-**FTP_Server_Port**:FTP server port, such as 22. <br>- **Description**A meaningful FTP Server description |
| <a name="objects"></a>**SAP_Dynamic_Audit_Log_Monitor_Configuration** | Configure the SAP audit log alerts by assigning each message ID a severity level as required by you, per system role (production, non-production). This watchlist details all available SAP standard audit log message IDs. The watchlist can be extended to contain additional message IDs you might create on your own using ABAP enhancements on their SAP NetWeaver systems. This watchlist also allows for configuring a designated team to handle each of the event types, and excluding users by SAP roles, SAP profiles or by tags from the **SAP_User_Config** watchlist. This watchlist is one of the core components used for [configuring](configure-audit-log-rules.md) the [built-in SAP analytics rules for monitoring the SAP audit log](#built-in-sap-analytics-rules-for-monitoring-the-sap-audit-log). <br><br>- **MessageID**: The SAP Message ID, or event type, such as `AUD` (User master record changes), or `AUB` (authorization changes). <br>- **DetailedDescription**: A markdown enabled description to be shown on the incident pane. <br>- **ProductionSeverity**: The desired severity for the incident to be created with for production systems `High`, `Medium`. Can be set as `Disabled`. <br>- **NonProdSeverity**: The desired severity for the incident to be created with for non-production systems `High`, `Medium`. Can be set as `Disabled`. <br>- **ProductionThreshold** The "Per hour" count of events to be considered as suspicious for production systems `60`. <br>- **NonProdThreshold** The "Per hour" count of events to be considered as suspicious for non-production systems `10`. <br>- **RolesTagsToExclude**: This field accepts SAP role name, SAP profile names or tags from the SAP_User_Config watchlist. These are then used to exclude the associated users from specific event types. See options for role tags at the end of this list. <br>- **RuleType**: Use `Deterministic` for the event type to be sent off to the [SAP - Dynamic Deterministic Audit Log Monitor](#sapdynamic-deterministic-audit-log-monitor), or `AnomaliesOnly` to have this event covered by the [SAP - Dynamic Anomaly based Audit Log Monitor Alerts (PREVIEW)](#sapdynamic-anomaly-based-audit-log-monitor-alerts-preview).<br><br>For the **RolesTagsToExclude** field:<br>- If you list SAP roles or [SAP profiles](sap-solution-deploy-alternate.md#configuring-user-master-data-collection), this excludes any user with the listed roles or profiles from these event types for the same SAP system. For example, if you define the `BASIC_BO_USERS` ABAP role for the RFC related event types, Business Objects users won't trigger incidents when making massive RFC calls.<br>- Tagging an event type is similar to specifying SAP roles or profiles, but tags can be created in the workspace, so SOC teams can exclude users by activity without depending on the SAP team. For example, the audit message IDs AUB (authorization changes) and AUD (user master record changes) are assigned the `MassiveAuthChanges` tag. Users assigned this tag are excluded from the checks for these activities. Running the workspace `SAPAuditLogConfigRecommend` function produces a list of recommended tags to be assigned to users, such as `Add the tags ["GenericTablebyRFCOK"] to user SENTINEL_SRV using the SAP_User_Config watchlist`. 
| <a name="objects"></a>**SAP_User_Config** | Allows for fine tuning alerts by excluding /including users in specific contexts and is also used for [configuring](configure-audit-log-rules.md) the [built-in SAP analytics rules for monitoring the SAP audit log](#built-in-sap-analytics-rules-for-monitoring-the-sap-audit-log). <br><br> - **SAPUser**: The SAP user <br> - **Tags**: Tags are used to identify users against certain activity. For example Adding the tags ["GenericTablebyRFCOK"] to user SENTINEL_SRV will prevent RFC related incidents to be created for this specific user <br>**Other active directory user identifiers** <br>- AD User Identifier <br>- User On-Premises Sid <br>- User Principal Name |
service-bus-messaging Enable Partitions Premium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/enable-partitions-premium.md
Service Bus partitions enable queues and topics, or messaging entities, to be pa
> [!NOTE]
-> - This feature is currently available only in the East US and South Central US regions, with other regions being added during the public preview.
+> - This feature is currently available only in the East US, North Europe, and West Europe regions, with other regions being added during the public preview.
> - Partitioning is available at entity creation for namespaces in the Premium SKU. Any previously existing partitioned entities in Premium namespaces continue to work as expected.
> - It's not possible to change the partitioning option on any existing namespace. You can only set the option when you create a namespace.
> - The assigned messaging units are always a multiple of the number of partitions in a namespace, and are equally distributed across the partitions. For example, in a namespace with 16MU and 4 partitions, each partition will be assigned 4MU.
service-bus-messaging Message Sessions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/message-sessions.md
On session-aware queues or subscriptions, sessions come into existence when ther
Typically, however, an application has a clear notion of where a set of related messages starts and ends. Service Bus doesn't set any specific rules. For example, your application could set the **Label** property for the first message to **start**, for intermediate messages to **content**, and for the last message to **end**. The relative position of the content messages can be computed as the current message *SequenceNumber* delta from the **start** message *SequenceNumber*. > [!IMPORTANT]
-> When sessions are enabled on a queue or a subscription, the client applications can ***no longer*** send/receive regular messages. All messages must be sent as part of a session (by setting the session id) and received by accepting the session.
+> When sessions are enabled on a queue or a subscription, the client applications can ***no longer*** send/receive regular messages. All messages must be sent as part of a session (by setting the session id) and received by accepting the session. Clients may still peek a queue or subscription that has sessions enabled. See [Message browsing](message-browsing.md).
The APIs for sessions exist on queue and subscription clients. There's an imperative model that controls when sessions and messages are received, and a handler-based model that hides the complexity of managing the receive loop.
spring-apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/overview.md
Azure Spring Apps supports both Java [Spring Boot](https://spring.io/projects/sp
As part of the Azure ecosystem, Azure Spring Apps allows easy binding to other Azure services including storage, databases, monitoring, and more.

* Azure Spring Apps is a fully managed service for Spring Boot apps that lets you focus on building and running apps without the hassle of managing infrastructu