Updates from: 07/21/2023 01:17:32
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Authorization Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/authorization-code-flow.md
grant_type=authorization_code&client_id=90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6&sco
| client_id |Required |The application ID assigned to your app in the [Azure portal](https://portal.azure.com).|
| client_secret | Yes, in Web Apps | The application secret that was generated in the [Azure portal](https://portal.azure.com/). Client secrets are used in this flow for Web App scenarios, where the client can securely store a client secret. For Native App (public client) scenarios, client secrets cannot be securely stored, and therefore are not used in this call. If you use a client secret, please change it on a periodic basis. |
| grant_type |Required |The type of grant. For the authorization code flow, the grant type must be `authorization_code`. |
-| scope |Required |A space-separated list of scopes. A single scope value indicates to Azure AD both of the permissions that are being requested. Using the client ID as the scope indicates that your app needs an access token that can be used against your own service or web API, represented by the same client ID. The `offline_access` scope indicates that your app needs a refresh token for long-lived access to resources. You also can use the `openid` scope to request an ID token from Azure AD B2C. |
+| scope |Recommended |A space-separated list of scopes. A single scope value indicates to Azure AD both of the permissions that are being requested. Using the client ID as the scope indicates that your app needs an access token that can be used against your own service or web API, represented by the same client ID. The `offline_access` scope indicates that your app needs a refresh token for long-lived access to resources. You also can use the `openid` scope to request an ID token from Azure AD B2C. |
| code |Required |The authorization code that you acquired from the `/authorize` endpoint. |
| redirect_uri |Required |The redirect URI of the application where you received the authorization code. |
| code_verifier | Recommended | The same `code_verifier` used to obtain the authorization code. Required if PKCE was used in the authorization code grant request. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). |
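Putting these parameters together, a minimal sketch of the token request as a `curl` call follows. The tenant name (`contoso`), user flow name (`b2c_1_signin`), redirect URI, and credential values are placeholders, not values from the article; only the client ID matches the sample request above.

```bash
# Illustrative token request for the authorization code flow (placeholders marked).
curl -X POST "https://contoso.b2clogin.com/contoso.onmicrosoft.com/b2c_1_signin/oauth2/v2.0/token" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  --data-urlencode "grant_type=authorization_code" \
  --data-urlencode "client_id=90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6" \
  --data-urlencode "client_secret=<your-client-secret>" \
  --data-urlencode "scope=90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6 offline_access openid" \
  --data-urlencode "code=<authorization-code-from-the-authorize-endpoint>" \
  --data-urlencode "redirect_uri=https://localhost:5000/redirect" \
  --data-urlencode "code_verifier=<pkce-code-verifier>"
```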
active-directory Application Provisioning Config Problem Scim Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-config-problem-scim-compatibility.md
This article describes current and past issues with the Azure AD user provisioni
## Understanding the provisioning job The provisioning service uses the concept of a job to operate against an application. The jobID can be found in the [progress bar](application-provisioning-when-will-provisioning-finish-specific-user.md#view-the-provisioning-progress-bar). All new provisioning applications are created with a jobID starting with "scim". The scim job represents the current state of the service. Older jobs have the ID "customappsso". This job represents the state of the service in 2018.
-If you are using an application in the gallery, the job generally contains the name of the app (e.g. zoom snowFlake, dataBricks, etc.). You can skip this documentation when using a gallery application. This primarily applies for non-gallery applications with jobID SCIM or customAppSSO.
+If you are using an application in the gallery, the job generally contains the name of the app (such as zoom, snowFlake, or dataBricks). You can skip this documentation when using a gallery application. This primarily applies to non-gallery applications with jobID SCIM or customAppSSO.
## SCIM 2.0 compliance issues and status In the table below, any item marked as fixed means that the proper behavior can be found on the SCIM job. We have worked to ensure backwards compatibility for the changes we have made. We recommend using the new behavior for any new implementations and updating existing implementations. Please note that the customappSSO behavior that was the default prior to December 2018 is not supported anymore.
Below are sample requests to help outline what the sync engine currently sends v
## Upgrading from the older customappsso job to the SCIM job
-Following the steps below will delete your existing customappsso job and create a new scim job.
+Following the steps below will delete your existing customappsso job and create a new SCIM job.
-1. Sign into the Azure portal at https://portal.azure.com.
+1. Sign into the [Azure portal](https://portal.azure.com).
2. In the **Azure Active Directory > Enterprise Applications** section of the Azure portal, locate and select your existing SCIM application. 3. In the **Properties** section of your existing SCIM app, copy the **Object ID**.
-4. In a new web browser window, go to https://developer.microsoft.com/graph/graph-explorer
- and sign in as the administrator for the Azure AD tenant where your app is added.
+4. In a new web browser window, go to https://developer.microsoft.com/graph/graph-explorer and sign in as the administrator for the Azure AD tenant where your app is added.
5. In the Graph Explorer, run the command below to locate the ID of your provisioning job. Replace "[object-id]" with the service principal ID (object ID) copied from the third step. `GET https://graph.microsoft.com/beta/servicePrincipals/[object-id]/synchronization/jobs`
Following the steps below will delete your existing customappsso job and create
## Downgrading from the SCIM job to the customappsso job (not recommended) We allow you to downgrade back to the old behavior but don't recommend it as the customappsso does not benefit from some of the updates we make, and may not be supported forever.
-1. Sign into the Azure portal at https://portal.azure.com.
-2. in the **Azure Active Directory > Enterprise Applications > Create application** section of the Azure portal, create a new **Non-gallery** application.
+1. Sign into the [Azure portal](https://portal.azure.com).
+2. In the **Azure Active Directory > Enterprise Applications > Create application** section of the Azure portal, create a new **Non-gallery** application.
3. In the **Properties** section of your new custom app, copy the **Object ID**.
-4. In a new web browser window, go to https://developer.microsoft.com/graph/graph-explorer
- and sign in as the administrator for the Azure AD tenant where your app is added.
+4. In a new web browser window, go to https://developer.microsoft.com/graph/graph-explorer and sign in as the administrator for the Azure AD tenant where your app is added.
5. In the Graph Explorer, run the command below to initialize the provisioning configuration for your app. Replace "[object-id]" with the service principal ID (object ID) copied from the third step.
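The request body itself isn't reproduced in this excerpt. As a hedged illustration only — shown as a `curl` call against the Microsoft Graph beta synchronization API rather than in Graph Explorer, with the bearer token and template ID as assumptions — the call might look like this:

```bash
# Illustrative sketch: create a provisioning job from the customappsso template.
# [object-id] is the service principal object ID from step 3; the bearer token is a placeholder.
curl -X POST "https://graph.microsoft.com/beta/servicePrincipals/[object-id]/synchronization/jobs" \
  -H "Authorization: Bearer <access-token>" \
  -H "Content-Type: application/json" \
  -d '{ "templateId": "customappsso" }'
```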
active-directory Configure Automatic User Provisioning Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/configure-automatic-user-provisioning-portal.md
This article describes the general steps for managing automatic user account pro
Use the Azure portal to view and manage all applications that are configured for single sign-on in a directory. Enterprise apps are apps that are deployed and used within your organization. Follow these steps to view and manage your enterprise applications:
-1. Open the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Browse to **Azure Active Directory** > **Enterprise applications**. 1. A list of all configured apps is shown, including apps that were added from the gallery. 1. Select any app to load its resource pane, where you can view reports and manage app settings.
active-directory Customize Application Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/customize-application-attributes.md
Applications and systems that support customization of the attribute list includ
> [!NOTE]
-> Editing the list of supported attributes is only recommended for administrators who have customized the schema of their applications and systems, and have first-hand knowledge of how their custom attributes have been defined or if a source attribute isn't automatically displayed in the Azure Portal UI. This sometimes requires familiarity with the APIs and developer tools provided by an application or system. The ability to edit the list of supported attributes is locked down by default, but customers can enable the capability by navigating to the following URL: https://portal.azure.com/?Microsoft_AAD_Connect_Provisioning_forceSchemaEditorEnabled=true . You can then navigate to your application to view the [attribute list](#editing-the-list-of-supported-attributes).
+> Editing the list of supported attributes is only recommended for administrators who have customized the schema of their applications and systems, and have first-hand knowledge of how their custom attributes have been defined or if a source attribute isn't automatically displayed in the Azure portal UI. This sometimes requires familiarity with the APIs and developer tools provided by an application or system. The ability to edit the list of supported attributes is locked down by default, but customers can enable the capability by navigating to the following URL: https://portal.azure.com/?Microsoft_AAD_Connect_Provisioning_forceSchemaEditorEnabled=true . You can then navigate to your application to view the [attribute list](#editing-the-list-of-supported-attributes).
> [!NOTE] > When a directory extension attribute in Azure AD doesn't show up automatically in your attribute mapping drop-down, you can manually add it to the "Azure AD attribute list". When manually adding Azure AD directory extension attributes to your provisioning app, note that directory extension attribute names are case-sensitive. For example: If you have a directory extension attribute named `extension_53c9e2c0exxxxxxxxxxxxxxxx_acmeCostCenter`, make sure you enter it in the same format as defined in the directory.
Custom attributes can't be referential attributes, multi-value or complex-typed
**Example representation of a user with an extension attribute:** ```json
- {
- "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User",
- "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"],
- "userName":"bjensen",
- "id": "48af03ac28ad4fb88478",
- "externalId":"bjensen",
- "name":{
- "formatted":"Ms. Barbara J Jensen III",
- "familyName":"Jensen",
- "givenName":"Barbara"
- },
- "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User": {
- "employeeNumber": "701984",
- "costCenter": "4130",
- "organization": "Universal Studios",
- "division": "Theme Park",
- "department": "Tour Operations",
- "manager": {
- "value": "26118915-6090-4610-87e4-49d8ca9f808d",
- "$ref": "../Users/26118915-6090-4610-87e4-49d8ca9f808d",
- "displayName": "John Smith"
- }
- },
- "urn:ietf:params:scim:schemas:extension:CustomExtensionName:2.0:User": {
- "CustomAttribute": "701984",
- },
- "meta": {
- "resourceType": "User",
- "created": "2010-01-23T04:56:22Z",
- "lastModified": "2011-05-13T04:42:34Z",
- "version": "W\/\"3694e05e9dff591\"",
- "location":
- "https://example.com/v2/Users/2819c223-7f76-453a-919d-413861904646"
- }
- }
+{
+ "schemas":[
+ "urn:ietf:params:scim:schemas:core:2.0:User",
+ "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"
+ ],
+ "userName":"bjensen",
+ "id": "48af03ac28ad4fb88478",
+ "externalId":"bjensen",
+ "name":{
+ "formatted":"Ms. Barbara J Jensen III",
+ "familyName":"Jensen",
+ "givenName":"Barbara"
+ },
+ "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User": {
+ "employeeNumber": "701984",
+ "costCenter": "4130",
+ "organization": "Universal Studios",
+ "division": "Theme Park",
+ "department": "Tour Operations",
+ "manager": {
+ "value": "26118915-6090-4610-87e4-49d8ca9f808d",
+ "$ref": "../Users/26118915-6090-4610-87e4-49d8ca9f808d",
+ "displayName": "John Smith"
+ }
+ },
+ "urn:ietf:params:scim:schemas:extension:CustomExtensionName:2.0:User": {
+ "CustomAttribute": "701984",
+ },
+ "meta": {
+ "resourceType": "User",
+ "created": "2010-01-23T04:56:22Z",
+ "lastModified": "2011-05-13T04:42:34Z",
+ "version": "W\/\"3694e05e9dff591\"",
+ "location": "https://example.com/v2/Users/2819c223-7f76-453a-919d-413861904646"
+ }
+}
``` - ## Provisioning a role to a SCIM app Use the steps in the example to provision roles for a user to your application. The description is specific to custom SCIM applications. For gallery applications such as Salesforce and ServiceNow, use the predefined role mappings. The bullets describe how to transform the AppRoleAssignments attribute to the format your application expects. - Mapping an appRoleAssignment in Azure AD to a role in your application requires that you transform the attribute using an [expression](../app-provisioning/functions-for-customizing-application-data.md). The appRoleAssignment attribute **shouldn't be mapped directly** to a role attribute without using an expression to parse the role details. -- **SingleAppRoleAssignment**
+- **SingleAppRoleAssignment**
+ - **When to use:** Use the SingleAppRoleAssignment expression to provision a single role for a user and to specify the primary role. - **How to configure:** Use the steps described to navigate to the attribute mappings page and use the SingleAppRoleAssignment expression to map to the roles attribute. There are three role attributes to choose from (`roles[primary eq "True"].display`, `roles[primary eq "True"].type`, and `roles[primary eq "True"].value`). You can choose to include any or all of the role attributes in your mappings. If you would like to include more than one, just add a new mapping and include it as the target attribute.
- ![Add SingleAppRoleAssignment](./media/customize-application-attributes/edit-attribute-singleapproleassignment.png)
+ ![Add SingleAppRoleAssignment](./media/customize-application-attributes/edit-attribute-singleapproleassignment.png)
+ - **Things to consider** - Ensure that multiple roles aren't assigned to a user. There's no guarantee which role is provisioned. - SingleAppRoleAssignments isn't compatible with setting scope to "Sync All users and groups." + - **Example request (POST)**
- ```json
+ ```json
{ "schemas": [ "urn:ietf:params:scim:schemas:core:2.0:User"
Use the steps in the example to provision roles for a user to your application.
"value": "Admin" } ]
- }
- ```
-
+ }
+ ```
+ - **Example output (PATCH)**
-
- ```json
- "Operations": [
- {
- "op": "Add",
- "path": "roles",
- "value": [
- {
- "value": "{\"id\":\"06b07648-ecfe-589f-9d2f-6325724a46ee\",\"value\":\"25\",\"displayName\":\"Role1234\"}"
- }
- ]
- ```
+
+ ```json
+ "Operations": [
+ {
+ "op": "Add",
+ "path": "roles",
+ "value": [
+ {
+ "value": "{\"id\":\"06b07648-ecfe-589f-9d2f-6325724a46ee\",\"value\":\"25\",\"displayName\":\"Role1234\"}"
+ }
+ ]
+ }
+ ]
+ ```
+ The request formats in the PATCH and POST differ. To ensure that POST and PATCH are sent in the same format, you can use the feature flag described [here](./application-provisioning-config-problem-scim-compatibility.md#flags-to-alter-the-scim-behavior). -- **AppRoleAssignmentsComplex**
+- **AppRoleAssignmentsComplex**
+ - **When to use:** Use the AppRoleAssignmentsComplex expression to provision multiple roles for a user. - **How to configure:** Edit the list of supported attributes as described to include a new attribute for roles:
The request formats in the PATCH and POST differ. To ensure that POST and PATCH
Then use the AppRoleAssignmentsComplex expression to map to the custom role attribute as shown in the image: ![Add AppRoleAssignmentsComplex](./media/customize-application-attributes/edit-attribute-approleassignmentscomplex.png)<br>+ - **Things to consider**+ - All roles are provisioned as primary = false. - The POST contains the role type. The PATCH request doesn't contain type. We're working on sending the type in both POST and PATCH requests. - AppRoleAssignmentsComplex isn't compatible with setting scope to "Sync All users and groups." - The AppRoleAssignmentsComplex only supports the PATCH add function. For multi-role SCIM applications, roles deleted in Azure Active Directory will therefore not be deleted from the application. We're working to support additional PATCH functions and address the limitation.
- - **Example output**
+ - **Example output**
- ```json
- {
+ ```json
+ {
"schemas": [ "urn:ietf:params:scim:schemas:core:2.0:User" ],
The request formats in the PATCH and POST differ. To ensure that POST and PATCH
"value": "User" } ]
- }
- ```
-
-
-
+ }
+ ```
## Provisioning a multi-value attribute+ Certain attributes such as phoneNumbers and emails are multi-value attributes where you may need to specify different types of phone numbers or emails. Use the expression for multi-value attributes. It allows you to specify the attribute type and map that to the corresponding Azure AD user attribute for the value.
-* phoneNumbers[type eq "work"].value
-* phoneNumbers[type eq "mobile"].value
-* phoneNumbers[type eq "fax"].value
+* `phoneNumbers[type eq "work"].value`
+* `phoneNumbers[type eq "mobile"]`.value
+* `phoneNumbers[type eq "fax"].value`
- ```json
- "phoneNumbers": [
- {
- "value": "555-555-5555",
- "type": "work"
- },
- {
- "value": "555-555-5555",
- "type": "mobile"
- },
- {
- "value": "555-555-5555",
- "type": "fax"
- }
- ]
- ```
+ ```json
+ "phoneNumbers": [
+ {
+ "value": "555-555-5555",
+ "type": "work"
+ },
+ {
+ "value": "555-555-5555",
+ "type": "mobile"
+ },
+ {
+ "value": "555-555-5555",
+ "type": "fax"
+ }
+ ]
+ ```
## Restoring the default attributes and attribute-mappings
active-directory Sap Successfactors Integration Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/sap-successfactors-integration-reference.md
https://[SuccessFactorsAPIEndpoint]/odata/v2/PerPerson/$count?$format=json&$filt
## How pre-hire processing works This section explains how the SAP SuccessFactors connector processes pre-hire records (workers with hire date / start date in future).
-Let's say there is a pre-hire with employeeId "1234" in SuccessFactors Employee Central with start date on 1-June-2023. Let's further assume that this pre-hire record was first created either in Employee Central or in the Onboarding module on 15-May-2023. When the provisioning service first observes this record on 15-May-2023 (either as part of full sync or incremental sync), this record is still in pre-hire state. Due to this, SuccessFactors does not send the provisioning service all attributes (example: userNav/username) associated with the user. Only bare minimum data about the user such as `personIdExternal`, `firstname`, `lastname` and `startDate` is available. To process pre-hires successfully, the following pre-requisites must be met:
+Let's say there is a pre-hire with employeeId "1234" in SuccessFactors Employee Central with start date on 1-June-2023. Let's further assume that this pre-hire record was first created either in Employee Central or in the Onboarding module on 15-May-2023. When the provisioning service first observes this record on 15-May-2023 (either as part of full sync or incremental sync), this record is still in pre-hire state. Due to this, SuccessFactors does not send the provisioning service all attributes (example: userNav/username) associated with the user. Only bare minimum data about the user such as `companyName`, `personIdExternal`, `firstname`, `lastname` and `startDate` is available. To process pre-hires successfully, the following pre-requisites must be met:
1) The `personIdExternal` attribute must be set as the primary matching identifier (joining property). If you configure a different attribute (example: userName) as the joining property, then the provisioning service will not be able to retrieve the pre-hire information. 2) The `startDate` attribute must be available and its JSONPath must be set to either `$.employmentNav.results[0].startDate` or `$.employmentNav.results[-1:].startDate`.
active-directory Application Proxy Integrate With Tableau https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-integrate-with-tableau.md
Application Proxy supports the OAuth 2.0 Grant Flow, which is required for Table
## Publish your applications in Azure
-To publish Tableau, you need to publish an application in the Azure Portal.
+To publish Tableau, you need to publish an application in the Azure portal.
For:
Your application is now ready to test. Access the external URL you used to publi
## Next steps For more information about Azure AD Application Proxy, see [How to provide secure remote access to on-premises applications](application-proxy.md).-
active-directory Concept Password Ban Bad Combined Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-password-ban-bad-combined-policy.md
The following Azure AD password policy requirements apply for all passwords that
| Characters not allowed | Unicode characters |
| Password length |Passwords require<br>- A minimum of eight characters<br>- A maximum of 256 characters |
| Password complexity |Passwords require three out of four of the following categories:<br>- Uppercase characters<br>- Lowercase characters<br>- Numbers <br>- Symbols<br> Note: Password complexity check isn't required for Education tenants. |
-| Password not recently used | When a user changes their password, the new password can't be the same as the current or recently used passwords. |
+| Password not recently used | When a user changes their password, the new password should not be the same as the current password. |
| Password isn't banned by [Azure AD Password Protection](concept-password-ban-bad.md) | The password can't be on the global list of banned passwords for Azure AD Password Protection, or on the customizable list of banned passwords specific to your organization. | ## Password expiration policies
active-directory Howto Authentication Sms Signin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-sms-signin.md
There are three main steps to enable and use SMS-based authentication in your or
First, let's enable SMS-based authentication for your Azure AD tenant.
-1. Sign-in to the [Azure portal](https://portal.azure.com) using an account with *global administrator* permissions.
+1. Sign in to the [Azure portal](https://portal.azure.com) using an account with *global administrator* permissions.
1. Search for and select **Azure Active Directory**, then choose **Security** from the menu on the left-hand side. 1. Under the **Manage** menu header, select **Authentication methods** > **Policies**. 1. From the list of available authentication methods, select **SMS**.
active-directory Howto Mfa Nps Extension Rdg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension-rdg.md
To query successful sign-in events in the Gateway operational logs _(Event Viewe
* `Get-WinEvent -Logname Microsoft-Windows-TerminalServices-Gateway/Operational | where {$_.ID -eq '200'} | FL` * This command displays the events that show when a user met connection authorization policy requirements.
-![viewing the connection authorization policy using PowerShell](./media/howto-mfa-nps-extension-rdg/image29.png)
+![Viewing the connection authorization policy using PowerShell](./media/howto-mfa-nps-extension-rdg/image29.png)
You can also view this log and filter on event IDs 300 and 200. To query successful logon events in the Security event viewer logs, use the following command:
On the server where you installed the NPS extension for Azure AD MFA, you can fi
## Troubleshoot Guide
-If the configuration is not working as expected, the first place to start to troubleshoot is to verify that the user is configured to use Azure AD MFA. Have the user connect to the [Azure portal](https://portal.azure.com). If users are prompted for secondary verification and can successfully authenticate, you can eliminate an incorrect configuration of Azure AD MFA.
+If the configuration is not working as expected, the first place to start to troubleshoot is to verify that the user is configured to use Azure AD MFA. Have the user sign in to the [Azure portal](https://portal.azure.com). If users are prompted for secondary verification and can successfully authenticate, you can eliminate an incorrect configuration of Azure AD MFA.
If Azure AD MFA is working for the user(s), you should review the relevant Event logs. These include the Security Event, Gateway operational, and Azure AD MFA logs that are discussed in the previous section.
active-directory Howto Mfa Nps Extension Vpn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension-vpn.md
In this section, you configure your VPN server to use RADIUS authentication. The
7. In the **Add RADIUS Server** window, do the following:
- a. In the **Server name** box, enter the name or IP address of the RADIUS server that you configured in the previous section.
+ 1. In the **Server name** box, enter the name or IP address of the RADIUS server that you configured in the previous section.
- b. For the **Shared secret**, select **Change**, and then enter the shared secret password that you created and recorded earlier.
+ 1. For the **Shared secret**, select **Change**, and then enter the shared secret password that you created and recorded earlier.
- c. In the **Time-out (seconds)** box, enter a value of **60**.
- To minimize discarded requests, we recommend that VPN servers are configured with a timeout of at least 60 seconds. If needed, or to reduce discarded requests in the event logs, you can increase the VPN server timeout value to 90 or 120 seconds.
+ 1. In the **Time-out (seconds)** box, enter a value of **60**. To minimize discarded requests, we recommend that VPN servers are configured with a timeout of at least 60 seconds. If needed, or to reduce discarded requests in the event logs, you can increase the VPN server timeout value to 90 or 120 seconds.
8. Select **OK**.
Get-WinEvent -Logname Security | where {$_.ID -eq '6272'} | FL
## Troubleshooting guide
-If the configuration is not working as expected, begin troubleshooting by verifying that the user is configured to use MFA. Have the user connect to the [Azure portal](https://portal.azure.com). If the user is prompted for secondary authentication and can successfully authenticate, you can eliminate an incorrect configuration of MFA as an issue.
+If the configuration is not working as expected, begin troubleshooting by verifying that the user is configured to use MFA. Have the user sign in to the [Azure portal](https://portal.azure.com). If the user is prompted for secondary authentication and can successfully authenticate, you can eliminate an incorrect configuration of MFA as an issue.
If MFA is working for the user, review the relevant Event Viewer logs. The logs include the security event, Gateway operational, and Azure AD Multi-Factor Authentication logs that are discussed in the previous section.
For more information, see [Integrate your existing NPS infrastructure with Azure
[Remote Desktop Gateway and Azure Multi-Factor Authentication Server using RADIUS](howto-mfaserver-nps-rdg.md)
-[Integrate your on-premises directories with Azure Active Directory](../hybrid/whatis-hybrid-identity.md)
+[Integrate your on-premises directories with Azure Active Directory](../hybrid/whatis-hybrid-identity.md)
active-directory Howto Mfa Nps Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension.md
import-module MSOnline
Connect-MsolService
New-MsolServicePrincipal -AppPrincipalId 981f26a1-7f43-403b-a875-f8b09b8cd720 -DisplayName "Azure Multi-Factor Auth Client"
```
-Once done , go to the [Azure portal](https://portal.azure.com) > **Azure Active Directory** > **Enterprise Applications** > Search for "Azure Multi-Factor Auth Client" > Check properties for this app > Confirm if the service principal is enabled or disabled > Click on the application entry > Go to Properties of the app > If the option "Enabled for users to sign-in? is set to No in Properties of this app , please set it to Yes.
+Once done, sign in to the [Azure portal](https://portal.azure.com) > **Azure Active Directory** > **Enterprise Applications** > Search for "Azure Multi-Factor Auth Client" > Check properties for this app > Confirm if the service principal is enabled or disabled > Click on the application entry > Go to Properties of the app > If the option "Enabled for users to sign-in?" is set to `No` in Properties of this app, please set it to `Yes`.
Run the `AzureMfaNpsExtnConfigSetup.ps1` script again and it should not return the `Service principal was not found` error.
active-directory Howto Sspr Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-deployment.md
For a guided walkthrough of many of the recommendations in this article, see the
The following example describes the password reset solution architecture for common hybrid environments.
-![diagram of solution architecture](./media/howto-sspr-deployment//solutions-architecture.png)
+![Diagram of solution architecture](./media/howto-sspr-deployment//solutions-architecture.png)
Description of workflow
By default, Azure AD unlocks accounts when it performs a password reset.
Administrator accounts have elevated permissions. The on-premises enterprise or domain administrators can't reset their passwords through SSPR. On-premises admin accounts have the following restrictions:
-* can only change their password in their on-prem environment.
-* can never use the secret questions and answers as a method to reset their password.
+* Can only change their password in their on-prem environment.
+* Can never use the secret questions and answers as a method to reset their password.
We recommend that you don't sync your on-prem Active Directory admin accounts with Azure AD.
To enable your support team's success, you can create a FAQ based on questions y
To roll back the deployment:
-* for a single user, remove the user from the security group
+* For a single user, remove the user from the security group
-* for a group, remove the group from SSPR configuration
+* For a group, remove the group from SSPR configuration
* For everyone, disable SSPR for the Azure AD tenant
Azure AD can provide additional information on your SSPR performance through aud
You can use pre-built reports on Azure portal to measure the SSPR performance. If you're appropriately licensed, you can also create custom queries. For more information, see [Reporting options for Azure AD password management](./howto-sspr-reporting.md) > [!NOTE]
-> You must be [a global administrator](../roles/permissions-reference.md), and you must opt-in for this data to be gathered for your organization. To opt in, you must visit the Reporting tab or the audit logs on the Azure Portal at least once. Until then, the data doesn't collect for your organization.
+> You must be [a global administrator](../roles/permissions-reference.md), and you must opt-in for this data to be gathered for your organization. To opt in, you must visit the Reporting tab or the audit logs on the Azure portal at least once. Until then, the data doesn't collect for your organization.
Audit logs for registration and password reset are available for 30 days. If security auditing within your corporation requires longer retention, the logs need to be exported and consumed into a SIEM tool such as [Microsoft Sentinel](../../sentinel/connect-azure-active-directory.md), Splunk, or ArcSight.
active-directory Howto Sspr Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-reporting.md
The following questions can be answered by the reports that exist in the [Azure
In the Azure portal experience, we have improved the way that you can view password reset and password reset registration activity. Use the following steps to find the password reset and password reset registration events:
-1. Browse to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. Select **All services** in the left pane. 3. Search for **Azure Active Directory** in the list of services and select it. 4. Select **Users** from the Manage section.
In the Azure portal experience, we have improved the way that you can view passw
### Combined registration
-[combined registration](./concept-registration-mfa-sspr-combined.md) security information registration and management events can be found in the audit logs under **Security** > **Authentication Methods**.
+[Combined registration](./concept-registration-mfa-sspr-combined.md) security information registration and management events can be found in the audit logs under **Security** > **Authentication Methods**.
## Description of the report columns in the Azure portal
active-directory V1 Protocols Openid Connect Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/v1-protocols-openid-connect-code.md
post_logout_redirect_uri=http%3A%2F%2Flocalhost%2Fmyapp%2F
When you redirect the user to the `end_session_endpoint`, Azure AD clears the user's session from the browser. However, the user may still be signed in to other applications that use Azure AD for authentication. To enable those applications to sign the user out simultaneously, Azure AD sends an HTTP GET request to the registered `LogoutUrl` of all the applications that the user is currently signed in to. Applications must respond to this request by clearing any session that identifies the user and returning a `200` response. If you wish to support single sign out in your application, you must implement such a `LogoutUrl` in your application's code. You can set the `LogoutUrl` from the Azure portal:
-1. Navigate to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. Choose your Active Directory by clicking on your account in the top right corner of the page. 3. From the left hand navigation panel, choose **Azure Active Directory**, then choose **App registrations** and select your application. 4. Click on **Settings**, then **Properties** and find the **Logout URL** text box.
active-directory Require Tou https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/require-tou.md
The scenario in this quickstart uses:
In the previous section, you created a Conditional Access policy requiring terms of use be accepted.
-To test your policy, try to sign-in to your [Azure portal](https://portal.azure.com) using your test account. You should see a dialog that requires you to accept your terms of use.
+To test your policy, try to sign in to the [Azure portal](https://portal.azure.com) using your test account. You should see a dialog that requires you to accept your terms of use.
:::image type="content" source="./media/require-tou/57.png" alt-text="Screenshot of a dialog box titled Identity Security Protection terms of use, with Decline and Accept buttons and a button labeled My TOU." border="false":::
To test your policy, try to sign-in to your [Azure portal](https://portal.azure.
When no longer needed, delete the test user and the Conditional Access policy: - If you don't know how to delete an Azure AD user, see [Delete users from Azure AD](../fundamentals/add-users-azure-active-directory.md#delete-a-user).-- To delete your policy, select the ellipsis (...) next to your policies name, then select **Delete**.
+- To delete your policy, select the ellipsis (`...`) next to your policy's name, then select **Delete**.
- To delete your terms of use, select it, and then select **Delete terms**. :::image type="content" source="./media/require-tou/29.png" alt-text="Screenshot showing part of a table listing terms of use documents. The My T O U document is visible. In the menu, Delete terms is highlighted." border="false":::
active-directory Console Quickstart Portal Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/console-quickstart-portal-nodejs.md
> > ||| > > | `clientId` | Is the **Application (client) ID** for the application registered in the Azure portal. You can find this value in the app's **Overview** page in the Azure portal. | > > | `authority` | The STS endpoint for user to authenticate. Usually `https://login.microsoftonline.com/{tenant}` for public cloud, where {tenant} is the name of your tenant or your tenant Id.|
-> > | `clientSecret` | Is the client secret created for the application in Azure Portal. |
+> > | `clientSecret` | Is the client secret created for the application in Azure portal. |
> > For more information, please see the [reference documentation for `ConfidentialClientApplication`](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-node/docs/initialize-confidential-client-application.md) >
> > > |Where:| Description | > > |||
-> > | `tokenRequest` | Contains the scopes requested. For confidential clients, this should use the format similar to `{Application ID URI}/.default` to indicate that the scopes being requested are the ones statically defined in the app object set in the Azure Portal (for Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`). For custom web APIs, `{Application ID URI}` is defined under **Expose an API** section in Azure Portal's Application Registration. |
+> > | `tokenRequest` | Contains the scopes requested. For confidential clients, this should use the format similar to `{Application ID URI}/.default` to indicate that the scopes being requested are the ones statically defined in the app object set in the Azure portal (for Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`). For custom web APIs, `{Application ID URI}` is defined under **Expose an API** section in Azure portal's Application Registration. |
> > | `tokenResponse` | The response contains an access token for the scopes requested. | > > [!INCLUDE [Help and support](./includes/error-handling-and-tips/help-support-include.md)]
active-directory Deploy Web App Authentication Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/deploy-web-app-authentication-pipeline.md
+
+Title: Deploy a web app with App Service auth in a pipeline
+description: Describes how to set up a pipeline in Azure Pipelines to build and deploy a web app to Azure and enable the Azure App Service built-in authentication. The article provides step-by-step instructions on how to configure Azure resources, build and deploy a web application, create an Azure AD app registration, and configure App Service built-in authentication using Azure Pipelines.
+Last updated: 07/17/2023
+# Deploy a web app in a pipeline and configure App Service authentication
+
+This article describes how to set up a pipeline in [Azure Pipelines](/azure/devops/pipelines/) to build and deploy a web app to Azure and enable the [Azure App Service built-in authentication](/azure/app-service/overview-authentication-authorization).
+
+You'll learn how to:
+
+- Configure Azure resources using scripts in Azure Pipelines
+- Build a web application and deploy to App Service using Azure Pipelines
+- Create an Azure AD app registration in Azure Pipelines
+- Configure App Service built-in authentication in Azure Pipelines.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure DevOps organization. [Create one for free](/azure/devops/pipelines/get-started/pipelines-sign-up).
+ - To use Microsoft-hosted agents, your Azure DevOps organization must have access to Microsoft-hosted parallel jobs. [Check your parallel jobs and request a free grant](/azure/devops/pipelines/troubleshooting/troubleshooting#check-for-available-parallel-jobs).
+- An Azure Active Directory [tenant](/azure/active-directory/develop/quickstart-create-new-tenant).
+- A [GitHub account](https://github.com) and Git [setup locally](https://docs.github.com/en/get-started/quickstart/set-up-git).
+- .NET 6.0 SDK or later.
+
+## Create a sample ASP.NET Core web app
+
+Create a sample app and push it to your GitHub repo.
+
+### Create and clone a repo in GitHub
+
+[Create a new repo](https://docs.github.com/en/get-started/quickstart/create-a-repo?tool=webui) in GitHub and specify a name like "PipelinesTest". Set it to **Private** and add a *.gitignore* file with `.gitignore template: VisualStudio`.
+
+Open a terminal window and change the current working directory to the location where you want the cloned directory:
+
+```
+cd c:\temp\
+```
+
+Enter the following command to clone the repo:
+
+```
+git clone https://github.com/YOUR-USERNAME/PipelinesTest
+cd PipelinesTest
+```
+
+### Create an ASP.NET Core web app
+
+1. Open a terminal window on your machine to a working directory. Create a new .NET web app using the [dotnet new webapp](/dotnet/core/tools/dotnet-new#web-options) command, and then change directories into the newly created app.
+
+ ```dotnetcli
+ dotnet new webapp -n PipelinesTest --framework net7.0
+ cd PipelinesTest
+ dotnet new sln
+ dotnet sln add .
+ ```
+
+1. From the same terminal session, run the application locally using the dotnet run command.
+
+ ```dotnetcli
+ dotnet run --urls=https://localhost:5001/
+ ```
+
+1. To verify the web app is running, open a web browser and navigate to the app at `https://localhost:5001`.
+
+You see the template ASP.NET Core 7.0 web app displayed in the page.
+
+Enter *CTRL-C* at the command line to stop running the web app.
+
+### Push the sample to GitHub
+
+Commit your changes and push to GitHub:
+
+```
+git add .
+git commit -m "Initial check-in"
+git push origin main
+```
+
+## Set up your Azure DevOps environment
+
+Sign in to your Azure DevOps organization (`https://dev.azure.com/{yourorganization}`).
+
+Create a new project:
+
+1. Select **New project**.
+1. Enter a **Project name**, such as "PipelinesTest".
+1. Select **Private** visibility.
+1. Select **Create**.
+
+## Create a new pipeline
+
+After the project is created, add a pipeline:
+
+1. In the left navigation pane, select **Pipelines**->**Pipelines**, and then select **Create Pipeline**.
+1. Select **GitHub YAML**.
+1. On the **Connect** tab, select **GitHub YAML**. When prompted, enter your GitHub credentials.
+1. When the list of repositories appears, select your `PipelinesTest` repository.
+1. You might be redirected to GitHub to install the Azure Pipelines app. If so, select **Approve & install**.
+1. In **Configure your pipeline**, select the **Starter pipeline**.
+1. A new pipeline with a basic configuration appears. The default configuration uses a Microsoft-hosted agent.
+1. When you're ready, select **Save and run**. To commit your changes to GitHub and start the pipeline, choose **Commit directly to the main branch** and select **Save and run** a second time. If prompted to grant permission with a message like **This pipeline needs permission to access a resource before this run can continue**, choose **View** and follow the prompts to permit access.
+
+## Add a build stage and build tasks to your pipeline
+
+Now that you have a working pipeline, you can add a build stage and build tasks in order to build the web app.
+
+Update *azure-pipelines.yml* and replace the basic pipeline configuration with the following:
+
+[!code-yml[](includes/deploy-web-app-authentication-pipeline/azure-pipeline-1.yml)]
+
+Save your changes and run the pipeline.
+
+A stage `Build` is defined to build the web app. Under the `steps` section, you see various tasks to build the web app and publish artifacts to the pipeline.
+
+- [NuGetToolInstaller@1](/azure/devops/pipelines/tasks/reference/nuget-tool-installer-v1) acquires NuGet and adds it to the PATH.
+- [NuGetCommand@2](/azure/devops/pipelines/tasks/reference/nuget-command-v2) restores NuGet packages in the solution.
+- [VSBuild@1](/azure/devops/pipelines/tasks/reference/vsbuild-v1) builds the solution with MSBuild and packages the app's build results (including its dependencies) as a .zip file into a folder.
+- [PublishBuildArtifacts@1](/azure/devops/pipelines/tasks/reference/publish-build-artifacts-v1) publishes the .zip file to Azure Pipelines.
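The referenced YAML include isn't reproduced here. For orientation only, a rough local equivalent of what that stage does — restore, build, and package the app as a .zip — using the `dotnet` CLI instead of the NuGet/VSBuild tasks (the solution name and paths are the example values from this article):

```bash
# Rough, illustrative local equivalent of the Build stage (not the pipeline's actual tasks).
dotnet restore PipelinesTest.sln
dotnet publish PipelinesTest.sln --configuration Release --output ./publish
(cd publish && zip -r ../PipelinesTest.zip .)
```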
+
+## Create a service connection
+
+Add a [service connection](/azure/devops/pipelines/library/service-endpoints) so your pipeline can connect and deploy resources to Azure:
+
+1. Select **Project settings**.
+1. In the left navigation pane, select **Service connections** and then **Create service connection**.
+1. Select **Azure Resource Manager** and then **Next**.
+1. Select **Service principal (automatic)** and then **Next**.
+1. Select **Subscription** for **scope level** and select your Azure subscription. Enter a service connection name such as "PipelinesTestServiceConnection" and select **Next**. The service connection name is used in the following steps.
+
+An application is also created in your Azure AD tenant that provides an identity for the pipeline. You need the display name of the app registration in later steps. To find the display name:
+
+1. Sign into the [Entra admin portal](https://entra.microsoft.com/).
+1. Select **App registrations** in the left navigation pane, and then select **All applications**.
+1. Find the display name of the app registration, which is of the form `{organization}-{project}-{guid}`. (A CLI lookup is sketched after these steps.)
+
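If you prefer the command line, here is a hedged sketch of that lookup with the Azure CLI — the organization and project prefix below is a placeholder you'd replace with your own names:

```bash
# List app registrations whose display name starts with "<organization>-<project>" (placeholder shown).
az ad app list \
  --filter "startswith(displayName,'myorg-PipelinesTest')" \
  --query "[].displayName" \
  --output table
```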
+Grant the service connection permission to access the pipeline:
+
+1. In the left navigation pane, select **Project settings** and then **Service connections**.
+1. Select the **PipelinesTestServiceConnection** service connection, then the **Ellipsis**, and then **Security** from the drop-down menu.
+1. In the **Pipeline permissions** section, select **Add pipeline** and then select the **PipelinesTest** pipeline from the list.
+
+## Add a variable group
+
+The `DeployAzureResources` stage that you create in the next section uses several values to create and deploy resources to Azure:
+
+- The Azure AD tenant ID, which you can find in the [Entra admin portal](https://entra.microsoft.com/).
+- The region, or location, where the resources are deployed.
+- A resource group name.
+- The App Service plan name.
+- The name of the web app.
+- The name of the service connection used to connect the pipeline to Azure. In the pipeline, this value is used for the Azure subscription.
+
+Create a [variable group](/azure/devops/pipelines/library/variable-groups) and add values to use as variables in the pipeline.
+
+Select **Library** in the left navigation pane and create a new **Variable group**. Give it the name "AzureResourcesVariableGroup".
+
+Add the following variables and values:
+
+| Variable name | Example value |
+| | |
+| LOCATION | centralus |
+| TENANTID | {tenant-id}|
+| RESOURCEGROUPNAME | pipelinetestgroup |
+| SVCPLANNAME | pipelinetestplan |
+| WEBAPPNAMETEST | pipelinetestwebapp |
+| AZURESUBSCRIPTION | PipelinesTestServiceConnection |
+
+Select **Save**.
+
+Give the pipeline permissions to access the variable group. In the variable group page, select **Pipeline permissions**, add your pipeline, and then close the window.
+
+Update *azure-pipelines.yml* and add the variable group to the pipeline.
+
+[!code-yml[](includes/deploy-web-app-authentication-pipeline/azure-pipeline-2.yml?highlight=1-2)]
+
+Save your changes and run the pipeline.
+
+## Deploy Azure resources
+
+Next, add a stage to the pipeline that deploys Azure resources. The pipeline uses an [inline script](/azure/devops/pipelines/scripts/powershell) to create the App Service instance. In a later step, the inline script creates an Azure AD app registration for App Service authentication. An Azure CLI bash script is used because Azure Resource Manager (and Azure pipeline tasks) can't create an app registration.
+
+Because the inline script runs in the context of the pipeline's app registration, assign the [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) role to that app registration so the script can create app registrations:
+
+1. Sign into the [Entra admin portal](https://entra.microsoft.com/).
+1. In the left navigation pane, select **Roles & admins**.
+1. Select **Application Administrator** from the list of built-in roles and then **Add assignment**.
+1. Search for the pipeline app registration by display name.
+1. Select the app registration from the list and select **Add**.
+
+Update *azure-pipelines.yml* to add the inline script, which creates a resource group in Azure, creates an App Service plan, and creates an App Service instance.
+
+[!code-yml[](includes/deploy-web-app-authentication-pipeline/azure-pipeline-3.yml?highlight=40-68)]
+
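The referenced include isn't reproduced in this excerpt. As a rough, illustrative sketch only — the SKU below is an assumption, and the variable names match the variable group defined earlier (available to the script as environment variables) — the inline Azure CLI script amounts to something like:

```bash
# Illustrative resource-creation commands; pipeline variables are exposed to the script
# as environment variables (LOCATION, RESOURCEGROUPNAME, SVCPLANNAME, WEBAPPNAMETEST).
az group create --name "$RESOURCEGROUPNAME" --location "$LOCATION"
az appservice plan create --name "$SVCPLANNAME" --resource-group "$RESOURCEGROUPNAME" --sku B1
az webapp create --name "$WEBAPPNAMETEST" --resource-group "$RESOURCEGROUPNAME" --plan "$SVCPLANNAME"
```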
+Save your changes and run the pipeline. In the [Azure portal](https://portal.azure.com), navigate to **Resource groups** and verify that a new resource group and App Service instance are created.
+
+## Deploy the web app to App Service
+
+Now that your pipeline is creating resources in Azure, add a deployment stage to deploy the web app to App Service.
+
+Update *azure-pipelines.yml* to add the deployment stage.
+
+[!code-yml[](includes/deploy-web-app-authentication-pipeline/azure-pipeline-4.yml?highlight=70-96)]
+
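For reference, the deployment that the `AzureRmWebAppDeployment@4` task performs could also be done by hand with a single Azure CLI command — an equivalent shown for illustration, not what the task runs, using the example names from this article:

```bash
# Deploy a packaged .zip build to the App Service instance created earlier (illustrative).
az webapp deploy \
  --resource-group pipelinetestgroup \
  --name pipelinetestwebapp \
  --src-path ./PipelinesTest.zip \
  --type zip
```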
+Save your changes and run the pipeline.
+
+A `DeployWebApp` stage is defined with several tasks:
+
+- [DownloadBuildArtifacts@1](/azure/devops/pipelines/tasks/reference/download-build-artifacts-v1) downloads the build artifacts that were published to the pipeline in a previous stage.
+- [AzureRmWebAppDeployment@4](/azure/devops/pipelines/tasks/reference/azure-rm-web-app-deployment-v4) deploys the web app to App Service.
+
+View the deployed website on App Service. Navigate to your App Service in the Azure portal and select the instance's **Default domain**: `https://pipelinetestwebapp.azurewebsites.net`.
+
+The *pipelinetestwebapp* has been successfully deployed to App Service.
+
+## Configure App Service authentication
+
+Now that the pipeline is deploying the web app to App Service, you can configure the [App Service built-in authentication](/azure/app-service/overview-authentication-authorization). Modify the inline script in the `DeployAzureResources` stage to:
+
+1. Create an Azure AD app registration as an identity for your web app. To create an app registration, the service principal for running the pipeline needs the Application Administrator role in the directory.
+1. Get a secret from the app.
+1. Configure the secret setting for the App Service web app.
+1. Configure the redirect URI, home page URI, and issuer settings for the App Service web app.
+1. Configure other settings on the web app.
+
+[!code-yml[](includes/deploy-web-app-authentication-pipeline/azure-pipeline-5.yml?highlight=68-108)]
+
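The include isn't shown in this excerpt. What follows is a heavily hedged sketch of the kind of Azure CLI commands such a script might contain — the exact flags, setting names, and ordering in the real include may differ, and the tenant ID is a placeholder:

```bash
# Illustrative only: register an app for the web app, mint a client secret,
# and enable App Service authentication against that registration.
TENANT_ID="<your-tenant-id>"
RESOURCE_GROUP="pipelinetestgroup"
WEBAPP_NAME="pipelinetestwebapp"
APP_URL="https://${WEBAPP_NAME}.azurewebsites.net"

# Create the app registration that serves as the web app's identity.
APP_ID=$(az ad app create \
  --display-name "$WEBAPP_NAME" \
  --web-home-page-url "$APP_URL" \
  --web-redirect-uris "${APP_URL}/.auth/login/aad/callback" \
  --enable-id-token-issuance true \
  --query appId --output tsv)

# Create a client secret for the registration.
CLIENT_SECRET=$(az ad app credential reset --id "$APP_ID" --query password --output tsv)

# Turn on built-in authentication for the web app using that registration and issuer.
az webapp auth update \
  --resource-group "$RESOURCE_GROUP" --name "$WEBAPP_NAME" \
  --enabled true \
  --action LoginWithAzureActiveDirectory \
  --aad-client-id "$APP_ID" \
  --aad-client-secret "$CLIENT_SECRET" \
  --aad-token-issuer-url "https://login.microsoftonline.com/${TENANT_ID}/v2.0"
```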
+Save your changes and run the pipeline.
+
+## Verify limited access to the web app
+
+To verify that access to your app is limited to users in your organization, navigate to your App Service in the [Azure portal](https://portal.azure.com) and select the instance's **Default domain**: `https://pipelinetestwebapp.azurewebsites.net`.
+
+You should be directed to a secured sign-in page, verifying that unauthenticated users aren't allowed access to the site. Sign in as a user in your organization to gain access to the site.
+
+You can also start up a new browser and try to sign in by using a personal account to verify that users outside the organization don't have access.
+
+## Clean up resources
+
+Clean up your Azure resources and Azure DevOps environment so you're not charged for resources after you're done.
+
+### Delete the resource group
+
+In the Azure portal, select **Resource groups** from the menu and select the resource group that contains your deployed web app.
+
+Select **Delete resource group** to delete the resource group and all the resources.
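Equivalently, from the Azure CLI (the resource group name is the example value used earlier):

```bash
# Delete the example resource group and everything in it.
az group delete --name pipelinetestgroup --yes --no-wait
```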
+
+### Disable the pipeline or delete the Azure DevOps project
+
+You created a project that points to a GitHub repository. The pipeline is triggered to run every time you push a change to your GitHub repository, consuming free build minutes or your resources.
+
+#### Option 1: Disable your pipeline
+
+Choose this option if you want to keep your project and your build pipeline for future reference. You can re-enable your pipeline later if you need to.
+
+1. In your Azure DevOps project, select **Pipelines** and then select your pipeline.
+1. Select the ellipsis button at the far right, and then select **Settings**.
+1. Select **Disabled**, and then select **Save**. Your pipeline will no longer process new run requests.
+
+#### Option 2: Delete your project
+
+Choose this option if you don't need your DevOps project for future reference. This deletes your Azure DevOps project.
+
+1. Navigate to your Azure DevOps project.
+1. Select **Project settings** in the lower-left corner.
+1. Under **Overview**, scroll down to the bottom of the page and then select **Delete**.
+1. Type your project name in the text box, and then select **Delete**.
+
+### Delete app registrations in Azure AD
+
+In the [Entra admin center](https://entra.microsoft.com/), select **Applications** > **App registrations** > **All applications**.
+
+Select the application for the pipeline (its display name has the form `{organization}-{project}-{guid}`) and delete it.
+
+Select the application for the web app, *pipelinetestwebapp*, and delete it.
+
+## Next steps
+
+Learn more about:
+
+- [App Service built-in authentication](/azure/app-service/overview-authentication-authorization).
+- [Deploy to App Service using Azure Pipelines](/azure/app-service/deploy-azure-pipelines)
active-directory Howto Create Service Principal Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-create-service-principal-portal.md
You must have sufficient permissions to register an application with your Azure
## Register an application with Azure AD and create a service principal
-1. Sign-in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Search for and select **Azure Active Directory**. 1. Select **App registrations**, then select **New registration**. 1. Name the application, for example "example-app".
To access resources in your subscription, you must assign a role to the applicat
You can set the scope at the level of the subscription, resource group, or resource. Permissions are inherited to lower levels of scope.
-1. Sign-in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select the level of scope you wish to assign the application to. For example, to assign a role at the subscription scope, search for and select **Subscriptions**. If you don't see the subscription you're looking for, select **global subscriptions filter**. Make sure the subscription you want is selected for the tenant. 1. Select **Access control (IAM)**. 1. Select **Add**, then select **Add role assignment**.
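The same assignment can be scripted. Here is a hedged Azure CLI sketch, assuming the built-in Reader role at subscription scope and placeholder IDs:

```bash
# Assign a role to the app's service principal at subscription scope (placeholders shown).
az role assignment create \
  --assignee "<application-client-id>" \
  --role "Reader" \
  --scope "/subscriptions/<subscription-id>"
```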
You might need to configure extra permissions on resources that your application
To configure access policies:
-1. Sign-in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select your key vault and select **Access policies**. 1. Select **Add access policy**, then select the key, secret, and certificate permissions you want to grant your application. Select the service principal you created previously. 1. Select **Add** to add the access policy.
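For example, the same access policy can be granted from the Azure CLI — the vault name, app ID, and permission list below are placeholders:

```bash
# Grant the service principal permission to read and list secrets in a key vault (placeholders shown).
az keyvault set-policy \
  --name "<your-key-vault-name>" \
  --spn "<application-client-id>" \
  --secret-permissions get list
```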
active-directory Optional Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/optional-claims.md
You can configure optional claims for your application through the Azure portal or application manifest.
-1. Go to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Search for and select **Azure Active Directory**. 1. Under **Manage**, select **App registrations**. 1. Choose the application for which you want to configure optional claims based on your scenario and desired outcome.
active-directory Quickstart V2 Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-javascript.md
See [How the sample works](#how-the-sample-works) for an illustration.
> > ### Option 1 (Express): Register and auto configure your app and then download your code sample >
-> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade/quickStartType/JavascriptSpaQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
+> 1. Sign in to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade/quickStartType/JavascriptSpaQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
> 1. Enter a name for your application. > 1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**. > 1. Select **Register**.
active-directory Quickstart V2 Nodejs Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-nodejs-console.md
> > ||| > > | `clientId` | Is the **Application (client) ID** for the application registered in the Azure portal. You can find this value in the app's **Overview** page in the Azure portal. | > > | `authority` | The STS endpoint for user to authenticate. Usually `https://login.microsoftonline.com/{tenant}` for public cloud, where {tenant} is the name of your tenant or your tenant Id.|
-> > | `clientSecret` | Is the client secret created for the application in Azure Portal. |
+> > | `clientSecret` | Is the client secret created for the application in the Azure portal. |
> > For more information, please see the [reference documentation for `ConfidentialClientApplication`](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-node/docs/initialize-confidential-client-application.md) >
> > > |Where:| Description | > > |||
-> > | `tokenRequest` | Contains the scopes requested. For confidential clients, this should use the format similar to `{Application ID URI}/.default` to indicate that the scopes being requested are the ones statically defined in the app object set in the Azure Portal (for Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`). For custom web APIs, `{Application ID URI}` is defined under **Expose an API** section in Azure Portal's Application Registration. |
+> > | `tokenRequest` | Contains the scopes requested. For confidential clients, this should use a format similar to `{Application ID URI}/.default` to indicate that the scopes being requested are the ones statically defined in the app object set in the Azure portal (for Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`). For custom web APIs, `{Application ID URI}` is defined under the **Expose an API** section of the app registration in the Azure portal. |
> > | `tokenResponse` | The response contains an access token for the scopes requested. | > > [!INCLUDE [Help and support](./includes/error-handling-and-tips/help-support-include.md)]
active-directory Reference App Manifest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-app-manifest.md
You can configure an app's attributes through the Azure portal or programmatical
To configure the application manifest:
-1. Go to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>. Search for and select the **Azure Active Directory** service.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>. Search for and select the **Azure Active Directory** service.
1. Select **App registrations**. 1. Select the app you want to configure. 1. From the app's **Overview** page, select the **Manifest** section. A web-based manifest editor opens, allowing you to edit the manifest within the portal. Optionally, you can select **Download** to edit the manifest locally, and then use **Upload** to reapply it to your application.
active-directory Reference Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-error-codes.md
The `error` field has several possible values - review the protocol documentatio
| AADSTS53000 | DeviceNotCompliant - Conditional Access policy requires a compliant device, and the device isn't compliant. The user must enroll their device with an approved MDM provider like Intune. For additional information, please visit [Conditional Access device remediation](../conditional-access/troubleshoot-conditional-access.md). | | AADSTS53001 | DeviceNotDomainJoined - Conditional Access policy requires a domain joined device, and the device isn't domain joined. Have the user use a domain joined device. | | AADSTS53002 | ApplicationUsedIsNotAnApprovedApp - The app used isn't an approved app for Conditional Access. User needs to use one of the apps from the list of approved apps to use in order to get access. |
-| AADSTS53003 | BlockedByConditionalAccess - Access has been blocked by Conditional Access policies. The access policy does not allow token issuance. If this is unexpected, see the conditional access policy that applied to this request in the Azure Portal or contact your administrator. For additional information, please visit [troubleshooting sign-in with Conditional Access](../conditional-access/troubleshoot-conditional-access.md). |
+| AADSTS53003 | BlockedByConditionalAccess - Access has been blocked by Conditional Access policies. The access policy does not allow token issuance. If this is unexpected, see the conditional access policy that applied to this request in the Azure portal or contact your administrator. For additional information, please visit [troubleshooting sign-in with Conditional Access](../conditional-access/troubleshoot-conditional-access.md). |
| AADSTS53004 | ProofUpBlockedDueToRisk - User needs to complete the multi-factor authentication registration process before accessing this content. User should register for multi-factor authentication. | | AADSTS53010 | ProofUpBlockedDueToSecurityInfoAcr - Cannot configure multi-factor authentication methods because the organization requires this information to be set from specific locations or devices. | | AADSTS53011 | User blocked due to risk on home tenant. |
The `error` field has several possible values - review the protocol documentatio
| AADSTS54000 | MinorUserBlockedLegalAgeGroupRule | | AADSTS54005 | OAuth2 Authorization code was already redeemed, please retry with a new valid code or use an existing refresh token. | | AADSTS65001 | DelegationDoesNotExist - The user or administrator has not consented to use the application with ID X. Send an interactive authorization request for this user and resource. |
-| AADSTS65002 | Consent between first party application '{applicationId}' and first party resource '{resourceId}' must be configured via preauthorization - applications owned and operated by Microsoft must get approval from the API owner before requesting tokens for that API. A developer in your tenant may be attempting to reuse an App ID owned by Microsoft. This error prevents them from impersonating a Microsoft application to call other APIs. They must move to another app ID they register in https://portal.azure.com.|
+| AADSTS65002 | Consent between first party application '{applicationId}' and first party resource '{resourceId}' must be configured via preauthorization - applications owned and operated by Microsoft must get approval from the API owner before requesting tokens for that API. A developer in your tenant may be attempting to reuse an App ID owned by Microsoft. This error prevents them from impersonating a Microsoft application to call other APIs. They must move to another app ID they register in the [Azure portal](https://portal.azure.com).|
| AADSTS65004 | UserDeclinedConsent - User declined to consent to access the app. Have the user retry the sign-in and consent to the app| | AADSTS65005 | MisconfiguredApplication - The app required resource access list does not contain apps discoverable by the resource or The client app has requested access to resource, which was not specified in its required resource access list or Graph service returned bad request or resource not found. If the app supports SAML, you may have configured the app with the wrong Identifier (Entity). To learn more, see the troubleshooting article for error [AADSTS650056](/troubleshoot/azure/active-directory/error-code-aadsts650056-misconfigured-app). | | AADSTS650052 | The app needs access to a service `(\"{name}\")` that your organization `\"{organization}\"` has not subscribed to or enabled. Contact your IT Admin to review the configuration of your service subscriptions. |
active-directory Registration Config Specific Application Property How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/registration-config-specific-application-property-how-to.md
This article gives you a brief description of all the available fields in the ap
## Register a new application -- To register a new application, navigate to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
+- To register a new application, sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
- From the left navigation pane, click **Azure Active Directory.**
active-directory Test Automate Integration Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/test-automate-integration-testing.md
Replace *{tenant}* with your tenant ID, *{your_client_ID}* with the client ID of
Your tenant likely has a conditional access policy that [requires multifactor authentication (MFA) for all users](../conditional-access/howto-conditional-access-policy-all-users-mfa.md), as recommended by Microsoft. MFA won't work with ROPC, so you'll need to exempt your test applications and test users from this requirement. To exclude user accounts:
-1. Navigate to the [Azure portal](https://portal.azure.com) and sign in to your tenant. Select **Azure Active Directory**. Select **Security** in the left navigation pane and then select **Conditional Access**.
+1. Sign in to the [Azure portal](https://portal.azure.com) with an account in your tenant. Select **Azure Active Directory**. Select **Security** in the left navigation pane and then select **Conditional Access**.
1. In **Policies**, select the conditional access policy that requires MFA. 1. Select **Users or workload identities**. 1. Select the **Exclude** tab and then the **Users and groups** checkbox.
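After the test users are excluded from MFA, an automated test can request tokens with the ROPC grant directly against the token endpoint. A minimal PowerShell sketch with placeholder tenant, client ID, scope, and test-account values (use only dedicated test accounts here):

```powershell
# Sketch: acquire a token for a test user with the ROPC grant.
$tokenBody = @{
    grant_type = "password"
    client_id  = "<your_client_ID>"
    scope      = "openid profile <scope-to-request>"
    username   = "<test-user>@<your-tenant>.onmicrosoft.com"
    password   = "<test-account-password>"
}

$response = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/<tenant>/oauth2/v2.0/token" `
    -Body $tokenBody

$response.access_token
```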
test.describe('Testing Authentication with MSAL.js ', () => {
```
-For more information, please check the following code sample [MSAL.js Testing Example](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/samples/msal-browser-samples/TestingSample).
+For more information, please check the following code sample [MSAL.js Testing Example](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/samples/msal-browser-samples/TestingSample).
active-directory Test Setup Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/test-setup-environment.md
Viewing your production tenant conditional access policies may need to be perfor
1. Navigate to **Cloud apps or actions**. 1. If the policy only applies to a select group of apps, then move on to the next policy. If not, then it will likely apply to your app as well when you move to production. You should copy the policy over to your test tenant.
-In a new tab or browser session, navigate to the [Azure portal](https://portal.azure.com), and sign into your test tenant.
+In a new tab or browser session, sign in to the [Azure portal](https://portal.azure.com) to access your test tenant.
1. Go to **Azure Active Directory** > **Enterprise applications** > **Conditional Access**. 1. Click on **New policy**
active-directory Tutorial V2 Windows Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-windows-desktop.md
You can register your application in either of two ways.
Use the following steps to register your application:
-1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/WinDesktopQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
+1. Sign in to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/WinDesktopQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
1. Enter a name for your application and select **Register**. 1. Follow the instructions to download and automatically configure your new application.
active-directory V2 Protocols Oidc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-protocols-oidc.md
The *ID token* introduced by OpenID Connect is issued by the authorization serve
ID tokens aren't issued by default for an application registered with the Microsoft identity platform. ID tokens for an application are enabled by using one of the following methods:
-1. Navigate to the [Azure portal](https://portal.azure.com) and select **Azure Active Directory** > **App registrations** > *\<your application\>* > **Authentication**.
+1. Sign in to the [Azure portal](https://portal.azure.com) and select **Azure Active Directory** > **App registrations** > *\<your application\>* > **Authentication**.
1. Under **Implicit grant and hybrid flows**, select the **ID tokens (used for implicit and hybrid flows)** checkbox. Or:
The value of `{tenant}` varies based on the application's sign-in audience as sh
> [!TIP] > Note that when using the `common` or `consumers` authority for personal Microsoft accounts, the consuming resource application must be configured to support such type of accounts in accordance with [signInAudience](./supported-accounts-validation.md).
-To find the OIDC configuration document in the Azure portal, navigate to the [Azure portal](https://portal.azure.com) and then:
+To find the OIDC configuration document in the Azure portal, sign in to the [Azure portal](https://portal.azure.com) and then:
1. Select **Azure Active Directory** > **App registrations** > *\<your application\>* > **Endpoints**. 1. Locate the URI under **OpenID Connect metadata document**.
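The same metadata document can be retrieved directly, without the portal. A sketch, where the tenant value is a placeholder (`common`, `organizations`, `consumers`, or a tenant ID):

```powershell
# Sketch: fetch the OpenID Connect metadata document for a tenant.
$tenant = "common"
$metadata = Invoke-RestMethod -Uri "https://login.microsoftonline.com/$tenant/v2.0/.well-known/openid-configuration"

# Endpoints an app typically reads from the document.
$metadata.authorization_endpoint
$metadata.token_endpoint
$metadata.jwks_uri
```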
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
Azure China 21Vianet:
### Authentication requirements
-[Azure AD Guest accounts](https://learn.microsoft.com/azure/active-directory/external-identities/what-is-b2b) cannot connect to Azure Bastion via Azure AD authentication.
+[Azure AD Guest accounts](/azure/active-directory/external-identities/what-is-b2b) cannot connect to Azure Bastion via Azure AD authentication.
## Enable Azure AD login for a Windows VM in Azure
Try these solutions:
- Verify that the user doesn't have a temporary password. Temporary passwords can't be used to log in to a remote desktop connection.
- Sign in with the user account in a web browser. For instance, open the [Azure portal](https://portal.azure.com) in a private browsing window. If you're prompted to change the password, set a new password. Then try connecting again.
+ Sign in with the user account in a web browser. For instance, sign in to the [Azure portal](https://portal.azure.com) in a private browsing window. If you're prompted to change the password, set a new password. Then try connecting again.
### MFA sign-in method required
active-directory Plan Device Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/plan-device-deployment.md
The key benefits of giving your devices an Azure AD identity:
* Improve user experience – Provide your users with easy access to your organization's cloud-based resources from both personal and corporate devices. Administrators can enable [Enterprise State Roaming](enterprise-state-roaming-overview.md) for a unified experience across all Windows devices.
-* Simplify deployment and management ΓÇô Simplify the process of bringing devices to Azure AD with [Windows Autopilot](/windows/deployment/windows-autopilot/windows-10-autopilot), [bulk provisioning](/mem/intune/enrollment/windows-bulk-enroll), or [self-service: Out of Box Experience (OOBE)](https://support.microsoft.com/account-billing/join-your-work-device-to-your-work-or-school-network-ef4d6adb-5095-4e51-829e-5457430f3973). Manage devices with Mobile Device Management (MDM) tools like [Microsoft Intune](/mem/intune/fundamentals/what-is-intune), and their identities in [Azure portal](https://portal.azure.com/).
+* Simplify deployment and management – Simplify the process of bringing devices to Azure AD with [Windows Autopilot](/windows/deployment/windows-autopilot/windows-10-autopilot), [bulk provisioning](/mem/intune/enrollment/windows-bulk-enroll), or [self-service: Out of Box Experience (OOBE)](https://support.microsoft.com/account-billing/join-your-work-device-to-your-work-or-school-network-ef4d6adb-5095-4e51-829e-5457430f3973). Manage devices with Mobile Device Management (MDM) tools like [Microsoft Intune](/mem/intune/fundamentals/what-is-intune), and their identities in the [Azure portal](https://portal.azure.com/).
## Plan the deployment project
active-directory Groups Bulk Download Members https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-bulk-download-members.md
You can bulk download the members of a group in your organization to a comma-sep
## To bulk download group membership
-1. Sign in to [the Azure portal](https://portal.azure.com) with an account in the organization.
+1. Sign in to the [Azure portal](https://portal.azure.com) with an account in the organization.
1. In Azure AD, select **Groups** > **All groups**. 1. Open the group whose membership you want to download, and then select **Members**. 1. On the **Members** page, select **Bulk operations** and choose, **Download members** to download a CSV file listing the group members.
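When the portal download isn't convenient, the membership list can also be exported from PowerShell. A sketch with the AzureAD module, using a placeholder group object ID and output path:

```powershell
# Sketch: export a group's members to a CSV file.
Connect-AzureAD
Get-AzureADGroupMember -ObjectId "<group-object-id>" -All $true |
    Select-Object DisplayName, UserPrincipalName, ObjectId |
    Export-Csv -Path .\group-members.csv -NoTypeInformation
```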
active-directory Groups Bulk Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-bulk-download.md
You can download a list of all the groups in your organization to a comma-separa
>[!NOTE] > The columns downloaded are pre-defined
-1. Sign in to [the Azure portal](https://portal.azure.com) with an account in your organization.
+1. Sign in to the [Azure portal](https://portal.azure.com) with an account in your organization.
1. In Azure AD, select **Groups** > **Download groups**. 1. On the **Groups download** page, select **Start** to receive a CSV file listing your groups.
active-directory Groups Bulk Import Members https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-bulk-import-members.md
The rows in a downloaded CSV template are as follows:
## To bulk import group members
-1. Sign in to [the Azure portal](https://portal.azure.com) with a User administrator account in the organization. Group owners can also bulk import members of groups they own.
+1. Sign in to the [Azure portal](https://portal.azure.com) with a User administrator account in the organization. Group owners can also bulk import members of groups they own.
1. In Azure AD, select **Groups** > **All groups**. 1. Open the group to which you're adding members and then select **Members**. 1. On the **Members** page, select **bulk operations** and then choose **Import members**.
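As an alternative to uploading the CSV in the portal, members can be added in bulk from PowerShell. A sketch with the AzureAD module, assuming a CSV with a `UserPrincipalName` column (adjust the column name to match your file) and a placeholder group object ID:

```powershell
# Sketch: add members to a group from a CSV of user principal names.
Connect-AzureAD
$groupId = "<group-object-id>"

Import-Csv .\members.csv | ForEach-Object {
    $user = Get-AzureADUser -ObjectId $_.UserPrincipalName
    Add-AzureADGroupMember -ObjectId $groupId -RefObjectId $user.ObjectId
}
```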
active-directory Groups Bulk Remove Members https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-bulk-remove-members.md
The rows in a downloaded CSV template are as follows:
## To bulk remove group members
-1. Sign in to [the Azure portal](https://portal.azure.com) with a User administrator account in the organization. Group owners can also bulk remove members of groups they own.
+1. Sign in to the [Azure portal](https://portal.azure.com) with a User administrator account in the organization. Group owners can also bulk remove members of groups they own.
1. In Azure AD, select **Groups** > **All groups**. 1. Open the group from which you're removing members and then select **Members**. 1. On the **Members** page, select **Remove members**.
active-directory Groups Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-lifecycle.md
For more information on permissions to restore a deleted group, see [Restore a d
## Set group expiration
-1. Open the [Azure portal](https://portal.azure.com) with an account that is a Global Administrator in your Azure AD organization.
+1. Sign in to the [Azure portal](https://portal.azure.com) with an account that is a Global Administrator in your Azure AD organization.
2. Browse to **Azure Active Directory** > **Groups**, then select **Expiration** to open the expiration settings.
Here are examples of how you can use PowerShell cmdlets to configure the expirat
Connect-AzureAD ```
-1. Configure the expiration settings
- Use the New-AzureADMSGroupLifecyclePolicy cmdlet to set the lifetime for all Microsoft 365 groups in the Azure AD organization to 365 days. Renewal notifications for Microsoft 365 groups without owners will be sent to 'emailaddress@contoso.com'
+1. Configure the expiration settings. Use the New-AzureADMSGroupLifecyclePolicy cmdlet to set the lifetime for all Microsoft 365 groups in the Azure AD organization to 365 days. Renewal notifications for Microsoft 365 groups without owners will be sent to `emailaddress@contoso.com`.
```powershell
New-AzureADMSGroupLifecyclePolicy -GroupLifetimeInDays 365 -ManagedGroupTypes All -AlternateNotificationEmails emailaddress@contoso.com
```
-1. Retrieve the existing policy
- Get-AzureADMSGroupLifecyclePolicy: This cmdlet retrieves the current Microsoft 365 group expiration settings that have been configured. In this example, you can see:
+1. Retrieve the existing policy. Get-AzureADMSGroupLifecyclePolicy: This cmdlet retrieves the current Microsoft 365 group expiration settings that have been configured. In this example, you can see:
- The policy ID - The lifetime for all Microsoft 365 groups in the Azure AD organization is set to 365 days
Here are examples of how you can use PowerShell cmdlets to configure the expirat
26fcc232-d1c3-4375-b68d-15c296f1f077 365 All emailaddress@contoso.com ```
-1. Update the existing policy
- Set-AzureADMSGroupLifecyclePolicy: This cmdlet is used to update an existing policy. In the example below, the group lifetime in the existing policy is changed from 365 days to 180 days.
+1. Update the existing policy. Set-AzureADMSGroupLifecyclePolicy: This cmdlet is used to update an existing policy. In the example below, the group lifetime in the existing policy is changed from 365 days to 180 days.
```powershell
Set-AzureADMSGroupLifecyclePolicy -Id "26fcc232-d1c3-4375-b68d-15c296f1f077" -GroupLifetimeInDays 180 -AlternateNotificationEmails "emailaddress@contoso.com"
```
-1. Add specific groups to the policy
- Add-AzureADMSLifecyclePolicyGroup: This cmdlet adds a group to the lifecycle policy. As an example:
+1. Add specific groups to the policy. Add-AzureADMSLifecyclePolicyGroup: This cmdlet adds a group to the lifecycle policy. As an example:
```powershell
Add-AzureADMSLifecyclePolicyGroup -Id "26fcc232-d1c3-4375-b68d-15c296f1f077" -groupId "cffd97bd-6b91-4c4e-b553-6918a320211c"
```
-1. Remove the existing Policy
- Remove-AzureADMSGroupLifecyclePolicy: This cmdlet deletes the Microsoft 365 group expiration settings but requires the policy ID. This cmdlet disables expiration for Microsoft 365 groups.
+1. Remove the existing policy. Remove-AzureADMSGroupLifecyclePolicy: This cmdlet deletes the Microsoft 365 group expiration settings but requires the policy ID. This cmdlet disables expiration for Microsoft 365 groups.
```powershell
Remove-AzureADMSGroupLifecyclePolicy -Id "26fcc232-d1c3-4375-b68d-15c296f1f077"
```
active-directory Licensing Directory Independence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-directory-independence.md
You can configure each Azure AD organization independently to get data synchroni
## Add an Azure AD organization
-To add an Azure AD organization in the Azure portal, sign in to [the Azure portal](https://portal.azure.com) with an account that is an Azure AD global administrator, and select **New**.
+To add an Azure AD organization in the Azure portal, sign in to the [Azure portal](https://portal.azure.com) with an account that is an Azure AD global administrator, and select **New**.
> [!NOTE] > Unlike other Azure resources, your Azure AD organizations are not child resources of an Azure subscription. If your Azure subscription is canceled or expired, you can still access your Azure AD organization's data using Azure PowerShell, the Microsoft Graph API, or the Microsoft 365 admin center. You can also [associate another subscription with the organization](../fundamentals/active-directory-how-subscriptions-associated-directory.md).
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
# Product names and service plan identifiers for licensing
-When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Microsoft_AAD_IAM/LicensesMenuBlade/Products) or the [Microsoft 365 admin center](https://admin.microsoft.com), you see product names that look something like *Office 365 E3*. When you use PowerShell v1.0 cmdlets, the same product is identified using a specific but less friendly name: *ENTERPRISEPACK*. When using PowerShell v2.0 cmdlets or [Microsoft Graph](/graph/api/resources/subscribedsku), the same product is identified using a GUID value: *6fd2c87f-b296-42f0-b197-1e91e994b900*. The following table lists the most commonly used Microsoft online service products and provides their various ID values. These tables are for reference purposes in Azure Active Directory (Azure AD), part of Microsoft Entra, and are accurate only as of the date when this article was last updated. Microsoft will continue to make periodic updates to this document.
+When managing licenses in the [Azure portal](https://portal.azure.com/#blade/Microsoft_AAD_IAM/LicensesMenuBlade/Products) or the [Microsoft 365 admin center](https://admin.microsoft.com), you see product names that look something like *Office 365 E3*. When you use PowerShell v1.0 cmdlets, the same product is identified using a specific but less friendly name: *ENTERPRISEPACK*. When using PowerShell v2.0 cmdlets or [Microsoft Graph](/graph/api/resources/subscribedsku), the same product is identified using a GUID value: *6fd2c87f-b296-42f0-b197-1e91e994b900*. The following table lists the most commonly used Microsoft online service products and provides their various ID values. These tables are for reference purposes in Azure Active Directory (Azure AD), part of Microsoft Entra, and are accurate only as of the date when this article was last updated. Microsoft will continue to make periodic updates to this document.
- **Product name**: Used in management portals - **String ID**: Used by PowerShell v1.0 cmdlets when performing operations on licenses or by the **skuPartNumber** property of the **subscribedSku** Microsoft Graph API
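To see how the three identifiers line up in your own tenant, you can list the subscribed SKUs through Microsoft Graph. A sketch using Microsoft Graph PowerShell:

```powershell
# Sketch: list the tenant's products with their string IDs (SkuPartNumber) and GUIDs (SkuId).
Connect-MgGraph -Scopes "Organization.Read.All"
Get-MgSubscribedSku | Select-Object SkuPartNumber, SkuId, ConsumedUnits
```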
active-directory B2b Tutorial Require Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-tutorial-require-mfa.md
To complete the scenario in this tutorial, you need:
## Test the sign-in experience before MFA setup
-1. Use your test user name and password to sign in to your [Azure portal](https://portal.azure.com/).
+1. Use your test user name and password to sign in to the [Azure portal](https://portal.azure.com/).
1. You should be able to access the Azure portal using only your sign-in credentials. No other authentication is required. 1. Sign out. ## Create a Conditional Access policy that requires MFA
-1. Sign in to your [Azure portal](https://portal.azure.com/) as a security administrator or a Conditional Access administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com/) as a security administrator or a Conditional Access administrator.
1. In the Azure portal, select **Azure Active Directory**. 1. In the left menu, under **Manage**, select **Security**. 1. Under **Protect**, select **Conditional Access**.
To complete the scenario in this tutorial, you need:
## Test your Conditional Access policy
-1. Use your test user name and password to sign in to your [Azure portal](https://portal.azure.com/).
+1. Use your test user name and password to sign in to the [Azure portal](https://portal.azure.com/).
1. You should see a request for more authentication methods. It can take some time for the policy to take effect. :::image type="content" source="media/tutorial-mfa/mfa-required.PNG" alt-text="Screenshot showing the More information required message.":::
active-directory Direct Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/direct-federation.md
Next, configure federation with the IdP configured in step 1 in Azure AD. You ca
7. (Optional) To add more domain names to this federating identity provider:
- a. Select the link in the **Domains** column.
+ 1. Select the link in the **Domains** column.
- ![Screenshot showing the link for adding domains to the SAML/WS-Fed identity provider.](media/direct-federation/new-saml-wsfed-idp-add-domain.png)
+ ![Screenshot showing the link for adding domains to the SAML/WS-Fed identity provider.](media/direct-federation/new-saml-wsfed-idp-add-domain.png)
- b. Next to **Domain name of federating IdP**, type the domain name, and then select **Add**. Repeat for each domain you want to add. When you're finished, select **Done**.
+ 1. Next to **Domain name of federating IdP**, type the domain name, and then select **Add**. Repeat for each domain you want to add. When you're finished, select **Done**.
- ![Screenshot showing the Add button in the domain details pane.](media/direct-federation/add-domain.png)
+ ![Screenshot showing the Add button in the domain details pane.](media/direct-federation/add-domain.png)
### To configure federation using the Microsoft Graph API
On the **All identity providers** page, you can view the list of SAML/WS-Fed ide
You can remove your federation configuration. If you do, federation guest users who have already redeemed their invitations can no longer sign in. But you can give them access to your resources again by [resetting their redemption status](reset-redemption-status.md). To remove a configuration for an IdP in the Azure portal:
-1. Go to the [Azure portal](https://portal.azure.com/). In the left pane, select **Azure Active Directory**.
+1. Sign in to the [Azure portal](https://portal.azure.com/). In the left pane, select **Azure Active Directory**.
1. Select **External Identities**. 1. Select **All identity providers**. 1. Under **SAML/WS-Fed identity providers**, scroll to the identity provider in the list or use the search box.
active-directory Google Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/google-federation.md
First, create a new project in the Google Developers Console to obtain a client
- `https://login.microsoftonline.com/te/<tenant name>.onmicrosoft.com/oauth2/authresp` <br>(where `<tenant name>` is your tenant name) > [!NOTE]
- > To find your tenant ID, go to the [Azure portal](https://portal.azure.com). Under **Azure Active Directory**, select **Properties** and copy the **Tenant ID**.
+ > To find your tenant ID, sign in to the [Azure portal](https://portal.azure.com). Under **Azure Active Directory**, select **Properties** and copy the **Tenant ID**.
1. Select **Create**. Copy your client ID and client secret. You'll use them when you add the identity provider in the Azure portal.
You'll now set the Google client ID and client secret. You can use the Azure por
You can delete your Google federation setup. If you do so, Google guest users who have already redeemed their invitation won't be able to sign in. But you can give them access to your resources again by [resetting their redemption status](reset-redemption-status.md). **To delete Google federation in the Azure portal**
-1. Go to the [Azure portal](https://portal.azure.com). On the left pane, select **Azure Active Directory**.
+1. Sign in to the [Azure portal](https://portal.azure.com). On the left pane, select **Azure Active Directory**.
2. Select **External Identities**. 3. Select **All identity providers**. 4. On the **Google** line, select the ellipsis button (**...**) and then select **Delete**.
You can delete your Google federation setup. If you do so, Google guest users wh
`Remove-AzureADMSIdentityProvider -Id Google-OAUTH` > [!NOTE]
- > For more information, see [Remove-AzureADMSIdentityProvider](/powershell/module/azuread/Remove-AzureADMSIdentityProvider?view=azureadps-2.0-preview&preserve-view=true).
+ > For more information, see [Remove-AzureADMSIdentityProvider](/powershell/module/azuread/Remove-AzureADMSIdentityProvider?view=azureadps-2.0-preview&preserve-view=true).
active-directory Hybrid Cloud To On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/hybrid-cloud-to-on-premises.md
You can use an [Azure AD B2B sample script](https://github.com/Azure-Samples/B2B
### Create B2B guest user objects through MIM
-You can use MIM 2016 Service Pack 1, and the MIM management agent for Microsoft Graph to create the guest user objects in the on-premises directory. To learn more, see [Azure AD business-to-business (B2B) collaboration with Microsoft Identity Manager (MIM) 2016 SP1 with Azure Application Proxy](/microsoft-identity-manager/microsoft-identity-manager-2016-graph-b2b-scenario).
+You can use MIM and the MIM connector for Microsoft Graph to create the guest user objects in the on-premises directory. To learn more, see [Azure AD business-to-business (B2B) collaboration with Microsoft Identity Manager (MIM) 2016 SP1 with Azure Application Proxy](/microsoft-identity-manager/microsoft-identity-manager-2016-graph-b2b-scenario).
## License considerations
-Make sure that you have the correct Client Access Licenses (CALs) for external guest users who access on-premises apps. For more information, see the "External Connectors" section of [Client Access Licenses and Management Licenses](https://www.microsoft.com/licensing/product-licensing/client-access-license.aspx). Consult your Microsoft representative or local reseller regarding your specific licensing needs.
+Make sure that you have the correct Client Access Licenses (CALs) or External Connectors for external guest users who access on-premises apps or whose identities are managed on-premises. For more information, see the "External Connectors" section of [Client Access Licenses and Management Licenses](https://www.microsoft.com/licensing/product-licensing/client-access-license.aspx). Consult your Microsoft representative or local reseller regarding your specific licensing needs.
## Next steps
active-directory Use Dynamic Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/use-dynamic-groups.md
# Create dynamic groups in Azure Active Directory B2B collaboration ## What are dynamic groups?
-A dynamic group is a dynamic configuration of security group membership for Azure Active Directory (Azure AD) available in [the Azure portal](https://portal.azure.com). Administrators can set rules to populate groups that are created in Azure AD based on user attributes (such as [userType](user-properties.md), department, or country/region). Members can be automatically added to or removed from a security group based on their attributes. These groups can provide access to applications or cloud resources (SharePoint sites, documents) and to assign licenses to members. Learn more about [dedicated groups in Azure Active Directory](../fundamentals/active-directory-groups-create-azure-portal.md).
+A dynamic group is a dynamic configuration of security group membership for Azure Active Directory (Azure AD) available in the [Azure portal](https://portal.azure.com). Administrators can set rules to populate groups that are created in Azure AD based on user attributes (such as [userType](user-properties.md), department, or country/region). Members can be automatically added to or removed from a security group based on their attributes. These groups can provide access to applications or cloud resources (SharePoint sites, documents) and to assign licenses to members. Learn more about [dedicated groups in Azure Active Directory](../fundamentals/active-directory-groups-create-azure-portal.md).
## Prerequisites [Azure AD Premium P1 or P2 licensing](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing) is required to create and use dynamic groups. Learn more in [Create attribute-based rules for dynamic group membership in Azure Active Directory](../enterprise-users/groups-dynamic-membership.md).
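For reference, a dynamic group for guests can also be created from PowerShell. A minimal sketch with the AzureAD module; the display name, mail nickname, and rule are illustrative:

```powershell
# Sketch: create a dynamic security group that contains all guest users.
Connect-AzureAD
New-AzureADMSGroup -DisplayName "All guest users" `
    -MailEnabled $false -SecurityEnabled $true -MailNickname "allguests" `
    -GroupTypes "DynamicMembership" `
    -MembershipRule '(user.userType -eq "Guest")' `
    -MembershipRuleProcessingState "On"
```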
active-directory Create New Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/create-new-tenant.md
After you sign in to the Azure portal, you can create a new tenant for your orga
### To create a new tenant
-1. Sign in to your organization's [Azure portal](https://portal.azure.com/).
+1. Sign in to the [Azure portal](https://portal.azure.com/).
1. From the Azure portal menu, select **Azure Active Directory**.
active-directory How To Get Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-get-support.md
Explore the range of [Azure support options and choose the plan](https://azure.m
> [!NOTE] > If you're using Azure AD B2C, open a support ticket by first switching to an Azure AD tenant that has an Azure subscription associated with it. Typically, this is your employee tenant or the default tenant created for you when you signed up for an Azure subscription. To learn more, see [how an Azure subscription is related to Azure AD](active-directory-how-subscriptions-associated-directory.md).
-1. Sign in to [the Azure portal](https://portal.azure.com) and open **Azure Active Directory**.
+1. Sign in to the [Azure portal](https://portal.azure.com) and open **Azure Active Directory**.
1. Scroll down to **Troubleshooting + Support** and select **New support request**.
Things can change quickly. The following resources provide updates and informati
* [Join the Microsoft Technical Community](https://techcommunity.microsoft.com/)
-* [Learn about the diagnostic data Azure identity support can access](https://azure.microsoft.com/support/legal/support-diagnostic-information-collection/)
+* [Learn about the diagnostic data Azure identity support can access](https://azure.microsoft.com/support/legal/support-diagnostic-information-collection/)
active-directory New Name https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/new-name.md
Microsoft Entra helps you protect all identities and secure network access every
### Where can I manage Microsoft Entra ID?
-You can manage Microsoft Entra ID and all other Microsoft Entra solutions in the [Microsoft Entra admin center](https://entra.microsoft.com) or [Azure portal](https://portal.azure.com).
+You can manage Microsoft Entra ID and all other Microsoft Entra solutions in the [Microsoft Entra admin center](https://entra.microsoft.com) or the [Azure portal](https://portal.azure.com).
### What are the display names for service plans and SKUs?
active-directory Recover From Deletions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/recover-from-deletions.md
Applications have two objects: the application registration and the service prin
To restore an application from the Azure portal, select **App registrations** > **Deleted applications**. Select the application registration to restore, and then select **Restore app registration**.
-[![Screenshot that shows the app registration restore process in the azure portal.](./media/recoverability/deletion-restore-application.png)](./media/recoverability/deletion-restore-application.png#lightbox)
+[![Screenshot that shows the app registration restore process in the Azure portal.](./media/recoverability/deletion-restore-application.png)](./media/recoverability/deletion-restore-application.png#lightbox)
Currently, service principals can be listed, viewed, hard deleted, or restored via the deletedItems Microsoft Graph API. To restore applications using Microsoft Graph, see [Restore deleted item - Microsoft Graph v1.0.](/graph/api/directory-deleteditems-restore?tabs=http).
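As a rough illustration of the deletedItems API, the same calls can be made from Microsoft Graph PowerShell. A sketch, with a placeholder object ID:

```powershell
# Sketch: list deleted applications, then restore one by its object ID.
Connect-MgGraph -Scopes "Application.ReadWrite.All"

# List recently deleted applications.
Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/directory/deletedItems/microsoft.graph.application"

# Restore a specific deleted item (works for applications and service principals).
Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/directory/deletedItems/<object-id>/restore"
```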
active-directory Whats New Sovereign Clouds Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-sovereign-clouds-archive.md
The primary [What's new in sovereign clouds release notes](whats-new-sovereign-c
+## December 2022
+
+### General Availability - Risk-based Conditional Access for workload identities
+
+**Type:** New feature
+**Service category:** Conditional Access
+**Product capability:** Identity Security & Protection
+
+Customers can now bring one of the most powerful forms of access control in the industry to workload identities. Conditional Access supports risk-based policies for workload identities. Organizations can block sign-in attempts when Identity Protection detects compromised apps or services. For more information, see: [Create a risk-based Conditional Access policy](../conditional-access/workload-identity.md#create-a-risk-based-conditional-access-policy).
+++
+### General Availability - API to recover accidentally deleted Service Principals
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** Identity Lifecycle Management
+
+Restore a recently deleted application, group, servicePrincipal, administrative unit, or user object from deleted items. If an item was accidentally deleted, you can fully restore the item. This isn't applicable to security groups, which are deleted permanently. A recently deleted item remains available for up to 30 days. After 30 days, the item is permanently deleted. For more information, see: [servicePrincipal resource type](/graph/api/resources/serviceprincipal).
+++
+### General Availability - Using Staged rollout to test Cert Based Authentication (CBA)
+
+**Type:** New feature
+**Service category:** Authentications (Logins)
+**Product capability:** Identity Security & Protection
+
+We're excited to announce the general availability of hybrid cloud Kerberos trust, a new Windows Hello for Business deployment model to enable a password-less sign-in experience. With this new model, we've made Windows Hello for Business easier to deploy than the existing key trust and certificate trust deployment models by removing the need for maintaining complicated public key infrastructure (PKI), and Azure Active Directory (AD) Connect synchronization wait times. For more information, see: [Migrate to cloud authentication using Staged Rollout](../hybrid/how-to-connect-staged-rollout.md).
++ ## November 2022
active-directory Whats New Sovereign Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-sovereign-clouds.md
Azure AD receives improvements on an ongoing basis. To stay up to date with the
This page updates monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Sovereign Clouds](whats-new-archive.md).
+## June 2023
+
+### General Availability - Apply RegEx Replace to groups claim content
+++
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** SSO
+
+Today, when group claims are added to tokens, Azure Active Directory attempts to include all of the groups the user is a member of. In larger organizations where users are members of hundreds of groups, this can often exceed the limits of what can go in the token. This feature enables more customers to connect their apps to Azure Active Directory by making connections easier and more robust through automation of the application's creation process. This specifically allows the set of groups included in the token to be limited to only those that are assigned to the application. For more information, see: [Regex-based claims transformation](../develop/saml-claims-customization.md#regex-based-claims-transformation).
+++
+### General Availability - Azure Active Directory SSO integration with Cisco Unified Communications Manager
+++
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** Platform
+
+Cisco Unified Communications Manager (Unified CM) provides reliable, secure, scalable, and manageable call control and session management. When you integrate Cisco Unified Communications Manager with Azure Active Directory, you can:
+
+- Control in Azure Active Directory who has access to Cisco Unified Communications Manager.
+- Enable your users to be automatically signed-in to Cisco Unified Communications Manager with their Azure AD accounts.
+- Manage your accounts in one central location - the Azure portal.
++
+For more information, see: [Azure Active Directory SSO integration with Cisco Unified Communications Manager](../saas-apps/cisco-unified-communications-manager-tutorial.md).
+++
+### General Availability - Number Matching for Microsoft Authenticator notifications
+
+**Type:** Plan for Change
+**Service category:** Microsoft Authenticator App
+**Product capability:** User Authentication
+
+Microsoft Authenticator app's number matching feature has been Generally Available since Nov 2022! If you haven't already used the rollout controls (via Azure portal Admin UX and MSGraph APIs) to smoothly deploy number matching for users of Microsoft Authenticator push notifications, we highly encourage you to do so. We previously announced that we'll remove the admin controls and enforce the number match experience tenant-wide for all users of Microsoft Authenticator push notifications starting February 27, 2023. After listening to customers, we'll extend the availability of the rollout controls for a few more weeks. Organizations can continue to use the existing rollout controls until May 8, 2023, to deploy number matching in their organizations. Microsoft services will start enforcing the number matching experience for all users of Microsoft Authenticator push notifications after May 8, 2023. We'll also remove the rollout controls for number matching after that date.
+
+If customers don't enable number match for all Microsoft Authenticator push notifications prior to May 8, 2023, Authenticator users may experience inconsistent sign-ins while the services are rolling out this change. To ensure consistent behavior for all users, we highly recommend you enable number match for Microsoft Authenticator push notifications in advance.
+
+For more information, see: [How to use number matching in multifactor authentication (MFA) notifications - Authentication methods policy](../authentication/how-to-mfa-number-match.md)
+++ ## May 2023 ### General Availability - Admins can now restrict users from self-service accessing their BitLocker keys
Represents a tenant's customizable terms of use agreement that is created, and m
-## December 2022
-
-### General Availability - Risk-based Conditional Access for workload identities
-
-**Type:** New feature
-**Service category:** Conditional Access
-**Product capability:** Identity Security & Protection
-
-Customers can now bring one of the most powerful forms of access control in the industry to workload identities. Conditional Access supports risk-based policies for workload identities. Organizations can block sign-in attempts when Identity Protection detects compromised apps or services. For more information, see: [Create a risk-based Conditional Access policy](../conditional-access/workload-identity.md#create-a-risk-based-conditional-access-policy).
---
-### General Availability - API to recover accidentally deleted Service Principals
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** Identity Lifecycle Management
-
-Restore a recently deleted application, group, servicePrincipal, administrative unit, or user object from deleted items. If an item was accidentally deleted, you can fully restore the item. This isn't applicable to security groups, which are deleted permanently. A recently deleted item remains available for up to 30 days. After 30 days, the item is permanently deleted. For more information, see: [servicePrincipal resource type](/graph/api/resources/serviceprincipal).
---
-### General Availability - Using Staged rollout to test Cert Based Authentication (CBA)
-
-**Type:** New feature
-**Service category:** Authentications (Logins)
-**Product capability:** Identity Security & Protection
-
-We're excited to announce the general availability of hybrid cloud Kerberos trust, a new Windows Hello for Business deployment model to enable a password-less sign-in experience. With this new model, weΓÇÖve made Windows Hello for Business easier to deploy than the existing key trust and certificate trust deployment models by removing the need for maintaining complicated public key infrastructure (PKI), and Azure Active Directory (AD) Connect synchronization wait times. For more information, see: [Migrate to cloud authentication using Staged Rollout](../hybrid/how-to-connect-staged-rollout.md).
-- ## Next steps <!-- Add a context sentence for the following links -->
active-directory Entitlement Management External Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-external-users.md
To ensure people outside of your organization can request access packages and ge
:::image type="content" source="media/entitlement-management-external-users/exclude-app-guests-selection.png" alt-text="Screenshot of the exclude guests app selection."::: > [!NOTE]
-> The Entitlement Management app includes the entitlement management side of MyAccess, the Entitlement Management side of Azure Portal and the Entitlement Management part of MS graph. The latter two require additional permissions for access, hence won't be accessed by guests unless explicit permission is provided.
+> The Entitlement Management app includes the entitlement management side of MyAccess, the entitlement management side of the Azure portal, and the entitlement management part of Microsoft Graph. The latter two require additional permissions for access, so guests can't access them unless explicit permission is provided.
### Review your SharePoint Online external sharing settings
active-directory Sap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/sap.md
After your users are in Azure AD, you can provision accounts into the various Sa
Customers who have yet to transition from applications such as SAP ERP Central Component (SAP ECC) to SAP S/4HANA can still rely on the Azure AD provisioning service to provision user accounts. Within SAP ECC, you expose the necessary Business Application Programming Interfaces (BAPIs) for creating, updating, and deleting users. Within Azure AD, you have two options:
-* Use the lightweight Azure AD provisioning agent and [web services connector](/azure/active-directory/app-provisioning/on-premises-web-services-connector) to [provision users into apps such as SAP ECC](https://learn.microsoft.com/azure/active-directory/app-provisioning/on-premises-sap-connector-configure?branch=pr-en-us-243167).
+* Use the lightweight Azure AD provisioning agent and [web services connector](/azure/active-directory/app-provisioning/on-premises-web-services-connector) to [provision users into apps such as SAP ECC](/azure/active-directory/app-provisioning/on-premises-sap-connector-configure?branch=pr-en-us-243167).
* In scenarios where you need to do more complex group and role management, use [Microsoft Identity Manager](/microsoft-identity-manager/reference/microsoft-identity-manager-2016-ma-ws) to manage access to your legacy SAP applications. ## Trigger custom workflows
active-directory Tutorial Prepare Azure Ad User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-prepare-azure-ad-user-accounts.md
The off-boarding tutorials only require one account that has group and Teams mem
## Before you begin
-In most cases, users are going to be provisioned to Azure AD either from an on-premises solution (Azure AD Connect, Cloud sync, etc.) or with an HR solution. These users have the attributes and values populated at the time of creation. Setting up the infrastructure to provision users is outside the scope of this tutorial. For information, see [Tutorial: Basic Active Directory environment](../cloud-sync/tutorial-basic-ad-azure.md) and [Tutorial: Integrate a single forest with a single Azure AD tenant](../cloud-sync/tutorial-single-forest.md)
+In most cases, users are going to be provisioned to Azure AD either from an on-premises solution (such as Azure AD Connect or Cloud sync) or with an HR solution. These users have the attributes and values populated at the time of creation. Setting up the infrastructure to provision users is outside the scope of this tutorial. For information, see [Tutorial: Basic Active Directory environment](../cloud-sync/tutorial-basic-ad-azure.md) and [Tutorial: Integrate a single forest with a single Azure AD tenant](../cloud-sync/tutorial-single-forest.md).
## Create users in Azure AD
First we create our employee, Melva Prince.
6. Select **Run query** 7. Copy the ID that is returned in the results. This is used later to assign a manager.
- ```HTTP
- {
- "accountEnabled": true,
- "displayName": "Melva Prince",
- "mailNickname": "mprince",
- "department": "sales",
- "mail": "mprince@<your tenant name here>",
- "employeeHireDate": "2022-04-15T22:10:00Z",
- "userPrincipalName": "mprince@<your tenant name here>",
- "passwordProfile" : {
- "forceChangePasswordNextSignIn": true,
- "password": "xWwvJ]6NMw+bWH-d"
- }
- }
+ ```json
+ {
+ "accountEnabled": true,
+ "displayName": "Melva Prince",
+ "mailNickname": "mprince",
+ "department": "sales",
+ "mail": "mprince@<your tenant name here>",
+ "employeeHireDate": "2022-04-15T22:10:00Z",
+ "userPrincipalName": "mprince@<your tenant name here>",
+ "passwordProfile" : {
+ "forceChangePasswordNextSignIn": true,
+ "password": "xWwvJ]6NMw+bWH-d"
+ }
+ }
``` :::image type="content" source="media/tutorial-lifecycle-workflows/graph-post-user.png" alt-text="Screenshot of POST create Melva in graph explorer." lightbox="media/tutorial-lifecycle-workflows/graph-post-user.png":::
Next, we create Britta Simon. This is the account that is used as our manager.
3. Copy the following code in to the **Request body** 4. Replace `<your tenant here>` in the following code with the value of your Azure AD tenant. 5. Select **Run query**
- 6. Copy the ID that is returned in the results. This is used later to assign a manager.
- ```HTTP
- {
- "accountEnabled": true,
- "displayName": "Britta Simon",
- "mailNickname": "bsimon",
- "department": "sales",
- "mail": "bsimon@<your tenant name here>",
- "employeeHireDate": "2021-01-15T22:10:00Z",
- "userPrincipalName": "bsimon@<your tenant name here>",
- "passwordProfile" : {
- "forceChangePasswordNextSignIn": true,
- "password": "xWwvJ]6NMw+bWH-d"
- }
- }
- ```
+ 6. Copy the ID that is returned in the results. This is used later to assign a manager.
+ ```json
+ {
+ "accountEnabled": true,
+ "displayName": "Britta Simon",
+ "mailNickname": "bsimon",
+ "department": "sales",
+ "mail": "bsimon@<your tenant name here>",
+ "employeeHireDate": "2021-01-15T22:10:00Z",
+ "userPrincipalName": "bsimon@<your tenant name here>",
+ "passwordProfile" : {
+ "forceChangePasswordNextSignIn": true,
+ "password": "xWwvJ]6NMw+bWH-d"
+ }
+ }
+ ```
>[!NOTE] > You need to change the &lt;your tenant name here&gt; section of the code to match your Azure AD tenant.
For the tutorial, the **mail** attribute only needs to be set on the manager acc
8. Go back to users and select **Britta Simon**. 9. At the top, select **Edit**. 10. Under **Email**, enter a valid email address.
- 11. select **Save**.
+ 11. Select **Save**.
### Edit employeeHireDate The employeeHireDate attribute is new to Azure AD. It isn't exposed through the UI and must be updated using Graph. To edit this attribute, we can use [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer).
The employeeHireDate attribute is new to Azure AD. It isn't exposed through the
In order to do this, we must get the object ID for our user Melva Prince.
- 1. Sign in to [Azure portal](https://portal.azure.com).
+ 1. Sign in to the [Azure portal](https://portal.azure.com).
2. On the right, select **Azure Active Directory**. 3. Select **Users**. 4. Select **Melva Prince**.
In order to do this, we must get the object ID for our user Melva Prince.
``` :::image type="content" source="media/tutorial-lifecycle-workflows/update-1.png" alt-text="Screenshot of the PATCH employeeHireDate." lightbox="media/tutorial-lifecycle-workflows/update-1.png":::
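The PATCH body itself isn't reproduced in this excerpt. As a rough sketch of the kind of request this step performs (the object ID, endpoint version, and date below are placeholder assumptions), the same update can be sent with Microsoft Graph PowerShell:

```powershell
# Sketch only: set employeeHireDate on the user by object ID, then verify as step 10 describes.
$melvaId = "<object ID copied in the earlier steps>"
Invoke-MgGraphRequest -Method PATCH -Uri "https://graph.microsoft.com/v1.0/users/$melvaId" `
    -Body @{ employeeHireDate = "2022-04-15T22:10:00Z" }
Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/beta/users/$melvaId"
```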
- 10. Verify the change by changing **PATCH** back to **GET** and **v1.0** to **beta**. select **Run query**. You should see the attributes for Melva set.
+ 10. Verify the change by changing **PATCH** back to **GET** and **v1.0** to **beta**. Select **Run query**. You should see the attributes for Melva set.
:::image type="content" source="media/tutorial-lifecycle-workflows/update-3.png" alt-text="Screenshot of the GET employeeHireDate." lightbox="media/tutorial-lifecycle-workflows/update-3.png"::: ### Edit the manager attribute on the employee account
A user with groups and Teams memberships is required before you begin the tutori
- [On-boarding users to your organization using Lifecycle workflows with Azure portal](tutorial-onboard-custom-workflow-portal.md) - [On-boarding users to your organization using Lifecycle workflows with Microsoft Graph](tutorial-onboard-custom-workflow-graph.md) - [Tutorial: Off-boarding users from your organization using Lifecycle workflows with Azure portal](tutorial-offboard-custom-workflow-portal.md)-- [Tutorial: Off-boarding users from your organization using Lifecycle workflows with Microsoft Graph](tutorial-offboard-custom-workflow-graph.md)
+- [Tutorial: Off-boarding users from your organization using Lifecycle workflows with Microsoft Graph](tutorial-offboard-custom-workflow-graph.md)
active-directory What Is Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/what-is-provisioning.md
For more information, see [What is HR driven provisioning?](../app-provisioning/
In Azure AD, the term **[app provisioning](../app-provisioning/user-provisioning.md)** refers to automatically creating copies of user identities in the applications that users need access to, for applications that have their own data store, distinct from Azure AD or Active Directory. In addition to creating user identities, app provisioning includes the maintenance and removal of user identities from those apps, as the user's status or roles change. Common scenarios include provisioning an Azure AD user into applications like [Dropbox](../saas-apps/dropboxforbusiness-provisioning-tutorial.md), [Salesforce](../saas-apps/salesforce-provisioning-tutorial.md), [ServiceNow](../saas-apps/servicenow-provisioning-tutorial.md), as each of these applications have their own user repository distinct from Azure AD.
-Azure AD also supports provisioning users into applications hosted on-premises or in a virtual machine, without having to open up any firewalls. If your application supports [SCIM](https://aka.ms/scimoverview), or you've built a SCIM gateway to connect to your legacy application, you can use the Azure AD Provisioning agent to [directly connect](https://learn.microsoft.com/azure/active-directory/app-provisioning/on-premises-scim-provisioning) with your application and automate provisioning and deprovisioning. If you have legacy applications that don't support SCIM and rely on an [LDAP](https://learn.microsoft.com/azure/active-directory/app-provisioning/on-premises-ldap-connector-configure) user store or a [SQL](https://learn.microsoft.com/azure/active-directory/app-provisioning/on-premises-sql-connector-configure) database, Azure AD can support those as well.
+Azure AD also supports provisioning users into applications hosted on-premises or in a virtual machine, without having to open up any firewalls. If your application supports [SCIM](https://aka.ms/scimoverview), or you've built a SCIM gateway to connect to your legacy application, you can use the Azure AD Provisioning agent to [directly connect](/azure/active-directory/app-provisioning/on-premises-scim-provisioning) with your application and automate provisioning and deprovisioning. If you have legacy applications that don't support SCIM and rely on an [LDAP](/azure/active-directory/app-provisioning/on-premises-ldap-connector-configure) user store or a [SQL](/azure/active-directory/app-provisioning/on-premises-sql-connector-configure) database, Azure AD can support those as well.
For more information, see [What is app provisioning?](../app-provisioning/user-provisioning.md)
active-directory Tutorial Basic Ad Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/tutorial-basic-ad-azure.md
Now that you have the VM created and it has been renamed and has a static IP add
1. Open up the PowerShell ISE as Administrator. 2. Run the following script.
- ```powershell
+ ```powershell
#Declare variables $DatabasePath = "c:\windows\NTDS" $DomainMode = "WinThreshold"
Now that you have our Active Directory environment, you need a test account.
1. Open up the PowerShell ISE as Administrator. 2. Run the following script.
- ```powershell
+ ```powershell
# Filename: 4_CreateUser.ps1 # Description: Creates a user in Active Directory. This is part of # the Azure AD Connect password hash sync tutorial.
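# (Not the tutorial's full 4_CreateUser.ps1 script, which isn't reproduced here.)
# A minimal sketch of creating a test user with the ActiveDirectory module;
# the user name, UPN suffix, and password prompt are placeholder assumptions.
Import-Module ActiveDirectory
$testUserPassword = Read-Host -AsSecureString -Prompt "Password for the test user"
New-ADUser -Name "Test User" `
    -SamAccountName "testuser" `
    -UserPrincipalName "testuser@contoso.com" `
    -AccountPassword $testUserPassword `
    -Enabled $true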
Now that you have our Active Directory environment, you need a test account.
## Create an Azure AD tenant Now you need to create an Azure AD tenant so that you can synchronize our users to the cloud. To create a new Azure AD tenant, do the following.
-1. Browse to the [Azure portal](https://portal.azure.com) and sign in with an account that has an Azure subscription.
+1. Sign in to the [Azure portal](https://portal.azure.com) with an account that has an Azure subscription.
2. Select the **plus icon (+)** and search for **Azure Active Directory**. 3. Select **Azure Active Directory** in the search results. 4. Select **Create**.</br>
active-directory Tutorial Existing Forest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/tutorial-existing-forest.md
If you're using the [Basic AD and Azure environment](tutorial-basic-ad-azure.md
You'll now verify that the users that you had in our on-premises directory have been synchronized and now exist in our Azure AD tenant. This process may take a few hours to complete. To verify users are synchronized, do the following:
-1. Browse to the [Azure portal](https://portal.azure.com) and sign in with an account that has an Azure subscription.
+1. Sign in to the [Azure portal](https://portal.azure.com) with an account that has an Azure subscription.
2. On the left, select **Azure Active Directory** 3. Under **Manage**, select **Users**. 4. Verify that you see the new users in our tenant
active-directory Tutorial Single Forest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/tutorial-single-forest.md
Use the following steps to configure and start the provisioning:
You'll now verify that the users that you had in your on-premises directory have been synchronized and now exist in your Azure AD tenant. The sync operation may take a few hours to complete. To verify users are synchronized, follow these steps:
-1. Browse to the [Azure portal](https://portal.azure.com) and sign in with an account that has an Azure subscription.
+1. Sign in to the [Azure portal](https://portal.azure.com) with an account that has an Azure subscription.
2. On the left, select **Azure Active Directory** 3. Under **Manage**, select **Users**. 4. Verify that the new users appear in your tenant
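As an optional alternative to checking in the portal, a short Microsoft Graph PowerShell sketch (the module and read permissions are assumptions, not part of the tutorial) lists the users so you can confirm the synced accounts appear:

```powershell
# Sketch only: list users in the tenant and look for the newly synced accounts.
Connect-MgGraph -Scopes "User.Read.All"
Get-MgUser -All | Select-Object DisplayName, UserPrincipalName
```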
active-directory Choose Ad Authn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/choose-ad-authn.md
The following diagrams outline the high-level architecture components required f
|What are the on-premises server requirements beyond the provisioning system: Azure AD Connect?|None|One server for each additional authentication agent|Two or more AD FS servers<br><br>Two or more WAP servers in the perimeter/DMZ network| |What are the requirements for on-premises Internet and networking beyond the provisioning system?|None|[Outbound Internet access](how-to-connect-pta-quick-start.md) from the servers running authentication agents|[Inbound Internet access](/windows-server/identity/ad-fs/overview/ad-fs-requirements) to WAP servers in the perimeter<br><br>Inbound network access to AD FS servers from WAP servers in the perimeter<br><br>Network load balancing| |Is there a TLS/SSL certificate requirement?|No|No|Yes|
-|Is there a health monitoring solution?|Not required|Agent status provided by [Azure portal](tshoot-connect-pass-through-authentication.md)|[Azure AD Connect Health](how-to-connect-health-adfs.md)|
+|Is there a health monitoring solution?|Not required|Agent status provided by the [Azure portal](tshoot-connect-pass-through-authentication.md)|[Azure AD Connect Health](how-to-connect-health-adfs.md)|
|Do users get single sign-on to cloud resources from domain-joined devices within the company network?|Yes with [Azure AD joined devices (AADJ)](../../devices/concept-azure-ad-join.md), [Hybrid Azure AD joined devices (HAADJ)](../../devices/howto-hybrid-azure-ad-join.md), the [Microsoft Enterprise SSO plug-in for Apple devices](../../develop/apple-sso-plugin.md), or [Seamless SSO](how-to-connect-sso.md)|Yes with [Azure AD joined devices (AADJ)](../../devices/concept-azure-ad-join.md), [Hybrid Azure AD joined devices (HAADJ)](../../devices/howto-hybrid-azure-ad-join.md), the [Microsoft Enterprise SSO plug-in for Apple devices](../../develop/apple-sso-plugin.md), or [Seamless SSO](how-to-connect-sso.md)|Yes| |What sign-in types are supported?|UserPrincipalName + password<br><br>Windows-Integrated Authentication by using [Seamless SSO](how-to-connect-sso.md)<br><br>[Alternate login ID](how-to-connect-install-custom.md)<br><br>[Azure AD Joined Devices](../../devices/concept-azure-ad-join.md)<br><br>[Hybrid Azure AD joined devices (HAADJ)](../../devices/howto-hybrid-azure-ad-join.md)<br><br>[Certificate and smart card authentication](../../authentication/concept-certificate-based-authentication-smartcard.md)|UserPrincipalName + password<br><br>Windows-Integrated Authentication by using [Seamless SSO](how-to-connect-sso.md)<br><br>[Alternate login ID](how-to-connect-pta-faq.yml)<br><br>[Azure AD Joined Devices](../../devices/concept-azure-ad-join.md)<br><br>[Hybrid Azure AD joined devices (HAADJ)](../../devices/howto-hybrid-azure-ad-join.md)<br><br>[Certificate and smart card authentication](../../authentication/concept-certificate-based-authentication-smartcard.md)|UserPrincipalName + password<br><br>sAMAccountName + password<br><br>Windows-Integrated Authentication<br><br>[Certificate and smart card authentication](/windows-server/identity/ad-fs/operations/configure-user-certificate-authentication)<br><br>[Alternate login ID](/windows-server/identity/ad-fs/operations/configuring-alternate-login-id)| |Is Windows Hello for Business supported?|[Key trust model](/windows/security/identity-protection/hello-for-business/hello-identity-verification)<br><br>[Hybrid Cloud Trust](/windows/security/identity-protection/hello-for-business/hello-hybrid-cloud-trust)|[Key trust model](/windows/security/identity-protection/hello-for-business/hello-identity-verification)<br><br>[Hybrid Cloud Trust](/windows/security/identity-protection/hello-for-business/hello-hybrid-cloud-trust)<br><br>*Both require Windows Server 2016 Domain functional level*|[Key trust model](/windows/security/identity-protection/hello-for-business/hello-identity-verification)<br><br>[Hybrid Cloud Trust](/windows/security/identity-protection/hello-for-business/hello-hybrid-cloud-trust)<br><br>[Certificate trust model](/windows/security/identity-protection/hello-for-business/hello-key-trust-adfs)|
In today's world, threats are present 24 hours a day and come from everywhere. I
[Get started](../../fundamentals/active-directory-whatis.md) with Azure AD and deploy the right authentication solution for your organization.
-If you're thinking about migrating from federated to cloud authentication, learn more about [changing the sign-in method](plan-connect-user-signin.md). To help you plan and implement the migration, use [these project deployment plans](../../fundamentals/deployment-plans.md), or consider using the new [Staged Rollout](how-to-connect-staged-rollout.md) feature to migrate federated users to using cloud authentication in a staged approach.
+If you're thinking about migrating from federated to cloud authentication, learn more about [changing the sign-in method](plan-connect-user-signin.md). To help you plan and implement the migration, use [these project deployment plans](../../fundamentals/deployment-plans.md), or consider using the new [Staged Rollout](how-to-connect-staged-rollout.md) feature to migrate federated users to using cloud authentication in a staged approach.
active-directory How To Connect Health Adfs Risky Ip Workbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-health-adfs-risky-ip-workbook.md
Alerting threshold can be updated through Threshold Settings. To start with, sys
**Hour or Day** detection window length can be configured through the toggle button above the filters for customizing thresholds.
-## Configure notification alerts using Azure Monitor Alerts through the Azure Portal:
+<a name='configure-notification-alerts-using-azure-monitor-alerts-through-the-azure-portal'></a>
+
+## Configure notification alerts using Azure Monitor Alerts through the Azure portal:
[![Azure Alerts Rule](./media/how-to-connect-health-adfs-risky-ip-workbook/azure-alerts-rule-1.png)](./media/how-to-connect-health-adfs-risky-ip-workbook/azure-alerts-rule-1.png#lightbox)
-1. In the Azure Portal, search for "Monitor" in the search bar to navigate to the Azure "Monitor" service. Select "Alerts" from the left menu, then "+ New alert rule".
+1. In the Azure portal, search for "Monitor" in the search bar to navigate to the Azure "Monitor" service. Select "Alerts" from the left menu, then "+ New alert rule".
2. On the "Create alert rule" blade: * Scope: Click "Select resource" and select your Log Analytics workspace that contains the ADFSSignInLogs you wish to monitor. * Condition: Click "Add condition". Select "Log" for Signal type and "Log analytics" for Monitor service. Choose "Custom log search".
active-directory How To Connect Health Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-health-operations.md
You can configure the Azure AD Connect Health service to send email notification
> ### To enable Azure AD Connect Health email notifications
-1. In the Azure Portal, search for Azure AD Connect Health
+1. In the Azure portal, search for Azure AD Connect Health
2. Select **Sync errors** 3. Select **Notification Settings**. 5. At the email notification switch, select **ON**.
active-directory How To Connect Health Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-health-sync.md
By selecting an alert you will be provided with additional information as well a
### Limited Evaluation of Alerts If Azure AD Connect is NOT using the default configuration (for example, if Attribute Filtering is changed from the default configuration to a custom configuration), then the Azure AD Connect Health agent will not upload the error events related to Azure AD Connect.
-This limits the evaluation of alerts by the service. You will see a banner that indicates this condition in the Azure Portal under your service.
+This limits the evaluation of alerts by the service. You will see a banner that indicates this condition in the Azure portal under your service.
![Screenshot of the alert banner that says Alert evaluation is limited. Update your settings to enable all alerts.](./media/how-to-connect-health-sync/banner.png)
Admins frequently want to know about the time it takes to sync changes to Azure
* Object Change trend ### Sync Latency
-This feature provides a graphical trend of latency of the sync operations (import, export, etc.) for connectors. This provides a quick and easy way to understand not only the latency of your operations (larger if you have a large set of changes occurring) but also a way to detect anomalies in the latency that may require further investigation.
+This feature provides a graphical trend of latency of the sync operations (such as import and export) for connectors. This provides a quick and easy way to understand not only the latency of your operations (larger if you have a large set of changes occurring) but also a way to detect anomalies in the latency that may require further investigation.
![Screenshot of the Run Profile Latency from past 3 days graph.](./media/how-to-connect-health-sync/synclatency02.png)
active-directory How To Connect Pta Disable Do Not Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-pta-disable-do-not-configure.md
Before you begin, ensure that you have the following prerequisite.
If you don't already have an agent, you can install it.
- 1. Go to the [Azure portal](https://portal.azure.com).
+ 1. Sign in to the [Azure portal](https://portal.azure.com).
1. Download the latest Auth Agent. 1. Install the feature by running either of the following commands. * `.\AADConnectAuthAgentSetup.exe`
active-directory Concept Workload Identity Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-workload-identity-risk.md
To make use of workload identity risk, including the new **Risky workload identi
- Global Administrator - Security Administrator - Security Operator
- - Security Reader
-Users assigned the Conditional Access administrator role can create policies that use risk as a condition.
+ - Security Reader Users assigned the Conditional Access administrator role can create policies that use risk as a condition.
## Workload identity risk detections
We detect risk on workload identities across sign-in behavior and offline indica
Organizations can find workload identities that have been flagged for risk in one of two locations:
-1. Navigate to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Browse to **Azure Active Directory** > **Security** > **Risky workload identities**. 1. Or browse to **Azure Active Directory** > **Security** > **Risk detections**. 1. Select the **Workload identity detections** tab.
Organizations can find workload identities that have been flagged for risk in on
You can also query risky workload identities [using the Microsoft Graph API](/graph/use-the-api). There are two new collections in the [Identity Protection APIs](/graph/api/resources/identityprotection-root). -- riskyServicePrincipals-- servicePrincipalRiskDetections
+- `riskyServicePrincipals`
+- `servicePrincipalRiskDetections`
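A minimal Microsoft Graph PowerShell sketch of querying both collections follows. The `v1.0` path and the `IdentityRiskyServicePrincipal.Read.All` scope are assumptions; check the Identity Protection API reference for the current endpoint version and required permissions.

```powershell
# Sketch only: list risky workload identities and their risk detections.
Connect-MgGraph -Scopes "IdentityRiskyServicePrincipal.Read.All"
Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/identityProtection/riskyServicePrincipals"
Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/identityProtection/servicePrincipalRiskDetections"
```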
### Export risk data
The [Azure AD Toolkit](https://github.com/microsoft/AzureADToolkit) is a PowerSh
- [Azure AD audit logs](../reports-monitoring/concept-audit-logs.md) - [Azure AD sign-in logs](../reports-monitoring/concept-sign-ins.md) - [Simulate risk detections](howto-identity-protection-simulate-risk.md)--
active-directory Howto Identity Protection Configure Mfa Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-configure-mfa-policy.md
For more information on Azure AD multifactor authentication, see [What is Azure
## Policy configuration
-1. Navigate to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Browse to **Azure Active Directory** > **Security** > **Identity Protection** > **MFA registration policy**. 1. Under **Assignments** > **Users** 1. Under **Include**, select **All users** or **Select individuals and groups** if limiting your rollout.
active-directory Howto Identity Protection Simulate Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-simulate-risk.md
The sign-in shows up in the Identity Protection dashboard within 2-4 hours.
This risk detection indicates that the application's valid credentials have been leaked. This leak can occur when someone checks in the credentials in a public code artifact on GitHub. Therefore, to simulate this detection, you need a GitHub account and can [sign up a GitHub account](https://docs.github.com/get-started/signing-up-for-github) if you don't have one already. **To simulate Leaked Credentials in GitHub for Workload Identities, perform the following steps**:
-1. Navigate to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. Browse to **Azure Active Directory** > **App registrations**. 3. Select **New registration** to register a new application or reuse an existing stale application. 4. Select **Certificates & Secrets** > **New client Secret** , add a description of your client secret and set an expiration for the secret or specify a custom lifetime and select **Add**. Record the secret's value for later use for your GitHub Commit.
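If you prefer to script the app registration and client secret instead of using the portal, here's a rough Microsoft Graph PowerShell sketch. The display names are placeholders and this isn't part of the original simulation steps.

```powershell
# Sketch only: register a throwaway app and add a client secret to it.
Connect-MgGraph -Scopes "Application.ReadWrite.All"
$app = Invoke-MgGraphRequest -Method POST -Uri "https://graph.microsoft.com/v1.0/applications" `
    -Body @{ displayName = "Leaked-credential-simulation" }
$secret = Invoke-MgGraphRequest -Method POST -Uri "https://graph.microsoft.com/v1.0/applications/$($app.id)/addPassword" `
    -Body @{ passwordCredential = @{ displayName = "simulation secret" } }
$secret.secretText   # record this value for the GitHub commit in the later steps
```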
This section provides you with steps for testing the user and the sign-in risk p
To test a user risk security policy, perform the following steps:
-1. Navigate to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Browse to **Azure Active Directory** > **Security** > **Identity Protection** > **Overview**. 1. Select **Configure user risk policy**. 1. Under **Assignments**
To test a user risk security policy, perform the following steps:
To test a sign-in risk policy, perform the following steps:
-1. Navigate to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Browse to **Azure Active Directory** > **Security** > **Identity Protection** > **Overview**. 1. Select **Configure sign-in risk policy**. 1. Under **Assignments**
active-directory Add Application Portal Assign Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-assign-users.md
To create a user account and assign it to an enterprise application, you need:
To create a user account in your Azure AD tenant:
-1. Go to the [Azure portal](https://portal.azure.com) and sign in using one of the roles listed in the prerequisites.
+1. Sign in to the [Azure portal](https://portal.azure.com) using one of the roles listed in the prerequisites.
1. Browse to **Azure Active Directory** and select **Users**. 1. Select **New user** at the top of the pane.
active-directory Add Application Portal Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-configure.md
Application properties control how the application is represented and how the ap
To configure the application properties:
-1. Go to the [Azure portal](https://portal.azure.com) and sign in using one of the roles listed in the prerequisites.
+1. Sign in to the [Azure portal](https://portal.azure.com) using one of the roles listed in the prerequisites.
1. Browse to **Azure Active Directory** > **Enterprise applications**. The **All applications** pane opens and displays a list of the applications in your Azure AD tenant. Search for and select the application that you want to use. 1. In the **Manage** section, select **Properties** to open the **Properties** pane for editing. 1. On the **Properties** pane, you may want to configure the following properties for your application:
active-directory Add Application Portal Setup Oidc Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-setup-oidc-sso.md
When you add an enterprise application that uses the OIDC standard for SSO, you
To configure OIDC-based SSO for an application:
-1. Go to the [Azure portal](https://portal.azure.com) and sign in using one of the roles listed in the prerequisites.
+1. Sign in to the [Azure portal](https://portal.azure.com) using one of the roles listed in the prerequisites.
1. Browse to **Azure Active Directory** > **Enterprise applications**. The **All applications** pane opens and displays a list of the applications in your Azure AD tenant. 1. In the **Enterprise applications** pane, select **New application**. 1. The **Browse Azure AD Gallery** pane opens and displays tiles for cloud platforms, on-premises applications, and featured applications. Applications listed in the **Featured applications** section have icons indicating whether they support federated SSO and provisioning. Search for and select the application. In this example, **SmartSheet** is being used.
active-directory Add Application Portal Setup Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-setup-sso.md
To configure SSO, you need:
To enable SSO for an application:
-1. Go to the [Azure portal](https://portal.azure.com) and sign in using one of the roles listed in the prerequisites.
+1. Sign in to the [Azure portal](https://portal.azure.com) using one of the roles listed in the prerequisites.
1. Browse to **Azure Active Directory** > **Enterprise applications**. The **All applications** pane opens and displays a list of the applications in your Azure AD tenant. Search for and select the application that you want to use. For example, **Azure AD SAML Toolkit 1**. 1. In the **Manage** section of the left menu, select **Single sign-on** to open the **Single sign-on** pane for editing. 1. Select **SAML** to open the SSO configuration page. After the application is configured, users can sign in to it by using their credentials from the Azure AD tenant.
To test SSO:
- [Manage self service access](manage-self-service-access.md) - [Configure user consent](configure-user-consent.md)-- [Grant tenant-wide admin consent](grant-admin-consent.md)
+- [Grant tenant-wide admin consent](grant-admin-consent.md)
active-directory Add Application Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal.md
To add an enterprise application to your Azure AD tenant, you need:
To add an enterprise application to your tenant:
-1. Go to the [Azure portal](https://portal.azure.com) and sign in using one of the roles listed in the prerequisites.
+1. Sign in to the [Azure portal](https://portal.azure.com) using one of the roles listed in the prerequisites.
1. Browse to **Azure Active Directory** and select **Enterprise applications**. The **All applications** pane opens and displays a list of the applications in your Azure AD tenant. 1. In the **Enterprise applications** pane, select **New application**. 1. The **Browse Azure AD Gallery** pane opens and displays tiles for cloud platforms, on-premises applications, and featured applications. Applications listed in the **Featured applications** section have icons indicating whether they support federated single sign-on (SSO) and provisioning. Search for and select the application. In this quickstart, **Azure AD SAML Toolkit** is being used.
active-directory Application Sign In Problem Application Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-sign-in-problem-application-error.md
Azure AD selects the format for the **NameID** attribute (User Identifier) based
To change which parts of the SAML token are digitally signed by Azure AD, follow these steps:
-1. Open the [Azure portal](https://portal.azure.com/) and sign in as a global administrator or co-admin.
+1. Sign in to the [Azure portal](https://portal.azure.com/) as a global administrator or co-admin.
2. Select **All services** at the top of the navigation pane on the left side to open the Azure AD extension.
By default, Azure AD signs the SAML token by using the most-secure algorithm. We
To change the signing algorithm, follow these steps:
-1. Open the [Azure portal](https://portal.azure.com/) and sign in as a global administrator or co-admin.
+1. Sign in to the [Azure portal](https://portal.azure.com/) as a global administrator or co-admin.
2. Select **All services** at the top of the navigation pane on the left side to open the Azure AD extension.
active-directory Cloudflare Conditional Access Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/cloudflare-conditional-access-policies.md
Go to developers.cloudflare.com to [set up Azure AD as an IdP](https://developer
## Configure Conditional Access
-1. Go to the [Azure portal](https://portal.azure.com/).
+1. Sign in to the [Azure portal](https://portal.azure.com/).
2. Select **Azure Active Directory**. 3. Under **Manage**, select **App registrations**. 4. Select the application you created.
active-directory Configure Admin Consent Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-admin-consent-workflow.md
To configure the admin consent workflow, you need:
To enable the admin consent workflow and choose reviewers:
-1. Sign-in to the [Azure portal](https://portal.azure.com) with one of the roles listed in the prerequisites.
+1. Sign in to the [Azure portal](https://portal.azure.com) with one of the roles listed in the prerequisites.
1. Search for and select **Azure Active Directory**. 1. Select **Enterprise applications**. 1. Under **Security**, select **Consent and permissions**.
active-directory F5 Big Ip Headers Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-headers-easy-button.md
Learn more: [Quickstart: Register an application with the Microsoft identity pla
Create a tenant app registration to authorize the Easy Button access to Graph. With these permissions, the BIG-IP pushes the configurations to establish a trust between a SAML SP instance for published application, and Azure AD as the SAML IdP.
-1. Sign-in to the [Azure portal](https://portal.azure.com/) with Application Administrative permissions.
+1. Sign in to the [Azure portal](https://portal.azure.com/) with Application Administrative permissions.
2. In the left navigation, select **Azure Active Directory**. 3. Under **Manage**, select **App registrations > New registration**. 4. Enter an application **Name**.
active-directory Howto Saml Token Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/howto-saml-token-encryption.md
To configure enterprise application's SAML token encryption, follow these steps:
You can add the public cert to your application configuration within the Azure portal.
-1. Go to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Search for and select the **Azure Active Directory**.
You can add the public cert to your application configuration within the Azure p
1. On the **Token encryption** page, select **Import Certificate** to import the .cer file that contains your public X.509 certificate.
- ![Screenshot shows how to import a certificate file using Azure Portal.](./media/howto-saml-token-encryption/import-certificate-small.png)
+ ![Screenshot shows how to import a certificate file using Azure portal.](./media/howto-saml-token-encryption/import-certificate-small.png)
1. Once the certificate is imported, and the private key is configured for use on the application side, activate encryption by selecting the **...** next to the thumbprint status, and then select **Activate token encryption** from the options in the dropdown menu.
active-directory Migrate Adfs Plan Management Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-adfs-plan-management-insights.md
Once you've migrated the apps, consider applying the following suggestions to en
## Secure app access
-Azure AD provides a centralized access location to manage your migrated apps. Go to the [Azure portal](https://portal.azure.com/) and enable the following capabilities:
+Azure AD provides a centralized access location to manage your migrated apps. Sign in to the [Azure portal](https://portal.azure.com/) and enable the following capabilities:
- **Secure user access to apps.** Enable [Conditional Access policies](../conditional-access/overview.md) or [Identity Protection](../identity-protection/overview-identity-protection.md) to secure user access to applications based on device state, location, and more. - **Automatic provisioning.** Set up [automatic provisioning of users](../app-provisioning/user-provisioning.md) with various third-party SaaS apps that users need to access. In addition to creating user identities, it includes the maintenance and removal of user identities as status or roles change. - **Delegate user access** **management**. As appropriate, enable self-service application access to your apps and *assign a business approver to approve access to those apps*. Use [Self-Service Group Management](../enterprise-users/groups-self-service-management.md) for groups assigned to collections of apps.-- **Delegate admin access.** using **Directory Role** to assign an admin role (such as Application administrator, Cloud Application administrator, or Application developer) to your user.
+- **Delegate admin access** using **Directory Role** to assign an admin role (such as Application administrator, Cloud Application administrator, or Application developer) to your user.
- **Add applications to Access Packages** to provide governance and attestation. ## Audit and gain insights of your apps
active-directory Migrate Okta Sign On Policies Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-okta-sign-on-policies-conditional-access.md
To enable hybrid Azure AD join on your Azure AD Connect server, run the configur
7. Select **Next**. > [!TIP]
- > If you blocked legacy authentication on Windows clients in the global or app-level sign-on policy, make a rule that enables the hybrid Azure AD join process to finish. Allow the legacy authentication stack for Windows clients. </br>To enable custom client strings on app policies, contact the [Okta Help Center](https://support.okta.com/help/).
+ > If you blocked legacy authentication on Windows clients in the global or app-level sign-on policy, make a rule that enables the hybrid Azure AD join process to finish. Allow the legacy authentication stack for Windows clients. <br>To enable custom client strings on app policies, contact the [Okta Help Center](https://support.okta.com/help/).
### Configure device compliance
If you deployed hybrid Azure AD join, you can deploy another group policy to com
Before you convert to Conditional Access, confirm the base MFA tenant settings for your organization.
-1. Go to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. Sign in as a Global Administrator. 3. Select **Azure Active Directory** > **Users** > **Multi-Factor Authentication**. 4. The legacy Azure AD Multi-Factor Authentication portal appears. Or select [Azure AD MFA portal](https://aka.ms/mfaportal).
Before you get started:
* [Understand Conditional Access policy components](../conditional-access/plan-conditional-access.md) * [Building a Conditional Access policy](../conditional-access/concept-conditional-access-policies.md)
-1. Go to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. On **Manage Azure Active Directory**, select **View**. 3. Create a policy. See, [Common Conditional Access policy: Require MFA for all users](../conditional-access/howto-conditional-access-policy-all-users-mfa.md). 4. Create a device trust-based Conditional Access rule.
active-directory Migrate Okta Sync Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-okta-sync-provisioning.md
After you disable Okta provisioning, the Azure AD Connect server can synchronize
After you disable Okta provisioning, the Azure AD cloud sync agent can synchronize objects.
-1. Go to the [Azure portal](https://portal.azure.com/).
+1. Sign in to the [Azure portal](https://portal.azure.com/).
2. Browse to **Azure Active Directory**. 3. Select **Azure AD Connect**. 4. Select **Cloud Sync**.
active-directory Troubleshoot Password Based Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/troubleshoot-password-based-sso.md
In general, if automatic sign-in field capture doesn't work, try the manual opti
To configure password-based SSO by using automatic sign-in field capture, follow these steps:
-1. Open the [Azure portal](https://portal.azure.com/). Sign in as a global administrator or co-admin.
+1. Sign in to the [Azure portal](https://portal.azure.com/) as a global administrator or co-admin.
2. In the navigation pane on the left side, select **All services** to open the Azure AD extension. 3. Type **Azure Active Directory** in the filter search box, and then select **Azure Active Directory**. 4. Select **Enterprise Applications** in the Azure AD navigation pane.
To manually capture sign-in fields, you must have the My Apps browser extension
To configure password-based SSO for an app by using manual sign-in field capture, follow these steps:
-1. Open the [Azure portal](https://portal.azure.com/). Sign in as a global administrator or co-admin.
+1. Sign in to the [Azure portal](https://portal.azure.com/) as a global administrator or co-admin.
2. In the navigation pane on the left side, select **All services** to open the Azure AD extension. 3. Type **Azure Active Directory** in the filter search box, and then select **Azure Active Directory**. 4. Select **Enterprise Applications** in the Azure AD navigation pane.
active-directory View Applications Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/view-applications-portal.md
To view applications that have been registered in your Azure AD tenant, you need
To view the enterprise applications registered in your tenant:
-1. Go to the [Azure portal](https://portal.azure.com) and sign in using one of the roles listed in the prerequisites.
+1. Sign in to the [Azure portal](https://portal.azure.com) using one of the roles listed in the prerequisites.
1. Browse to **Azure Active Directory** and select **Enterprise applications**. The **All applications** pane opens and displays a list of the applications in your Azure AD tenant. :::image type="content" source="media/view-applications-portal/view-enterprise-applications.png" alt-text="View the registered applications in your Azure AD tenant.":::
active-directory Managed Identities Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/managed-identities-faq.md
az resource list --query "[?identity.type=='SystemAssigned'].{Name:name, princi
You can keep your users from creating user-assigned managed identities using [Azure Policy](../../governance/policy/overview.md)
-1. Navigate to the [Azure portal](https://portal.azure.com) and go to **Policy**.
+1. Sign in to the [Azure portal](https://portal.azure.com) and go to **Policy**.
2. Choose **Definitions** 3. Select **+ Policy definition** and enter the necessary information. 4. In the policy rule section, paste:
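The policy rule body isn't included in this excerpt. As an illustration only (not the article's actual rule), a deny policy of this shape can also be created with Az PowerShell; the policy name, display name, and rule below are assumptions.

```powershell
# Hypothetical example: deny creation of user-assigned managed identities.
# The rule below is an assumption, not the rule from the article.
$rule = @'
{
  "if": {
    "field": "type",
    "equals": "Microsoft.ManagedIdentity/userAssignedIdentities"
  },
  "then": { "effect": "deny" }
}
'@
New-AzPolicyDefinition -Name 'deny-user-assigned-mi' `
    -DisplayName 'Deny user-assigned managed identities' `
    -Policy $rule
```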
active-directory Msi Tutorial Linux Vm Access Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/msi-tutorial-linux-vm-access-arm.md
For the remainder of the tutorial, we will work from the VM we created earlier.
To complete these steps, you need an SSH client. If you are using Windows, you can use the SSH client in the [Windows Subsystem for Linux](/windows/wsl/about).
-1. Sign in to the Azure [portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. In the portal, navigate to **Virtual Machines** and go to the Linux virtual machine and in the **Overview**, click **Connect**. Copy the string to connect to your VM. 3. Connect to the VM with the SSH client of your choice. If you are using Windows, you can use the SSH client in the [Windows Subsystem for Linux](/windows/wsl/about). If you need assistance configuring your SSH client's keys, see [How to Use SSH keys with Windows on Azure](~/articles/virtual-machines/linux/ssh-from-windows.md), or [How to create and use an SSH public and private key pair for Linux VMs in Azure](~/articles/virtual-machines/linux/mac-create-ssh-keys.md). 4. In the terminal window, use CURL to make a request to the Azure Instance Metadata Service (IMDS) identity endpoint to get an access token for Azure Resource Manager.  
active-directory Tutorial Linux Vm Access Nonaad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-nonaad.md
This section shows how to grant your VM access to a secret stored in a Key Vault
First, we need to create a Key Vault and grant our VM's system-assigned managed identity access to the Key Vault.
-1. Open the Azure [portal](https://portal.azure.com/)
+1. Sign in to the [Azure portal](https://portal.azure.com/)
1. At the top of the left navigation bar, select **Create a resource** 1. In the **Search the Marketplace** box type in **Key Vault** and hit **Enter**.   1. Select **Key Vault** from the results.
The managed identity used by the virtual machine needs to be granted access to r
1. Select **Access Policy** from the menu on the left side. 1. Select **Add Access Policy**
- ![key vault create access policy screen](./media/tutorial-linux-vm-access-nonaad/key-vault-access-policy.png)
+ ![Key vault create access policy screen](./media/tutorial-linux-vm-access-nonaad/key-vault-access-policy.png)
1. In the **Add access policy** section under **Configure from template (optional)** choose **Secret Management** from the pull-down menu. 1. Choose **Select Principal**, and in the search field enter the name of the VM you created earlier.  Select the VM in the result list and choose **Select**.
Once you've retrieved the secret from the Key Vault, you can use it to authentic
## Clean up resources
-When you want to clean up the resources, visit the [Azure portal](https://portal.azure.com), select **Resource groups**, locate, and select the resource group that was created in the process of this tutorial (such as `mi-test`), and then use the **Delete resource group** command.
+When you want to clean up the resources, sign in to the [Azure portal](https://portal.azure.com), select **Resource groups**, locate, and select the resource group that was created in the process of this tutorial (such as `mi-test`), and then use the **Delete resource group** command.
Alternatively you may also do this via [PowerShell or the CLI](../../azure-resource-manager/management/delete-resource-group.md)
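For example, a minimal Az PowerShell sketch of that cleanup, assuming the resource group name used earlier in the tutorial:

```powershell
# Sketch only: delete the tutorial's resource group and everything in it.
Connect-AzAccount
Remove-AzResourceGroup -Name "mi-test"
```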
active-directory Tutorial Windows Vm Access Nonaad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-nonaad.md
This section shows how to grant your VM access to a secret stored in a Key Vault
First, we need to create a Key Vault and grant our VM's system-assigned managed identity access to the Key Vault.
-1. Open the Azure [portal](https://portal.azure.com/)
+1. Sign in to the [Azure portal](https://portal.azure.com/)
1. At the top of the left navigation bar, select **Create a resource** 1. In the **Search the Marketplace** box type in **Key Vault** and hit **Enter**.   1. Select **Key Vault** from the results.
The managed identity used by the virtual machine needs to be granted access to r
1. Select **Access Policy** from the menu on the left side. 1. Select **Add Access Policy**
- ![key vault create access policy screen](./media/msi-tutorial-windows-vm-access-nonaad/key-vault-access-policy.png)
+ ![Key vault create access policy screen](./media/msi-tutorial-windows-vm-access-nonaad/key-vault-access-policy.png)
1. In the **Add access policy** section, under **Configure from template (optional)**, choose **Secret Management** from the pull-down menu. 1. Choose **Select Principal**, and in the search field enter the name of the VM you created earlier.  Select the VM in the result list and choose **Select**.
$Response = Invoke-RestMethod -Uri 'http://169.254.169.254/metadata/identity/oau
You can see what the response looks like below:
-![request with token response](./media/msi-tutorial-windows-vm-access-nonaad/token.png)
+![Request with token response](./media/msi-tutorial-windows-vm-access-nonaad/token.png)
Next, extract the access token from the response.  
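A short PowerShell sketch of that extraction and of using the token against Key Vault follows. The vault name, secret name, and API version are placeholders/assumptions; `$Response` is assumed to come from the `Invoke-RestMethod` call above.

```powershell
# Sketch only: pull the token out of the IMDS response and call Key Vault with it.
$KeyVaultToken = $Response.access_token
Invoke-RestMethod -Uri "https://<your-key-vault-name>.vault.azure.net/secrets/<your-secret-name>?api-version=2016-10-01" `
    -Headers @{ Authorization = "Bearer $KeyVaultToken" }
```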
Once you've retrieved the secret from the Key Vault, you can use it to authent
## Clean up resources
-When you want to clean up the resources, visit the [Azure portal](https://portal.azure.com), select **Resource groups**, locate, and select the resource group that was created in the process of this tutorial (such as `mi-test`), and then use the **Delete resource group** command.
+When you want to clean up the resources, sign in to the [Azure portal](https://portal.azure.com), select **Resource groups**, locate, and select the resource group that was created in the process of this tutorial (such as `mi-test`), and then use the **Delete resource group** command.
Alternatively you may also clean up resources via [PowerShell or the CLI](../../azure-resource-manager/management/delete-resource-group.md)
active-directory Pim Complete Roles And Resource Roles Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-complete-roles-and-resource-roles-review.md
Once the review has been created, follow the steps in this article to complete t
## Complete access reviews
-1. Login to the [Azure portal](https://portal.azure.com/). For **Azure resources**, navigate to **Privileged Identity Management** and select **Azure resources** under **Manage** from the dashboard. For **Azure AD roles**, select **Azure AD roles** from the same dashboard.
+1. Sign in to the [Azure portal](https://portal.azure.com/). For **Azure resources**, navigate to **Privileged Identity Management** and select **Azure resources** under **Manage** from the dashboard. For **Azure AD roles**, select **Azure AD roles** from the same dashboard.
2. For **Azure resources**, select your resource under **Azure resources** and then select **Access reviews** from the dashboard. For **Azure AD roles**, proceed directly to the **Access reviews** on the dashboard.
active-directory Pim Create Roles And Resource Roles Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-create-roles-and-resource-roles-review.md
Access Reviews for **Service Principals** requires an Entra Workload Identities
## Create access reviews
-1. Sign in to [Azure portal](https://portal.azure.com/) as a user that is assigned to one of the prerequisite role(s).
+1. Sign in to the [Azure portal](https://portal.azure.com/) as a user that is assigned to one of the prerequisite role(s).
2. Select **Identity Governance**.
active-directory Pim How To Add Role To User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-add-role-to-user.md
Privileged Identity Management supports both built-in and custom Azure AD roles.
Follow these steps to make a user eligible for an Azure AD admin role.
-1. Sign in to [Azure portal](https://portal.azure.com/) with a user that is a member of the [Privileged role administrator](../roles/permissions-reference.md#privileged-role-administrator) role.
+1. Sign in to the [Azure portal](https://portal.azure.com/) with a user that is a member of the [Privileged role administrator](../roles/permissions-reference.md#privileged-role-administrator) role.
1. Open **Azure AD Privileged Identity Management**.
active-directory Pim Resource Roles Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-assign-roles.md
For more information, see [What is Azure attribute-based access control (Azure A
Follow these steps to make a user eligible for an Azure resource role.
-1. Sign in to [Azure portal](https://portal.azure.com/) with Owner or User Access Administrator role permissions.
+1. Sign in to the [Azure portal](https://portal.azure.com/) with Owner or User Access Administrator role permissions.
1. Open **Azure AD Privileged Identity Management**.
active-directory Howto Access Activity Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-access-activity-logs.md
The following roles provide read access to audit and sign-in logs. Always use th
## Access the activity logs in the portal
-1. Navigate to the [Azure portal](https://portal.azure.com) using one of the required roles.
+1. Sign in to the [Azure portal](https://portal.azure.com) using one of the required roles.
1. Go to **Azure AD** and select **Audit logs**, **Sign-in logs**, or **Provisioning logs**. 1. Adjust the filter according to your needs. - For more information on the filter options for audit logs, see [Azure AD audit log categories and activities](reference-audit-activities.md).
You can also export your logs to an independent log analysis tool, such as [Splu
* [Get data using the Azure Active Directory reporting API with certificates](tutorial-access-api-with-certificates.md) * [Audit API reference](/graph/api/resources/directoryaudit) * [Sign-in activity report API reference](/graph/api/resources/signin)-
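For reference, a minimal Microsoft Graph PowerShell sketch of pulling the same logs through the API. The scope shown is an assumption; sign-in logs may require additional directory read permissions.

```powershell
# Sketch only: fetch a few recent audit and sign-in records.
Connect-MgGraph -Scopes "AuditLog.Read.All"
Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/auditLogs/directoryAudits?`$top=10"
Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/auditLogs/signIns?`$top=10"
```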
active-directory Howto Download Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-download-logs.md
Azure AD stores activity logs for a specific period. For more information, see [
> [!NOTE] > **Issues downloading large data sets**
- > The Azure Portal downloader will time out if you attempt to download large data sets. Generally, data sets smaller than 250,000 records work well with the browser download feature. If you face issues completing large downloads in the browser, you should use the [reporting API](/graph/api/resources/azure-ad-auditlog-overview) to download the data.
+ > The Azure portal downloader will time out if you attempt to download large data sets. Generally, data sets smaller than 250,000 records work well with the browser download feature. If you face issues completing large downloads in the browser, you should use the [reporting API](/graph/api/resources/azure-ad-auditlog-overview) to download the data.
## How to download activity logs
active-directory Overview Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-monitoring.md
Depending on the final destination of your log data, you'll need one of the foll
## Diagnostic settings configuration
-To configure monitoring settings for Azure AD activity logs, first sign-in to the [Azure portal](https://portal.azure.com), then select **Azure Active Directory**. From here, you can access the diagnostic settings configuration page in two ways:
+To configure monitoring settings for Azure AD activity logs, first sign in to the [Azure portal](https://portal.azure.com), then select **Azure Active Directory**. From here, you can access the diagnostic settings configuration page in two ways:
* Select **Diagnostic settings** from the **Monitoring** section.
active-directory Quickstart Access Log With Graph Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/quickstart-access-log-with-graph-api.md
The goal of this step is to create a record of a failed sign-in in the Azure AD
**To complete this step:**
-1. Sign in to your [Azure portal](https://portal.azure.com/) as Isabella Simonsen using an incorrect password.
+1. Sign in to the [Azure portal](https://portal.azure.com/) as Isabella Simonsen using an incorrect password.
2. Wait for 5 minutes to ensure that you can find a record of the sign-in in the sign-ins log. For more information, see [Activity reports](reference-reports-latencies.md#activity-reports).
active-directory Quickstart Analyze Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/quickstart-analyze-sign-in.md
The goal of this step is to create a record of a failed sign-in in the Azure AD
**To complete this step:**
-1. Sign in to your [Azure portal](https://portal.azure.com/) as Isabella Simonsen using an incorrect password.
+1. Sign in to the [Azure portal](https://portal.azure.com/) as Isabella Simonsen using an incorrect password.
2. Wait for 5 minutes to ensure that you can find a record of the sign-in in the sign-ins log. For more information, see [Activity reports](reference-reports-latencies.md#activity-reports).
This section provides you with the steps to analyze a failed sign-in:
2. To list only records for Isabella Simonsen:
- a. In the toolbar, select **Add filters**.
+ 1. In the toolbar, select **Add filters**.
- ![Add user filter](./media/quickstart-analyze-sign-in/add-filters.png)
+ ![Add user filter](./media/quickstart-analyze-sign-in/add-filters.png)
- b. In the **Pick a field** list, select **User**, and then select **Apply**.
+ 1. In the **Pick a field** list, select **User**, and then select **Apply**.
- c. In the **Username** textbox, type **Isabella Simonsen**, and then select **Apply**.
+ 1. In the **Username** textbox, type **Isabella Simonsen**, and then select **Apply**.
- d. In the toolbar, select **Refresh**.
+ 1. In the toolbar, select **Refresh**.
3. To analyze the issue, select **Troubleshooting and support**.
- ![Add filter](./media/quickstart-analyze-sign-in/troubleshooting-and-support.png)
+ ![Add filter](./media/quickstart-analyze-sign-in/troubleshooting-and-support.png)
4. Copy the **Sign-in error code**.
active-directory Protected Actions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/protected-actions-overview.md
Here's the initial set of permissions:
If an application or service attempts to perform a protected action, it must be able to handle the required Conditional Access policy. In some cases, a user might need to intervene and satisfy the policy. For example, they may be required to complete multi-factor authentication. In this preview, the following applications support step-up authentication for protected actions: -- Azure Active Directory administrator experiences for the actions in the [Entra admin center](https://entra.microsoft.com) or [Azure portal](https://portal.azure.com)
+- Azure Active Directory administrator experiences for the actions in the [Entra admin center](https://entra.microsoft.com) or the [Azure portal](https://portal.azure.com)
- [Microsoft Graph PowerShell](/powershell/microsoftgraph/overview?branch=main) - [Microsoft Graph Explorer](/graph/graph-explorer/graph-explorer-overview?branch=main)
active-directory Adobe Identity Management Provisioning Oidc Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/adobe-identity-management-provisioning-oidc-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
* Review the [Adobe documentation](https://helpx.adobe.com/enterprise/admin-guide.html/enterprise/using/add-azure-sync.ug.html) on user provisioning > [!NOTE]
-> If your organization uses the User Sync Tool or a UMAPI integration, you must first pause the integration. Then, add Azure AD automatic provisioning to automate user management from the Azure Portal. Once Azure AD automatic provisioning is configured and running, you can completely remove the User Sync Tool or UMAPI integration.
+> If your organization uses the User Sync Tool or a UMAPI integration, you must first pause the integration. Then, add Azure AD automatic provisioning to automate user management from the Azure portal. Once Azure AD automatic provisioning is configured and running, you can completely remove the User Sync Tool or UMAPI integration.
## Step 1. Plan your provisioning deployment 1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
active-directory Adobe Identity Management Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/adobe-identity-management-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
* Review the [Adobe documentation](https://helpx.adobe.com/enterprise/admin-guide.html/enterprise/using/add-azure-sync.ug.html) on user provisioning > [!NOTE]
-> If your organization uses the User Sync Tool or a UMAPI integration, you must first pause the integration. Then, add Azure AD automatic provisioning to automate user management from the Azure Portal. Once Azure AD automatic provisioning is configured and running, you can completely remove the User Sync Tool or UMAPI integration.
+> If your organization uses the User Sync Tool or a UMAPI integration, you must first pause the integration. Then, add Azure AD automatic provisioning to automate user management from the Azure portal. Once Azure AD automatic provisioning is configured and running, you can completely remove the User Sync Tool or UMAPI integration.
## Step 1. Plan your provisioning deployment 1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
active-directory Aws Single Sign On Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/aws-single-sign-on-provisioning-tutorial.md
With PIM for Groups, you can provide just-in-time access to groups in Amazon Web
**Configure your enterprise application for SSO and provisioning** 1. Add AWS IAM Identity Center to your tenant, configure it for provisioning as described in the tutorial above, and start provisioning. 1. Configure [single sign-on](aws-single-sign-on-provisioning-tutorial.md) for AWS IAM Identity Center.
-1. Create a [group](https://learn.microsoft.com/azure/active-directory/fundamentals/how-to-manage-groups) that will provide all users access to the application.
+1. Create a [group](/azure/active-directory/fundamentals/how-to-manage-groups) that will provide all users access to the application.
1. Assign the group to the AWS Identity Center application. 1. Assign your test user as a direct member of the group created in the previous step, or provide them access to the group through an access package. This group can be used for persistent, non-admin access in AWS. **Enable PIM for groups**
-1. Create a second group in Azure AD. This group will provide access to admin permissions in AWS.
-1. Bring the group under [management in Azure AD PIM](https://learn.microsoft.com/azure/active-directory/privileged-identity-management/groups-discover-groups).
-1. Assign your test user as [eligible for the group in PIM](https://learn.microsoft.com/azure/active-directory/privileged-identity-management/groups-assign-member-owner) with the role set to member.
+1. Create a second group in Azure AD. This group will provide access to admin permissions in AWS.
+1. Bring the group under [management in Azure AD PIM](/azure/active-directory/privileged-identity-management/groups-discover-groups).
+1. Assign your test user as [eligible for the group in PIM](/azure/active-directory/privileged-identity-management/groups-assign-member-owner) with the role set to member.
1. Assign the second group to the AWS IAM Identity Center application. 1. Use on-demand provisioning to create the group in AWS IAM Identity Center. 1. Sign-in to AWS IAM Identity Center and assign the second group the necessary permissions to perform admin tasks.
-Now any end user that was made eligible for the group in PIM can get JIT access to the group in AWS by [activating their group membership](https://learn.microsoft.com/azure/active-directory/privileged-identity-management/groups-activate-roles#activate-a-role).
+Now any end user that was made eligible for the group in PIM can get JIT access to the group in AWS by [activating their group membership](/azure/active-directory/privileged-identity-management/groups-activate-roles#activate-a-role).
> [!IMPORTANT] > The group membership is provisioned roughly a minute after the activation is complete. Please wait before attempting to sign-in to AWS. If the user is unable to access the necessary group in AWS, please review the troubleshooting tips below and provisioning logs to ensure that the user was successfully provisioned.
active-directory Box Userprovisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/box-userprovisioning-tutorial.md
If automatic provisioning is enabled, then the assigned users and/or groups are
* If group objects were configured to be provisioned, then all assigned group objects are provisioned to Box, and all users that are members of those groups. The group and user memberships are preserved upon being written to Box. > [!TIP]
-> You may also choose to enabled SAML-based Single Sign-On for Box, following the instructions provided in [Azure portal](https://portal.azure.com). Single sign-on can be configured independently of automatic provisioning, though these two features compliment each other.
+> You may also choose to enable SAML-based Single Sign-On for Box, following the instructions provided in the [Azure portal](https://portal.azure.com). Single sign-on can be configured independently of automatic provisioning, though these two features complement each other.
### To configure automatic user account provisioning:
In your Box tenant, synchronized users are listed under **Managed Users** in the
* [Managing user account provisioning for Enterprise Apps](tutorial-list.md) * [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
-* [Configure Single Sign-on](box-tutorial.md)
+* [Configure Single Sign-on](box-tutorial.md)
active-directory Cernercentral Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cernercentral-provisioning-tutorial.md
Before configuring and enabling the provisioning service, you should decide what
This section guides you through connecting your Azure AD to Cerner Central's User Roster using Cerner's SCIM user account provisioning API, and configuring the provisioning service to create, update, and disable assigned user accounts in Cerner Central based on user and group assignment in Azure AD. > [!TIP]
-> You may also choose to enable SAML-based single sign-on for Cerner Central, following the instructions provided in [Azure portal](https://portal.azure.com). Single sign-on can be configured independently of automatic provisioning, though these two features complement each other. For more information, see the [Cerner Central single sign-on tutorial](cernercentral-tutorial.md).
+> You may also choose to enable SAML-based single sign-on for Cerner Central, following the instructions provided in the [Azure portal](https://portal.azure.com). Single sign-on can be configured independently of automatic provisioning, though these two features complement each other. For more information, see the [Cerner Central single sign-on tutorial](cernercentral-tutorial.md).
### To configure automatic user account provisioning to Cerner Central in Azure AD:
active-directory Citrixgotomeeting Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/citrixgotomeeting-provisioning-tutorial.md
Before configuring and enabling the provisioning service, you need to decide wha
This section guides you through connecting your Azure AD to GoToMeeting's user account provisioning API, and configuring the provisioning service to create, update, and disable assigned user accounts in GoToMeeting based on user and group assignment in Azure AD. > [!TIP]
-> You may also choose to enabled SAML-based Single Sign-On for GoToMeeting, following the instructions provided in [Azure portal](https://portal.azure.com). Single sign-on can be configured independently of automatic provisioning, though these two features compliment each other.
+> You may also choose to enable SAML-based Single Sign-On for GoToMeeting, following the instructions provided in the [Azure portal](https://portal.azure.com). Single sign-on can be configured independently of automatic provisioning, though these two features complement each other.
### To configure automatic user account provisioning:
active-directory Concur Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/concur-provisioning-tutorial.md
Before configuring and enabling the provisioning service, you need to decide wha
This section guides you through connecting your Azure AD to Concur's user account provisioning API, and configuring the provisioning service to create, update, and disable assigned user accounts in Concur based on user and group assignment in Azure AD. > [!Tip]
-> You may also choose to enabled SAML-based Single Sign-On for Concur, following the instructions provided in [Azure portal](https://portal.azure.com). Single sign-on can be configured independently of automatic provisioning, though these two features compliment each other.
+> You may also choose to enable SAML-based Single Sign-On for Concur, following the instructions provided in the [Azure portal](https://portal.azure.com). Single sign-on can be configured independently of automatic provisioning, though these two features complement each other.
### To configure user account provisioning:
active-directory Confluencemicrosoft Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/confluencemicrosoft-tutorial.md
As of now, following versions of Confluence are supported:
- Confluence: 5.0 to 5.10 - Confluence: 6.0.1 to 6.15.9-- Confluence: 7.0.1 to 8.3.0
+- Confluence: 7.0.1 to 8.0.4
> [!NOTE] > Please note that our Confluence Plugin also works on Ubuntu Version 16.04
active-directory Docusign Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/docusign-provisioning-tutorial.md
Before configuring and enabling the provisioning service, you need to decide wha
This section guides you through connecting your Azure AD to DocuSign's user account provisioning API, and configuring the provisioning service to create, update, and disable assigned user accounts in DocuSign based on user and group assignment in Azure AD. > [!Tip]
-> You may also choose to enabled SAML-based Single Sign-On for DocuSign, following the instructions provided in [Azure portal](https://portal.azure.com). Single sign-on can be configured independently of automatic provisioning, though these two features compliment each other.
+> You may also choose to enable SAML-based Single Sign-On for DocuSign, following the instructions provided in the [Azure portal](https://portal.azure.com). Single sign-on can be configured independently of automatic provisioning, though these two features complement each other.
### To configure user account provisioning:
The objective of this section is to outline how to enable user provisioning of A
1. Under the **Admin Credentials** section, provide the following configuration settings:
- a. In the **Admin User Name** textbox, type a DocuSign account name that has the **System Administrator** profile in DocuSign.com assigned.
+ 1. In the **Admin User Name** textbox, type a DocuSign account name that has the **System Administrator** profile in DocuSign.com assigned.
- b. In the **Admin Password** textbox, type the password for this account.
+ 1. In the **Admin Password** textbox, type the password for this account.
> [!NOTE] > If both SSO and user provisioning are set up, the authorization credentials used for provisioning need to be configured to work with both SSO and Username/Password.
active-directory F5 Big Ip Headers Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/f5-big-ip-headers-easy-button.md
Before a client or service can access Microsoft Graph, it must be trusted by the
This first step creates a tenant app registration that will be used to authorize the **Easy Button** access to Graph. Through these permissions, the BIG-IP will be allowed to push the configurations required to establish a trust between a SAML SP instance for the published application and Azure AD as the SAML IdP.
-1. Sign-in to the [Azure portal](https://portal.azure.com/) using an account with Application Administrative rights.
+1. Sign in to the [Azure portal](https://portal.azure.com/) using an account with Application Administrative rights.
2. From the left navigation pane, select the **Azure Active Directory** service. 3. Under Manage, select **App registrations > New registration**. 4. Enter a display name for your application. For example, `F5 BIG-IP Easy Button`.
Our backend application sits on HTTP port 80 but obviously switch to 443 if your
Enabling SSO allows users to access BIG-IP published services without having to enter credentials. The **Easy Button wizard** supports Kerberos, OAuth Bearer, and HTTP authorization headers for SSO, the latter of which we'll enable to configure the following.
-* **Header Operation:** Insert
-* **Header Name:** upn
-* **Header Value:** %{session.saml.last.identity}
+* **Header Operation:** `Insert`
+* **Header Name:** `upn`
+* **Header Value:** `%{session.saml.last.identity}`
-* **Header Operation:** Insert
-* **Header Name:** employeeid
-* **Header Value:** %{session.saml.last.attr.name.employeeid}
+* **Header Operation:** `Insert`
+* **Header Name:** `employeeid`
+* **Header Value:** `%{session.saml.last.attr.name.employeeid}`
![Screenshot for SSO and HTTP headers.](./media/f5-big-ip-headers-easy-button/sso-http-headers.png)
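Because the Easy Button configuration above injects `upn` and `employeeid` as plain HTTP request headers, the published backend reads them like any other header. The stand-in backend below is only an illustrative sketch (it is not part of the F5 or Azure AD configuration) and assumes the header names configured above; swap port 80 for any test port if you are not running with elevated privileges.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HeaderEchoHandler(BaseHTTPRequestHandler):
    """Stand-in backend that echoes the SSO headers inserted by the BIG-IP."""

    def do_GET(self):
        # Header names match the Easy Button SSO settings configured above.
        upn = self.headers.get("upn", "<missing>")
        employee_id = self.headers.get("employeeid", "<missing>")
        body = f"Signed in as {upn} (employee ID {employee_id})\n".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # The walkthrough's backend listens on HTTP port 80; binding to 80 usually
    # requires elevated privileges, so use 8080 or similar for local testing.
    HTTPServer(("0.0.0.0", 80), HeaderEchoHandler).serve_forever()
```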
active-directory F5 Big Ip Sap Erp Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/f5-big-ip-sap-erp-easy-button.md
Before a client or service can access Microsoft Graph, it must be trusted by the
The Easy Button client must also be registered in Azure AD, before it is allowed to establish a trust between each SAML SP instance of a BIG-IP published application, and Azure AD as the SAML IdP.
-1. Sign-in to the [Azure portal](https://portal.azure.com/) using an account with Application Administrative rights
+1. Sign in to the [Azure portal](https://portal.azure.com/) using an account with Application Administrative rights
2. From the left navigation pane, select the **Azure Active Directory** service
If you donΓÇÖt see a BIG-IP error page, then the issue is probably more related
2. Select the link for your active session. The **View Variables** link in this location may also help determine the root cause of KCD issues, particularly if the BIG-IP APM fails to obtain the right user and domain identifiers from session variables
-See [BIG-IP APM variable assign examples]( https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference]( https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more info.
+See [BIG-IP APM variable assign examples]( https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference]( https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more info.
active-directory G Suite Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/g-suite-provisioning-tutorial.md
With PIM for Groups, you can provide just-in-time access to groups in Google Clo
**Configure your enterprise application for SSO and provisioning** 1. Add Google Cloud / Google Workspace to your tenant, configure it for provisioning as described in the tutorial above, and start provisioning. 1. Configure [single sign-on](google-apps-tutorial.md) for Google Cloud / Google Workspace.
-1. Create a [group](https://learn.microsoft.com/azure/active-directory/fundamentals/how-to-manage-groups) that provides all users access to the application.
+1. Create a [group](/azure/active-directory/fundamentals/how-to-manage-groups) that provides all users access to the application.
1. Assign the group to the Google Cloud / Google Workspace application. 1. Assign your test user as a direct member of the group created in the previous step, or provide them access to the group through an access package. This group can be used for persistent, nonadmin access in Google Cloud / Google Workspace. **Enable PIM for groups** 1. Create a second group in Azure AD. This group provides access to admin permissions in Google Cloud / Google Workspace.
-1. Bring the group under [management in Azure AD PIM](https://learn.microsoft.com/azure/active-directory/privileged-identity-management/groups-discover-groups).
-1. Assign your test user as [eligible for the group in PIM](https://learn.microsoft.com/azure/active-directory/privileged-identity-management/groups-assign-member-owner) with the role set to member.
+1. Bring the group under [management in Azure AD PIM](/azure/active-directory/privileged-identity-management/groups-discover-groups).
+1. Assign your test user as [eligible for the group in PIM](/azure/active-directory/privileged-identity-management/groups-assign-member-owner) with the role set to member.
1. Assign the second group to the Google Cloud / Google Workspace application. 1. Use on-demand provisioning to create the group in Google Cloud / Google Workspace. 1. Sign-in to Google Cloud / Google Workspace and assign the second group the necessary permissions to perform admin tasks.
-Now any end user that was made eligible for the group in PIM can get JIT access to the group in Google Cloud / Google Workspace by [activating their group membership](https://learn.microsoft.com/azure/active-directory/privileged-identity-management/groups-activate-roles#activate-a-role).
+Now any end user that was made eligible for the group in PIM can get JIT access to the group in Google Cloud / Google Workspace by [activating their group membership](/azure/active-directory/privileged-identity-management/groups-activate-roles#activate-a-role).
> [!IMPORTANT] > The group membership is provisioned roughly a minute after the activation is complete. Please wait before attempting to sign-in to Google Cloud / Google Workspace. If the user is unable to access the necessary group in Google Cloud / Google Workspace, please review the provisioning logs to ensure that the user was successfully provisioned.
active-directory Hashicorp Cloud Platform Hcp Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hashicorp-cloud-platform-hcp-tutorial.md
Complete the following steps to enable Azure AD single sign-on in the Azure port
1. On the **Basic SAML Configuration** section, perform the following steps:
- a. In the **Identifier** textbox, type a value using the following pattern:
+ 1. In the **Identifier** textbox, type a value using the following pattern:
`urn:hashicorp:HCP-SSO-<HCP_ORG_ID>-samlp`
- b. In the **Reply URL** textbox, type a URL using the following pattern:
+ 1. In the **Reply URL** textbox, type a URL using the following pattern:
`https://auth.hashicorp.com/login/callback?connection=HCP-SSO-<ORG_ID>-samlp`
- c. In the **Sign on URL** textbox, type a URL using the following pattern:
+ 1. In the **Sign on URL** textbox, type a URL using the following pattern:
`https://portal.cloud.hashicorp.com/sign-in?conn-id=HCP-SSO-<HCP_ORG_ID>-samlp` > [!NOTE]
To configure single sign-on on the **HashiCorp Cloud Platform (HCP)** side, you
## Test SSO
-In the previous [Create and assign Azure AD test user](#create-and-assign-azure-ad-test-user) section, you created a user called B.Simon and assigned it to the HashiCorp Cloud Platform (HCP) app within the Azure Portal. This can now be used for testing the SSO connection. You may also use any account that is already associated with the HashiCorp Cloud Platform (HCP) app in the Azure Portal.
+In the previous [Create and assign Azure AD test user](#create-and-assign-azure-ad-test-user) section, you created a user called B.Simon and assigned it to the HashiCorp Cloud Platform (HCP) app within the Azure portal. This can now be used for testing the SSO connection. You may also use any account that is already associated with the HashiCorp Cloud Platform (HCP) app in the Azure portal.
## Additional resources
In the previous [Create and assign Azure AD test user](#create-and-assign-azure-
## Next steps
-Once you configure HashiCorp Cloud Platform (HCP) you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
+Once you configure HashiCorp Cloud Platform (HCP) you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Jive Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/jive-provisioning-tutorial.md
Before configuring and enabling the provisioning service, you need to decide wha
This section guides you through connecting your Azure AD to Jive's user account provisioning API, and configuring the provisioning service to create, update, and disable assigned user accounts in Jive based on user and group assignment in Azure AD. > [!TIP]
-> You may also choose to enabled SAML-based Single Sign-On for Jive, following the instructions provided in [Azure portal](https://portal.azure.com). Single sign-on can be configured independently of automatic provisioning, though these two features compliment each other.
+> You may also choose to enable SAML-based Single Sign-On for Jive, following the instructions provided in the [Azure portal](https://portal.azure.com). Single sign-on can be configured independently of automatic provisioning, though these two features complement each other.
### To configure user account provisioning:
As part of this procedure, you are required to provide a user security token you
1. Under the **Admin Credentials** section, provide the following configuration settings:
- a. In the **Jive Admin User Name** textbox, type a Jive account name that has the **System Administrator** profile in Jive.com assigned.
+ 1. In the **Jive Admin User Name** textbox, type a Jive account name that has the **System Administrator** profile in Jive.com assigned.
- b. In the **Jive Admin Password** textbox, type the password for this account.
+ 1. In the **Jive Admin Password** textbox, type the password for this account.
- c. In the **Jive Tenant URL** textbox, type the Jive tenant URL.
+ 1. In the **Jive Tenant URL** textbox, type the Jive tenant URL.
> [!NOTE] > The Jive tenant URL is the URL that is used by your organization to log in to Jive.
For more information on how to read the Azure AD provisioning logs, see [Reporti
* [Managing user account provisioning for Enterprise Apps](tutorial-list.md) * [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
-* [Configure Single Sign-on](jive-tutorial.md)
+* [Configure Single Sign-on](jive-tutorial.md)
active-directory Linkedinelevate Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/linkedinelevate-provisioning-tutorial.md
Before configuring and enabling the provisioning service, you will need to decid
This section guides you through connecting your Azure AD to LinkedIn Elevate's SCIM user account provisioning API, and configuring the provisioning service to create, update and disable assigned user accounts in LinkedIn Elevate based on user and group assignment in Azure AD.
-**Tip:** You may also choose to enabled SAML-based Single Sign-On for LinkedIn Elevate, following the instructions provided in [Azure portal](https://portal.azure.com). Single sign-on can be configured independently of automatic provisioning, though these two features complement each other.
+**Tip:** You may also choose to enable SAML-based Single Sign-On for LinkedIn Elevate, following the instructions provided in the [Azure portal](https://portal.azure.com). Single sign-on can be configured independently of automatic provisioning, though these two features complement each other.
### To configure automatic user account provisioning to LinkedIn Elevate in Azure AD:
-The first step is to retrieve your LinkedIn access token. If you are an Enterprise administrator, you can self-provision an
- access token. In your account center, go to **Settings &gt; Global Settings** and open the **SCIM Setup** panel.
+The first step is to retrieve your LinkedIn access token. If you are an Enterprise administrator, you can self-provision an access token. In your account center, go to **Settings &gt; Global Settings** and open the **SCIM Setup** panel.
> [!NOTE] > If you are accessing the account center directly rather than through a link, you can reach it using the following steps.
The first step is to retrieve your LinkedIn access token. If you are an Enterpri
![Screenshot shows the S C I M Setup page.](./media/linkedinelevate-provisioning-tutorial/linkedin_elevate2.PNG)
-5. Click **Generate token**. You should see your access token display
- under the **Access token** field.
+5. Click **Generate token**. You should see your access token display under the **Access token** field.
-6. Save your access token to your clipboard or computer before leaving
- the page.
+6. Save your access token to your clipboard or computer before leaving the page.
7. Next, sign in to the [Azure portal](https://portal.azure.com), and browse to the **Azure Active Directory > Enterprise Apps > All applications** section.
The first step is to retrieve your LinkedIn access token. If you are an Enterpri
* In the **Secret Token** field, enter the access token you generated in step 1 and click **Test Connection** .
- * You should see a success notification on the upper-right side of
- your portal.
+ * You should see a success notification on the upper-right side of your portal.
12. Enter the email address of a person or group who should receive provisioning error notifications in the **Notification Email** field, and check the checkbox below.
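If you would like to confirm the access token from step 1 outside the portal (the same thing **Test Connection** verifies), here is a minimal sketch of a SCIM request with the bearer token. The base URL is a placeholder to replace with the SCIM endpoint provided for your LinkedIn instance; the `/Users` path and `count` parameter come from the SCIM 2.0 standard (RFC 7644), not from a LinkedIn-specific contract.

```python
import json
import urllib.request

# Access token generated in the SCIM Setup panel (step 1 above).
token = "<ACCESS_TOKEN>"

# Placeholder: replace with the SCIM endpoint provided for your instance.
url = "https://<SCIM_BASE_URL>/scim/v2/Users?count=1"

req = urllib.request.Request(
    url,
    headers={"Authorization": f"Bearer {token}", "Accept": "application/scim+json"},
)
with urllib.request.urlopen(req) as resp:
    payload = json.loads(resp.read())
    # A 200 response with a totalResults field confirms the token is accepted.
    print(resp.status, payload.get("totalResults"))
```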
active-directory Linkedinsalesnavigator Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/linkedinsalesnavigator-provisioning-tutorial.md
Before configuring and enabling the provisioning service, you will need to decid
This section guides you through connecting your Azure AD to LinkedIn Sales Navigator's SCIM user account provisioning API, and configuring the provisioning service to create, update and disable assigned user accounts in LinkedIn Sales Navigator based on user and group assignment in Azure AD. > [!TIP]
-> You may also choose to enabled SAML-based Single Sign-On for LinkedIn Sales Navigator, following the instructions provided in [Azure portal](https://portal.azure.com). Single sign-on can be configured independently of automatic provisioning, though these two features complement each other.
+> You may also choose to enable SAML-based Single Sign-On for LinkedIn Sales Navigator, following the instructions provided in the [Azure portal](https://portal.azure.com). Single sign-on can be configured independently of automatic provisioning, though these two features complement each other.
### To configure automatic user account provisioning to LinkedIn Sales Navigator in Azure AD:
-The first step is to retrieve your LinkedIn access token. If you are an Enterprise administrator, you can self-provision an
- access token. In your account center, go to **Settings &gt; Global Settings** and open the **SCIM Setup** panel.
+The first step is to retrieve your LinkedIn access token. If you are an Enterprise administrator, you can self-provision an access token. In your account center, go to **Settings &gt; Global Settings** and open the **SCIM Setup** panel.
> [!NOTE] > If you are accessing the account center directly rather than through a link, you can reach it using the following steps.
The first step is to retrieve your LinkedIn access token. If you are an Enterpri
* In the **Secret Token** field, enter the access token you generated in step 1 and click **Test Connection** .
- * You should see a success notification on the upper-right side of
- your portal.
+ * You should see a success notification on the upper-right side of your portal.
12. Enter the email address of a person or group who should receive provisioning error notifications in the **Notification Email** field, and check the checkbox below.
active-directory Menlosecurity Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/menlosecurity-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
4. On the **Basic SAML Configuration** section, perform the following steps:
- a. In the **Sign on URL** text box, type a URL using the following pattern:
+ 1. In the **Sign on URL** text box, type a URL using the following pattern:
`https://<SUBDOMAIN>.menlosecurity.com/account/login`
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ 1. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
`https://<SUBDOMAIN>.menlosecurity.com/safeview-auth-server/saml/metadata`
- > [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [Menlo Security Client support team](https://www.menlosecurity.com/menlo-contact) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [Menlo Security Client support team](https://www.menlosecurity.com/menlo-contact) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
- ![The Certificate download link](common/certificatebase64.png)
+ ![The Certificate download link](common/certificatebase64.png)
6. On the **Set up Menlo Security** section, copy the appropriate URL(s) as per your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. To configure single sign-on on the **Menlo Security** side, log in to the **Menlo Security** website as an administrator. 2. Under **Settings**, go to **Authentication** and perform the following actions:
-
- ![Configure Single Sign-On](./media/menlosecurity-tutorial/authentication.png)
- a. Tick the checkbox **Enable user authentication using SAML**.
+ ![Configure Single Sign-On](./media/menlosecurity-tutorial/authentication.png)
- b. Select **Allow External Access** to **Yes**.
+ 1. Tick the checkbox **Enable user authentication using SAML**.
- c. Under **SAML Provider**, select **Azure Active Directory**.
+ 1. Select **Allow External Access** to **Yes**.
- d. **SAML 2.0 Endpoint** : Paste the **Login URL** which you have copied from Azure portal.
+ 1. Under **SAML Provider**, select **Azure Active Directory**.
- e. **Service Identifier (Issuer)** : Paste the **Azure AD Identifier** which you have copied from Azure portal.
+ 1. **SAML 2.0 Endpoint** : Paste the **Login URL** which you have copied from Azure portal.
- f. **X.509 Certificate** : Open the **Certificate (Base64)** downloaded from the Azure Portal in notepad and paste it in this box.
+ 1. **Service Identifier (Issuer)** : Paste the **Azure AD Identifier** which you have copied from Azure portal.
- g. Click **Save** to save the settings.
+ 1. **X.509 Certificate** : Open the **Certificate (Base64)** downloaded from the Azure portal in notepad and paste it in this box.
+
+ 1. Click **Save** to save the settings.
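Opening the **Certificate (Base64)** file in Notepad works fine; if you prefer a terminal, the optional sketch below prints the file for pasting and first sanity-checks that its body is valid Base64. The file name is an assumption, so use whatever name you saved the certificate under.

```python
import base64
from pathlib import Path

# Assumption: the Certificate (Base64) file downloaded from the Azure portal.
cert_path = Path("MenloSecurity.cer")

pem = cert_path.read_text()

# Check that the body between the BEGIN/END markers decodes as Base64 before
# pasting it into the X.509 Certificate box; b64decode raises an error if not.
body = "".join(
    line
    for line in pem.splitlines()
    if line and "BEGIN CERTIFICATE" not in line and "END CERTIFICATE" not in line
)
base64.b64decode(body)

print(pem)
```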
### Create Menlo Security test user
active-directory Ms Confluence Jira Plugin Adminguide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ms-confluence-jira-plugin-adminguide.md
The plug-in supports the following versions of Jira and Confluence:
* JIRA also supports 5.2. For more details, click [Microsoft Azure Active Directory single sign-on for JIRA 5.2](./jira52microsoft-tutorial.md). * Confluence: 5.0 to 5.10. * Confluence: 6.0.1 to 6.15.9.
-* Confluence: 7.0.1 to 8.3.0.
+* Confluence: 7.0.1 to 8.0.4.
## Installation
Confluence:
|Plugin Version | Release Notes | Supported JIRA versions | |--|-|-|
-| 6.3.9 | Bug Fixes: | Confluence Server: 7.20.3 to 8.3.0 |
+| 6.3.9 | Bug Fixes: | Confluence Server: 7.20.3 to 8.0.4 |
| | System Error: Metadata link cannot be configured on SSO plugins. | | | | | | | 6.3.8 | New Feature: | Confluence Server: 5.0 to 7.20.1 |
The plug-in supports these versions:
* JIRA also supports 5.2. For more details, click [Microsoft Azure Active Directory single sign-on for JIRA 5.2](./jira52microsoft-tutorial.md). * Confluence: 5.0 to 5.10. * Confluence: 6.0.1 to 6.15.9.
-* Confluence: 7.0.1 to 8.3.0.
+* Confluence: 7.0.1 to 8.0.4.
### Is the plug-in free or paid?
active-directory Salesforce Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/salesforce-provisioning-tutorial.md
Before configuring and enabling the provisioning service, you need to decide whi
This section guides you through connecting your Azure AD to [Salesforce's user account provisioning API - v40](https://developer.salesforce.com/docs/atlas.en-us.208.0.api.meta/api/implementation_considerations.htm), and configuring the provisioning service to create, update, and disable assigned user accounts in Salesforce based on user and group assignment in Azure AD. > [!Tip]
-> You may also choose to enabled SAML-based Single Sign-On for Salesforce, following the instructions provided in [Azure portal](https://portal.azure.com). Single sign-on can be configured independently of automatic provisioning, though these two features compliment each other.
+> You may also choose to enable SAML-based Single Sign-On for Salesforce, following the instructions provided in the [Azure portal](https://portal.azure.com). Single sign-on can be configured independently of automatic provisioning, though these two features complement each other.
### Configure automatic user account provisioning
The objective of this section is to outline how to enable user provisioning of A
5. Under the **Admin Credentials** section, provide the following configuration settings:
- a. In the **Admin Username** textbox, type a Salesforce account name that has the **System Administrator** profile in Salesforce.com assigned.
+ 1. In the **Admin Username** textbox, type a Salesforce account name that has the **System Administrator** profile in Salesforce.com assigned.
- b. In the **Admin Password** textbox, type the password for this account.
+ 1. In the **Admin Password** textbox, type the password for this account.
6. To get your Salesforce security token, open a new tab and sign into the same Salesforce admin account. On the top right corner of the page, click your name, and then click **Settings**.
active-directory Salesforce Sandbox Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/salesforce-sandbox-provisioning-tutorial.md
Before configuring and enabling the provisioning service, you need to decide whi
This section guides you through connecting your Azure AD to Salesforce Sandbox's user account provisioning API, and configuring the provisioning service to create, update, and disable assigned user accounts in Salesforce Sandbox based on user and group assignment in Azure AD. >[!Tip]
->You may also choose to enabled SAML-based Single Sign-On for Salesforce Sandbox, following the instructions provided in [Azure portal](https://portal.azure.com). Single sign-on can be configured independently of automatic provisioning, though these two features compliment each other.
+>You may also choose to enable SAML-based Single Sign-On for Salesforce Sandbox, following the instructions provided in the [Azure portal](https://portal.azure.com). Single sign-on can be configured independently of automatic provisioning, though these two features complement each other.
### Configure automatic user account provisioning
The objective of this section is to outline how to enable user provisioning of A
1. Under the **Admin Credentials** section, provide the following configuration settings:
- a. In the **Admin Username** textbox, type a Salesforce Sandbox account name that has the **System Administrator** profile in Salesforce.com assigned.
+ 1. In the **Admin Username** textbox, type a Salesforce Sandbox account name that has the **System Administrator** profile in Salesforce.com assigned.
- b. In the **Admin Password** textbox, type the password for this account.
+ 1. In the **Admin Password** textbox, type the password for this account.
1. To get your Salesforce Sandbox security token, open a new tab and sign into the same Salesforce Sandbox admin account. On the top right corner of the page, click your name, and then click **Settings**.
active-directory Sap Successfactors Inbound Provisioning Cloud Only Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-successfactors-inbound-provisioning-cloud-only-tutorial.md
This section provides steps for user account provisioning from SuccessFactors to
**To configure SuccessFactors to Azure AD provisioning:**
-1. Go to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. In the left navigation bar, select **Azure Active Directory**
In this section, you will configure how user data flows from SuccessFactors to A
>[!NOTE] >For the complete list of SuccessFactors attributes supported by the application, please refer to [SuccessFactors Attribute Reference](../app-provisioning/sap-successfactors-attribute-reference.md)
-1. Click on an existing attribute mapping to update it, or click **Add new mapping** at the bottom of the screen to add new
- mappings. An individual attribute mapping supports these properties:
+1. Click on an existing attribute mapping to update it, or click **Add new mapping** at the bottom of the screen to add new mappings. An individual attribute mapping supports these properties:
* **Mapping Type**
In this section, you will configure how user data flows from SuccessFactors to A
   * **Target attribute** – The user attribute in Active Directory.
-   * **Match objects using this attribute** – Whether or not this mapping should be used to uniquely identify users between
- SuccessFactors and Active Directory. This value is typically set on the Worker ID field for SuccessFactors, which is typically mapped to one of the Employee ID attributes in Active Directory.
+   * **Match objects using this attribute** – Whether or not this mapping should be used to uniquely identify users between SuccessFactors and Active Directory. This value is typically set on the Worker ID field for SuccessFactors, which is typically mapped to one of the Employee ID attributes in Active Directory.
-   * **Matching precedence** – Multiple matching attributes can be set. When there are multiple, they are evaluated in the
- order defined by this field. As soon as a match is found, no further matching attributes are evaluated.
+   * **Matching precedence** – Multiple matching attributes can be set. When there are multiple, they are evaluated in the order defined by this field. As soon as a match is found, no further matching attributes are evaluated (see the conceptual sketch after this list).
* **Apply this mapping**
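The interplay of matching attributes and matching precedence is easiest to see as a short loop: attributes are tried in precedence order and the first one that yields a match wins. The sketch below is purely conceptual (it is not the provisioning service's code) and the attribute names are illustrative only.

```python
def find_matching_user(worker: dict, existing_users: list[dict],
                       matching_attrs: list[tuple[str, str]]):
    """Conceptual illustration only: (source_attr, target_attr) pairs are
    evaluated in matching-precedence order; evaluation stops at the first
    attribute that produces a match."""
    for source_attr, target_attr in matching_attrs:
        value = worker.get(source_attr)
        if value is None:
            continue
        for user in existing_users:
            if user.get(target_attr) == value:
                return user  # first match wins; lower-precedence attributes are skipped
    return None  # no match found: treat the worker as a new user

# Illustrative precedence: try Worker ID first, then fall back to email.
matching_attrs = [("personIdExternal", "employeeID"), ("email", "mail")]
```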
Once the SuccessFactors provisioning app configurations have been completed, you
* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md) * [Learn how to configure single sign-on between SuccessFactors and Azure Active Directory](successfactors-tutorial.md) * [Learn how to integrate other SaaS applications with Azure Active Directory](tutorial-list.md)
-* [Learn how to export and import your provisioning configurations](../app-provisioning/export-import-provisioning-configuration.md)
+* [Learn how to export and import your provisioning configurations](../app-provisioning/export-import-provisioning-configuration.md)
active-directory Sap Successfactors Inbound Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-successfactors-inbound-provisioning-tutorial.md
Work with your SuccessFactors admin team or implementation partner to create or
1. In SuccessFactors Admin Center, search for *Manage Permission Roles*, then select **Manage Permission Roles** from the search results. 1. From the **Permission Role List**, select the role that you created for API usage permissions.
-1. Under **Grant this role to...**, click **Add...** button.
+1. Under **Grant this role to...**, click the **Add...** button.
1. Select **Permission Group...** from the drop-down menu, then click **Select...** to open the Groups window to search and select the group created above. > [!div class="mx-imgBorder"] > ![Add permission group](./media/sap-successfactors-inbound-provisioning/add-permission-group.png)
This section provides steps for user account provisioning from SuccessFactors to
**To configure SuccessFactors to Active Directory provisioning:**
-1. Go to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. In the left navigation bar, select **Azure Active Directory**
In this section, you will configure how user data flows from SuccessFactors to A
>[!NOTE] >For the complete list of SuccessFactors attributes supported by the application, please refer to [SuccessFactors Attribute Reference](../app-provisioning/sap-successfactors-attribute-reference.md)
-1. Click on an existing attribute mapping to update it, or click **Add new mapping** at the bottom of the screen to add new
- mappings. An individual attribute mapping supports these properties:
+1. Click on an existing attribute mapping to update it, or click **Add new mapping** at the bottom of the screen to add new mappings. An individual attribute mapping supports these properties:
* **Mapping Type**
In this section, you will configure how user data flows from SuccessFactors to A
   * **Target attribute** – The user attribute in Active Directory.
-   * **Match objects using this attribute** – Whether or not this mapping should be used to uniquely identify users between
- SuccessFactors and Active Directory. This value is typically set on the Worker ID field for SuccessFactors, which is typically mapped to one of the Employee ID attributes in Active Directory.
+   * **Match objects using this attribute** – Whether or not this mapping should be used to uniquely identify users between SuccessFactors and Active Directory. This value is typically set on the Worker ID field for SuccessFactors, which is typically mapped to one of the Employee ID attributes in Active Directory.
-   * **Matching precedence** – Multiple matching attributes can be set. When there are multiple, they are evaluated in the
- order defined by this field. As soon as a match is found, no further matching attributes are evaluated.
+   * **Matching precedence** – Multiple matching attributes can be set. When there are multiple, they are evaluated in the order defined by this field. As soon as a match is found, no further matching attributes are evaluated.
* **Apply this mapping**
Once the SuccessFactors provisioning app configurations have been completed and
* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md) * [Learn how to configure single sign-on between SuccessFactors and Azure Active Directory](successfactors-tutorial.md) * [Learn how to integrate other SaaS applications with Azure Active Directory](tutorial-list.md)
-* [Learn how to export and import your provisioning configurations](../app-provisioning/export-import-provisioning-configuration.md)
+* [Learn how to export and import your provisioning configurations](../app-provisioning/export-import-provisioning-configuration.md)
active-directory Sap Successfactors Writeback Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-successfactors-writeback-tutorial.md
Work with your SuccessFactors admin team or implementation partner to create or
1. In SuccessFactors Admin Center, search for *Manage Permission Roles*, then select **Manage Permission Roles** from the search results. 1. From the **Permission Role List**, select the role that you created for API usage permissions.
-1. Under **Grant this role to...**, click **Add...** button.
-1. Select **Permission Group...** from the drop-down menu, then click **Select...** to open the Groups window to search and select the group created above.
+1. Under **Grant this role to...**, click the **Add...** button.
+1. Select **Permission Group...** from the drop-down menu, then click **Select...** to open the Groups window to search and select the group created above.
> [!div class="mx-imgBorder"] > ![Add permission group](./media/sap-successfactors-inbound-provisioning/add-permission-group.png)
The SuccessFactors Writeback provisioning app uses certain *code* values for set
### Identify Email and Phone Number picklist names
-In SAP SuccessFactors, a *picklist* is a configurable set of options from which a user can make a selection. The different types of email and phone number (e.g. business, personal, other) are represented using a picklist. In this step, we will identify the picklists configured in your SuccessFactors tenant to store email and phone number values.
+In SAP SuccessFactors, a *picklist* is a configurable set of options from which a user can make a selection. The different types of email and phone number (such as business, personal, and other) are represented using a picklist. In this step, we will identify the picklists configured in your SuccessFactors tenant to store email and phone number values.
1. In SuccessFactors Admin Center, search for *Manage business configuration*.
In SAP SuccessFactors, a *picklist* is a configurable set of options from which
### Retrieve constant value for emailType 1. In SuccessFactors Admin Center, search and open *Picklist Center*.
-1. Use the name of the email picklist captured from the previous section (e.g. ecEmailType) to find the email picklist.
+1. Use the name of the email picklist captured from the previous section (such as ecEmailType) to find the email picklist.
> [!div class="mx-imgBorder"] > ![Find email type picklist](./media/sap-successfactors-inbound-provisioning/find-email-type-picklist.png)
In SAP SuccessFactors, a *picklist* is a configurable set of options from which
> ![Get email type code](./media/sap-successfactors-inbound-provisioning/get-email-type-code.png) > [!NOTE]
- > Drop the comma character when you copy over the value. For e.g. if the **Option ID** value is *8,448*, then set the *emailType* in Azure AD to the constant number *8448* (without the comma character).
+ > Drop the comma character when you copy over the value. For example, if the **Option ID** value is *8,448*, then set the *emailType* in Azure AD to the constant number *8448* (without the comma character).
### Retrieve constant value for phoneType
In SAP SuccessFactors, a *picklist* is a configurable set of options from which
> ![Get cell phone code](./media/sap-successfactors-inbound-provisioning/get-cell-phone-code.png) > [!NOTE]
- > Drop the comma character when you copy over the value. For e.g. if the **Option ID** value is *10,606*, then set the *cellPhoneType* in Azure AD to the constant number *10606* (without the comma character).
+ > Drop the comma character when you copy over the value. For example, if the **Option ID** value is *10,606*, then set the *cellPhoneType* in Azure AD to the constant number *10606* (without the comma character).
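The comma shown in the **Option ID** column is only thousands formatting; the constant stored in Azure AD is the plain integer. A trivial sketch of the conversion, using the two example values from the notes above:

```python
def picklist_option_id_to_constant(option_id: str) -> int:
    """Convert a formatted Option ID from Picklist Center (for example '8,448')
    into the plain integer constant expected by the provisioning app."""
    return int(option_id.replace(",", ""))

print(picklist_option_id_to_constant("8,448"))   # emailType -> 8448
print(picklist_option_id_to_constant("10,606"))  # cellPhoneType -> 10606
```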
## Configuring SuccessFactors Writeback App
This section provides steps for
**To configure SuccessFactors Writeback:**
-1. Go to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. In the left navigation bar, select **Azure Active Directory**
active-directory Sprinklr Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sprinklr-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
4. On the **Basic SAML Configuration** section, perform the following steps:
- a. In the **Sign on URL** text box, type a URL using the following pattern:
+ 1. In the **Sign on URL** text box, type a URL using the following pattern:
`https://<SUBDOMAIN>.sprinklr.com`
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ 1. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
`https://<SUBDOMAIN>.sprinklr.com` > [!NOTE]
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
![Screenshot shows the Single Sign on page where you can enter the values described.](./media/sprinklr-tutorial/configuration.png "Single Sign-Ons")
- a. In the **Name** textbox, type a name for your configuration (for example: **WAADSSOTest**).
+ 1. In the **Name** textbox, type a name for your configuration (for example: **WAADSSOTest**).
- b. Select **Enabled**.
+ 1. Select **Enabled**.
- c. Select **Use new SSO Certificate**.
+ 1. Select **Use new SSO Certificate**.
- d. Open your base-64 encoded certificate in notepad, copy the content of it into your clipboard, and then paste it to the **Identity Provider Certificate** textbox.
+ 1. Open your base-64 encoded certificate in notepad, copy the content of it into your clipboard, and then paste it to the **Identity Provider Certificate** textbox.
- e. Paste the **Azure AD Identifier** value which you have copied from Azure Portal into the **Entity Id** textbox.
+   1. Paste the **Azure AD Identifier** value which you have copied from the Azure portal into the **Entity Id** textbox.
- f. Paste the **Login URL** value which you have copied from Azure Portal into the **Identity Provider Login URL** textbox.
+   1. Paste the **Login URL** value which you have copied from the Azure portal into the **Identity Provider Login URL** textbox.
- g. Paste the **Logout URL** value which you have copied from Azure Portal into the **Identity Provider Logout URL** textbox.
+   1. Paste the **Logout URL** value which you have copied from the Azure portal into the **Identity Provider Logout URL** textbox.
-   h. As **SAML User ID Type**, select **Assertion contains User's sprinklr.com username**.
+   1. As **SAML User ID Type**, select **Assertion contains User's sprinklr.com username**.
- i. As **SAML User ID Location**, select **User ID is in the Name Identifier element of the Subject statement**.
+ 1. As **SAML User ID Location**, select **User ID is in the Name Identifier element of the Subject statement**.
- j. Click **Save**.
+ 1. Click **Save**.
![SAML](./media/sprinklr-tutorial/save-configuration.png "SAML")
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
![Edit user](./media/sprinklr-tutorial/update-users.png "Edit user")
- a. In the **Email**, **First Name** and **Last Name** textboxes, type the information of an Azure AD user account you want to provision.
+ 1. In the **Email**, **First Name** and **Last Name** textboxes, type the information of an Azure AD user account you want to provision.
- b. Select **Password Disabled**.
+ 1. Select **Password Disabled**.
- c. Select **Language**.
+ 1. Select **Language**.
- d. Select **User Type**.
+ 1. Select **User Type**.
- e. Click **Update**.
+ 1. Click **Update**.
> [!IMPORTANT] > **Password Disabled** must be selected to enable a user to log in via an Identity provider.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
![Partner Roles](./media/sprinklr-tutorial/role.png "Partner Roles")
- a. From the **Global** list, select **ALL_Permissions**.
+ 1. From the **Global** list, select **ALL_Permissions**.
- b. Click **Update**.
+ 1. Click **Update**.
> [!NOTE] > You can use any other Sprinklr user account creation tools or APIs provided by Sprinklr to provision Azure AD user accounts.
active-directory Thousandeyes Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/thousandeyes-provisioning-tutorial.md
Before configuring and enabling the provisioning service, you need to decide wha
This section guides you through connecting your Azure AD to ThousandEyes's user account provisioning API, and configuring the provisioning service to create, update, and disable assigned user accounts in ThousandEyes based on user and group assignment in Azure AD. > [!TIP]
-> You may also choose to enabled SAML-based Single Sign-On for ThousandEyes, following the instructions provided in [Azure portal](https://portal.azure.com). Single sign-on can be configured independently of automatic provisioning, though these two features compliment each other.
+> You may also choose to enable SAML-based Single Sign-On for ThousandEyes, following the instructions provided in the [Azure portal](https://portal.azure.com). Single sign-on can be configured independently of automatic provisioning, though these two features complement each other.
### Configure automatic user account provisioning to ThousandEyes in Azure AD
This section guides you through connecting your Azure AD to ThousandEyes's user
![Screenshot shows the Provisioning tab for ThousandEyes with Automatic selected for Provisioning Mode.](./media/thousandeyes-provisioning-tutorial/ThousandEyes1.png)
-5. Under the **Admin Credentials** section, input the **OAuth Bearer Token**
-generated by your ThousandEyes's account (you can find and or generate a token under your ThousandEyes account **Profile** section).
+5. Under the **Admin Credentials** section, input the **OAuth Bearer Token** generated by your ThousandEyes account (you can find or generate a token under your ThousandEyes account **Profile** section).
![Screenshot shows where to find the Account Settings link for the Current Account Group.](./media/thousandeyes-provisioning-tutorial/ThousandEyes2.png)
Once you've configured provisioning, use the following resources to monitor your
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Tonicdm Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tonicdm-tutorial.md
Previously updated : 11/21/2022 Last updated : 07/20/2022 # Tutorial: Azure AD SSO integration with TonicDM
To configure Azure AD integration with TonicDM, you need the following items:
In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* TonicDM supports **SP** initiated SSO.
+* TonicDM supports **SP** and **IDP** initiated SSO.
* TonicDM supports **Just In Time** user provisioning.
To configure the integration of TonicDM into Azure AD, you need to add TonicDM f
1. In the **Add from the gallery** section, type **TonicDM** in the search box. 1. Select **TonicDM** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
- Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+ Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
## Configure and test Azure AD SSO for TonicDM
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Configuration")
-4. On the **Basic SAML Configuration** section, perform the following steps:
+1. In the **Basic SAML Configuration** section, you don't need to perform any steps because the app is already pre-integrated with Azure.
- a. In the **Identifier (Entity ID)** text box, type the URL:
- `https://tonicdm.com/saml/metadata`
+1. Perform the following step, if you wish to configure the application in **SP** initiated mode:
- b. In the **Sign on URL** text box, type the URL:
- `https://tonicdm.com/`
+ In the **Sign on URL** text box, type the URL:
+ `https://app.tonicdm.com/logon`
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
- ![The Certificate download link](common/certificatebase64.png)
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
6. On the **Set up TonicDM** section, copy the appropriate URL(s) as per your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Attributes")
### Create an Azure AD test user
In this section, you create a user called Britta Simon in TonicDM. Work with [To
## Test SSO
-In this section, you test your Azure AD single sign-on configuration with following options.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-* Click on **Test this application** in Azure portal. This will redirect to TonicDM Sign-on URL where you can initiate the login flow.
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect to the TonicDM Sign on URL where you can initiate the login flow.
* Go to TonicDM Sign-on URL directly and initiate the login flow from there.
-* You can use Microsoft My Apps. When you click the TonicDM tile in the My Apps, this will redirect to TonicDM Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the TonicDM instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the TonicDM tile in My Apps, if configured in SP mode you're redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you're automatically signed in to the TonicDM instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
active-directory Velpic Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/velpic-provisioning-tutorial.md
Before configuring and enabling the provisioning service, you will need to decid
This section guides you through connecting your Azure AD to Velpic's user account provisioning API, and configuring the provisioning service to create, update and disable assigned user accounts in Velpic based on user and group assignment in Azure AD. > [!TIP]
-> You may also choose to enabled SAML-based Single Sign-On for Velpic, following the instructions provided in [Azure portal](https://portal.azure.com). Single sign-on can be configured independently of automatic provisioning, though these two features compliment each other.
+> You may also choose to enable SAML-based Single Sign-On for Velpic, following the instructions provided in the [Azure portal](https://portal.azure.com). Single sign-on can be configured independently of automatic provisioning, though these two features complement each other.
### To configure automatic user account provisioning to Velpic in Azure AD:
active-directory Workday Inbound Cloud Only Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workday-inbound-cloud-only-tutorial.md
The following sections describe steps for configuring user provisioning from Wor
**To configure Workday to Azure Active Directory provisioning for cloud-only users:**
-1. Go to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. In the Azure portal, search for and select **Azure Active Directory**.
The following sections describe steps for configuring user provisioning from Wor
* Click the **Test Connection** button.
- * If the connection test succeeds, click the **Save** button at
- the top. If it fails, double-check that the Workday URL and credentials are valid
- in Workday.
+ * If the connection test succeeds, click the **Save** button at the top. If it fails, double-check that the Workday URL and credentials are valid in Workday.
### Part 2: Configure Workday and Azure AD attribute mappings
In this section, you will configure how user data flows from Workday to Azure Ac
4. In the **Attribute mappings** section, you can define how individual Workday attributes map to Active Directory attributes.
-5. Click on an existing attribute mapping to update it, or click **Add new mapping** at the bottom of the screen to add new
- mappings. An individual attribute mapping supports these properties:
+5. Click on an existing attribute mapping to update it, or click **Add new mapping** at the bottom of the screen to add new mappings. An individual attribute mapping supports these properties:
* **Mapping Type**
In this section, you will configure how user data flows from Workday to Azure Ac
* **Constant** - Write a static, constant string value to the AD attribute
- * **Expression** ΓÇô Allows you to write a custom value to the AD attribute, based on one or more Workday
- attributes. [For more info, see this article on expressions](../app-provisioning/functions-for-customizing-application-data.md).
+ * **Expression** - Allows you to write a custom value to the AD attribute, based on one or more Workday attributes. [For more info, see this article on expressions](../app-provisioning/functions-for-customizing-application-data.md).
* **Source attribute** - The user attribute from Workday. If the attribute you are looking for is not present, see [Customizing the list of Workday user attributes](workday-inbound-tutorial.md#customizing-the-list-of-workday-user-attributes).
In this section, you will configure how user data flows from Workday to Azure Ac
* **Target attribute** - The user attribute in Azure AD.
- * **Match objects using this attribute** ΓÇô Whether or not this attribute should be used to uniquely identify users between
- Workday and Azure AD. This value is typically set on the Worker ID field for Workday, which is typically mapped to
- the Employee ID attribute (new) or an extension attribute in Azure AD.
+ * **Match objects using this attribute** - Whether or not this attribute should be used to uniquely identify users between Workday and Azure AD. This value is typically set on the Worker ID field for Workday, which is typically mapped to the Employee ID attribute (new) or an extension attribute in Azure AD.
- * **Matching precedence** ΓÇô Multiple matching attributes can be set. When there are multiple, they are evaluated in the
- order defined by this field. As soon as a match is found, no further matching attributes are evaluated.
+ * **Matching precedence** - Multiple matching attributes can be set. When there are multiple, they are evaluated in the order defined by this field. As soon as a match is found, no further matching attributes are evaluated.
* **Apply this mapping**
Once the Workday provisioning app configurations have been completed, you can tu
* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md) * [Learn how to configure single sign-on between Workday and Azure Active Directory](workday-tutorial.md) * [Learn how to export and import your provisioning configurations](../app-provisioning/export-import-provisioning-configuration.md)--
active-directory Workday Inbound Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workday-inbound-tutorial.md
In this step, you'll grant "domain security" policy permissions for the worker d
>[!div class="mx-imgBorder"] >![Select Security Group](./media/workday-inbound-tutorial/select-security-group-workday.png)
-1. Click on the ellipsis (...) next to the group name and from the menu, select **Security Group > Maintain Domain Permissions for Security Group**
+1. Click on the ellipsis (`...`) next to the group name and from the menu, select **Security Group > Maintain Domain Permissions for Security Group**
>[!div class="mx-imgBorder"] >![Select Maintain Domain Permissions](./media/workday-inbound-tutorial/select-maintain-domain-permissions.png)
This section provides steps for user account provisioning from Workday to each A
**To configure Workday to Active Directory provisioning:**
-1. Go to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. In the Azure portal, search for and select **Azure Active Directory**.
In this section, you will configure how user data flows from Workday to Active D
1. In the **Attribute mappings** section, you can define how individual Workday attributes map to Active Directory attributes.
-1. Click on an existing attribute mapping to update it, or click **Add new mapping** at the bottom of the screen to add new
- mappings. An individual attribute mapping supports these properties:
+1. Click on an existing attribute mapping to update it, or click **Add new mapping** at the bottom of the screen to add new mappings. An individual attribute mapping supports these properties:
* **Mapping Type**
In this section, you will configure how user data flows from Workday to Active D
* **Target attribute** - The user attribute in Active Directory.
- * **Match objects using this attribute** ΓÇô Whether or not this mapping should be used to uniquely identify users between
- Workday and Active Directory. This value is typically set on the Worker ID field for Workday, which is typically mapped to one of the Employee ID attributes in Active Directory.
+ * **Match objects using this attribute** - Whether or not this mapping should be used to uniquely identify users between Workday and Active Directory. This value is typically set on the Worker ID field for Workday, which is typically mapped to one of the Employee ID attributes in Active Directory.
- * **Matching precedence** ΓÇô Multiple matching attributes can be set. When there are multiple, they are evaluated in the
- order defined by this field. As soon as a match is found, no further matching attributes are evaluated.
+ * **Matching precedence** - Multiple matching attributes can be set. When there are multiple, they are evaluated in the order defined by this field. As soon as a match is found, no further matching attributes are evaluated.
* **Apply this mapping**
The solution currently does not support setting binary attributes such as *thumb
#### How do I format display names in AD based on the user's department/country/city attributes and handle regional variances?
-It is a common requirement to configure the *displayName* attribute in AD so that it also provides information about the user's department and country/region. For e.g. if John Smith works in the Marketing Department in US, you might want his *displayName* to show up as *Smith, John (Marketing-US)*.
+It is a common requirement to configure the *displayName* attribute in AD so that it also provides information about the user's department and country/region. For example, if John Smith works in the Marketing Department in the US, you might want his *displayName* to show up as *Smith, John (Marketing-US)*.
Here is how you can handle such requirements for constructing *CN* or *displayName* to include attributes such as company, business unit, city, or country/region.
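For example, a sketch of an expression you could use for *displayName* is shown below. The attribute names in square brackets are illustrative assumptions; substitute the Workday attributes that are actually available in your mapping. `Join` concatenates its source values, using the separator passed as the first argument (an empty string here).

```
Join("", [PreferredLastName], ", ", [PreferredFirstName], " (", [SupervisoryOrganization], "-", [CountryReferenceTwoLetter], ")")
```

To handle regional variances, you can wrap an expression like this in a `Switch()` on a country or region attribute and return a differently formatted name for each region.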
active-directory Workday Mobile Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workday-mobile-tutorial.md
To set up Workday as a managed device, perform the following steps:
1. In **Cloud apps or actions**:
- a. Switch **Select what this policy applies to** to **Cloud apps**.
+ 1. Switch **Select what this policy applies to** to **Cloud apps**.
- b. In **Include**, choose **Select apps**.
+ 1. In **Include**, choose **Select apps**.
- c. From the **Select** list, choose **Workday**.
+ 1. From the **Select** list, choose **Workday**.
- d. Select **Done**.
+ 1. Select **Done**.
1. Switch **Enable policy** to **On**.
For **Grant** access, perform the following steps:
1. In **Grant**:
- a. Select the controls to be enforced as **Grant access**.
+ 1. Select the controls to be enforced as **Grant access**.
- b. Select **Require device to be marked as compliant**.
+ 1. Select **Require device to be marked as compliant**.
- c. Select **Require one of the selected controls**.
+ 1. Select **Require one of the selected controls**.
- d. Choose **Select**.
+ 1. Choose **Select**.
1. Switch **Enable policy** to **On**.
To ensure that iOS devices are only able to sign in through Workday managed by m
## iOS configuration policies
-1. Go to the [Azure portal](https://portal.azure.com/), and sign in.
+1. Sign in to the [Azure portal](https://portal.azure.com/).
1. Search for **Intune** or select the widget from the list. 1. Go to **Client Apps** > **Apps** > **App Configuration Policies**. Then select **+ Add** > **Managed Devices**. 1. Enter a name.
To ensure that iOS devices are only able to sign in through Workday managed by m
## Android configuration policies
-1. Go to the [Azure portal](https://portal.azure.com/), and sign in.
+1. Sign in to the [Azure portal](https://portal.azure.com/).
2. Search for **Intune** or select the widget from the list. 3. Go to **Client Apps** > **Apps** > **App Configuration Policies**. Then select **+ Add** > **Managed Devices**. 5. Enter a name.
active-directory Workday Writeback Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workday-writeback-tutorial.md
Follow these instructions to configure writeback of user email addresses and use
**To configure Workday Writeback connector:**
-1. Go to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. In the Azure portal, search for and select **Azure Active Directory**.
Follow these instructions to configure writeback of user email addresses and use
8. Complete the **Admin Credentials** section as follows:
- * **Admin Username** ΓÇô Enter the username of the Workday integration system account, with the tenant domain name
- appended. Should look something like: *username\@contoso4*
+ * **Admin Username** - Enter the username of the Workday integration system account, with the tenant domain name appended. Should look something like: *username\@contoso4*
* **Admin password** - Enter the password of the Workday integration system account
Follow these instructions to configure writeback of user email addresses and use
* **Notification Email** - Enter your email address, and check the "send email if failure occurs" checkbox.
- * Click the **Test Connection** button. If the connection test succeeds, click the **Save** button at
- the top. If it fails, double-check that the Workday URL and credentials are valid in Workday.
+ * Click the **Test Connection** button. If the connection test succeeds, click the **Save** button at the top. If it fails, double-check that the Workday URL and credentials are valid in Workday.
### Part 2: Configure writeback attribute mappings
Once the Workday provisioning app configurations have been completed, you can tu
* [Learn how to configure single sign-on between Workday and Azure Active Directory](workday-tutorial.md) * [Learn how to integrate other SaaS applications with Azure Active Directory](tutorial-list.md) * [Learn how to export and import your provisioning configurations](../app-provisioning/export-import-provisioning-configuration.md)-
active-directory Workplace By Facebook Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workplace-by-facebook-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
![Screenshot shows Admin Credentials dialog box with an Authorize option.](./media/workplace-by-facebook-provisioning-tutorial/provisionings.png)
- ![authorize](./media/workplace-by-facebook-provisioning-tutorial/workplace-login.png)
+ ![Authorize](./media/workplace-by-facebook-provisioning-tutorial/workplace-login.png)
> [!NOTE] > Failure to change the URL to https://scim.workplace.com/ will result in a failure when trying to save the configuration
In December 2021, Facebook released a SCIM 2.0 connector. Completing the steps b
> [!NOTE] > Be sure to note any changes that have been made to the settings listed above before completing the steps below. Failure to do so will result in the loss of customized settings.
-1. Sign into the Azure portal at https://portal.azure.com
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. Navigate to your current Workplace by Facebook app under Azure Active Directory > Enterprise Applications 3. In the Properties section of your new custom app, copy the Object ID.
active-directory Admin Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/admin-api.md
Content-type: application/json
### Remarks
->You can create multiple authorities with their own DID and private keys, these will not be visible in the UI of the azure portal. Currently we only support having 1 authority. We have not fully tested all scenarios with multiple created authorities. If you are trying this please let us know your experience.
+>You can create multiple authorities with their own DID and private keys; these won't be visible in the Azure portal UI. Currently, we only support having one authority. We haven't fully tested all scenarios with multiple created authorities. If you're trying this, please let us know your experience.
### Update authority
ai-services How To Create Immersive Reader https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/how-to-create-immersive-reader.md
For more information, _see_ [Azure AD built-in roles](../../active-directory/rol
Manage your Azure AD application secrets
- ![Azure Portal Certificates and Secrets blade](./media/client-secrets-blade.png)
+ ![Azure portal Certificates and Secrets blade](./media/client-secrets-blade.png)
1. Copy the JSON output into a text file for later use. The output should look like the following.
ai-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md
description: Learn about the different model capabilities that are available wit
Previously updated : 07/12/2023 Last updated : 07/20/2023
Currently, we offer three families of Embeddings models for different functional
The DALL-E models, currently in preview, generate images from text prompts that the user provides. - ## Model summary table and region availability > [!IMPORTANT]
-> South Central US is temporarily unavailable for creating new resources due to high demand.
+> South Central US and East US are temporarily unavailable for creating new resources and deployments due to high demand.
### GPT-4 models
GPT-3.5 Turbo is used with the Chat Completion API. GPT-3.5 Turbo (0301) can als
| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) | | | | - | -- | - | | `gpt-35-turbo`<sup>1</sup> (0301) | East US, France Central, South Central US, UK South, West Europe | N/A | 4,096 | Sep 2021 |
-| `gpt-35-turbo` (0613) | East US, France Central, UK South | N/A | 4,096 | Sep 2021 |
-| `gpt-35-turbo-16k` (0613) | East US, France Central, UK South | N/A | 16,384 | Sep 2021 |
+| `gpt-35-turbo` (0613) | East US, France Central, Japan East, North Central US, UK South | N/A | 4,096 | Sep 2021 |
+| `gpt-35-turbo-16k` (0613) | East US, France Central, Japan East, North Central US, UK South | N/A | 16,384 | Sep 2021 |
<sup>1</sup> Version `0301` of gpt-35-turbo will be retired on January 4, 2024. See [model updates](#model-updates) for model upgrade behavior.
curl -X PUT https://management.azure.com/subscriptions/00000000-0000-0000-0000-0
``` > [!NOTE]
-> There are multiple ways to generate an authorization token. The easiest method for initial testing is to launch the Cloud Shell from https://portal.azure.com. Then run [`az account get-access-token`](/cli/azure/account?view=azure-cli-latest#az-account-get-access-token&preserve-view=true). You can use this token as your temporary authorization token for API testing.
+> There are multiple ways to generate an authorization token. The easiest method for initial testing is to launch the Cloud Shell from the [Azure portal](https://portal.azure.com). Then run [`az account get-access-token`](/cli/azure/account?view=azure-cli-latest#az-account-get-access-token&preserve-view=true). You can use this token as your temporary authorization token for API testing.
#### Example response
ai-services Function Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/function-calling.md
+
+ Title: How to use function calling with Azure OpenAI Service
+
+description: Learn how to use function calling with the GPT-35-Turbo and GPT-4 models
+++++ Last updated : 07/20/2023+++
+# How to use function calling with Azure OpenAI Service
+
+The latest versions of gpt-35-turbo and gpt-4 have been fine-tuned to work with functions and are able to both determine when and how a function should be called. If one or more functions are included in your request, the model will then determine if any of the functions should be called based on the context of the prompt. When the model determines that a function should be called, it will then respond with a JSON object including the arguments for the function.
+
+This provides a native way for these models to formulate API calls and structure data outputs, all based on the functions you specify. It's important to note that while the models can generate these calls, it's up to you to execute them, ensuring you remain in control.
+
+At a high level you can break down working with functions into three steps:
+1. Call the chat completions API with your functions and the user's input
+2. Use the model's response to call your API or function
+3. Call the chat completions API again, including the response from your function to get a final response
+
+## Using functions in the chat completions API
+
+Function calling is available in the `2023-07-01-preview` API version and works with version 0613 of gpt-35-turbo, gpt-35-turbo-16k, gpt-4, and gpt-4-32k.
+
+To use function calling with the Chat Completions API, you need to include two new properties in your request: `functions` and `function_call`. You can include one or more `functions` in your request and you can learn more about how to define functions in the [defining functions](#defining-functions) section below. Keep in mind that functions are injected into the system message under the hood so functions count against your token usage.
+
+When functions are provided, by default the `function_call` will be set to `"auto"` and the model will decide whether or not a function should be called. Alternatively, you can set the `function_call` parameter to `{"name": "<insert-function-name>"}` to force the API to call a specific function or you can set the parameter to `"none"` to prevent the model from calling any functions.
+
+```python
+# Note: The openai-python library support for Azure OpenAI is in preview.
+import os
+import openai
+
+openai.api_key = os.getenv("AZURE_OPENAI_KEY")
+openai.api_version = "2023-07-01-preview"
+openai.api_type = "azure"
+openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
+
+messages= [
+ {"role": "user", "content": "Find beachfront hotels in San Diego for less than $300 a month with free breakfast."}
+]
+
+functions= [
+ {
+ "name": "search_hotels",
+ "description": "Retrieves hotels from the search index based on the parameters provided",
+ "parameters": {
+ "type": "object",
+ "properties": {
+ "location": {
+ "type": "string",
+ "description": "The location of the hotel (i.e. Seattle, WA)"
+ },
+ "max_price": {
+ "type": "number",
+ "description": "The maximum price for the hotel"
+ },
+ "features": {
+ "type": "string",
+ "description": "A comma separated list of features (i.e. beachfront, free wifi, etc.)"
+ }
+ },
+ "required": ["location"],
+ },
+ }
+]
+
+response = openai.ChatCompletion.create(
+ engine="gpt-35-turbo-0613",
+ messages=messages,
+ functions=functions,
+ function_call="auto",
+)
+
+print(response['choices'][0]['message'])
+```
+
+The response from the API includes a `function_call` property if the model determines that a function should be called. The `function_call` property includes the name of the function to call and the arguments to pass to the function. The arguments are a JSON string that you can parse and use to call your function.
+
+```json
+{
+ "role": "assistant",
+ "function_call": {
+ "name": "search_hotels",
+ "arguments": "{\n \"location\": \"San Diego\",\n \"max_price\": 300,\n \"features\": \"beachfront,free breakfast\"\n}"
+ }
+}
+```
+
+In some cases, the model may generate both `content` and a `function_call`. For example, for the prompt above the content could say something like "Sure, I can help you find some hotels in San Diego that match your criteria" along with the function_call.
+
+## Working with function calling
+
+The following section goes into additional detail on how to effectively use functions with the Chat Completions API.
+
+### Defining functions
+
+A function has three main parameters: `name`, `description`, and `parameters`. The `description` parameter is used by the model to determine when and how to call the function so it's important to give a meaningful description of what the function does.
+
+`parameters` is a JSON schema object that describes the parameters that the function accepts. You can learn more about JSON schema objects in the [JSON schema reference](https://json-schema.org/understanding-json-schema/).
+
+If you want to describe a function that doesn't accept any parameters, use `{"type": "object", "properties": {}}` as the value for the `parameters` property.
+
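+For instance, a complete definition for a function that takes no arguments might look like the following (the `get_current_time` name and description are illustrative, not part of the API):
+
+```json
+{
+    "name": "get_current_time",
+    "description": "Gets the current UTC date and time",
+    "parameters": {"type": "object", "properties": {}}
+}
+```
+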
+### Managing the flow with functions
+
+```python
+response = openai.ChatCompletion.create(
+ deployment_id="gpt-35-turbo-0613",
+ messages=messages,
+ functions=functions,
+ function_call="auto",
+)
+response_message = response["choices"][0]["message"]
+
+# Check if the model wants to call a function
+if response_message.get("function_call"):
+
+ # Call the function. The JSON response may not always be valid so make sure to handle errors
+ function_name = response_message["function_call"]["name"]
+
+ available_functions = {
+ "search_hotels": search_hotels,
+ }
+    function_to_call = available_functions[function_name]
+    function_args = json.loads(response_message["function_call"]["arguments"])
+    function_response = function_to_call(**function_args)
+
+ # Add the assistant response and function response to the messages
+ messages.append( # adding assistant response to messages
+ {
+ "role": response_message["role"],
+ "name": response_message["function_call"]["name"],
+ "content": response_message["function_call"]["arguments"],
+ }
+ )
+ messages.append( # adding function response to messages
+ {
+ "role": "function",
+ "name": function_name,
+ "content": function_response,
+ }
+ )
+
+ # Call the API again to get the final response from the model
+ second_response = openai.ChatCompletion.create(
+ messages=messages,
+ deployment_id="gpt-35-turbo-0613"
+ # optionally, you could provide functions in the second call as well
+ )
+ print(second_response["choices"][0]["message"])
+else:
+ print(response["choices"][0]["message"])
+```
+
+In the example above, we don't do any validation or error handling so you'll want to make sure to add that to your code.
+
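+For example, a minimal sketch of that validation might look like the following. The `call_function_safely` helper is illustrative and not part of the sample above.
+
+```python
+import json
+
+def call_function_safely(response_message, available_functions):
+    # Only call functions that were explicitly provided to the model.
+    function_name = response_message["function_call"]["name"]
+    if function_name not in available_functions:
+        return None
+    try:
+        function_args = json.loads(response_message["function_call"]["arguments"])
+    except json.JSONDecodeError:
+        # The model may occasionally emit arguments that aren't valid JSON.
+        return None
+    return available_functions[function_name](**function_args)
+```
+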
+For a full example of working with functions, see the [sample notebook on function calling](https://aka.ms/oai/functions-samples). You can also apply more complex logic to chain multiple function calls together, which is covered in the sample as well.
+
+### Prompt engineering with functions
+
+When you define a function as part of your request, the details are injected into the system message using specific syntax that the model has been trained on. This means that functions consume tokens in your prompt and that you can apply prompt engineering techniques to optimize the performance of your function calls. The model uses the full context of the prompt to determine if a function should be called, including the function definitions, the system message, and the user messages.
+
+#### Improving quality and reliability
+If the model isn't calling your function when or how you expect, there are a few things you can try to improve the quality.
+
+##### Provide more details in your function definition
+It's important that you provide a meaningful `description` of the function and provide descriptions for any parameter that might not be obvious to the model. For example, in the description for the `location` parameter, you could include extra details and examples on the format of the location.
+```json
+"location": {
+ "type": "string",
+ "description": "The location of the hotel. The location should include the city and the state's abbreviation (i.e. Seattle, WA or Miami, FL)"
+},
+```
+
+##### Provide more context in the system message
+The system message can also be used to provide more context to the model. For example, if you have a function called `search_hotels` you could include a system message like the following to instruct the model to call the function when a user asks for help with finding a hotel.
+```json
+{"role": "system", "content": "You're an AI assistant designed to help users search for hotels. When a user asks for help finding a hotel, you should call the search_hotels function."}
+```
+
+##### Instruct the model to ask clarifying questions
+In some cases, you may want to instruct the model to ask clarifying questions. This is helpful to prevent the model from making assumptions about what values to use with functions. For example, with `search_hotels` you would want the model to ask for clarification if the user request didn't include details on `location`. To instruct the model to ask a clarifying question, you could include content like the following in your system message.
+```json
+{"role": "system", "content": "Don't make assumptions about what values to use with functions. Ask for clarification if a user request is ambiguous."}
+```
+
+#### Reducing errors
+
+Another area where prompt engineering can be valuable is in reducing errors in function calls. The models have been trained to generate function calls matching the schema that you define, but they may produce a function call that doesn't match the schema you defined, or they may try to call a function that you didn't include.
+
+If you find the model is generating function calls that weren't provided, try including a sentence in the system message that says `"Only use the functions you have been provided with."`.
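+
+For example, extending the earlier hotel-search system message:
+```json
+{"role": "system", "content": "You're an AI assistant designed to help users search for hotels. Only use the functions you have been provided with."}
+```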
+
+## Using function calling responsibly
+Like any AI system, using function calling to integrate language models with other tools and systems presents potential risks. It's important to understand the risks that function calling could present and take measures to ensure you use the capabilities responsibly.
+
+Here are a few tips to help you use functions safely and securely:
+* **Validate Function Calls**: Always verify the function calls generated by the model. This includes checking the parameters, the function being called, and ensuring that the call aligns with the intended action.
+* **Use Trusted Data and Tools**: Only use data from trusted and verified sources. Untrusted data in a function's output could be used to instruct the model to write function calls in a way other than you intended.
+* **Follow the Principle of Least Privilege**: Grant only the minimum access necessary for the function to perform its job. This reduces the potential impact if a function is misused or exploited. For example, if you're using function calls to query a database, you should only give your application read-only access to the database. You also shouldn't depend solely on excluding capabilities in the function definition as a security control.
+* **Consider Real-World Impact**: Be aware of the real-world impact of function calls that you plan to execute, especially those that trigger actions such as executing code, updating databases, or sending notifications.
+* **Implement User Confirmation Steps**: Particularly for functions that take actions, we recommend including a step where the user confirms the action before it's executed.
+
+To learn more about our recommendations on how to use Azure OpenAI models responsibly, see the [Overview of Responsible AI practices for Azure OpenAI models](/legal/cognitive-services/openai/overview?context=/azure/ai-services/openai/context/context).
+
+## Next steps
+
+* [Learn more about Azure OpenAI](../overview.md).
+* For more examples on working with functions, check out the [Azure OpenAI Samples GitHub repository](https://aka.ms/oai/function-samples)
+* Get started with the GPT-35-Turbo model with [the GPT-35-Turbo quickstart](../chatgpt-quickstart.md).
ai-services Quota https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/quota.md
Previously updated : 07/18/2023 Last updated : 07/20/2023
Quota provides the flexibility to actively manage the allocation of rate limits across the deployments within your subscription. This article walks through the process of managing your Azure OpenAI quota.
+## Prerequisites
+
+> [!IMPORTANT]
+> Quota requires the **Cognitive Services Usages Reader** role. This role provides the minimal access necessary to view quota usage across an Azure subscription. This role can be found in the Azure portal under **Subscriptions** > **Access control (IAM)** > **Add role assignment** > search for **Cognitive Services Usages Reader**.
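+
+If you prefer to assign the role from the command line, a minimal sketch with the Azure CLI might look like the following (the assignee address and subscription ID are placeholders, and the line continuations assume a bash-style shell):
+
+```azurecli
+az role assignment create \
+    --assignee "user@contoso.com" \
+    --role "Cognitive Services Usages Reader" \
+    --scope "/subscriptions/<subscription-id>"
+```
+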
+ ## Introduction to quota Azure OpenAI's quota feature enables assignment of rate limits to your deployments, up to a global limit called your "quota." Quota is assigned to your subscription on a per-region, per-model basis in units of **Tokens-per-Minute (TPM)**. When you onboard a subscription to Azure OpenAI, you'll receive default quota for most available models. Then, you'll assign TPM to each deployment as it is created, and the available quota for that model will be reduced by that amount. You can continue to create deployments and assign them TPM until you reach your quota limit. Once that happens, you can only create new deployments of that model by reducing the TPM assigned to other deployments of the same model (thus freeing TPM for use), or by requesting and being approved for a model quota increase in the desired region.
ai-services Switching Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/switching-endpoints.md
Previously updated : 05/24/2023 Last updated : 07/20/2023
embedding = openai.Embedding.create(
</tr> </table>
-## Azure OpenAI embeddings doesn't support multiple inputs
+## Azure OpenAI embeddings multiple input support
-Many examples show passing multiple inputs into the embeddings API. For Azure OpenAI, currently we must pass a single text input per call.
+OpenAI currently allows a larger number of array inputs with text-embedding-ada-002. Azure OpenAI currently supports input arrays up to 16 for text-embedding-ada-002 Version 2. Both require the max input token limit per API request to remain under 8191 for this model.
<table> <tr>
Many examples show passing multiple inputs into the embeddings API. For Azure Op
<td> ```python
-inputs = ["A", "B", "C"]
+inputs = ["A", "B", "C"]
embedding = openai.Embedding.create( input=inputs,
embedding = openai.Embedding.create(
<td> ```python
-inputs = ["A", "B", "C"]
-
-for text in inputs:
- embedding = openai.Embedding.create(
- input=text,
- deployment_id="text-embedding-ada-002"
- #engine="text-embedding-ada-002"
- )
+inputs = ["A", "B", "C"] #max array size=16
+
+embedding = openai.Embedding.create(
+ input=inputs,
+ deployment_id="text-embedding-ada-002"
+ #engine="text-embedding-ada-002"
+)
``` </td>
ai-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/reference.md
Previously updated : 05/15/2023 Last updated : 07/19/2023 recommendations: false
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
- `2023-03-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json) - `2023-05-15` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2023-05-15/inference.json) - `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
+- `2023-07-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/generated.json)
#### Example request
Output formatting adjusted for ease of reading, actual output is a single block
| Parameter | Type | Required? | Default | Description | |--|--|--|--|--|
-| ```messages``` | array | Required | | The messages to generate chat completions for, in the chat format. |
+| ```messages``` | array | Required | | The collection of context messages associated with this chat completions request. Typical usage begins with a [chat message](#chatmessage) for the System role that provides instructions for the behavior of the assistant, followed by alternating messages between the User and Assistant roles.|
| ```temperature```| number | Optional | 1 | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.\nWe generally recommend altering this or `top_p` but not both. | | ```n``` | integer | Optional | 1 | How many chat completion choices to generate for each input message. | | ```stream``` | boolean | Optional | false | If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a `data: [DONE]` message." |
Output formatting adjusted for ease of reading, actual output is a single block
| ```frequency_penalty``` | number | Optional | 0 | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.| | ```logit_bias``` | object | Optional | null | Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.| | ```user``` | string | Optional | | A unique identifier representing your end-user, which can help Azure OpenAI to monitor and detect abuse.|
+|```function_call```| | Optional | | Controls how the model responds to function calls. "none" means the model does not call a function and responds to the end-user. "auto" means the model can pick between responding to the end-user or calling a function. Specifying a particular function via {"name": "my_function"} forces the model to call that function. "none" is the default when no functions are present. "auto" is the default if functions are present. This parameter requires API version [`2023-07-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/generated.json) |
+|```functions``` | [`FunctionDefinition[]`](#functiondefinition) | Optional | | A list of functions the model may generate JSON inputs for. This parameter requires API version [`2023-07-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/generated.json)|
+
+### ChatMessage
+
+A single, role-attributed message within a chat completion interaction.
+
+| Name | Type | Description |
+||||
+| content | string | The text associated with this message payload.|
+| function_call | [FunctionCall](#functioncall)| The name and arguments of a function that should be called, as generated by the model. |
+| name | string | The `name` of the author of this message. `name` is required if role is `function`, and it should be the name of the function whose response is in the `content`. May contain a-z, A-Z, 0-9, and underscores, with a maximum length of 64 characters.|
+|role | [ChatRole](#chatrole) | The role associated with this message payload |
+
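+For example, a function result returned to the model as a chat message might look like the following (the payload in `content` is illustrative):
+
+```json
+{
+    "role": "function",
+    "name": "search_hotels",
+    "content": "{\"hotels\": [{\"name\": \"Contoso Suites\", \"price\": 250}]}"
+}
+```
+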
+### ChatRole
+
+A description of the intended purpose of a message within a chat completions interaction.
+
+|Name | Type | Description |
+||||
+| assistant | string | The role that provides responses to system-instructed, user-prompted input. |
+| function | string | The role that provides function results for chat completions. |
+| system | string | The role that instructs or sets the behavior of the assistant. |
+| user | string | The role that provides input for chat completions. |
+
+### FunctionCall
+
+The name and arguments of a function that should be called, as generated by the model. This requires API version [`2023-07-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/generated.json)
+
+| Name | Type | Description|
+||||
+| arguments | string | The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may fabricate parameters not defined by your function schema. Validate the arguments in your code before calling your function. |
+| name | string | The name of the function to call.|
+
+### FunctionDefinition
+
+The definition of a caller-specified function that chat completions may invoke in response to matching user input. This requires API version [`2023-07-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/generated.json)
+
+|Name | Type| Description|
+||||
+| description | string | A description of what the function does. The model will use this description when selecting the function and interpreting its parameters. |
+| name | string | The name of the function to be called. |
+| parameters | | The parameters the function accepts, described as a [JSON Schema](https://json-schema.org/understanding-json-schema/) object.|
## Completions extensions
Azure OpenAI is deployed as a part of the Azure AI services. All Azure AI servic
## Next steps
-Learn about [managing deployments, models, and fine-tuning with the REST API](/rest/api/cognitiveservices/azureopenaistable/deployments/create).
+Learn about [models and fine-tuning with the REST API](/rest/api/cognitiveservices/azureopenaistable/files).
Learn more about the [underlying models that power Azure OpenAI](./concepts/models.md).
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md
Previously updated : 06/12/2023 Last updated : 07/20/2023 recommendations: false keywords: # What's new in Azure OpenAI Service
+## July 2023
+
+### Support for function calling
+
+- [Azure OpenAI now supports function calling](./how-to/function-calling.md) to enable you to work with functions in the chat completions API.
+
+### Embedding input array increase
+
+- Azure OpenAI now [supports arrays with up to 16 inputs](./how-to/switching-endpoints.md#azure-openai-embeddings-multiple-input-support) per API request with text-embedding-ada-002 Version 2.
+ ## June 2023 ### Use Azure OpenAI on your own data (preview)
aks Use Azure Ad Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-ad-pod-identity.md
az aks pod-identity delete --name ${POD_IDENTITY_NAME} --namespace ${POD_IDENTIT
``` ```azurecli
-az aks update --resource-group myResourceGroup --cluster-name myAKSCluster --disable-pod-identity
+az aks update --resource-group myResourceGroup --name myAKSCluster --disable-pod-identity
``` ## Clean up
aks Use Kms Etcd Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-kms-etcd-encryption.md
Title: Use Key Management Service (KMS) etcd encryption in Azure Kubernetes Serv
description: Learn how to use the Key Management Service (KMS) etcd encryption with Azure Kubernetes Service (AKS) Previously updated : 06/15/2023 Last updated : 07/19/2023 # Add Key Management Service (KMS) etcd encryption to an Azure Kubernetes Service (AKS) cluster
For more information on using the KMS plugin, see [Encrypting Secret Data at Res
The following limitations apply when you integrate KMS etcd encryption with AKS: * Deletion of the key, Key Vault, or the associated identity isn't supported.
-* KMS etcd encryption doesn't work with system-assigned managed identity. The key vault access policy is required to be set before the feature is enabled. In addition, system-assigned managed identity isn't available until cluster creation, thus there's a cycle dependency.
-* Azure Key Vault with Firewall enabled to allow public access isn't supported because it blocks traffic from KMS plugin to the Key Vault.
+* KMS etcd encryption doesn't work with system-assigned managed identity. The key vault access policy is required to be set before the feature is enabled. In addition, system-assigned managed identity isn't available until cluster creation. Consequently, there's a cycle dependency.
+* Azure Key Vault with Firewall enabled to allow public access isn't supported, because the firewall blocks traffic from the KMS plugin to the Key Vault.
* The maximum number of secrets supported by a cluster enabled with KMS is 2,000. However, it's important to note that [KMS V2][kms-v2-support] isn't limited by this restriction and can handle a higher number of secrets. * Bring your own (BYO) Azure Key Vault from another tenant isn't supported. * With KMS enabled, you can't change associated Azure Key Vault model (public, private). To [change associated key vault mode][changing-associated-key-vault-mode], you need to disable and enable KMS again. * If a cluster is enabled with KMS and private key vault and isn't using the `API Server VNet integration` tunnel, then stop/start cluster isn't allowed.
-* Using the virtual machine scale set (VMSS) API to scale down nodes in the cluster to zero will deallocate the nodes, causing the cluster to go down and unrecoverable.
-
+* Using the Virtual Machine Scale Sets API to scale the nodes in the cluster down to zero deallocates the nodes, causing the cluster to go down and become unrecoverable.
+* After you disable KMS, don't destroy the keys that were used. Destroying them causes the API server to stop working.
KMS supports [public key vault][Enable-KMS-with-public-key-vault] and [private key vault][Enable-KMS-with-private-key-vault].
After changing the key ID (including key name and key version), you can use [az
> [!WARNING] > Remember to update all secrets after key rotation. Otherwise, the secrets will be inaccessible if the old keys are not existing or working.
->
+>
> Once you rotate the key, the old key (key1) is still cached and shouldn't be deleted. If you want to delete the old key (key1) immediately, you need to rotate the key twice. Then key2 and key3 are cached, and key1 can be deleted without impacting existing cluster. ```azurecli-interactive
api-management Api Management Howto App Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-app-insights.md
You can specify loggers on different levels:
Specifying *both*: - By default, the single API logger (more granular level) overrides the one for all APIs.-- If the loggers configured at the two levels are different, and you need both loggers to receive telemetry (multiplexing), please contact Microsoft Support.
+- If the loggers configured at the two levels are different, and you need both loggers to receive telemetry (multiplexing), contact Microsoft Support. Multiplexing is not supported if you're using the same logger (Application Insights destination) at the "All APIs" level and the single API level. For multiplexing to work correctly, you must configure different loggers at the "All APIs" and individual API levels, and request assistance from Microsoft Support to enable multiplexing for your service.
## What data is added to Application Insights
api-management Cosmosdb Data Source Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cosmosdb-data-source-policy.md
-+ Last updated 06/07/2023
api-management Http Data Source Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/http-data-source-policy.md
-+ Last updated 03/07/2023
api-management Publish Event Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/publish-event-policy.md
-+ Last updated 05/24/2023
api-management Send Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/send-request-policy.md
The `send-request` policy sends the provided request to the specified URL, waiti
<send-request mode="new | copy" response-variable-name="" timeout="60 sec" ignore-error ="false | true"> <set-url>request URL</set-url>
- <set-method>.../set-method>
+ <set-method>...</set-method>
<set-header>...</set-header> <set-body>...</set-body> <authentication-certificate thumbprint="thumbprint" />
This example shows one way to verify a reference token with an authorization ser
* [API Management advanced policies](api-management-advanced-policies.md)
api-management Set Graphql Resolver Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-graphql-resolver-policy.md
-+ Last updated 03/07/2023
api-management Sql Data Source Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/sql-data-source-policy.md
-+ Last updated 06/07/2023
app-service Configure Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-custom-container.md
App Service logs actions by the Docker host as well as activities from within t
There are several ways to access Docker logs: -- [In Azure portal](#in-azure-portal)
+- [In the Azure portal](#in-azure-portal)
- [From the Kudu console](#from-the-kudu-console) - [With the Kudu API](#with-the-kudu-api) - [Send logs to Azure monitor](troubleshoot-diagnostic-logs.md#send-logs-to-azure-monitor)
app-service Quickstart Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-nodejs.md
You can launch the app at http://&lt;app-name>.azurewebsites.net
:::zone target="docs" pivot="development-environment-azure-portal" ### Sign in to Azure portal
-Sign in to the Azure portal at https://portal.azure.com.
+Sign in to the [Azure portal](https://portal.azure.com).
### Create Azure resources
-1. To start creating a Node.js app, browse to [https://ms.portal.azure.com/#create/Microsoft.WebSite](https://ms.portal.azure.com/#create/Microsoft.WebSite).
+1. To start creating a Node.js app, browse to [https://portal.azure.com/#create/Microsoft.WebSite](https://portal.azure.com/#create/Microsoft.WebSite).
1. In the **Basics** tab, under **Project details**, ensure the correct subscription is selected and then select to **Create new** resource group. Type *myResourceGroup* for the name.
application-gateway Multiple Site Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/multiple-site-overview.md
Application Gateway relies on HTTP 1.1 host headers to host more than one websit
## Next steps Learn how to configure multiple site hosting in Application Gateway
-* [Using Azure portal](create-multiple-sites-portal.md)
+* [Using the Azure portal](create-multiple-sites-portal.md)
* [Using Azure PowerShell](tutorial-multiple-sites-powershell.md) * [Using Azure CLI](tutorial-multiple-sites-cli.md)
automation Automation Tutorial Installed Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-tutorial-installed-software.md
To complete this tutorial, you need:
## Log in to Azure
-Log in to the Azure portal at https://portal.azure.com.
+Sign in to the [Azure portal](https://portal.azure.com).
## Enable Change Tracking and Inventory
automation Enable From Automation Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/enable-from-automation-account.md
This article describes how you can use your Automation account to enable [Change
## Sign in to Azure
-Sign in to Azure at https://portal.azure.com.
+Sign in to the [Azure portal](https://portal.azure.com).
## Enable Change Tracking and Inventory
automation Enable From Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/enable-from-portal.md
The number of resource groups that you can use for managing your VMs is limited
## Sign in to Azure
-Sign in to Azure at https://portal.azure.com.
+Sign in to the [Azure portal](https://portal.azure.com).
## Enable Change Tracking and Inventory
Sign in to Azure at https://portal.azure.com.
## Next steps * For details of working with the feature, see [Manage Change Tracking](manage-change-tracking.md) and [Manage Inventory](manage-inventory-vms.md).
-* To troubleshoot general problems with the feature, see [Troubleshoot Change Tracking and Inventory issues](../troubleshoot/change-tracking.md).
+* To troubleshoot general problems with the feature, see [Troubleshoot Change Tracking and Inventory issues](../troubleshoot/change-tracking.md).
automation Enable From Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/enable-from-vm.md
This article describes how you can use an Azure VM to enable [Change Tracking an
## Sign in to Azure
-Sign in to the Azure portal at https://portal.azure.com.
+Sign in to the [Azure portal](https://portal.azure.com).
## Enable Change Tracking and Inventory
Sign in to the Azure portal at https://portal.azure.com.
* For details of working with the feature, see [Manage Change Tracking](manage-change-tracking.md) and [Manage Inventory](manage-inventory-vms.md).
-* To troubleshoot general problems with the feature, see [Troubleshoot Change Tracking and Inventory issues](../troubleshoot/change-tracking.md).
+* To troubleshoot general problems with the feature, see [Troubleshoot Change Tracking and Inventory issues](../troubleshoot/change-tracking.md).
automation Dsc Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/quickstarts/dsc-configuration.md
To complete this quickstart, you need:
* An Azure Resource Manager virtual machine running Red Hat Enterprise Linux, CentOS, or Oracle Linux. For instructions on creating a VM, see [Create your first Linux virtual machine in the Azure portal](../../virtual-machines/linux/quick-create-portal.md) ## Sign in to Azure
-Sign in to Azure at https://portal.azure.com.
+Sign in to the [Azure portal](https://portal.azure.com).
## Enable a virtual machine
automation Enable From Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/enable-from-portal.md
The number of resource groups that you can use for managing your VMs is limited
## Sign in to Azure
-Sign in to Azure at https://portal.azure.com.
+Sign in to the [Azure portal](https://portal.azure.com).
## Enable Update Management
Sign in to Azure at https://portal.azure.com.
* To use Update Management for VMs, see [Manage updates and patches for your VMs](manage-updates-for-vm.md). * To troubleshoot general Update Management errors, see [Troubleshoot Update Management issues](../troubleshoot/update-management.md). * To troubleshoot problems with the Windows update agent, see [Troubleshoot Windows update agent issues](../troubleshoot/update-agent-issues.md).
-* To troubleshoot problems with the Linux update agent, see [Troubleshoot Linux update agent issues](../troubleshoot/update-agent-issues-linux.md).
+* To troubleshoot problems with the Linux update agent, see [Troubleshoot Linux update agent issues](../troubleshoot/update-agent-issues-linux.md).
azure-arc Onboard Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-portal.md
If you don't have an Azure subscription, create a [free account](https://azure.m
The script to automate the download and installation, and to establish the connection with Azure Arc, is available from the Azure portal. To complete the process, perform the following steps:
-1. From your browser, go to the [Azure portal](https://portal.azure.com).
+1. From your browser, sign in to the [Azure portal](https://portal.azure.com).
1. On the **Servers - Azure Arc** page, select **Add** at the upper left.
azure-arc Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md
Azure Arc supports the following Windows and Linux operating systems. Only x86-6
* Windows 10, 11 (see [client operating system guidance](#client-operating-system-guidance)) * Windows IoT Enterprise * Azure Stack HCI
-* Azure Linux 1.0, 2.0
+* Azure Linux (CBL-Mariner) 1.0, 2.0
* Ubuntu 16.04, 18.04, 20.04, and 22.04 LTS * Debian 10, 11, and 12 * CentOS Linux 7 and 8
azure-cache-for-redis Cache Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-administration.md
The effect on your client applications varies depending on which nodes you reboo
* **Primary** - When the primary node is rebooted, Azure Cache for Redis fails over to the replica node and promotes it to primary. During this failover, there may be a short interval in which connections may fail to the cache. * **Replica** - When the replica node is rebooted, there's typically no effect on the cache clients.
-* **Both primary and replica** - When both cache nodes are rebooted, you lose all data in the cache and connections to the cache fail until the primary node comes back online. If you have configured [data persistence](cache-how-to-premium-persistence.md), the most recent backup is restored when the cache comes back online. However, any cache writes that occurred after the most recent backup are lost.
+* **Both primary and replica** - When both cache nodes are rebooted, Azure Cache for Redis will attempt to gracefully reboot both nodes, waiting for one to finish before rebooting the other. Typically, data loss does not occur. However, data loss can still occur due to unexpected maintenance events or failures. Rebooting your cache many times in a row increases the odds of data loss.
* **Nodes of a premium cache with clustering enabled** - When you reboot one or more nodes of a premium cache with clustering enabled, the behavior for the selected nodes is the same as when you reboot the corresponding node or nodes of a non-clustered cache. ## Reboot FAQ
The effect on your client applications varies depending on which nodes you reboo
### Which node should I reboot to test my application?
-To test the resiliency of your application against failure of the primary node of your cache, reboot the **Primary** node. To test the resiliency of your application against failure of the replica node, reboot the **Replica** node. To test the resiliency of your application against total failure of the cache, reboot **Both** nodes.
+To test the resiliency of your application against failure of the primary node of your cache, reboot the **Primary** node. To test the resiliency of your application against failure of the replica node, reboot the **Replica** node.
### Can I reboot the cache to clear client connections?
Yes, if you reboot the cache, all client connections are cleared. Rebooting can
### Will I lose data from my cache if I do a reboot?
-If you reboot both the **Primary** and **Replica** nodes, all data in the cache (or in that shard when you're using a premium cache with clustering enabled) might be lost. However, the data might not be lost either. If you have configured [data persistence](cache-how-to-premium-persistence.md), the most recent backup is restored when the cache comes back online. However, any cache writes that have occurred after the backup was made are lost.
+If you reboot both the **Primary** and **Replica** nodes, all data in the cache (or in that shard when you're using a premium cache with clustering enabled) will likely be safe. However, the data may be lost in some cases. Rebooting both nodes should be done with caution.
If you reboot just one of the nodes, data isn't typically lost, but it still might be. For example if the primary node is rebooted and a cache write is in progress, the data from the cache write is lost. Another scenario for data loss would be if you reboot one node and the other node happens to go down because of a failure at the same time. For more information about possible causes for data loss, see [What happened to my data in Redis?](https://gist.github.com/JonCole/b6354d92a2d51c141490f10142884ea4#file-whathappenedtomydatainredis-md)
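On the client side, most of this resilience comes down to connection configuration. The following sketch, which assumes the StackExchange.Redis client and placeholder cache credentials, shows settings that let an application ride out the brief connection blips a reboot or failover can cause:

```csharp
using StackExchange.Redis;

// Placeholder host name and access key; replace with your cache values.
var options = new ConfigurationOptions
{
    EndPoints = { "<cache-name>.redis.cache.windows.net:6380" },
    Password = "<access-key>",
    Ssl = true,
    // Don't give up if the first connection attempt fails (for example, during a reboot).
    AbortOnConnectFail = false,
    // Retry the initial connection a few times before surfacing an error.
    ConnectRetry = 3,
    ConnectTimeout = 15000
};

// Reuse a single multiplexer; it reconnects automatically when a node comes back online.
IConnectionMultiplexer connection = ConnectionMultiplexer.Connect(options);
IDatabase db = connection.GetDatabase();
db.StringSet("health-check", "ok");
```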
azure-cache-for-redis Cache Tutorial Functions Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-tutorial-functions-getting-started.md
Previously updated : 07/18/2023 Last updated : 07/19/2023
dotnet add package Microsoft.Azure.WebJobs.Extensions.Redis --prerelease
} ```
+ The code in `RedisConnection.cs` looks to this value when running locally.
+
+ ```csharp
+ public const string connectionString = "redisConnectionString";
+ ```
> [!IMPORTANT] > This example is simplified for the tutorial. For production use, we recommend that you use [Azure Key Vault](../service-connector/tutorial-portal-key-vault.md) to store connection string information.
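For context, a function that consumes this setting might look like the hedged sketch below. It assumes the `RedisPubSubTrigger` attribute from the `Microsoft.Azure.WebJobs.Extensions.Redis` package and a hypothetical `pubsubTest` channel; the first attribute argument is the name of the app setting (here `redisConnectionString`) that holds the cache connection string.

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Redis;
using Microsoft.Extensions.Logging;

public static class RedisSample
{
    [FunctionName("PubSubTriggerSample")]
    public static void Run(
        // "redisConnectionString" is the app setting that holds the cache connection string;
        // "pubsubTest" is a hypothetical channel name used for illustration.
        [RedisPubSubTrigger("redisConnectionString", "pubsubTest")] string message,
        ILogger logger)
    {
        logger.LogInformation($"Received pub/sub message: {message}");
    }
}
```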
dotnet add package Microsoft.Azure.WebJobs.Extensions.Redis --prerelease
1. You see several prompts for information to configure the new function app: - Enter a unique name
- - Choose **.NET 6** as the runtime stack
+ - Choose **.NET 6 (LTS)** as the runtime stack
- Choose either **Linux** or **Windows** (either works) - Select an existing or new resource group to hold the Function App - Choose the same region as your cache instance
dotnet add package Microsoft.Azure.WebJobs.Extensions.Redis --prerelease
1. Navigate to your new Function App in the Azure portal and select the **Configuration** from the Resource menu.
-1. Select **New application setting** and enter `redisConnectionString` as the Name, with your connection string as the Value. Set Type to _Custom_, and select **Ok** to close the menu and then **Save** on the Configuration page to confirm. The functions app restarts with the new connection string information.
+1. In the working pane, you see **Application settings**. In the **Connection strings** section, select **New connection string**.
+
+1. Type `redisConnectionString` as the **Name** and your connection string as the **Value**. Set **Type** to _Custom_, and select **Ok** to close the menu. Then select **Save** on the Configuration page to confirm. The function app restarts with the new connection string information.
### Test your triggers
azure-functions Language Support Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/language-support-policy.md
Title: Azure Functions language runtime support policy description: Learn about Azure Functions language runtime support policy Previously updated : 08/17/2021 Last updated : 07/18/2023 # Language runtime support policy
After the language end-of-life date, function apps that use retired language ver
> [!IMPORTANT] >You're highly encouraged to upgrade the language version of your affected function apps to a supported version.
->If you're running functions apps using an unsupported language version, you'll be required to upgrade before receiving support for your function app.
+>If you're running function apps on an unsupported runtime or language version, you may encounter issues and degraded performance, and you'll be required to upgrade before receiving support for your function app.
## Retirement policy exceptions
azure-government Documentation Government Csp List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-list.md
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|Dell Corp.|Get_Azure@Dell.com|888-375-9857| |Insight Public Sector|federal@insight.com|800-467-4448| |PC Connection|govtssms@connection.com|800-800-0019|
-|SHI, Inc.|msftgov@shi.com|888-764-8888|
+|SHI, Inc.|MSFederal@shi.com|888-764-8888|
|Minburn Technology Group|microsoft@minburntech.com |571-699-0705 Opt. 1| ## Approved AOS-G partners
azure-monitor Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/gateway.md
Title: Connect computers by using the Log Analytics gateway | Microsoft Docs description: Connect your devices and Operations Manager-monitored computers by using the Log Analytics gateway to send data to the Azure Automation and Log Analytics service when they do not have internet access. Previously updated : 04/06/2022 Last updated : 07/06/2023
azure-monitor Alerts Processing Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-processing-rules.md
az monitor alert-processing-rules list
az monitor alert-processing-rules show --resource-group RG1 --name MyRule # Update an alert processing rule
-az monitor alert-processing-rules update --resource-group RG1 --name MyRule --status Disabled
+az monitor alert-processing-rule update --resource-group RG1 --name MyRule --enabled true
# Delete an alert processing rule az monitor alert-processing-rules delete --resource-group RG1 --name MyRule
azure-monitor Java Get Started Supplemental https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-get-started-supplemental.md
Title: Application Insights with containers description: This article shows you how to set-up Application Insights Previously updated : 07/10/2023 Last updated : 07/20/2023 ms.devlang: java
For more information, see [Use Application Insights Java In-Process Agent in Azu
### Docker entry point
-If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.14.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
+If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.15.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
```
-ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.14.jar", "-jar", "<myapp.jar>"]
+ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.15.jar", "-jar", "<myapp.jar>"]
```
-If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.14.jar"` somewhere before `-jar`, for example:
+If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.15.jar"` somewhere before `-jar`, for example:
```
-ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.14.jar" -jar <myapp.jar>
+ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.15.jar" -jar <myapp.jar>
```
FROM ...
COPY target/*.jar app.jar
-COPY agent/applicationinsights-agent-3.4.14.jar applicationinsights-agent-3.4.14.jar
+COPY agent/applicationinsights-agent-3.4.15.jar applicationinsights-agent-3.4.15.jar
COPY agent/applicationinsights.json applicationinsights.json ENV APPLICATIONINSIGHTS_CONNECTION_STRING="CONNECTION-STRING"
-ENTRYPOINT["java", "-javaagent:applicationinsights-agent-3.4.14.jar", "-jar", "app.jar"]
+ENTRYPOINT["java", "-javaagent:applicationinsights-agent-3.4.15.jar", "-jar", "app.jar"]
```
-In this example we have copied the `applicationinsights-agent-3.4.14.jar` and `applicationinsights.json` files from an `agent` folder (you can choose any folder of your machine). These two files have to be in the same folder in the Docker container.
+In this example we have copied the `applicationinsights-agent-3.4.15.jar` and `applicationinsights.json` files from an `agent` folder (you can choose any folder of your machine). These two files have to be in the same folder in the Docker container.
### Third-party container images
The following sections show how to set the Application Insights Java agent path
If you installed Tomcat via `apt-get` or `yum`, you should have a file `/etc/tomcat8/tomcat8.conf`. Add this line to the end of that file: ```
-JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.14.jar"
+JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.15.jar"
``` #### Tomcat installed via download and unzip
JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.14.jar"
If you installed Tomcat via download and unzip from [https://tomcat.apache.org](https://tomcat.apache.org), you should have a file `<tomcat>/bin/catalina.sh`. Create a new file in the same directory named `<tomcat>/bin/setenv.sh` with the following content: ```
-CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.14.jar"
+CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.15.jar"
```
-If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.14.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.15.jar` to `CATALINA_OPTS`.
### Tomcat 8 (Windows)
If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `-
Locate the file `<tomcat>/bin/catalina.bat`. Create a new file in the same directory named `<tomcat>/bin/setenv.bat` with the following content: ```
-set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.14.jar
+set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.15.jar
``` Quotes aren't necessary, but if you want to include them, the proper placement is: ```
-set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.14.jar"
+set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.15.jar"
```
-If the file `<tomcat>/bin/setenv.bat` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.14.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.bat` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.15.jar` to `CATALINA_OPTS`.
#### Run Tomcat as a Windows service
-Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.4.14.jar` to the `Java Options` under the `Java` tab.
+Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.4.15.jar` to the `Java Options` under the `Java` tab.
### JBoss EAP 7 #### Standalone server
-Add `-javaagent:path/to/applicationinsights-agent-3.4.14.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
+Add `-javaagent:path/to/applicationinsights-agent-3.4.15.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
```java ...
- JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.4.14.jar -Xms1303m -Xmx1303m ..."
+ JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.4.15.jar -Xms1303m -Xmx1303m ..."
... ``` #### Domain server
-Add `-javaagent:path/to/applicationinsights-agent-3.4.14.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.4.15.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
```xml ...
Add `-javaagent:path/to/applicationinsights-agent-3.4.14.jar` to the existing `j
<jvm-options> <option value="-server"/> <!--Add Java agent jar file here-->
- <option value="-javaagent:path/to/applicationinsights-agent-3.4.14.jar"/>
+ <option value="-javaagent:path/to/applicationinsights-agent-3.4.15.jar"/>
<option value="-XX:MetaspaceSize=96m"/> <option value="-XX:MaxMetaspaceSize=256m"/> </jvm-options>
Add these lines to `start.ini`:
``` --exec--javaagent:path/to/applicationinsights-agent-3.4.14.jar
+-javaagent:path/to/applicationinsights-agent-3.4.15.jar
``` ### Payara 5
-Add `-javaagent:path/to/applicationinsights-agent-3.4.14.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.4.15.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
```xml ... <java-config ...> <!--Edit the JVM options here--> <jvm-options>
- -javaagent:path/to/applicationinsights-agent-3.4.14.jar>
+ -javaagent:path/to/applicationinsights-agent-3.4.15.jar>
</jvm-options> ... </java-config>
Add `-javaagent:path/to/applicationinsights-agent-3.4.14.jar` to the existing `j
1. In `Generic JVM arguments`, add the following JVM argument: ```
- -javaagent:path/to/applicationinsights-agent-3.4.14.jar
+ -javaagent:path/to/applicationinsights-agent-3.4.15.jar
``` 1. Save and restart the application server.
Add `-javaagent:path/to/applicationinsights-agent-3.4.14.jar` to the existing `j
Create a new file `jvm.options` in the server directory (for example, `<openliberty>/usr/servers/defaultServer`), and add this line: ```--javaagent:path/to/applicationinsights-agent-3.4.14.jar
+-javaagent:path/to/applicationinsights-agent-3.4.15.jar
``` ### Others
azure-monitor Java Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-spring-boot.md
Title: Configure Azure Monitor Application Insights for Spring Boot description: How to configure Azure Monitor Application Insights for Spring Boot applications Previously updated : 06/19/2023 Last updated : 07/20/2023 ms.devlang: java
There are two options for enabling Application Insights Java with Spring Boot: J
## Enabling with JVM argument
-Add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.14.jar"` somewhere before `-jar`, for example:
+Add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.15.jar"` somewhere before `-jar`, for example:
```
-java -javaagent:"path/to/applicationinsights-agent-3.4.14.jar" -jar <myapp.jar>
+java -javaagent:"path/to/applicationinsights-agent-3.4.15.jar" -jar <myapp.jar>
``` ### Spring Boot via Docker entry point
-If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.14.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
+If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.15.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
```
-ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.14.jar", "-jar", "<myapp.jar>"]
+ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.15.jar", "-jar", "<myapp.jar>"]
```
-If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.14.jar"` somewhere before `-jar`, for example:
+If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.15.jar"` somewhere before `-jar`, for example:
```
-ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.14.jar" -jar <myapp.jar>
+ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.15.jar" -jar <myapp.jar>
``` ### Configuration
To enable Application Insights Java programmatically, you must add the following
<dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-runtime-attach</artifactId>
- <version>3.4.14</version>
+ <version>3.4.15</version>
</dependency> ```
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
Title: Configuration options - Azure Monitor Application Insights for Java description: This article shows you how to configure Azure Monitor Application Insights for Java. Previously updated : 07/10/2023 Last updated : 07/20/2023 ms.devlang: java
More information and configuration options are provided in the following section
## Configuration file path
-By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.4.14.jar`.
+By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.4.15.jar`.
You can specify your own configuration file path by using one of these two options: * `APPLICATIONINSIGHTS_CONFIGURATION_FILE` environment variable * `applicationinsights.configuration.file` Java system property
-If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.14.jar` is located.
+If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.15.jar` is located.
Alternatively, instead of using a configuration file, you can specify the entire _content_ of the JSON configuration via the environment variable `APPLICATIONINSIGHTS_CONFIGURATION_CONTENT`.
Or you can set the connection string by using the Java system property `applicat
You can also set the connection string by specifying a file to load the connection string from.
-If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.14.jar` is located.
+If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.15.jar` is located.
```json {
and add `applicationinsights-core` to your application:
<dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-core</artifactId>
- <version>3.4.14</version>
+ <version>3.4.15</version>
</dependency> ```
In the preceding configuration example:
* `level` can be one of `OFF`, `ERROR`, `WARN`, `INFO`, `DEBUG`, or `TRACE`. * `path` can be an absolute or relative path. Relative paths are resolved against the directory where
-`applicationinsights-agent-3.4.14.jar` is located.
+`applicationinsights-agent-3.4.15.jar` is located.
Starting from version 3.0.2, you can also set the self-diagnostics `level` by using the environment variable `APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_LEVEL`. It then takes precedence over the self-diagnostics level specified in the JSON configuration.
azure-monitor Java Standalone Upgrade From 2X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-upgrade-from-2x.md
Title: Upgrading from 2.x - Azure Monitor Application Insights Java description: Upgrading from Azure Monitor Application Insights Java 2.x Previously updated : 06/19/2023 Last updated : 07/20/2023 ms.devlang: java
There are typically no code changes when upgrading to 3.x. The 3.x SDK dependenc
Add the 3.x Java agent to your JVM command-line args, for example ```--javaagent:path/to/applicationinsights-agent-3.4.14.jar
+-javaagent:path/to/applicationinsights-agent-3.4.15.jar
``` If you're using the Application Insights 2.x Java agent, just replace your existing `-javaagent:...` with the aforementioned example.
azure-monitor Opentelemetry Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-configuration.md
You may want to enable sampling to reduce your data ingestion volume, which redu
The sampler expects a sample rate of between 0 and 1 inclusive. A rate of 0.1 means approximately 10% of your traces are sent.
-In this example, we utilize the `ApplicationInsightsSampler`, which is included with the Distro.
- ```csharp var builder = WebApplication.CreateBuilder(args);
app.Run();
The sampler expects a sample rate of between 0 and 1 inclusive. A rate of 0.1 means approximately 10% of your traces are sent.
-In this example, we utilize the `ApplicationInsightsSampler`, which offers compatibility with Application Insights SDKs.
-
-1. Install the latest [OpenTelemetry.Extensions.AzureMonitor](https://www.nuget.org/packages/OpenTelemetry.Extensions.AzureMonitor) package:
- ```dotnetcli
- dotnet add package --prerelease OpenTelemetry.Extensions.AzureMonitor
- ```
-
-1. Add the following code snippet.
- ```csharp
- var tracerProvider = Sdk.CreateTracerProviderBuilder()
- .SetSampler(new ApplicationInsightsSampler(new ApplicationInsightsSamplerOptions { SamplingRatio = 0.1F }))
- .AddAzureMonitorTraceExporter();
- ```
+```csharp
+var tracerProvider = Sdk.CreateTracerProviderBuilder()
+ .AddAzureMonitorTraceExporter(options =>
+ {
+ options.SamplingRatio = 0.1F;
+ });
+```
#### [Java](#tab/java)
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
Title: Enable Azure Monitor OpenTelemetry for .NET, Java, Node.js, and Python applications description: This article provides guidance on how to enable Azure Monitor on applications by using OpenTelemetry. Previously updated : 07/10/2023 Last updated : 07/20/2023 ms.devlang: csharp, javascript, typescript, python
# Enable Azure Monitor OpenTelemetry for .NET, Node.js, Python and Java applications
-This article describes how to enable and configure OpenTelemetry-based data collection to power the experiences within [Azure Monitor Application Insights](app-insights-overview.md#application-insights-overview). We walk through how to install the "Azure Monitor OpenTelemetry Distro". The Distro will [automatically collect](opentelemetry-add-modify.md#automatic-data-collection) traces, metrics, logs, and exceptions across your application and its dependencies. The To learn more about collecting data using OpenTelemetry, see [Data Collection Basics](opentelemetry-overview.md) or [OpenTelemetry FAQ](/azure/azure-monitor/faq#opentelemetry).
+This article describes how to enable and configure OpenTelemetry-based data collection to power the experiences within [Azure Monitor Application Insights](app-insights-overview.md#application-insights-overview). We walk through how to install the "Azure Monitor OpenTelemetry Distro". The Distro will [automatically collect](opentelemetry-add-modify.md#automatic-data-collection) traces, metrics, logs, and exceptions across your application and its dependencies. To learn more about collecting data using OpenTelemetry, see [Data Collection Basics](opentelemetry-overview.md) or [OpenTelemetry FAQ](/azure/azure-monitor/faq#opentelemetry).
## OpenTelemetry Release Status
dotnet add package --prerelease Azure.Monitor.OpenTelemetry.Exporter
#### [Java](#tab/java)
-Download the [applicationinsights-agent-3.4.14.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.4.14/applicationinsights-agent-3.4.14.jar) file.
+Download the [applicationinsights-agent-3.4.15.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.4.15/applicationinsights-agent-3.4.15.jar) file.
> [!WARNING] >
var loggerFactory = LoggerFactory.Create(builder =>
Java autoinstrumentation is enabled through configuration changes; no code changes are required.
-Point the JVM to the jar file by adding `-javaagent:"path/to/applicationinsights-agent-3.4.14.jar"` to your application's JVM args.
+Point the JVM to the jar file by adding `-javaagent:"path/to/applicationinsights-agent-3.4.15.jar"` to your application's JVM args.
> [!TIP] > For scenario-specific guidance, see [Get Started (Supplemental)](./java-get-started-supplemental.md).
To paste your Connection String, select from the following options:
B. Set via Configuration File - Java Only (Recommended)
- Create a configuration file named `applicationinsights.json`, and place it in the same directory as `applicationinsights-agent-3.4.14.jar` with the following content:
+ Create a configuration file named `applicationinsights.json`, and place it in the same directory as `applicationinsights-agent-3.4.15.jar` with the following content:
```json {
azure-monitor Metrics Charts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-charts.md
Title: Advanced features of Metrics Explorer description: Metrics are a series of measured values and counts that Azure collects. Learn to use Metrics Explorer to investigate the health and usage of resources. ++ Previously updated : 06/09/2022 Last updated : 07/20/2023
Most metrics support 93 days of retention but only let you view 30 days at a tim
### Zoom
-You can click and drag on the chart to zoom into a section of a chart. Zooming updates the chart's time range to span your selection. If the time grain is set to Automatic, zooming selects a smaller time grain. The new time range applies to all charts in Metrics.
+You can select and drag on the chart to zoom into a section of a chart. Zooming updates the chart's time range to span your selection. If the time grain is set to Automatic, zooming selects a smaller time grain. The new time range applies to all charts in Metrics.
![Animated gif showing the metrics zoom feature.](./media/metrics-charts/metrics-zoom-control.gif) ## Aggregation
-When you add a metric to a chart, Metrics Explorer applies a default aggregation. The default makes sense in basic scenarios. But you can use a different aggregation to gain more insights about the metric.
+When you add a metric to a chart, Metrics Explorer applies a default aggregation. The default makes sense in basic scenarios, but you can use a different aggregation to gain more insights about the metric.
-Before you use different aggregations on a chart, you should understand how Metrics Explorer handles them. Metrics are a series of measurements (or "metric values") that are captured over a time period. When you plot a chart, the values of the selected metric are separately aggregated over the *time grain*.
+Before you use different aggregations on a chart, you should understand how Metrics Explorer handles them. Metrics are a series of measurements (or "metric values") that are captured over a time period. When you plot a chart, the values of the selected metric are separately aggregated over the *time granularity*.
-You select the size of the time grain by using Metrics Explorer's [time picker panel](./metrics-getting-started.md#select-a-time-range). If you don't explicitly select the time grain, the currently selected time range is used by default. After the time grain is determined, the metric values that were captured during each time grain are aggregated on the chart, one data point per time grain.
+You select the size of the time grain by using Metrics Explorer's time picker panel. If you don't explicitly select the time grain, the currently selected time range is used by default. After the time grain is determined, the metric values that were captured during each time grain are aggregated on the chart, one data point per time grain.
-For example, suppose a chart shows the *Server response time* metric. It uses the *average* aggregation over time span of the *last 24 hours*. In this example:
--- If the time granularity is set to 30 minutes, the chart is drawn from 48 aggregated data points. The line chart connects 48 dots in the chart plot area (24 hours x 2 data points per hour). Each data point represents the *average* of all captured response times for server requests that occurred during each of the relevant 30-minute time periods.-- If you switch the time granularity to 15 minutes, you get 96 aggregated data points. That is, you get 24 hours x 4 data points per hour.
-Metrics Explorer has five basic statistical aggregation types: sum, count, min, max, and average. The *sum* aggregation is sometimes called the *total* aggregation. For many metrics, Metrics Explorer hides the aggregations that are irrelevant and can't be used.
-
-For a deeper discussion of how metric aggregation works, see [Azure Monitor metrics aggregation and display explained](metrics-aggregation-explained.md).
--- **Sum**: The sum of all values captured during the aggregation interval.
+For example, suppose a chart shows the *Server response time* metric. It uses the *average* aggregation over a time span of the *last 24 hours*. In this example:
- ![Screenshot of a sum request.](./media/metrics-charts/request-sum.png)
+- If the time granularity is set to 30 minutes, the chart is drawn from 48 aggregated data points. That is, 2 data points per hour for 24 hours. The line chart connects 48 dots in the chart plot area. Each data point represents the *average* of all captured response times for server requests that occurred during each of the relevant 30-minute time periods.
+- If you switch the time granularity to 15 minutes, you get 96 aggregated data points. That is, 4 data points per hour for 24 hours.
+Metrics Explorer has five aggregation types:
+- **Sum**: The sum of all values captured during the aggregation interval. The *sum* aggregation is sometimes called the *total* aggregation.
- **Count**: The number of measurements captured during the aggregation interval.- When the metric is always captured with the value of 1, the count aggregation is equal to the sum aggregation. This scenario is common when the metric tracks the count of distinct events and each measurement represents one event. The code emits a metric record every time a new request arrives.-
- ![Screenshot of a count request.](./media/metrics-charts/request-count.png)
- - **Average**: The average of the metric values captured during the aggregation interval.-
- ![Screenshot of an average request.](./media/metrics-charts/request-avg.png)
- - **Min**: The smallest value captured during the aggregation interval.
+- **Max**: The largest value captured during the aggregation interval.
- ![Screenshot of a minimum request.](./media/metrics-charts/request-min.png)
+ :::image type="content" source="media/metrics-charts/aggregations.png" alt-text="A screenshot showing the aggregation dropdown." lightbox="media/metrics-charts/aggregations.png":::
-- **Max**: The largest value captured during the aggregation interval.
+Metrics Explorer hides the aggregations that are irrelevant and can't be used.
- ![Screenshot of a maximum request.](./media/metrics-charts/request-max.png)
+For a deeper discussion of how metric aggregation works, see [Azure Monitor metrics aggregation and display explained](metrics-aggregation-explained.md).
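As a rough programmatic counterpart to choosing an aggregation and time grain in the portal, the following sketch uses the `Azure.Monitor.Query` library to request the average of a metric at a 30-minute granularity over the last 24 hours. The resource ID and metric name are placeholders; substitute the resource and metric you're charting.

```csharp
using System;
using Azure.Identity;
using Azure.Monitor.Query;
using Azure.Monitor.Query.Models;

// Placeholder resource ID; replace with the full ID of the resource you're charting.
string resourceId =
    "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Web/sites/<app-name>";

var client = new MetricsQueryClient(new DefaultAzureCredential());

// Average aggregation, 30-minute grain, last 24 hours: 48 data points, as described above.
MetricsQueryResult result = client.QueryResource(
    resourceId,
    new[] { "HttpResponseTime" },
    new MetricsQueryOptions
    {
        TimeRange = new QueryTimeRange(TimeSpan.FromHours(24)),
        Granularity = TimeSpan.FromMinutes(30),
        Aggregations = { MetricAggregationType.Average }
    }).Value;

foreach (MetricResult metric in result.Metrics)
{
    foreach (MetricTimeSeriesElement series in metric.TimeSeries)
    {
        foreach (MetricValue point in series.Values)
        {
            Console.WriteLine($"{point.TimeStamp}: {point.Average}");
        }
    }
}
```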
## Filters
-You can apply filters to charts whose metrics have dimensions. For example, imagine a *Transaction count* metric that has a *Response type* dimension. This dimension indicates whether the response from transactions succeeded or failed. If you filter on this dimension, you'll see a chart line for only successful or only failed transactions.
+You can apply filters to charts whose metrics have dimensions. For example, imagine a *Transaction count* metric that has a *Response type* dimension. This dimension indicates whether the response from transactions succeeded or failed. If you filter on this dimension, a chart line is displayed for only successful or only failed transactions.
### Add a filter 1. Above the chart, select **Add filter**.
-1. Select a dimension (property) to filter.
+1. Select a dimension from the **Property** dropdown to filter.
- ![Screenshot that shows the dimensions (properties) you can filter.](./media/metrics-charts/028.png)
+ :::image type="content" source="./media/metrics-charts/filter-property.png" alt-text="Screenshot that shows the filter properties dropdown." lightbox="./media/metrics-charts/filter-property.png":::
1. Select the operator you want to apply against the dimension (property). The default operator is = (equals)-
- ![Screenshot that shows the operator you can use with the filter.](./media/metrics-charts/filter-operator.png)
+ :::image type="content" source="./media/metrics-charts/filter-operator.png" alt-text="Screenshot that shows the operator you can use with the filter." lightbox="./media/metrics-charts/filter-operator.png":::
1. Select which dimension values you want to apply to the filter when plotting the chart. This example shows filtering out the successful storage transactions.
+ :::image type="content" source="./media/metrics-charts/filter-values.png" alt-text="Screenshot that shows the filter values dropdown." lightbox="./media/metrics-charts/filter-values.png":::
- ![Screenshot that shows the successful filtered storage transactions.](./media/metrics-charts/029.png)
-
-1. After selecting the filter values, click away from the filter selector to close it. Now the chart shows how many storage transactions have failed:
-
- ![Screenshot that shows how many storage transactions have failed.](./media/metrics-charts/030.png)
+1. After selecting the filter values, click away from the filter selector to close it. The chart shows how many storage transactions have failed:
+ :::image type="content" source="./media/metrics-charts/filtered-chart.png" alt-text="Screenshot that shows the successful filtered storage transactions." lightbox="./media/metrics-charts/filtered-chart.png":::
1. Repeat these steps to apply multiple filters to the same charts.
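For reference, the same kind of dimension filter can be expressed programmatically through the `Filter` option of the `Azure.Monitor.Query` metrics client. This is a hedged sketch that assumes a storage account's *Transactions* metric with a *ResponseType* dimension; the resource ID and dimension value are placeholders.

```csharp
using System;
using Azure.Identity;
using Azure.Monitor.Query;
using Azure.Monitor.Query.Models;

// Placeholder storage account resource ID.
string storageResourceId =
    "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>";

var client = new MetricsQueryClient(new DefaultAzureCredential());

// The Filter string plays the role of the portal's dimension filter.
MetricsQueryResult result = client.QueryResource(
    storageResourceId,
    new[] { "Transactions" },
    new MetricsQueryOptions
    {
        TimeRange = new QueryTimeRange(TimeSpan.FromHours(24)),
        Filter = "ResponseType eq 'ClientOtherError'"
    }).Value;

Console.WriteLine($"Returned {result.Metrics.Count} metric(s) with the dimension filter applied.");
```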
You can split a metric by dimension to visualize how different segments of the m
1. Above the chart, select **Apply splitting**.
-1. Choose dimension(s) on which to segment your chart:
-
- ![Screenshot that shows the selected dimension on which to segment the chart.](./media/metrics-charts/031.png)
+1. Choose dimensions on which to segment your chart:
+ :::image type="content" source="./media/metrics-charts/apply-splitting.png" alt-text="Screenshot that shows the selected dimension on which to segment the chart." lightbox="./media/metrics-charts/apply-splitting.png":::
- The chart now shows multiple lines, one for each dimension segment:
+ The chart shows multiple lines, one for each dimension segment:
+ :::image type="content" source="./media/metrics-charts/segment-dimension.png" alt-text="Screenshot that shows multiple lines, one for each segment of dimension." lightbox="./media/metrics-charts/segment-dimension.png":::
- ![Screenshot that shows multiple lines, one for each segment of dimension.](./media/metrics-charts/segment-dimension.png)
1. Choose a limit on the number of values to be displayed after splitting by selected dimension. The default limit is 10 as shown in the above chart. The range of limit is 1 - 50.-
- ![Screenshot that shows split limit, which restricts the number of values after splitting.](./media/metrics-charts/segment-dimension-limit.png)
+ :::image type="content" source="./media/metrics-charts/segment-dimension-limit.png" alt-text="Screenshot that shows split limit, which restricts the number of values after splitting." lightbox="./media/metrics-charts/segment-dimension-limit.png":::
1. Choose the sort order on segments: **Ascending** or **Descending**. The default selection is **Descending**.
- ![Screenshot that shows sort order on split values.](./media/metrics-charts/segment-dimension-sort.png)
-
-1. If you like to segment by multiple segments select multiple dimensions from the values dropdown. The legends will show a comma-separated list of dimension values for each segment
+
+ :::image type="content" source="./media/metrics-charts/segment-dimension-sort.png" alt-text="Screenshot that shows sort order on split values." lightbox="./media/metrics-charts/segment-dimension-sort.png":::
- ![Screenshot that shows multiple segments selected, and the corresponding chart below.](./media/metrics-charts/segment-dimension-multiple.png)
+1. To segment by multiple dimensions, select multiple dimensions from the values dropdown. The legend shows a comma-separated list of dimension values for each segment.
+ :::image type="content" source="./media/metrics-charts/segment-dimension-multiple.png" alt-text="Screenshot that shows multiple segments selected, and the corresponding chart." lightbox="./media/metrics-charts/segment-dimension-multiple.png":::
-3. Click away from the grouping selector to close it.
+1. Click away from the grouping selector to close it.
- > [!NOTE]
+ > [!TIP]
> To hide segments that are irrelevant for your scenario and to make your charts easier to read, use both filtering and splitting on the same dimension. ## Locking the range of the y-axis
For example, a drop in the volume of successful requests from 99.99 percent to 9
Another example is a fluctuation in the available memory. In this scenario, the value technically never reaches 0. Fixing the range to a higher value might make drops in available memory easier to spot.
-To control the y-axis range, open the chart menu (**...**). Then select **Chart settings** to access advanced chart settings.
-
-![Screenshot that highlights the chart settings selection.](./media/metrics-charts/033.png)
+1. To control the y-axis range, open the chart menu **...**. Then select **Chart settings** to access advanced chart settings.
+ :::image type="content" source="./media/metrics-charts/select-chart-settings.png" alt-text="Screenshot that highlights the chart settings selection." lightbox="./media/metrics-charts/select-chart-settings.png":::
-Modify the values in the **Y-axis range** section, or select **Auto** to revert to the default values.
+1. Modify the values in the **Y-axis range** section, or select **Auto** to revert to the default values.
+ :::image type="content" source="./media/metrics-charts/chart-settings.png" alt-text="Screenshot that shows the Y-axis range section." lightbox="./media/metrics-charts/chart-settings.png":::
+
- ![Screenshot that highlights the Y-axis range section.](./media/metrics-charts/034.png)
-
-> [!WARNING]
-> If you need to lock the boundaries of the y-axis for charts that track counts or sums over a period of time (by using count, sum, min, or max aggregations), you should usually specify a fixed time granularity. In this case, you shouldn't rely on the automatic defaults.
+> [!NOTE]
+> If you lock the boundaries of the y-axis for charts that track count, sum, min, or max aggregations over a period of time, specify a fixed time granularity. Don't rely on the automatic defaults.
>
-> You choose a fixed time granularity because chart values change when the time granularity is automatically modified after a user resizes a browser window or changes screen resolution. The resulting change in time granularity affects the look of the chart, invalidating the current selection of the y-axis range.
+> A fixed time granularity is chosen because chart values change when the time granularity is automatically modified after a user resizes a browser window or changes screen resolution. The resulting change in time granularity affects the appearance of the chart, invalidating the selection of the y-axis range.
## Line colors
-After you configure the charts, the chart lines are automatically assigned a color from a default palette. You can change those colors.
+Chart lines are automatically assigned a color from a default palette.
-To change the color of a chart line, select the colored bar in the legend that corresponds to the chart. The color picker dialog opens. Use the color picker to configure the line color.
+To change the color of a chart line, select the colored bar in the legend that corresponds to the line on the chart. Use the color picker to select the line color.
-![Screenshot that shows how to change color.](./media/metrics-charts/035.png)
-Your customized colors are preserved when you pin the chart to a dashboard. The following section shows how to pin a chart.
+Customized colors are preserved when you pin the chart to a dashboard. The following section shows how to pin a chart.
## Saving to dashboards or workbooks
-After you configure a chart, you might want to add it to a dashboard or workbook. By adding a chart to a dashboard or workbook, you can make it accessible to your team. You can also gain insights by viewing it in the context of other monitoring information.
+After you configure a chart, you can add it to a dashboard or workbook. By adding a chart to a dashboard or workbook, you can make it accessible to your team. You can also gain insights by viewing it in the context of other monitoring information.
- To pin a configured chart to a dashboard, in the upper-right corner of the chart, select **Save to dashboard** and then **Pin to dashboard**. - To save a configured chart to a workbook, in the upper-right corner of the chart, select **Save to dashboard** and then **Save to workbook**. ## Alert rules You can use your visualization criteria to create a metric-based alert rule. The new alert rule includes your chart's target resource, metric, splitting, and filter dimensions. You can modify these settings by using the alert rule creation pane.
-To begin, select **New alert rule**.
+To create an alert rule:
+1. Select **New alert rule** in the upper-right corner of the chart.
+
+1. On the **Condition** tab, the **Signal name** defaults to the metric from your chart. You can choose a different metric.
-![Screenshot that shows the New alert rule button highlighted in red.](./media/metrics-charts/042.png)
+1. Enter a **Threshold value**. The threshold value is the value that triggers the alert. The Preview chart shows the threshold value as a horizontal line over the metric values.
-The alert rule creation pane opens. In the pane, you see the chart's metric dimensions. The fields in the pane are prepopulated to help you customize the rule.
+1. Select the **Details** tab.
-![Screenshot showing the rule creation pane.](./media/metrics-charts/041.png)
+1. On the **Details** tab, enter a **Name** and **Description** for the alert rule.
+
+1. Select a **Severity** level for the alert rule. Severities include Critical, Error, Warning, Informational, and Verbose.
+
+1. Select **Review + create** to review the alert rule, then select **Create** to create the alert rule.
For more information, see [Create, view, and manage metric alerts](../alerts/alerts-metric.md). ## Correlate metrics to logs
-To help customers diagnose the root cause of anomalies in their metrics chart, we created the *Drill into Logs* feature. Drill into Logs allows customers to correlate spikes in their metrics chart to logs and queries.
+**Drill into Logs** helps you diagnose the root cause of anomalies in your metrics chart. Drilling into logs allows you to correlate spikes in your metrics chart to logs and queries.
-This table summarizes the types of logs and queries provided:
+The following table summarizes the types of logs and queries provided:
| Term | Definition | ||-| | Activity logs | Provides insight into the operations on each Azure resource in the subscription from the outside (the management plane) in addition to updates on Service Health events. Use the Activity log to determine the what, who, and when for any write operations (PUT, POST, DELETE) taken on the resources in your subscription. There's a single Activity log for each Azure subscription. |
-| Diagnostic log | Provides insight into operations that were performed within an Azure resource (the data plane), for example getting a secret from a Key Vault or making a request to a database. The content of resource logs varies by the Azure service and resource type. **Note:** Must be provided by service and enabled by customer. |
-| Recommended log | Scenario-based queries that customer can use to investigate anomalies in their Metrics Explorer. |
+| Diagnostic log | Provides insight into operations that were performed within an Azure resource (the data plane). For example, getting a secret from a Key Vault or making a request to a database. The content of resource logs varies by the Azure service and resource type. You must enable logs for the resource. |
+| Recommended log | Scenario-based queries that you can use to investigate anomalies in Metrics Explorer. |
-Currently, Drill into Logs is available for select resource providers. The resource providers that have the complete Drill into Logs experience are:
+Currently, Drill into Logs is available for select resource providers. The following resource providers offer the complete Drill into Logs experience:
- Application Insights - Autoscale - App Services - Storage
-This screenshot shows a sample for the Application Insights resource provider.
-
-![Screenshot shows a spike in failures in app insights metrics pane.](./media/metrics-charts/drill-into-log-ai.png)
+ :::image type="content" source="./media/metrics-charts/drill-into-log-ai.png" alt-text="Screenshot that shows a spike in failures in app insights metrics pane." lightbox="./media/metrics-charts/drill-into-log-ai.png":::
1. To diagnose the spike in failed requests, select **Drill into Logs**.
azure-monitor Metrics Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-getting-started.md
By default, the chart shows the most recent 24 hours of metrics data. Use the **
> [!TIP] > Use the **time brush** to investigate an interesting area of the chart like a spike or a dip. Select an area on the chart and the chart zooms in to show more detail for the selected area. + ## Apply dimension filters and splitting [Filtering](../essentials/metrics-charts.md#filters) and [splitting](../essentials/metrics-charts.md#apply-splitting) are powerful diagnostic tools for the metrics that have dimensions. These features show how various metric segments or dimensions affect the overall value of the metric. You can use them to identify possible outliers. For example
azure-monitor Access Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/access-api.md
Use the token in requests to the Log Analytics endpoint:
POST /v1/workspaces/your workspace id/query?timespan=P1D Host: https://api.loganalytics.azure.com Content-Type: application/json
- Authorization: bearer <your access token>
+ Authorization: Bearer <your access token>
Body: {
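As a hedged illustration of the same request from code, the sketch below posts a query to the endpoint with the `Bearer` scheme in the `Authorization` header. The workspace ID and access token are placeholders, and the query body is just an example.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class LogAnalyticsQuery
{
    static async Task Main()
    {
        // Placeholders: a workspace ID and a token acquired for the Log Analytics API.
        string workspaceId = "<your-workspace-id>";
        string accessToken = "<your-access-token>";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        // Example query body; any valid KQL query works here.
        var body = "{\"query\": \"AzureActivity | summarize count() by Category\"}";
        var response = await client.PostAsync(
            $"https://api.loganalytics.azure.com/v1/workspaces/{workspaceId}/query?timespan=P1D",
            new StringContent(body, Encoding.UTF8, "application/json"));

        response.EnsureSuccessStatusCode();
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```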
azure-monitor Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/functions.md
Last updated 06/22/2022
# Functions in Azure Monitor log queries A function is a log query in Azure Monitor that can be used in other log queries as though it's a command. You can use functions to provide solutions to different customers and also reuse query logic in your own environment. This article describes how to use functions and how to create your own.
+## Permissions required
+
+- To view or use functions, you need `Microsoft.OperationalInsights/workspaces/query/*/read` permissions to the Log Analytics workspace, as provided by the [Log Analytics Reader built-in role](./manage-access.md#log-analytics-reader), for example.
+
+- To create or edit functions, you need `microsoft.operationalinsights/workspaces/savedSearches/write` permissions to the Log Analytics workspace, as provided by the [Log Analytics Contributor built-in role](./manage-access.md#log-analytics-contributor), for example.
+ ## Types of functions There are two types of functions in Azure Monitor:
azure-monitor Get Started Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/get-started-queries.md
Last updated 10/20/2021
> > If you already know how to query in Kusto Query Language (KQL) but need to quickly create useful queries based on resource types, see the saved example queries pane in [Use queries in Azure Monitor Log Analytics](../logs/queries.md).
-In this tutorial, you'll learn to write log queries in Azure Monitor. The article shows you how to:
+In this tutorial, you learn to write log queries in Azure Monitor. The article shows you how to:
- Understand query structure. - Sort query results.
Here's a video version of this tutorial:
> [!VIDEO https://www.microsoft.com/videoplayer/embed/RE42pGX] + ## Write a new query Queries can start with either a table name or the `search` command. It's a good idea to start with a table name because it defines a clear scope for the query. It also improves query performance and the relevance of the results.
search in (SecurityEvent) "Cryptographic"
| take 10 ```
-This query searches the `SecurityEvent` table for records that contain the phrase "Cryptographic." Of those records, 10 records will be returned and displayed. If you omit the `in (SecurityEvent)` part and run only `search "Cryptographic"`, the search will go over *all* tables. The process would then take longer and be less efficient.
+This query searches the `SecurityEvent` table for records that contain the phrase "Cryptographic." Of those records, 10 records are returned and displayed. If you omit the `in (SecurityEvent)` part and run only `search "Cryptographic"`, the search goes over *all* tables. The process would then take longer and be less efficient.
> [!IMPORTANT] > Search queries are ordinarily slower than table-based queries because they have to process more data.
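To run the same scoped search from code rather than the portal, here's a hedged sketch using the `LogsQueryClient` from the `Azure.Monitor.Query` library; the workspace ID is a placeholder, and the column read at the end (`Activity`) is only an example of a `SecurityEvent` field.

```csharp
using System;
using Azure.Identity;
using Azure.Monitor.Query;
using Azure.Monitor.Query.Models;

// Placeholder workspace ID.
string workspaceId = "<your-workspace-id>";

var client = new LogsQueryClient(new DefaultAzureCredential());

// Same scoped search as above: look in SecurityEvent only and return 10 rows from the last day.
LogsQueryResult result = client.QueryWorkspace(
    workspaceId,
    "search in (SecurityEvent) \"Cryptographic\" | take 10",
    new QueryTimeRange(TimeSpan.FromDays(1))).Value;

foreach (LogsTableRow row in result.Table.Rows)
{
    Console.WriteLine(row["Activity"]);
}
```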
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
Azure NetApp Files volumes are designed to be contained in a special purpose sub
* Australia Southeast * Brazil South * Canada Central
+* Central India
* Central US * East Asia * East US
Azure NetApp Files volumes are designed to be contained in a special purpose sub
* Southeast Asia * Sweden Central * Switzerland North
+* Switzerland West
* UAE Central * UAE North * UK South
azure-netapp-files Faq Application Resilience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-application-resilience.md
This article answers frequently asked questions (FAQs) about Azure NetApp Files
Azure NetApp Files might undergo occasional planned maintenance (for example, platform updates, service or software upgrades). From a file protocol (NFS/SMB) perspective, the maintenance operations are non-disruptive, as long as the application can handle the I/O pauses that might briefly occur during these events. The I/O pauses are typically short, ranging from a few seconds up to 30 seconds. The NFS protocol is especially robust, and client-server file operations continue normally. Some applications might require tuning to handle I/O pauses for as long as 30-45 seconds. As such, ensure that you're aware of the application's resiliency settings to cope with the storage service maintenance events. For human interactive applications leveraging the SMB protocol, the standard protocol settings are usually sufficient.
+>[!IMPORTANT]
+>To ensure a resilient architecture, it is crucial to recognize that the cloud operates under a _shared responsibility_ model. This model encompasses the Azure cloud platform, its infrastructure services, the OS-layer, and application vendors. Each of these components plays a vital role in gracefully handling potential application disruptions that may arise during storage service maintenance events.
+ ## Do I need to take special precautions for SMB-based applications? Yes, certain SMB-based applications require SMB Transparent Failover. SMB Transparent Failover enables maintenance operations on the Azure NetApp Files service without interrupting connectivity to server applications storing and accessing data on SMB volumes. To support SMB Transparent Failover for specific applications, Azure NetApp Files now supports the [SMB Continuous Availability shares option](azure-netapp-files-create-volumes-smb.md#continuous-availability). Using SMB Continuous Availability is only supported for workloads on:
Yes, certain SMB-based applications require SMB Transparent Failover. SMB Transp
* [FSLogix user profile containers](../virtual-desktop/create-fslogix-profile-container.md) * Microsoft SQL Server (not Linux SQL Server)
-**Custom applications are not supported with SMB Continuous Availability.**
+>[!CAUTION]
+>Custom applications are not supported with SMB Continuous Availability and cannot be used with SMB Continuous Availability enabled volumes.
## I'm running IBM MQ on Azure NetApp Files. What precautions can I take to avoid disruptions due to storage service maintenance events despite using the NFS protocol?
azure-portal How To Manage Azure Support Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/supportability/how-to-manage-azure-support-request.md
To reopen a closed support request, select **Reopen request** near the top of th
To cancel a support plan, see [Cancel a support plan](../../cost-management-billing/manage/cancel-azure-subscription.md#cancel-a-subscription-in-the-azure-portal).
+## Get help with a support request
+
+To get more help with managing a support request, go to the Azure portal and create a new support case. For **Issue type**, select **Technical**. For **Service**, choose **All services** and then select **Portal** as the service type. For **Problem type**, select **Issue with Support Ticket Experience**.
+ ## Next steps - Review the process to [create an Azure support request](how-to-create-azure-support-request.md).
azure-resource-manager Deployment Stacks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deployment-stacks.md
Title: Create & deploy deployment stacks in Bicep description: Describes how to create deployment stacks in Bicep. Previously updated : 07/12/2023 Last updated : 07/20/2023 # Deployment stacks (Preview)
Deployment stacks provide the following benefits:
- [What-if](./deploy-what-if.md) isn't available in the preview. - Management group scoped deployment stacks can only deploy the template to subscription. - When using the Azure CLI create command to modify an existing stack, the deployment process continues regardless of whether you choose _n_ for a prompt. To halt the procedure, use _[CTRL] + C_.-- There is an issue with the Azure CLI create command when the value `none` is passed to the `deny-settings-mode` parameter. Before the issue is fixed, use the `denyDelete` instead of `none`. - If you create or modify a deployment stack in the Azure portal, deny settings will be overwritten (support for deny settings in the Azure portal is currently in progress). - Management group deployment stacks are not yet available in the Azure portal. - ## Create deployment stacks A deployment stack resource can be created at resource group, subscription, or management group scope. The template passed into a deployment stack defines the resources to be created or updated at the target scope specified for the template deployment.
To create a deployment stack at the resource group scope:
```azurepowershell New-AzResourceGroupDeploymentStack `
- -Name '<deployment-stack-name>' `
- -ResourceGroupName '<resource-group-name>' `
- -TemplateFile '<bicep-file-name>' `
- -DenySettingsMode none
+ -Name "<deployment-stack-name>" `
+ -ResourceGroupName "<resource-group-name>" `
+ -TemplateFile "<bicep-file-name>" `
+ -DenySettingsMode "none"
``` # [CLI](#tab/azure-cli) ```azurecli az stack group create \
- --name <deployment-stack-name> \
- --resource-group <resource-group-name> \
- --template-file <bicep-file-name> \
- --deny-settings-mode none
+ --name '<deployment-stack-name>' \
+ --resource-group '<resource-group-name>' \
+ --template-file '<bicep-file-name>' \
+ --deny-settings-mode 'none'
``` # [Portal](#tab/azure-portal)
To create a deployment stack at the subscription scope:
```azurepowershell New-AzSubscriptionDeploymentStack `
- -Name '<deployment-stack-name>' `
- -Location '<location>' `
- -TemplateFile '<bicep-file-name>' `
- -DeploymentResourceGroupName '<resource-group-name>' `
- -DenySettingsMode none
+ -Name "<deployment-stack-name>" `
+ -Location "<location>" `
+ -TemplateFile "<bicep-file-name>" `
+ -DeploymentResourceGroupName "<resource-group-name>" `
+ -DenySettingsMode "none"
``` The `DeploymentResourceGroupName` parameter specifies the resource group used to store the managed resources. If the parameter isn't specified, the managed resources are stored in the subscription scope.
The `DeploymentResourceGroupName` parameter specifies the resource group used to
```azurecli az stack sub create \
- --name <deployment-stack-name> \
- --location <location> \
- --template-file <bicep-file-name> \
- --deployment-resource-group-name <resource-group-name> \
- --deny-settings-mode none
+ --name '<deployment-stack-name>' \
+ --location '<location>' \
+ --template-file '<bicep-file-name>' \
+ --deployment-resource-group-name '<resource-group-name>' \
+ --deny-settings-mode 'none'
``` The `deployment-resource-group-name` parameter specifies the resource group used to store the managed resources. If the parameter isn't specified, the managed resources are stored in the subscription scope.
To create a deployment stack at the management group scope:
```azurepowershell New-AzManagmentGroupDeploymentStack `
- -Name '<deployment-stack-name>' `
- -Location '<location>' `
- -TemplateFile '<bicep-file-name>' `
- -DeploymentSubscriptionId '<subscription-id>' `
- -DenySettingsMode none
+ -Name "<deployment-stack-name>" `
+ -Location "<location>" `
+ -TemplateFile "<bicep-file-name>" `
+ -DeploymentSubscriptionId "<subscription-id>" `
+ -DenySettingsMode "none"
``` The `deploymentSubscriptionId` parameter specifies the subscription used to store the managed resources. If the parameter isn't specified, the managed resources are stored in the management group scope.
The `deploymentSubscriptionId` parameter specifies the subscription used to stor
```azurecli az stack mg create \
- --name <deployment-stack-name> \
- --location <location> \
- --template-file <bicep-file-name> \
- --deployment-subscription-id <subscription-id> \
- --deny-settings-mode none
+ --name '<deployment-stack-name>' \
+ --location '<location>' \
+ --template-file '<bicep-file-name>' \
+ --deployment-subscription-id '<subscription-id>' \
+ --deny-settings-mode 'none'
``` The `deployment-subscription` parameter specifies the subscription used to store the managed resources. If the parameter isn't specified, the managed resources are stored in the management group scope.
To list deployment stack resources at the resource group scope:
```azurepowershell Get-AzResourceGroupDeploymentStack `
- -ResourceGroupName '<resource-group-name>'
+ -ResourceGroupName "<resource-group-name>"
``` # [CLI](#tab/azure-cli) ```azurecli az stack group list \
- --resource-group <resource-group-name>
+ --resource-group '<resource-group-name>'
``` # [Portal](#tab/azure-portal)
To list deployment stack resources at the management group scope:
```azurepowershell Get-AzManagementGroupDeploymentStack `
- -ManagementGroupId '<management-group-id>'
+ -ManagementGroupId "<management-group-id>"
``` # [CLI](#tab/azure-cli) ```azurecli az stack mg list \
- --management-group-id <management-group-id>
+ --management-group-id '<management-group-id>'
``` # [Portal](#tab/azure-portal)
To update a deployment stack at the resource group scope:
```azurepowershell Set-AzResourceGroupDeploymentStack `
- -Name '<deployment-stack-name>' `
- -ResourceGroupName '<resource-group-name>' `
- -TemplateFile '<bicep-file-name>' `
- -DenySettingsMode none
+ -Name "<deployment-stack-name>" `
+ -ResourceGroupName "<resource-group-name>" `
+ -TemplateFile "<bicep-file-name>" `
+ -DenySettingsMode "none"
``` # [CLI](#tab/azure-cli) ```azurecli az stack group create \
- --name <deployment-stack-name> \
- --resource-group <resource-group-name> \
- --template-file <bicep-file-name> \
- --deny-settings-mode none
+ --name '<deployment-stack-name>' \
+ --resource-group '<resource-group-name>' \
+ --template-file '<bicep-file-name>' \
+ --deny-settings-mode 'none'
``` > [!NOTE]
To update a deployment stack at the subscription scope:
```azurepowershell Set-AzSubscriptionDeploymentStack `
- -Name '<deployment-stack-name>' `
- -Location '<location>' `
- -TemplateFile '<bicep-file-name>' `
- -DeploymentResourceGroupName '<resource-group-name>' `
- -DenySettingsMode none
+ -Name "<deployment-stack-name>" `
+ -Location "<location>" `
+ -TemplateFile "<bicep-file-name>" `
+ -DeploymentResourceGroupName "<resource-group-name>" `
+ -DenySettingsMode "none"
``` The `DeploymentResourceGroupName` parameter specifies the resource group used to store the deployment stack resources. If you don't specify a resource group name, the deployment stack service will create a new resource group for you.
The `DeploymentResourceGroupName` parameter specifies the resource group used to
```azurecli az stack sub create \
- --name <deployment-stack-name> \
- --location <location> \
- --template-file <bicep-file-name> \
- --deployment-resource-group-name <resource-group-name> \
- --deny-settings-mode none
+ --name '<deployment-stack-name>' \
+ --location '<location>' \
+ --template-file '<bicep-file-name>' \
+ --deployment-resource-group-name '<resource-group-name>' \
+ --deny-settings-mode 'none'
``` # [Portal](#tab/azure-portal)
To update a deployment stack at the management group scope:
```azurepowershell Set-AzManagmentGroupDeploymentStack `
- -Name '<deployment-stack-name>' `
- -Location '<location>' `
- -TemplateFile '<bicep-file-name>' `
- -DeploymentSubscriptionId '<subscription-id>' `
- -DenySettingsMode none
+ -Name "<deployment-stack-name>" `
+ -Location "<location>" `
+ -TemplateFile "<bicep-file-name>" `
+ -DeploymentSubscriptionId "<subscription-id>" `
+ -DenySettingsMode "none"
``` # [CLI](#tab/azure-cli) ```azurecli az stack mg create \
- --name <deployment-stack-name> \
- --location <location> \
- --template-file <bicep-file-name> \
- --deployment-subscription-id <subscription-id> \
- --deny-settings-mode none
+ --name '<deployment-stack-name>' \
+ --location '<location>' \
+ --template-file '<bicep-file-name>' \
+ --deployment-subscription-id '<subscription-id>' \
+ --deny-settings-mode 'none'
``` # [Portal](#tab/azure-portal)
For example:
```azurepowershell New-AzSubscriptionDeploymentStack `
- -Name '<deployment-stack-name' `
- -TemplateFile '<bicep-file-name>' `
- -DenySettingsMode none`
+ -Name "<deployment-stack-name" `
+ -TemplateFile "<bicep-file-name>" `
+ -DenySettingsMode "none" `
-DeleteResourceGroups ` -DeleteResources ```
New-AzSubscriptionDeploymentStack `
- `delete-all`: use delete rather than detach for managed resources and resource groups. - `delete-resources`: use delete rather than detach for managed resources only.-- `delete-resource-groups`: use delete rather than detach for managed resource groups only. It's invalid to use `delete-resource-groups` by itself. `delete-resource-groups` must be used together with `delete-resources`.
+- `delete-resource-groups`: use delete rather than detach for managed resource groups only. It's invalid to use `delete-resource-groups` by itself. `delete-resource-groups` must be used together with `delete-resources`.
For example: ```azurecli az stack sub create `
- --name <deployment-stack-name> `
- --location <location> `
- --template-file <bicep-file-name> `
- --deny-settings-mode none `
+ --name '<deployment-stack-name>' `
+ --location '<location>' `
+ --template-file '<bicep-file-name>' `
+ --deny-settings-mode 'none' `
--delete-resource-groups ` --delete-resources ```
To delete deployment stack resources at the resource group scope:
```azurepowershell Remove-AzResourceGroupDeploymentStack `
- -name '<deployment-stack-name>' `
- -ResourceGroupName '<resource-group-name>' `
+ -name "<deployment-stack-name>" `
+ -ResourceGroupName "<resource-group-name>" `
[-DeleteAll/-DeleteResourceGroups/-DeleteResources] ```
Remove-AzResourceGroupDeploymentStack `
```azurecli az stack group delete \
- --name <deployment-stack-name> \
- --resource-group <resource-group-name> \
+ --name '<deployment-stack-name>' \
+ --resource-group '<resource-group-name>' \
[--delete-all/--delete-resource-groups/--delete-resources] ```
To delete deployment stack resources at the subscription scope:
```azurepowershell Remove-AzSubscriptionDeploymentStack `
- -Name '<deployment-stack-name>' `
+ -Name "<deployment-stack-name>" `
[-DeleteAll/-DeleteResourceGroups/-DeleteResources] ```
Remove-AzSubscriptionDeploymentStack `
```azurecli az stack sub delete \
- --name <deployment-stack-name> \
+ --name '<deployment-stack-name>' \
[--delete-all/--delete-resource-groups/--delete-resources] ```
To delete deployment stack resources at the management group scope:
```azurepowershell Remove-AzManagementGroupDeploymentStack `
- -Name '<deployment-stack-name>' `
- -ManagementGroupId '<management-group-id>' `
+ -Name "<deployment-stack-name>" `
+ -ManagementGroupId "<management-group-id>" `
[-DeleteAll/-DeleteResourceGroups/-DeleteResources] ```
Remove-AzManagementGroupDeploymentStack `
```azurecli az stack mg delete \
- --name <deployment-stack-name> \
- --management-group-id <management-group-id> \
+ --name '<deployment-stack-name>' \
+ --management-group-id '<management-group-id>' \
[--delete-all/--delete-resource-groups/--delete-resources] ```
To view managed resources at the resource group scope:
# [PowerShell](#tab/azure-powershell) ```azurepowershell
-(Get-AzResourceGroupDeploymentStack -Name '<deployment-stack-name>' -ResourceGroupName '<resource-group-name>').Resources
+(Get-AzResourceGroupDeploymentStack -Name "<deployment-stack-name>" -ResourceGroupName "<resource-group-name>").Resources
``` # [CLI](#tab/azure-cli) ```azurecli az stack group list \
- --name <deployment-stack-name> \
- --resource-group <resource-group-name> \
- --output json
+ --name '<deployment-stack-name>' \
+ --resource-group '<resource-group-name>' \
+ --output 'json'
``` # [Portal](#tab/azure-portal)
To view managed resources at the subscription scope:
# [PowerShell](#tab/azure-powershell) ```azurepowershell
-(Get-AzSubscriptionDeploymentStack -Name '<deployment-stack-name>').Resources
+(Get-AzSubscriptionDeploymentStack -Name "<deployment-stack-name>").Resources
``` # [CLI](#tab/azure-cli) ```azurecli az stack sub show \
- --name <deployment-stack-name> \
- --output json
+ --name '<deployment-stack-name>' \
+ --output 'json'
``` # [Portal](#tab/azure-portal)
To view managed resources at the management group scope:
# [PowerShell](#tab/azure-powershell) ```azurepowershell
-(Get-AzManagementGroupDeploymentStack -Name '<deployment-stack-name>' -ManagementGroupId '<management-group-id>').Resources
+(Get-AzManagementGroupDeploymentStack -Name "<deployment-stack-name>" -ManagementGroupId "<management-group-id>").Resources
``` # [CLI](#tab/azure-cli) ```azurecli az stack mg show \
- --name <deployment-stack-name> \
- --management-group-id <management-group-id> \
- --output json
+ --name '<deployment-stack-name>' \
+ --management-group-id '<management-group-id>' \
+ --output 'json'
``` # [Portal](#tab/azure-portal)
To apply deny settings at the resource group scope:
```azurepowershell New-AzResourceGroupDeploymentStack `
- -Name '<deployment-stack-name>' `
- -ResourceGroupName '<resource-group-name>' `
- -TemplateFile '<bicep-file-name>' `
- -DenySettingsMode DenyDelete `
- -DenySettingsExcludedActions Microsoft.Compute/virtualMachines/write Microsoft.StorageAccounts/delete `
- -DenySettingsExcludedPrincipals <object-id> <object-id>
+ -Name "<deployment-stack-name>" `
+ -ResourceGroupName "<resource-group-name>" `
+ -TemplateFile "<bicep-file-name>" `
+ -DenySettingsMode "DenyDelete" `
+ -DenySettingsExcludedActions "Microsoft.Compute/virtualMachines/write Microsoft.StorageAccounts/delete" `
+ -DenySettingsExcludedPrincipals "<object-id>" "<object-id>"
``` # [CLI](#tab/azure-cli) ```azurecli az stack group create \
- --name <deployment-stack-name> \
- --resource-group <resource-group-name> \
- --template-file <bicep-file-name> \
- --deny-settings-mode denyDelete \
- --deny-settings-excluded-actions Microsoft.Compute/virtualMachines/write Microsoft.StorageAccounts/delete \
- --deny-settings-excluded-principals <object-id> <object-id>
+ --name '<deployment-stack-name>' \
+ --resource-group '<resource-group-name>' \
+ --template-file '<bicep-file-name>' \
+ --deny-settings-mode 'denyDelete' \
+ --deny-settings-excluded-actions 'Microsoft.Compute/virtualMachines/write Microsoft.StorageAccounts/delete' \
+ --deny-settings-excluded-principals '<object-id>' '<object-id>'
``` # [Portal](#tab/azure-portal)
To apply deny settings at the subscription scope:
```azurepowershell New-AzSubscriptionDeploymentStack `
- -Name '<deployment-stack-name>' `
- -Location '<location>' `
- -TemplateFile '<bicep-file-name>' `
- -DenySettingsMode DenyDelete `
- -DenySettingsExcludedActions Microsoft.Compute/virtualMachines/write Microsoft.StorageAccounts/delete `
- -DenySettingsExcludedPrincipals <object-id> <object-id>
+ -Name "<deployment-stack-name>" `
+ -Location "<location>" `
+ -TemplateFile "<bicep-file-name>" `
+ -DenySettingsMode "DenyDelete" `
+ -DenySettingsExcludedActions "Microsoft.Compute/virtualMachines/write Microsoft.StorageAccounts/delete" `
+ -DenySettingsExcludedPrincipals "<object-id>" "<object-id>"
``` Use the `DeploymentResourceGroupName` parameter to specify the resource group name at which the deployment stack is created. If a scope isn't specified, it uses the scope of the deployment stack.
Use the `DeploymentResourceGroupName` parameter to specify the resource group na
```azurecli az stack sub create \
- --name <deployment-stack-name> \
- --location <location> \
- --template-file <bicep-file-name> \
- --deny-settings-mode denyDelete \
- --deny-settings-excluded-actions Microsoft.Compute/virtualMachines/write Microsoft.StorageAccounts/delete \
- --deny-settings-excluded-principals <object-id> <object-id>
+ --name '<deployment-stack-name>' \
+ --location '<location>' \
+ --template-file '<bicep-file-name>' \
+ --deny-settings-mode 'denyDelete' \
+ --deny-settings-excluded-actions 'Microsoft.Compute/virtualMachines/write Microsoft.StorageAccounts/delete' \
+ --deny-settings-excluded-principals '<object-id>' '<object-id>'
``` Use the `deployment-resource-group` parameter to specify the resource group at which the deployment stack is created. If a scope isn't specified, it uses the scope of the deployment stack.
To apply deny settings at the management group scope:
```azurepowershell New-AzManagmentGroupDeploymentStack `
- -Name '<deployment-stack-name>' `
- -Location '<location>' `
- -TemplateFile '<bicep-file-name>' `
- -DenySettingsMode DenyDelete `
- -DenySettingsExcludedActions Microsoft.Compute/virtualMachines/write Microsoft.StorageAccounts/delete `
- -DenySettingsExcludedPrincipals <object-id> <object-id>
+ -Name "<deployment-stack-name>" `
+ -Location "<location>" `
+ -TemplateFile "<bicep-file-name>" `
+ -DenySettingsMode "DenyDelete" `
+ -DenySettingsExcludedActions "Microsoft.Compute/virtualMachines/write Microsoft.StorageAccounts/delete" `
+ -DenySettingsExcludedPrincipals "<object-id>" "<object-id>"
``` Use the `DeploymentSubscriptionId ` parameter to specify the subscription ID at which the deployment stack is created. If a scope isn't specified, it uses the scope of the deployment stack.
Use the `DeploymentSubscriptionId ` parameter to specify the subscription ID at
```azurecli az stack mg create \
- --name <deployment-stack-name> \
- --location <location> \
- --template-file <bicep-file-name> \
- --deny-settings-mode denyDelete \
- --deny-settings-excluded-actions Microsoft.Compute/virtualMachines/write Microsoft.StorageAccounts/delete \
- --deny-settings-excluded-principals <object-id> <object-id>
+ --name '<deployment-stack-name>' \
+ --location '<location>' \
+ --template-file '<bicep-file-name>' \
+ --deny-settings-mode 'denyDelete' \
+ --deny-settings-excluded-actions 'Microsoft.Compute/virtualMachines/write Microsoft.StorageAccounts/delete' \
+ --deny-settings-excluded-principals '<object-id>' '<object-id>'
``` Use the `deployment-subscription ` parameter to specify the subscription ID at which the deployment stack is created. If a scope isn't specified, it uses the scope of the deployment stack.
Save-AzResourceGroupDeploymentStack `
```azurecli az stack group export \
- --name <deployment-stack-name> \
- --resource-group <resource-group-name>
+ --name '<deployment-stack-name>' \
+ --resource-group '<resource-group-name>'
``` # [Portal](#tab/azure-portal)
Save-AzSubscriptionDeploymentStack `
```azurecli az stack sub export \
- --name <deployment-stack-name>
+ --name '<deployment-stack-name>'
``` # [Portal](#tab/azure-portal)
Save-AzManagmentGroupDeploymentStack `
```azurecli az stack mg export \
- --name <deployment-stack-name> \
- --management-group-id <management-group-id>
+ --name '<deployment-stack-name>' \
+ --management-group-id '<management-group-id>'
``` # [Portal](#tab/azure-portal)
backup Backup Instant Restore Capability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-instant-restore-capability.md
Title: Azure Instant Restore Capability
description: Azure Instant Restore Capability and FAQs for VM backup stack, Resource Manager deployment model Previously updated : 04/23/2019 Last updated : 07/20/2023
By default, snapshots are retained for two days. This feature allows restore ope
* For premium storage accounts, the snapshots taken for instant recovery points count towards the 10-TB limit of allocated space. * You get an ability to configure the snapshot retention based on the restore needs. Depending on the requirement, you can set the snapshot retention to a minimum of one day in the backup policy pane as explained below. This will help you save cost for snapshot retention if you don't perform restores frequently. * It's a one directional upgrade. Once upgraded to Instant restore, you can't go back.
+* When you use an Instant Restore recovery point, you must restore the VM or disks to a subscription and resource group that don't require CMK-encrypted disks via Azure Policy.
>[!NOTE] >With this instant restore upgrade, the snapshot retention duration for all customers (**both new and existing**) will be set to a default value of two days. However, you can set the duration to any value between 1 and 5 days, according to your requirements.
baremetal-infrastructure Concepts Baremetal Infrastructure Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/concepts-baremetal-infrastructure-overview.md
description: Provides an overview of the BareMetal Infrastructure on Azure. Previously updated : 04/01/2023 Last updated : 07/01/2023 # What is BareMetal Infrastructure on Azure?
baremetal-infrastructure Connect Baremetal Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/connect-baremetal-infrastructure.md
Title: Connect BareMetal Infrastructure instances in Azure++ description: Learn how to identify and interact with BareMetal instances in the Azure portal or Azure CLI.
baremetal-infrastructure Know Baremetal Terms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/know-baremetal-terms.md
Title: Know the terms of Azure BareMetal Infrastructure++ description: Know the terms of Azure BareMetal Infrastructure. Last updated 04/01/2023
baremetal-infrastructure About Nc2 On Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/about-nc2-on-azure.md
Title: About Nutanix Cloud Clusters on Azure++ description: Learn about Nutanix Cloud Clusters on Azure and the benefits it offers.
baremetal-infrastructure Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/architecture.md
Title: Architecture of BareMetal Infrastructure for NC2++ description: Learn about the architecture of several configurations of BareMetal Infrastructure for NC2.
baremetal-infrastructure Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/faq.md
Title: FAQ++ description: Questions frequently asked about NC2 on Azure
baremetal-infrastructure Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/get-started.md
Title: Getting started++ description: Learn how to sign up, set up, and use Nutanix Cloud Clusters on Azure.
baremetal-infrastructure Nc2 Baremetal Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/nc2-baremetal-overview.md
Title: What is BareMetal Infrastructure for Nutanix Cloud Clusters on Azure?++ description: Learn about the features BareMetal Infrastructure offers for NC2 workloads.
baremetal-infrastructure Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/requirements.md
Title: Requirements++ description: Learn what you need to run NC2 on Azure, including Azure, Nutanix, networking, and other requirements.
baremetal-infrastructure Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/skus.md
Title: SKUs++ description: Learn about SKU options for NC2 on Azure, including core, RAM, storage, and network.
baremetal-infrastructure Solution Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/solution-design.md
Title: Solution design++ description: Learn about topologies and constraints for NC2 on Azure.
baremetal-infrastructure Supported Instances And Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/supported-instances-and-regions.md
Title: Supported instances and regions++ description: Learn about instances and regions supported for NC2 on Azure.
baremetal-infrastructure Use Cases And Supported Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/use-cases-and-supported-scenarios.md
Title: Use cases and supported scenarios++ description: Learn about use cases and supported scenarios for NC2 on Azure, including cluster management, disaster recovery, on-demand elasticity, and lift-and-shift.
communication-services Emergency Calling Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/emergency-calling-concept.md
description: Learn how to implement emergency calling for PSTN in your Azure Com
- Previously updated : 11/30/2021 Last updated : 07/20/2023
[!INCLUDE [Emergency Calling Notice](../../includes/emergency-calling-notice-include.md)]
-You can use the Azure Communication Services Calling SDK to add Enhanced Emergency dialing and Public Safety Answering Point (PSAP) callback support to your applications in the United States (US), Puerto Rico (PR), the United Kingdom (GB), and Canada (CA). The capability to dial 911 (in US, PR, and CA) and 999 or 112 (in GB) and receive a callback might be a requirement for your application. Verify the emergency calling requirements with your legal counsel.
+You can use the Azure Communication Services Calling SDK to add Enhanced Emergency dialing and Public Safety Answering Point (PSAP) callback support to your applications in the United States (US), Puerto Rico (PR), the United Kingdom (GB), Canada (CA), and Denmark (DK). The capability to dial 911 (in US, PR, and CA), to dial 112 (in DK), and to dial 999 or 112 (in GB) and receive a callback might be a requirement for your application. Verify the emergency calling requirements with your legal counsel.
-Calls to an emergency number are routed over the Microsoft network. Microsoft assigns a temporary phone number as the Call Line Identity (CLI) when a user places an emergency call from US, PR, GB, or CA. Microsoft temporarily maintains a mapping of the phone number to the caller's identity.
+Calls to an emergency number are routed over the Microsoft network. Microsoft assigns a temporary phone number as the Call Line Identity (CLI) when a user places an emergency call from US, PR, GB, CA, or DK. Microsoft temporarily maintains a mapping of the phone number to the caller's identity.
If there's a callback from the PSAP, Microsoft routes the call directly to the originating caller. The caller can accept the incoming PSAP call even if inbound calling is disabled. The service is available for Microsoft phone numbers. It requires the Azure resource that the emergency call originates from to have a Microsoft-issued phone number enabled with outbound dialing (also known as *make calls*).
-Azure Communication Services direct routing is currently in public preview and not intended for production workloads. Emergency calling is out of scope for Azure Communication Services direct routing.
- ## Call flow 1. An Azure Communication Services user identity dials an emergency number by using the Calling SDK from US or PR.
Emergency calling is automatically enabled for all users of the Azure Communicat
- Microsoft uses the ISO 3166-1 alpha-2 standard for country/region codes.
- - Microsoft supports US, PR, GB, and CA country/region codes for emergency number dialing.
+ - Microsoft supports US, PR, GB, CA, and DK country/region codes for emergency number dialing.
- If you don't provide the country/region code to the SDK, Microsoft uses the IP address to determine the country or region of the caller.
communication-services Emergency Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/telephony/emergency-calling.md
description: In this quickstart, you learn how to add emergency calling to your app by using Azure Communication Services. Previously updated : 12/13/2021 Last updated : 07/20/2023
communication-services Get Started Call To Teams User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-call-to-teams-user.md
+
+ Title: Quickstart - Call to Teams user with Azure Communication Services
+
+description: In this quickstart, you learn how to make a call to Teams user with the Azure Communication Calling SDK.
++ Last updated : 07/19/2023+++
+zone_pivot_groups: acs-plat-web-ios-android-windows
+++
+# Quickstart: How to make a call between your application and Teams user
+
+In this quickstart, you learn how to start a call from an Azure Communication Services user to a Teams user. You do so by completing the following steps:
+
+1. Enable federation of Azure Communication Services resource with Teams Tenant.
+2. Find Teams user ID.
+3. Start a call with Azure Communication Services Calling SDK.
+
+## Get the Teams user Object ID
+
+You can find the Teams user's information, including the Object ID, in [Microsoft Graph Explorer](https://developer.microsoft.com/en-us/graph/graph-explorer) by searching for the user's email address.
+
+```console
+https://graph.microsoft.com/v1.0/users/user-email@contoso.com
+```
+
+In the results, you can find the `id` field:
+
+```json
+ "userPrincipalName": "user-email@contoso.com",
+ "id": "31a011c2-2672-4dd0-b6f9-9334ef4999db"
+```
+
+You can also find the same ID in the [Azure portal](https://aka.ms/portal) on the **Users** tab:
+![User Object ID in Azure Portal](./includes/teams-user/portal-user-id.png)
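As an alternative, here's a hedged sketch of looking up the same object ID with the Azure CLI (the email address is a placeholder):

```azurecli
# Placeholder user principal name; prints only the user's object ID.
az ad user show \
  --id "user-email@contoso.com" \
  --query "id" \
  --output tsv
```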
++++++
+## Clean up resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
+
+## Next steps
+
+For more information, see the following articles:
+
+- Check out our [calling hero sample](../../samples/calling-hero-sample.md)
+- Get started with the [UI Library](../ui-library/get-started-composites.md)
+- Learn about [Calling SDK capabilities](./getting-started-with-calling.md)
+- Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md)
communication-services Get Started Teams Auto Attendant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-teams-auto-attendant.md
+
+ Title: Quickstart - Teams Auto Attendant on Azure Communication Services
+
+description: In this quickstart, you learn how to create and join a Teams Auto Attendant with the Azure Communication Calling SDK.
++ Last updated : 07/14/2023++++++
+# Quickstart: Join your calling app to a Teams Auto Attendant
+
+In this quickstart, you learn how to start a call from an Azure Communication Services user to a Teams Auto Attendant. You do so by completing the following steps:
+
+1. Enable federation of Azure Communication Services resource with Teams Tenant.
+2. Select or create Teams Auto Attendant via Teams Admin Center.
+3. Get email address of Auto Attendant via Teams Admin Center.
+4. Get Object ID of the Auto Attendant via Graph API.
+5. Start a call with Azure Communication Services Calling SDK.
+
+If you'd like to skip ahead to the end, you can download this quickstart as a sample on [GitHub](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/add-1-on-1-cte-video-calling).
+
+## Create or select Teams Auto Attendant
+
+Teams Auto Attendant is an automated call-handling system for incoming calls. It serves as a virtual receptionist, allowing callers to be automatically routed to the appropriate person or department without the need for a human operator. You can select an existing Auto Attendant or create a new one via the [Teams Admin Center](https://aka.ms/teamsadmincenter).
+
+Learn more about [how to create an Auto Attendant using the Teams Admin Center](/microsoftteams/create-a-phone-system-auto-attendant?tabs=general-info).
+
+## Find Object ID for Auto Attendant
+
+After the Auto Attendant is created, you need to find its associated Object ID so you can use it later for calls. The Object ID belongs to the Resource Account attached to the Auto Attendant. Open the [Resource Accounts tab](https://admin.teams.microsoft.com/company-wide-settings/resource-accounts) in the Teams Admin Center and find the account's email address.
+You can find all required information for the Resource Account in [Microsoft Graph Explorer](https://developer.microsoft.com/en-us/graph/graph-explorer) by searching for this email address.
+
+```console
+https://graph.microsoft.com/v1.0/users/lab-test2-cq@contoso.com
+```
+
+In the results, you can find the `id` field:
+
+```json
+ "userPrincipalName": "lab-test2-cq@contoso.com",
+ "id": "31a011c2-2672-4dd0-b6f9-9334ef4999db"
+```
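As an alternative sketch (assuming the Azure CLI is signed in to the same tenant; the resource-account email is a placeholder), you can make the same Microsoft Graph request from the command line:

```azurecli
# az rest acquires a Microsoft Graph token automatically for graph.microsoft.com URLs.
az rest \
  --method get \
  --url "https://graph.microsoft.com/v1.0/users/lab-test2-cq@contoso.com" \
  --query "id" \
  --output tsv
```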
++
+## Clean up resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
+
+## Next steps
+
+For more information, see the following articles:
+
+- Check out our [calling hero sample](../../samples/calling-hero-sample.md)
+- Get started with the [UI Library](../ui-library/get-started-composites.md)
+- Learn about [Calling SDK capabilities](./getting-started-with-calling.md)
+- Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md)
communication-services Get Started Teams Call Queue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-teams-call-queue.md
+
+ Title: Quickstart - Teams Call Queue on Azure Communication Services
+
+description: In this quickstart, you learn how to create and join a Teams call queue with the Azure Communication Calling SDK.
++ Last updated : 07/14/2023++++++
+# Quickstart: Join your calling app to a Teams call queue
+
+In this quickstart, you learn how to start a call from an Azure Communication Services user to a Teams Call Queue. You do so by completing the following steps:
+
+1. Enable federation of Azure Communication Services resource with Teams Tenant.
+2. Select or create Teams Call Queue via Teams Admin Center.
+3. Get email address of Call Queue via Teams Admin Center.
+4. Get Object ID of the Call Queue via Graph API.
+5. Start a call with Azure Communication Services Calling SDK.
+
+If you'd like to skip ahead to the end, you can download this quickstart as a sample on [GitHub](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/add-1-on-1-cte-video-calling).
+
+## Create or select Teams Call Queue
+
+Teams Call Queue is a feature in Microsoft Teams that efficiently distributes incoming calls among a group of designated users or agents. It's useful for customer support or call center scenarios. Calls are placed in a queue and assigned to the next available agent based on a predetermined routing method. Agents receive notifications and can handle calls using Teams' call controls. The feature offers reporting and analytics for performance tracking. It simplifies call handling, ensures a consistent customer experience, and optimizes agent productivity. You can select an existing Call Queue or create a new one via the [Teams Admin Center](https://aka.ms/teamsadmincenter).
+
+Learn more about [how to create a Call Queue using the Teams Admin Center](/microsoftteams/create-a-phone-system-call-queue).
+
+## Find Object ID for Call Queue
+
+After the Call Queue is created, you need to find its associated Object ID so you can use it later for calls. The Object ID belongs to the Resource Account attached to the Call Queue. Open the [Resource Accounts tab](https://admin.teams.microsoft.com/company-wide-settings/resource-accounts) in the Teams Admin Center and find the account's email address.
+You can find all required information for the Resource Account in [Microsoft Graph Explorer](https://developer.microsoft.com/en-us/graph/graph-explorer) by searching for this email address.
+
+```console
+https://graph.microsoft.com/v1.0/users/lab-test2-cq@contoso.com
+```
+
+In the results, you can find the `id` field:
+
+```json
+ "userPrincipalName": "lab-test2-cq@contoso.com",
+ "id": "31a011c2-2672-4dd0-b6f9-9334ef4999db"
+```
++
+## Clean up resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
+
+## Next steps
+
+For more information, see the following articles:
+
+- Check out our [calling hero sample](../../samples/calling-hero-sample.md)
+- Get started with the [UI Library](../ui-library/get-started-composites.md)
+- Learn about [Calling SDK capabilities](./getting-started-with-calling.md)
+- Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md)
communication-services Web Calling Push Notifications Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/samples/web-calling-push-notifications-sample.md
Title: ACS Web Calling SDK - Web push notifications
+ Title: Azure Communication Services Web Calling SDK - Web push notifications
description: Quickstart tutorial for ACS Web Calling SDK push notifications
Last updated 04/04/2023
-# ACS Web Calling SDK - Web push notifications quickstart
+# Azure Communication Services Web Calling SDK - Web push notifications quickstart
[!INCLUDE [Public Preview Disclaimer](../includes/public-preview-include.md)]
-ACS Web Calling SDK - Web push notifications is in public preview and available as part of version 1.12.0-beta.2+.
+Azure Communication Services Web Calling SDK - Web push notifications is in public preview and available as part of version 1.12.0-beta.2+.
[Please visit our web push notifications quickstart tutorial](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/blob/main/calling-web-push-notifications/README.md)
communications-gateway Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/deploy.md
If you haven't filled in the configuration correctly, you'll see an error messag
Check your configuration and ensure it matches your requirements. If the configuration is correct, select **Create**.
-You now need to wait for your resource to be provisioned and connected to the Microsoft Teams environment. When your resource has been provisioned and connected, your onboarding team will contact you and the Provisioning Status filed on the resource overview will be "Complete". We recommend you check in periodically to see if your resource has been provisioned. This process can take up to two weeks, because updating ACLs in the Azure and Teams environments is done on a periodic basis.
- Once your resource has been provisioned, a message appears saying **Your deployment is complete**. Select **Go to resource group**, and then check that your resource group contains the correct Azure Communications Gateway resource.
+> [!NOTE]
+> You will not be able to make calls immediately. You need to complete the remaining steps in this guide before your resource is ready to handle traffic.
+ :::image type="content" source="media/deploy/go-to-resource-group.png" alt-text="Screenshot of the Create an Azure Communications Gateway portal, showing a completed deployment screen."::: ## 3. Find the Object ID and Application ID for your Azure Communication Gateway resource
Microsoft Teams only sends traffic to domains that you've confirmed that you own
1. Share your DNS TXT record information with your onboarding team. Wait for your onboarding team to confirm that the DNS TXT record has been configured correctly. 1. Complete the following procedure: [Verify your custom domain name](../active-directory/fundamentals/add-custom-domain.md#verify-your-custom-domain-name).
+## 9. Wait for provisioning to complete
+
+You now need to wait for your resource to be provisioned and connected to the Microsoft Teams environment. When your resource has been provisioned and connected, your onboarding team will contact you and the Provisioning Status field on the resource overview will be "Complete". We recommend you check in periodically to see if your resource has been provisioned. This process can take up to two weeks, because updating ACLs in the Azure and Teams environments is done on a periodic basis.
+ ## Next steps - [Prepare for live traffic with Azure Communications Gateway](prepare-for-live-traffic.md)
confidential-ledger Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-portal.md
In this quickstart, you create a confidential ledger with the [Azure portal](htt
## Sign in to Azure
-Sign in to the Azure portal at https://portal.azure.com.
+Sign in to the [Azure portal](https://portal.azure.com).
## Create a confidential ledger
connectors Connectors Create Api Mq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-mq.md
The MQ connector has different versions, based on [logic app type and host envir
| Logic app | Environment | Connection version | |--|-|--| | **Consumption** | Multi-tenant Azure Logic Apps and Integration Service Environment (ISE) | Managed connector, which appears in the designer under the **Enterprise** label. This connector provides only actions, not triggers. In on-premises MQ server scenarios, the managed connector supports server only authentication with TLS (SSL) encryption. <br><br>For more information, review the following documentation: <br><br>- [MQ managed connector reference](/connectors/mq) <br>- [Managed connectors in Azure Logic Apps](managed.md) |
-| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (ASE v3 with Windows plans only) | Managed connector, which appears in the designer under the **Azure** label, and built-in connector, which appears in the designer under the **Built-in** label and is service provider based. The built-in version differs in the following ways: <br><br>- The built-in version includes actions *and* triggers. <br><br>- The built-in version can connect directly to an MQ server and access Azure virtual networks. You don't need an on-premises data gateway. <br><br>- The built-in version supports both server authentication and server-client authentication with TLS (SSL) encryption for data in transit, message encoding for both the send and receive operations, and Azure virtual network integration. <br><br>For more information, review the following documentation: <br><br>- [MQ managed connector reference](/connectors/mq) <br>- [MQ built-in connector reference](/azure/logic-apps/connectors/built-in/reference/mq/) <br>- [Built-in connectors in Azure Logic Apps](built-in.md) |
+| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (ASE v3 with Windows plans only) | Managed connector, which appears in the connector gallery under **Runtime** > **Shared**, and the built-in connector, which appears in the connector gallery under **Runtime** > **In-App** and is [service provider-based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). The built-in version differs in the following ways: <br><br>- The built-in version includes actions *and* triggers. <br><br>- The built-in connector can directly connect to an MQ server and access Azure virtual networks by using a connection string without an on-premises data gateway. <br><br>- The built-in version supports both server authentication and server-client authentication with TLS (SSL) encryption for data in transit, message encoding for both the send and receive operations, and Azure virtual network integration. <br><br>For more information, review the following documentation: <br><br>- [MQ managed connector reference](/connectors/mq) <br>- [MQ built-in connector reference](/azure/logic-apps/connectors/built-in/reference/mq/) <br>- [Built-in connectors in Azure Logic Apps](built-in.md) |
## Authentication with TLS (SSL) encryption
connectors Connectors Create Api Sqlazure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-sqlazure.md
The SQL Server connector has different versions, based on [logic app type and ho
|--|-|-| | **Consumption** | Multi-tenant Azure Logic Apps | Managed connector, which appears in the designer under the **Standard** label. For more information, review the following documentation: <br><br>- [SQL Server managed connector reference](/connectors/sql) <br>- [Managed connectors in Azure Logic Apps](managed.md) | | **Consumption** | Integration service environment (ISE) | Managed connector, which appears in the designer under the **Standard** label, and the ISE version, which has different message limits than the Standard class. For more information, review the following documentation: <br><br>- [SQL Server managed connector reference](/connectors/sql) <br>- [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) <br>- [Managed connectors in Azure Logic Apps](managed.md) |
-| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector, which appears in the designer under the **Azure** label, and built-in connector, which appears in the designer under the **Built-in** label and is [service provider based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). The built-in version differs in the following ways: <br><br>- The built-in version can connect directly to an SQL database and access Azure virtual networks. You don't need an on-premises data gateway. <br><br>For more information, review the following documentation: <br><br>- [SQL Server managed connector reference](/connectors/sql/) <br>- [SQL Server built-in connector reference](/azure/logic-apps/connectors/built-in/reference/sql/) <br>- [Built-in connectors in Azure Logic Apps](built-in.md) |
+| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector, which appears in the connector gallery under **Runtime** > **Shared**, and the built-in connector, which appears in the connector gallery under **Runtime** > **In-App** and is [service provider-based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). The built-in connector differs in the following ways: <br><br>- The built-in connector can directly connect to an SQL database and access Azure virtual networks by using a connection string without an on-premises data gateway. <br><br>For more information, review the following documentation: <br><br>- [SQL Server managed connector reference](/connectors/sql/) <br>- [SQL Server built-in connector reference](/azure/logic-apps/connectors/built-in/reference/sql/) <br>- [Built-in connectors in Azure Logic Apps](built-in.md) |
### Limitations
connectors Connectors Sftp Ssh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-sftp-ssh.md
The SFTP connector has different versions, based on [logic app type and host env
||-|-| | **Consumption** | Multi-tenant Azure Logic Apps | Managed connector, which appears in the designer under the **Standard** label. For more information, review the following documentation: <br><br>- [SFTP-SSH managed connector reference](/connectors/sftpwithssh/) <br>- [Managed connectors in Azure Logic Apps](managed.md) | | **Consumption** | Integration service environment (ISE) | Managed connector, which appears in the designer under the **Standard** label, and the ISE version, which appears in the designer with the **ISE** label and has different message limits than the managed connector. For more information, review the following documentation: <br><br>- [SFTP-SSH managed connector reference](/connectors/sftpwithssh/) <br>- [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) <br>- [Managed connectors in Azure Logic Apps](managed.md) |
-| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector, which appears in the designer under the **Azure** label and built-in connector, which appears in the designer under the **Built-in** label and is [service provider based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). The built-in connector can directly access Azure virtual networks with a connection string. For more information, review the following documentation: <br><br>- [SFTP-SSH managed connector reference](/connectors/sftpwithssh/) <br>- [SFTP built-in connector reference](/azure/logic-apps/connectors/built-in/reference/sftp/) <br><br>- [Managed connectors in Azure Logic Apps](managed.md) <br>- [Built-in connectors in Azure Logic Apps](built-in.md) |
+| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector, which appears in the connector gallery under **Runtime** > **Shared**, and the built-in connector, which appears in the connector gallery under **Runtime** > **In-App** and is [service provider-based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). The built-in connector can directly connect to an SFTP server and access Azure virtual networks by using a connection string without an on-premises data gateway. For more information, review the following documentation: <br><br>- [SFTP-SSH managed connector reference](/connectors/sftpwithssh/) <br>- [SFTP built-in connector reference](/azure/logic-apps/connectors/built-in/reference/sftp/) <br><br>- [Managed connectors in Azure Logic Apps](managed.md) <br>- [Built-in connectors in Azure Logic Apps](built-in.md) |
## General limitations
container-apps Microservices Dapr Bindings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr-bindings.md
In the Azure portal, verify the batch container app is logging each insert into
1. Copy the Container App name from the terminal output.
-1. Navigate to the [Azure portal](https://ms.portal.azure.com) and search for the Container App resource by name.
+1. Sign in to the [Azure portal](https://portal.azure.com) and search for the Container App resource by name.
1. In the Container App dashboard, select **Monitoring** > **Log stream**.
In the Azure portal, verify the batch container app is logging each insert into
1. Copy the Container App name from the terminal output.
-1. Navigate to the [Azure portal](https://ms.portal.azure.com) and search for the Container App resource by name.
+1. Sign in to the [Azure portal](https://portal.azure.com) and search for the Container App resource by name.
1. In the Container App dashboard, select **Monitoring** > **Log stream**.
In the Azure portal, verify the batch container app is logging each insert into
1. Copy the Container App name from the terminal output.
-1. Navigate to the [Azure portal](https://ms.portal.azure.com) and search for the Container App resource by name.
+1. Sign in to the [Azure portal](https://portal.azure.com) and search for the Container App resource by name.
1. In the Container App dashboard, select **Monitoring** > **Log stream**.
container-apps Microservices Dapr Pubsub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr-pubsub.md
In the Azure portal, verify the `checkout` service is publishing messages to the
1. Copy the `checkout` container app name from the terminal output.
-1. Go to the [Azure portal](https://ms.portal.azure.com) and search for the container app resource by name.
+1. Sign in to the [Azure portal](https://portal.azure.com) and search for the container app resource by name.
1. In the Container Apps dashboard, select **Monitoring** > **Log stream**.
In the Azure portal, verify the `checkout` service is publishing messages to the
1. Copy the `checkout` container app name from the terminal output.
-1. Go to the [Azure portal](https://ms.portal.azure.com) and search for the container app resource by name.
+1. Sign in to the [Azure portal](https://portal.azure.com) and search for the container app resource by name.
1. In the Container Apps dashboard, select **Monitoring** > **Log stream**.
In the Azure portal, verify the `checkout` service is publishing messages to the
1. Copy the `checkout` container app name from the terminal output.
-1. Go to the [Azure portal](https://ms.portal.azure.com) and search for the container app resource by name.
+1. Sign in to the [Azure portal](https://portal.azure.com) and search for the container app resource by name.
1. In the Container Apps dashboard, select **Monitoring** > **Log stream**.
container-apps Microservices Dapr Service Invoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr-service-invoke.md
In the Azure portal, verify the `checkout` service is passing orders to the `ord
1. Copy the `checkout` container app's name from the terminal output.
-1. Navigate to the [Azure portal](https://ms.portal.azure.com) and search for the container app resource by name.
+1. Sign in to the [Azure portal](https://portal.azure.com) and search for the container app resource by name.
1. In the Container Apps dashboard, select **Monitoring** > **Log stream**.
In the Azure portal, verify the `checkout` service is passing orders to the `ord
1. Copy the `checkout` container app's name from the terminal output.
-1. Navigate to the [Azure portal](https://ms.portal.azure.com) and search for the container app resource by name.
+1. Sign in to the [Azure portal](https://portal.azure.com) and search for the container app resource by name.
1. In the Container Apps dashboard, select **Monitoring** > **Log stream**.
In the Azure portal, verify the `checkout` service is passing orders to the `ord
1. Copy the `checkout` container app's name from the terminal output.
-1. Navigate to the [Azure portal](https://ms.portal.azure.com) and search for the container app resource by name.
+1. Sign in to the [Azure portal](https://portal.azure.com) and search for the container app resource by name.
1. In the Container Apps dashboard, select **Monitoring** > **Log stream**.
container-apps Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/services.md
The following table shows you which service to use in development, and which ser
|||| | Cache | Open-source Redis | Azure Cache for Redis | | Database | N/A | Azure Cosmos DB |
-| Database | Open-source PostgreSQL | Azure Database for PostgreSQL Flexible Service |
+| Database | Open-source PostgreSQL | Azure Database for PostgreSQL Flexible Server |
You're responsible for data continuity between development and production environments.
container-instances Container Instances Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-quickstart-portal.md
In this quickstart, you use the Azure portal to deploy an isolated Docker contai
## Sign in to Azure
-Sign in to the Azure portal at https://portal.azure.com.
+Sign in to the [Azure portal](https://portal.azure.com).
If you don't have an Azure subscription, create a [free account][azure-free-account] before you begin.
container-registry Container Registry Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-geo-replication.md
Geo-replication is a feature of [Premium registries](container-registry-skus.md)
![Switching service tiers in the Azure portal](media/container-registry-skus/update-registry-sku.png)
-To configure geo-replication for your Premium registry, log in to the Azure portal at https://portal.azure.com.
+To configure geo-replication for your Premium registry, sign in to the [Azure portal](https://portal.azure.com).
Navigate to your Azure Container Registry, and select **Replications**:
container-registry Container Registry Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-get-started-portal.md
You must also have Docker installed locally with the daemon running. Docker prov
## Sign in to Azure
-Sign in to the Azure portal at https://portal.azure.com.
+Sign in to the [Azure portal](https://portal.azure.com).
## Create a container registry
cosmos-db Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/change-feed.md
[Change feed](../change-feed.md) support in the Azure Cosmos DB for Apache Cassandra is available through the query predicates in the Cassandra Query Language (CQL). Using these predicate conditions, you can query the change feed API. Applications can get the changes made to a table using the primary key (also known as the partition key) as is required in CQL. You can then take further actions based on the results. Changes to the rows in the table are captured in the order of their modification time and the sort order per partition key.
-The following example shows how to get a change feed on all the rows in a API for Cassandra Keyspace table using .NET. The predicate COSMOS_CHANGEFEED_START_TIME() is used directly within CQL to query items in the change feed from a specified start time (in this case current datetime). You can download the full sample, for C# [here](/samples/azure-samples/azure-cosmos-db-cassandra-change-feed/cassandra-change-feed/) and for Java [here](https://github.com/Azure-Samples/cosmos-changefeed-cassandra-java).
+The following example shows how to get a change feed on all the rows in an API for Cassandra keyspace table using .NET. The predicate COSMOS_CHANGEFEED_START_TIME() is used directly within CQL to query items in the change feed from a specified start time (in this case, the current datetime). You can download the full sample for C# [here](https://github.com/azure-samples/azure-cosmos-db-cassandra-change-feed) and for Java [here](https://github.com/Azure-Samples/cosmos-changefeed-cassandra-java).
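A minimal CQL sketch of that query shape is shown below. The keyspace and table name (`store.orders`) and the literal start timestamp are illustrative placeholders, not values taken from the linked samples.

```sql
-- Illustrative sketch: read the change feed of a table from a given UTC start time.
-- Replace store.orders and the timestamp with your own keyspace.table and start time.
SELECT * FROM store.orders WHERE COSMOS_CHANGEFEED_START_TIME() = '2023-07-19 00:00:00.000+0000'
```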
In each iteration, the query resumes at the last point changes were read, using the paging state. The result is a continuous stream of new changes to the table in the keyspace, covering rows that are inserted or updated. Watching for delete operations by using the change feed in the API for Cassandra is currently not supported.
The following limitations are applicable when using change feed with API for Cas
* Inserts and updates are currently supported. Delete operation is not yet supported. As a workaround, you can add a soft marker on rows that are being deleted. For example, add a field in the row called "deleted" and set it to "true". * Last update is persisted as in core API for NoSQL and intermediate updates to the entity are not available. - ## Error handling The following error codes and messages are supported when using change feed in API for Cassandra:
The following error codes and messages are supported when using change feed in A
## Next steps * [Manage Azure Cosmos DB for Apache Cassandra resources using Azure Resource Manager templates](templates-samples.md)+
cosmos-db Migration Choices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/migration-choices.md
A summary of migration pathways from your current solution to Azure Cosmos DB fo
|Offline|[Intra-account container copy](intra-account-container-copy.md)|Azure Cosmos DB for MongoDB|Azure Cosmos DB for MongoDB|&bull; Command-line tool; No set up needed.<br/>&bull; Suitable for large datasets| |Offline|[Azure Cosmos DB desktop data migration tool](how-to-migrate-desktop-tool.md)|&bull;Azure Cosmos DB for NoSQL<br/>&bull;Azure Cosmos DB for MongoDB<br/>&bull;Azure Cosmos DB for Table<br/>&bull;Azure Table storage<br/>&bull;JSON Files<br/>&bull;MongoDB<br/>&bull;SQL Server<br/>|&bull;Azure Cosmos DB for NoSQL<br/>&bull;Azure Cosmos DB for MongoDB<br/>&bull;Azure Cosmos DB for Table<br/>&bull;Azure Table storage<br/>&bull;JSON Files<br/>&bull;MongoDB<br/>&bull;SQL Server<br/>|&bull; Command-line tool<br/>&bull; Open-source| |Online|[Azure Database Migration Service](../dms/tutorial-mongodb-cosmos-db-online.md)| MongoDB|Azure Cosmos DB for MongoDB |&bull; Makes use of the Azure Cosmos DB bulk executor library. <br/>&bull; Suitable for large datasets and takes care of replicating live changes. <br/>&bull; Works only with other MongoDB sources.|
-|Offline|[Azure Database Migration Service](../dms/tutorial-mongodb-cosmos-db-online.md)| MongoDB| Azure Cosmos DB for MongoDB| &bull; Makes use of the Azure Cosmos DB bulk executor library. <br/>&bull; Suitable for large datasets and takes care of replicating live changes. <br/>&bull; Works only with other MongoDB sources.|
+|Offline|[Azure Database Migration Service](../dms/tutorial-mongodb-cosmos-db.md)| MongoDB| Azure Cosmos DB for MongoDB| &bull; Makes use of the Azure Cosmos DB bulk executor library. <br/>&bull; Suitable for large datasets and takes care of replicating live changes. <br/>&bull; Works only with other MongoDB sources.|
|Offline|[Azure Data Factory](../data-factory/connector-azure-cosmos-db-mongodb-api.md)| &bull;JSON/CSV Files<br/>&bull;Azure Cosmos DB for NoSQL<br/>&bull;Azure Cosmos DB for MongoDB <br/>&bull;MongoDB<br/>&bull;SQL Server<br/>&bull;Table Storage<br/>&bull;Azure Blob Storage <br/><br/> See the [Azure Data Factory](../data-factory/connector-overview.md) article for other supported sources. | &bull;Azure Cosmos DB for NoSQL<br/>&bull;Azure Cosmos DB for MongoDB <br/>&bull; JSON files <br/><br/> See the [Azure Data Factory](../data-factory/connector-overview.md) article for other supported targets.| &bull; Easy to set up and supports multiple sources. <br/>&bull; Makes use of the Azure Cosmos DB bulk executor library. <br/>&bull; Suitable for large datasets. <br/>&bull; Lack of checkpointing means that any issue during the course of migration would require a restart of the whole migration process.<br/>&bull; Lack of a dead letter queue would mean that a few erroneous files could stop the entire migration process. <br/>&bull; Needs custom code to increase read throughput for certain data sources.| |Offline|Existing Mongo Tools ([mongodump](mongodb/tutorial-mongotools-cosmos-db.md#mongodumpmongorestore), [mongorestore](mongodb/tutorial-mongotools-cosmos-db.md#mongodumpmongorestore), [Studio3T](mongodb/connect-using-mongochef.md))|&bull;MongoDB<br/>&bull;Azure Cosmos DB for MongoDB<br/> | Azure Cosmos DB for MongoDB| &bull; Easy to set up and integration. <br/>&bull; Needs custom handling for throttles.|
cosmos-db Concat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/concat.md
Title: CONCAT
-description: An Azure Cosmos DB for NoSQL system function that returns
+description: An Azure Cosmos DB for NoSQL system function that returns a string concatenated from two or more strings.
cosmos-db Cot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/cot.md
Previously updated : 07/01/2023 Last updated : 07/19/2023
COT(<numeric_expr>)
Returns a numeric expression. ## Examples
-
+ The following example calculates the cotangent of the specified angle using the function.
-
-```sql
-SELECT VALUE {
- cotangent: COT(124.1332)
-}
-```
-
-```json
-[
- {
- "cotangent": -0.040311998371148884
- }
-]
-```
++ ## Remarks -- This system function doesn't utilize the index.
+- This function doesn't use the index.
## Next steps
cosmos-db Datetimeadd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/datetimeadd.md
Title: DateTimeAdd in Azure Cosmos DB query language
-description: Learn about SQL system function DateTimeAdd in Azure Cosmos DB.
-
+ Title: DateTimeAdd
+
+description: An Azure Cosmos DB for NoSQL system function that returns a date and time that's the result of adding a number to a part of the specified date and time.
+++ - Previously updated : 07/09/2020---+ Last updated : 07/19/2023+
-# DateTimeAdd (Azure Cosmos DB)
+
+# DateTimeAdd (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
-Returns DateTime string value resulting from adding a specified number value (as a signed integer) to a specified DateTime string
+Returns a date and time string value that is the result of adding a specified number value to the provided date and time string.
## Syntax ```sql
-DateTimeAdd (<DateTimePart> , <numeric_expr> ,<DateTime>)
+DateTimeAdd(<date_time_part>, <numeric_expr>, <date_time>)
``` ## Arguments
-
-*DateTimePart*
- The part of date to which DateTimeAdd adds an integer number. This table lists all valid DateTimePart arguments:
-
-| DateTimePart | abbreviations |
-| | -- |
-| Year | "year", "yyyy", "yy" |
-| Month | "month", "mm", "m" |
-| Day | "day", "dd", "d" |
-| Hour | "hour", "hh" |
-| Minute | "minute", "mi", "n" |
-| Second | "second", "ss", "s" |
-| Millisecond | "millisecond", "ms" |
-| Microsecond | "microsecond", "mcs" |
-| Nanosecond | "nanosecond", "ns" |
-
-*numeric_expr*
- Is a signed integer value that will be added to the DateTimePart of the specified DateTime
-
-*DateTime*
- UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
-
-|Format|Description|
-|-|-|
-|YYYY|four-digit year|
-|MM|two-digit month (01 = January, etc.)|
-|DD|two-digit day of month (01 through 31)|
-|T|signifier for beginning of time elements|
-|hh|two-digit hour (00 through 23)|
-|mm|two-digit minutes (00 through 59)|
-|ss|two-digit seconds (00 through 59)|
-|.fffffff|seven-digit fractional seconds|
-|Z|UTC (Coordinated Universal Time) designator|
-
- For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
-
-## Return types
-Returns a UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
-
-|Format|Description|
-|-|-|
-|YYYY|four-digit year|
-|MM|two-digit month (01 = January, etc.)|
-|DD|two-digit day of month (01 through 31)|
-|T|signifier for beginning of time elements|
-|hh|two-digit hour (00 through 23)|
-|mm|two-digit minutes (00 through 59)|
-|ss|two-digit seconds (00 through 59)|
-|.fffffff|seven-digit fractional seconds|
-|Z|UTC (Coordinated Universal Time) designator|
+| | Description |
+| | |
+| **`date_time_part`** | A string representing a part of an ISO 8601 date format specification. This part is used to indicate which aspect of the date to modify by the related numeric expression. |
+| **`numeric_expr`** | A numeric expression resulting in a signed integer. |
+| **`date_time`** | A Coordinated Universal Time (UTC) date and time string in the ISO 8601 format `YYYY-MM-DDThh:mm:ss.fffffffZ`. |
-## Remarks
+> [!NOTE]
+> For more information on the ISO 8601 format, see [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601).
-DateTimeAdd will return `undefined` for the following reasons:
+## Return types
-- The DateTimePart value specified is invalid-- The numeric_expr specified is not a valid integer-- The DateTime in the argument or result is not a valid ISO 8601 DateTime.
+Returns a UTC date and time string in the ISO 8601 format `YYYY-MM-DDThh:mm:ss.fffffffZ`.
## Examples
-
-The following example adds 1 month to the DateTime: `2020-07-09T23:20:13.4575530Z`
-```sql
-SELECT DateTimeAdd("mm", 1, "2020-07-09T23:20:13.4575530Z") AS OneMonthLater
-```
+The following example adds various values (one year, one month, one day, one hour) to the date **July 3, 2020** at **midnight (00:00 UTC)**. The example also subtracts various values (two years, two months, two days, two hours) from the same date. Finally, this example uses an expression to modify the seconds of the same date.
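A sketch of such a query follows, using the syntax above. The chosen part abbreviations and the arithmetic expression for the seconds are illustrative.

```sql
SELECT VALUE {
    -- add single units to July 3, 2020 at midnight UTC
    addOneYear: DateTimeAdd("yyyy", 1, "2020-07-03T00:00:00.0000000Z"),
    addOneMonth: DateTimeAdd("mm", 1, "2020-07-03T00:00:00.0000000Z"),
    addOneDay: DateTimeAdd("dd", 1, "2020-07-03T00:00:00.0000000Z"),
    addOneHour: DateTimeAdd("hh", 1, "2020-07-03T00:00:00.0000000Z"),
    -- subtract units by passing a negative numeric expression
    subtractTwoYears: DateTimeAdd("yyyy", -2, "2020-07-03T00:00:00.0000000Z"),
    subtractTwoHours: DateTimeAdd("hh", -2, "2020-07-03T00:00:00.0000000Z"),
    -- a numeric expression (30 * 2) also works as the number argument
    addSixtySeconds: DateTimeAdd("ss", 30 * 2, "2020-07-03T00:00:00.0000000Z")
}
```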
-```json
-[
- {
- "OneMonthLater": "2020-08-09T23:20:13.4575530Z"
- }
-]
-```
-The following example subtracts 2 hours from the DateTime: `2020-07-09T23:20:13.4575530Z`
-```sql
-SELECT DateTimeAdd("hh", -2, "2020-07-09T23:20:13.4575530Z") AS TwoHoursEarlier
-```
+## Remarks
-```json
-[
- {
- "TwoHoursEarlier": "2020-07-09T21:20:13.4575530Z"
- }
-]
-```
+- This function returns `undefined` for these reasons:
+ - The specified date and time part is invalid.
+ - The numeric expression isn't a valid integer.
+ - The date and time in the argument isn't a valid ISO 8601 date and time string.
+- The ISO 8601 date format specifies valid date and time parts to use with this function:
+ | | Format |
+ | | |
+ | **Year** | `year`, `yyyy`, `yy` |
+ | **Month** | `month`, `mm`, `m` |
+ | **Day** | `day`, `dd`, `d` |
+ | **Hour** | `hour`, `hh` |
+ | **Minute** | `minute`, `mi`, `n` |
+ | **Second** | `second`, `ss`, `s` |
+ | **Millisecond** | `millisecond`, `ms` |
+ | **Microsecond** | `microsecond`, `mcs` |
+ | **Nanosecond** | `nanosecond`, `ns` |
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`DateTimeBin`](datetimebin.md)
cosmos-db Datetimebin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/datetimebin.md
Title: DateTimeBin in Azure Cosmos DB query language
-description: Learn about SQL system function DateTimeBin in Azure Cosmos DB.
---- Previously updated : 05/27/2022 --
-
-
-# DateTimeBin (Azure Cosmos DB)
- [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
-
-Returns the nearest multiple of *BinSize* below the specified DateTime given the unit of measurement *DateTimePart* and start value of *BinAtDateTime*.
--
-## Syntax
-
-```sql
-DateTimeBin (<DateTime> , <DateTimePart> [,BinSize] [,BinAtDateTime])
-```
--
-## Arguments
-
-*DateTime*
- The string value date and time to be binned. A UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
-
-|Format|Description|
-|-|-|
-|YYYY|four-digit year|
-|MM|two-digit month (01 = January, etc.)|
-|DD|two-digit day of month (01 through 31)|
-|T|signifier for beginning of time elements|
-|hh|two-digit hour (00 through 23)|
-|mm|two-digit minutes (00 through 59)|
-|ss|two-digit seconds (00 through 59)|
-|.fffffff|seven-digit fractional seconds|
-|Z|UTC (Coordinated Universal Time) designator|
-
-For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
-
-*DateTimePart*
- The date time part specifies the units for BinSize. DateTimeBin is Undefined for DayOfWeek, Year, and Month. The finest granularity for binning by Nanosecond is 100 nanosecond ticks; if Nanosecond is specified with a BinSize less than 100, the result is Undefined. This table lists all valid DateTimePart arguments for DateTimeBin:
-
-| DateTimePart | abbreviations |
-| | -- |
-| Day | "day", "dd", "d" |
-| Hour | "hour", "hh" |
-| Minute | "minute", "mi", "n" |
-| Second | "second", "ss", "s" |
-| Millisecond | "millisecond", "ms" |
-| Microsecond | "microsecond", "mcs" |
-| Nanosecond | "nanosecond", "ns" |
-
-*BinSize* (optional)
- Numeric value that specifies the size of bins. If not specified, the default value is one.
+ Title: DateTimeBin
+
+description: An Azure Cosmos DB for NoSQL system function that returns a date and time that's the result of binning (rounding) a part of the specified date and time.
++++++ Last updated : 07/19/2023++
+# DateTimeBin (NoSQL query)
-*BinAtDateTime* (optional)
- A UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` that specifies the start date to bin from. Default value is the Unix epoch, '1970-01-01T00:00:00.000000Z'.
+Returns a date and time string value that is the result of binning (or rounding) a part of the provided date and time string.
-## Return types
+## Syntax
-Returns the result of binning the *DateTime* value.
+```sql
+DateTimeBin(<date_time>, <date_time_part> [, <bin_size>] [, <bin_start_date_time>])
+```
+## Arguments
-## Remarks
+| | Description |
+| | |
+| **`date_time`** | A Coordinated Universal Time (UTC) date and time string in the ISO 8601 format `YYYY-MM-DDThh:mm:ss.fffffffZ`. |
+| **`date_time_part`** | A string representing a part of an ISO 8601 date format specification. This part is used to indicate which aspect of the date to bin. Specifically, this part argument represents the level of granularity for binning (or rounding). The minimum granularity for the part is **days** and the maximum granularity is **nanoseconds**. |
+| **`bin_size` *(Optional)*** | An optional numeric value specifying the size of the bin. If not specified, the default value is `1`. |
+| **`bin_start_date_time` *(Optional)*** | An optional Coordinated Universal Time (UTC) date and time string in the ISO 8601 format `YYYY-MM-DDThh:mm:ss.fffffffZ`. This date and time argument specifies the start date to bin from. If not specified, the default value is the Unix epoch `1970-01-01T00:00:00.000000Z`. |
-DateTimeBin will return `Undefined` for the following reasons:
-- The DateTimePart value specified is invalid -- The BinSize value is zero or negative -- The DateTime or BinAtDateTime isn't a valid ISO 8601 DateTime or precedes the year 1601 (the Windows epoch)
+> [!NOTE]
+> For more information on the ISO 8601 format, see [ISO 8601](https://wikipedia.org/wiki/ISO_8601). For more information on the Unix epoch, see [Unix time](https://wikipedia.org/wiki/unix_time).
+## Return types
-## Examples
+Returns a UTC date and time string in the ISO 8601 format `YYYY-MM-DDThh:mm:ss.fffffffZ`.
-The following example bins '2021-06-28T17:24:29.2991234Z' by one hour:
+## Examples
-```sql
-SELECT DateTimeBin('2021-06-28T17:24:29.2991234Z', 'hh') AS BinByHour
-```
+The following example bins the date **January 8, 2021** at **18:35 UTC** by various values. The example also changes the bin size, and the bin start date and time.
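A sketch of such a query follows, based on the syntax above. The bin sizes and the explicit bin start date are illustrative choices.

```sql
SELECT VALUE {
    -- bin January 8, 2021 18:35 UTC to the start of the hour (default bin size of 1)
    binByHour: DateTimeBin("2021-01-08T18:35:00.0000000Z", "hh"),
    -- change the bin size: bin to the nearest 15-minute boundary
    binByFifteenMinutes: DateTimeBin("2021-01-08T18:35:00.0000000Z", "mi", 15),
    -- change the bin start: bin by 5 days counted from the start of 2021
    binByFiveDaysFromYearStart: DateTimeBin("2021-01-08T18:35:00.0000000Z", "dd", 5, "2021-01-01T00:00:00.0000000Z")
}
```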
-```json
-[
-    {
-        "BinByHour": "2021-06-28T17:00:00.0000000Z"
-    }
-]
-```
-The following example bins '2021-06-28T17:24:29.2991234Z' given different *BinAtDateTime* values:
-```sql
-SELECT
-DateTimeBin('2021-06-28T17:24:29.2991234Z', 'day', 5) AS One_BinByFiveDaysUnixEpochImplicit,
-DateTimeBin('2021-06-28T17:24:29.2991234Z', 'day', 5, '1970-01-01T00:00:00.0000000Z') AS Two_BinByFiveDaysUnixEpochExplicit,
-DateTimeBin('2021-06-28T17:24:29.2991234Z', 'day', 5, '1601-01-01T00:00:00.0000000Z') AS Three_BinByFiveDaysFromWindowsEpoch,
-DateTimeBin('2021-06-28T17:24:29.2991234Z', 'day', 5, '2021-01-01T00:00:00.0000000Z') AS Four_BinByFiveDaysFromYearStart,
-DateTimeBin('2021-06-28T17:24:29.2991234Z', 'day', 5, '0001-01-01T00:00:00.0000000Z') AS Five_BinByFiveDaysFromUndefinedYear
-```
+## Remarks
-```json
-[
-    {
-        "One_BinByFiveDaysUnixEpochImplicit": "2021-06-27T00:00:00.0000000Z",
-        "Two_BinByFiveDaysUnixEpochExplicit": "2021-06-27T00:00:00.0000000Z",
-        "Three_BinByFiveDaysFromWindowsEpoch": "2021-06-28T00:00:00.0000000Z",
-        "Four_BinByFiveDaysFromYearStart": "2021-06-25T00:00:00.0000000Z"
-    }
-]
-```
+- This function returns `undefined` for these reasons:
+ - The specified date and time part is invalid.
+ - The bin size value isn't a valid integer, is zero, or is negative.
+ - The date and time in either argument isn't a valid ISO 8601 date and time string.
+ - The date and time for the bin start precedes the year `1601`, the Windows epoch.
+- The ISO 8601 date format specifies valid date and time parts to use with this function:
+ | | Format |
+ | | |
+ | **Day** | `day`, `dd`, `d` |
+ | **Hour** | `hour`, `hh` |
+ | **Minute** | `minute`, `mi`, `n` |
+ | **Second** | `second`, `ss`, `s` |
+ | **Millisecond** | `millisecond`, `ms` |
+ | **Microsecond** | `microsecond`, `mcs` |
+ | **Nanosecond** | `nanosecond`, `ns` |
-## Next steps
+## Next steps
-- [System functions Azure Cosmos DB](system-functions.yml) -- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [System functions Azure Cosmos DB](system-functions.yml)
+- [`DateTimeAdd`](datetimeadd.md)
cosmos-db Datetimediff https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/datetimediff.md
Title: DateTimeDiff in Azure Cosmos DB query language
-description: Learn about SQL system function DateTimeDiff in Azure Cosmos DB.
-
+ Title: DateTimeDiff
+
+description: An Azure Cosmos DB for NoSQL system function that returns the difference of a specific date and time part between two date and time values.
+++ - Previously updated : 07/09/2020---+ Last updated : 07/19/2023+
-# DateTimeDiff (Azure Cosmos DB)
+
+# DateTimeDiff (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
-Returns the count (as a signed integer value) of the specified DateTimePart boundaries crossed between the specified *StartDate* and *EndDate*.
+
+Returns the difference, as a signed integer, of the specified date and time part between two date and time values.
## Syntax ```sql
-DateTimeDiff (<DateTimePart> , <StartDate> , <EndDate>)
+DateTimeDiff(<date_time_part>, <start_date_time>, <end_date_time>)
``` ## Arguments
-
-*DateTimePart*
- The part of date to which DateTimeAdd adds an integer number. This table lists all valid DateTimePart arguments:
-
-| DateTimePart | abbreviations |
-| | -- |
-| Year | "year", "yyyy", "yy" |
-| Month | "month", "mm", "m" |
-| Day | "day", "dd", "d" |
-| Hour | "hour", "hh" |
-| Minute | "minute", "mi", "n" |
-| Second | "second", "ss", "s" |
-| Millisecond | "millisecond", "ms" |
-| Microsecond | "microsecond", "mcs" |
-| Nanosecond | "nanosecond", "ns" |
-
-*StartDate*
- UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
-
-|Format|Description|
-|-|-|
-|YYYY|four-digit year|
-|MM|two-digit month (01 = January, etc.)|
-|DD|two-digit day of month (01 through 31)|
-|T|signifier for beginning of time elements|
-|hh|two-digit hour (00 through 23)|
-|mm|two-digit minutes (00 through 59)|
-|ss|two-digit seconds (00 through 59)|
-|.fffffff|seven-digit fractional seconds|
-|Z|UTC (Coordinated Universal Time) designator|
-
- For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
-*EndDate*
- UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ`
+| | Description |
+| | |
+| **`date_time_part`** | A string representing a part of an ISO 8601 date format specification. This part is used to indicate which aspect of the date to compare. |
+| **`start_date_time`** | A Coordinated Universal Time (UTC) date and time string in the ISO 8601 format `YYYY-MM-DDThh:mm:ss.fffffffZ`. |
+| **`end_date_time`** | A Coordinated Universal Time (UTC) date and time string in the ISO 8601 format `YYYY-MM-DDThh:mm:ss.fffffffZ`. |
-## Return types
+> [!NOTE]
+> For more information on the ISO 8601 format, see [ISO 8601](https://wikipedia.org/wiki/ISO_8601).
-Returns a signed integer value.
-
-## Remarks
-
-DateTimeDiff will return `undefined` for the following reasons:
--- The DateTimePart value specified is invalid-- The StartDate or EndDate is not a valid ISO 8601 DateTime
+## Return types
-DateTimeDiff will always return a signed integer value and is a measurement of the number of DateTimePart boundaries crossed, not measurement of the time interval.
+Returns a numeric value that is a signed integer.
## Examples
-
-The following example computes the number of day boundaries crossed between `2020-01-01T01:02:03.1234527Z` and `2020-01-03T01:02:03.1234567Z`.
-```sql
-SELECT DateTimeDiff("day", "2020-01-01T01:02:03.1234527Z", "2020-01-03T01:02:03.1234567Z") AS DifferenceInDays
-```
-
-```json
-[
- {
- "DifferenceInDays": 2
- }
-]
-```
-
-The following example computes the number of year boundaries crossed between `2028-01-01T01:02:03.1234527Z` and `2020-01-03T01:02:03.1234567Z`.
-
-```sql
-SELECT DateTimeDiff("yyyy", "2028-01-01T01:02:03.1234527Z", "2020-01-03T01:02:03.1234567Z") AS DifferenceInYears
-```
+The following examples compare **February 4, 2019 16:00 UTC** and **March 5, 2018 05:00 UTC** using various date and time parts.
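A sketch of those comparisons follows, using the syntax above. Treating March 5, 2018 05:00 UTC as the start value and February 4, 2019 16:00 UTC as the end value is an assumption made for illustration.

```sql
SELECT VALUE {
    -- counts part boundaries crossed between the two values, not elapsed time
    diffYears: DateTimeDiff("yyyy", "2018-03-05T05:00:00.0000000Z", "2019-02-04T16:00:00.0000000Z"),
    diffMonths: DateTimeDiff("mm", "2018-03-05T05:00:00.0000000Z", "2019-02-04T16:00:00.0000000Z"),
    diffDays: DateTimeDiff("dd", "2018-03-05T05:00:00.0000000Z", "2019-02-04T16:00:00.0000000Z"),
    diffHours: DateTimeDiff("hh", "2018-03-05T05:00:00.0000000Z", "2019-02-04T16:00:00.0000000Z")
}
```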
-```json
-[
- {
- "DifferenceInYears": -8
- }
-]
-```
-The following example computes the number of hour boundaries crossed between `2020-01-01T01:00:00.1234527Z` and `2020-01-01T01:59:59.1234567Z`. Even though these DateTime values are over 0.99 hours apart, `DateTimeDiff` returns 0 because no hour boundaries were crossed.
-```sql
-SELECT DateTimeDiff("hh", "2020-01-01T01:00:00.1234527Z", "2020-01-01T01:59:59.1234567Z") AS DifferenceInHours
-```
+## Remarks
-```json
-[
- {
- "DifferenceInHours": 0
- }
-]
-```
+- This function returns `undefined` for these reasons:
+ - The specified date and time part is invalid.
+ - The date and time in either start or end argument isn't a valid ISO 8601 date and time string.
+- The ISO 8601 date format specifies valid date and time parts to use with this function:
+ | | Format |
+ | | |
+ | **Day** | `day`, `dd`, `d` |
+ | **Hour** | `hour`, `hh` |
+ | **Minute** | `minute`, `mi`, `n` |
+ | **Second** | `second`, `ss`, `s` |
+ | **Millisecond** | `millisecond`, `ms` |
+ | **Microsecond** | `microsecond`, `mcs` |
+ | **Nanosecond** | `nanosecond`, `ns` |
+- The function always returns a signed integer value. The function returns a measurement of the number of boundaries crossed for the specified date and time part, not a measurement of the time interval.
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`DateTimeBin`](datetimebin.md)
cosmos-db Datetimefromparts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/datetimefromparts.md
Title: DateTimeFromParts in Azure Cosmos DB query language
-description: Learn about SQL system function DateTimeFromParts in Azure Cosmos DB.
-
+ Title: DateTimeFromParts
+
+description: An Azure Cosmos DB for NoSQL system function that returns a date and time constructed from various numeric inputs.
+++ - Previously updated : 07/09/2020---+ Last updated : 07/19/2023+
-# DateTimeFromParts (Azure Cosmos DB)
+
+# DateTimeFromParts (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
-Returns a string DateTime value constructed from input values.
+Returns a date and time string value constructed from input numeric values for various date and time parts.
## Syntax ```sql
-DateTimeFromParts(<numberYear>, <numberMonth>, <numberDay> [, numberHour] [, numberMinute] [, numberSecond] [, numberOfFractionsOfSecond])
+DateTimeFromParts(<numeric_year>, <numeric_month>, <numeric_day> [, <numeric_hour>] [, <numeric_minute>] [, <numeric_second>] [, <numeric_second_fraction>])
``` ## Arguments
-
-*numberYear*
- Integer value for the year in the format `YYYY`
-
-*numberMonth*
- Integer value for the month in the format `MM`
-
-*numberDay*
- Integer value for the day in the format `DD`
-
-*numberHour* (optional)
- Integer value for the hour in the format `hh`
-
-*numberMinute* (optional)
- Integer value for the minute in the format `mm`
-*numberSecond* (optional)
- Integer value for the second in the format `ss`
+| | Description |
+| | |
+| **`numeric_year`** | A positive numeric integer value for the **year**. This argument is in the ISO 8601 format `yyyy`. |
+| **`numeric_month`** | A positive numeric integer value for the **month**. This argument is in the ISO 8601 format `mm`. |
+| **`numeric_day`** | A positive numeric integer value for the **day**. This argument is in the ISO 8601 format `dd`. |
+| **`numeric_hour` *(Optional)*** | An optional positive numeric integer value for the **hour**. This argument is in the ISO 8601 format `hh`. If not specified, the default value is `0`. |
+| **`numeric_minute` *(Optional)*** | An optional positive numeric integer value for the **minute**. This argument is in the ISO 8601 format `mm`. If not specified, the default value is `0`. |
+| **`numeric_second` *(Optional)*** | An optional positive numeric integer value for the **second**. This argument is in the ISO 8601 format `ss`. If not specified, the default value is `0`. |
+| **`numeric_second_fraction` *(Optional)*** | An optional positive numeric integer value for the **fraction of a second**. This argument is in the ISO 8601 format `fffffff`. If not specified, the default value is `0`. |
-*numberOfFractionsOfSecond* (optional)
- Integer value for the fractional of a second in the format `.fffffff`
+> [!NOTE]
+> For more information on the ISO 8601 format, see [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601).
## Return types
-Returns a UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
-
-|Format|Description|
-|-|-|
-|YYYY|four-digit year|
-|MM|two-digit month (01 = January, etc.)|
-|DD|two-digit day of month (01 through 31)|
-|T|signifier for beginning of time elements|
-|hh|two-digit hour (00 through 23)|
-|mm|two-digit minutes (00 through 59)|
-|ss|two-digit seconds (00 through 59)|
-|.fffffff|seven-digit fractional seconds|
-|Z|UTC (Coordinated Universal Time) designator|
-
- For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
-
-## Remarks
-
-If the specified integers would create an invalid DateTime, DateTimeFromParts will return `undefined`.
-
-If an optional argument isn't specified, its value will be 0.
+Returns a UTC date and time string in the ISO 8601 format `YYYY-MM-DDThh:mm:ss.fffffffZ`.
## Examples
-Here's an example that only includes required arguments to construct a DateTime:
+The following example uses various combinations of the arguments to create date and time strings. This example uses the date and time **April 20, 2017 13:15 UTC**.
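A sketch of such combinations follows, using the syntax above. Which optional arguments to include, and the fraction-of-a-second value, are illustrative choices.

```sql
SELECT VALUE {
    -- date only: omitted time parts default to 0
    dateOnly: DateTimeFromParts(2017, 4, 20),
    -- date plus hour and minute for April 20, 2017 13:15 UTC
    dateAndTime: DateTimeFromParts(2017, 4, 20, 13, 15),
    -- all arguments, including seconds and the fraction of a second
    dateAndTimePrecise: DateTimeFromParts(2017, 4, 20, 13, 15, 20, 3456789)
}
```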
-```sql
-SELECT DateTimeFromParts(2020, 9, 4) AS DateTime
-```
-
-```json
-[
- {
- "DateTime": "2020-09-04T00:00:00.0000000Z"
- }
-]
-```
-Here's another example that also uses some optional arguments to construct a DateTime:
-```sql
-SELECT DateTimeFromParts(2020, 9, 4, 10, 52) AS DateTime
-```
-
-```json
-[
- {
- "DateTime": "2020-09-04T10:52:00.0000000Z"
- }
-]
-```
-
-Here's another example that also uses all optional arguments to construct a DateTime:
-
-```sql
-SELECT DateTimeFromParts(2020, 9, 4, 10, 52, 12, 3456789) AS DateTime
-```
+## Remarks
-```json
-[
- {
- "DateTime": "2020-09-04T10:52:12.3456789Z"
- }
-]
-```
+- If the specified integers would create an invalid date and time, the function returns `undefined`.
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`DateTimePart`](datetimepart.md)
cosmos-db Datetimepart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/datetimepart.md
Title: DateTimePart in Azure Cosmos DB query language
-description: Learn about SQL system function DateTimePart in Azure Cosmos DB.
-
+ Title: DateTimePart
+
+description: An Azure Cosmos DB for NoSQL system function that returns the numeric value of a specific part of a date and time.
+++ - Previously updated : 08/14/2020---+ Last updated : 07/19/2023+
-# DateTimePart (Azure Cosmos DB)
+
+# DateTimePart (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
-Returns the value of the specified DateTimePart between the specified DateTime.
+Returns the value of the specified date and time part for the provided date and time.
## Syntax ```sql
-DateTimePart (<DateTimePart> , <DateTime>)
+DateTimePart(<date_time_part>, <date_time>)
``` ## Arguments
-
-*DateTimePart*
- The part of the date for which DateTimePart will return the value. This table lists all valid DateTimePart arguments:
-
-| DateTimePart | abbreviations |
-| | -- |
-| Year | "year", "yyyy", "yy" |
-| Month | "month", "mm", "m" |
-| Day | "day", "dd", "d" |
-| Hour | "hour", "hh" |
-| Minute | "minute", "mi", "n" |
-| Second | "second", "ss", "s" |
-| Millisecond | "millisecond", "ms" |
-| Microsecond | "microsecond", "mcs" |
-| Nanosecond | "nanosecond", "ns" |
-
-*DateTime*
- UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ`
-
-## Return types
-
-Returns a positive integer value.
-## Remarks
+| | Description |
+| | |
+| **`date_time_part`** | A string representing a part of an ISO 8601 date format specification. This part is used to indicate which aspect of the date to extract and return. |
+| **`date_time`** | A Coordinated Universal Time (UTC) date and time string in the ISO 8601 format `YYYY-MM-DDThh:mm:ss.fffffffZ`. |
-DateTimePart will return `undefined` for the following reasons:
+> [!NOTE]
+> For more information on the ISO 8601 format, see [ISO 8601](https://wikipedia.org/wiki/ISO_8601).
-- The DateTimePart value specified is invalid-- The DateTime is not a valid ISO 8601 DateTime
+## Return types
-This system function will not utilize the index.
+Returns a numeric value that is a positive integer.
## Examples
-Here's an example that returns the integer value of the month:
+The following example returns various parts of the date and time **May 29, 2016 08:30 UTC**.
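A sketch of such a query follows, using the part-first argument order shown in the syntax above. The selected parts are illustrative.

```sql
SELECT VALUE {
    -- extract individual parts of May 29, 2016 08:30 UTC
    monthValue: DateTimePart("mm", "2016-05-29T08:30:00.0000000Z"),
    dayValue: DateTimePart("dd", "2016-05-29T08:30:00.0000000Z"),
    hourValue: DateTimePart("hh", "2016-05-29T08:30:00.0000000Z"),
    minuteValue: DateTimePart("mi", "2016-05-29T08:30:00.0000000Z")
}
```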
-```sql
-SELECT DateTimePart("m", "2020-01-02T03:04:05.6789123Z") AS MonthValue
-```
-
-```json
-[
- {
- "MonthValue": 1
- }
-]
-```
-Here's an example that returns the number of microseconds:
-```sql
-SELECT DateTimePart("mcs", "2020-01-02T03:04:05.6789123Z") AS MicrosecondsValue
-```
+## Remarks
-```json
-[
- {
- "MicrosecondsValue": 678912
- }
-]
-```
+- This function returns `undefined` for these reasons:
+ - The specified date and time part is invalid.
+ - The date and time isn't a valid ISO 8601 date and time string.
+- The ISO 8601 date format specifies valid date and time parts to use with this function:
+ | | Format |
+ | | |
+ | **Year** | `year`, `yyyy`, `yy` |
+ | **Month** | `month`, `mm`, `m` |
+ | **Day** | `day`, `dd`, `d` |
+ | **Hour** | `hour`, `hh` |
+ | **Minute** | `minute`, `mi`, `n` |
+ | **Second** | `second`, `ss`, `s` |
+ | **Millisecond** | `millisecond`, `ms` |
+ | **Microsecond** | `microsecond`, `mcs` |
+ | **Nanosecond** | `nanosecond`, `ns` |
+- This function doesn't use the index.
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`DateTimeFromParts`](datetimefromparts.md)
cosmos-db Datetimetoticks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/datetimetoticks.md
Title: DateTimeToTicks in Azure Cosmos DB query language
-description: Learn about SQL system function DateTimeToTicks in Azure Cosmos DB.
-
+ Title: DateTimeToTicks
+
+description: An Azure Cosmos DB for NoSQL system function that returns the number of ticks, or 100 nanoseconds, since the Unix epoch.
+++ - Previously updated : 08/18/2020---+ Last updated : 07/19/2023+
-# DateTimeToTicks (Azure Cosmos DB)
+
+# DateTimeToTicks (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
-Converts the specified DateTime to ticks. A single tick represents one hundred nanoseconds or one ten-millionth of a second.
+Converts the specified date and time to ticks. A single tick represents `100` nanoseconds, or one ten-millionth (`0.0000001`) of a second.
## Syntax
-
+ ```sql
-DateTimeToTicks (<DateTime>)
+DateTimeToTicks(<date_time>)
``` ## Arguments
-
-*DateTime*
- UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ`
-## Return types
+| | Description |
+| | |
+| **`date_time`** | A Coordinated Universal Time (UTC) date and time string in the ISO 8601 format `YYYY-MM-DDThh:mm:ss.fffffffZ`. |
-Returns a signed numeric value, the current number of 100-nanosecond ticks that have elapsed since the Unix epoch. In other words, DateTimeToTicks returns the number of 100-nanosecond ticks that have elapsed since 00:00:00 Thursday, 1 January 1970.
+> [!NOTE]
+> For more information on the ISO 8601 format, see [ISO 8601](https://wikipedia.org/wiki/ISO_8601).
-## Remarks
+## Return types
-DateTimeDateTimeToTicks will return `undefined` if the DateTime is not a valid ISO 8601 DateTime
+Returns a signed numeric value, the current number of `100`-nanosecond ticks that have elapsed since the Unix epoch (January 1, 1970).
-This system function will not utilize the index.
+> [!NOTE]
+> For more information on the Unix epoch, see [Unix time](https://wikipedia.org/wiki/unix_time).
## Examples
-Here's an example that returns the number of ticks:
-
-```sql
-SELECT DateTimeToTicks("2020-01-02T03:04:05.6789123Z") AS Ticks
-```
+The following example measures the ticks since the date and time **May 19, 2015 12:00 UTC**.
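A minimal sketch of that measurement follows; the resulting tick count isn't reproduced here.

```sql
SELECT VALUE {
    -- 100-nanosecond ticks elapsed between the Unix epoch and May 19, 2015 12:00 UTC
    ticks: DateTimeToTicks("2015-05-19T12:00:00.0000000Z")
}
```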
-```json
-[
- {
- "Ticks": 15779342456789124
- }
-]
-```
-Here's an example that returns the number of ticks without specifying the number of fractional seconds:
-```sql
-SELECT DateTimeToTicks("2020-01-02T03:04:05Z") AS Ticks
-```
+## Remarks
-```json
-[
- {
- "Ticks": 15779342450000000
- }
-]
-```
+- This function returns `undefined` if the date and time isn't a valid ISO 8601 date and time string.
+- This function doesn't use the index.
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`DateTimeToTimestamp`](datetimetotimestamp.md)
cosmos-db Datetimetotimestamp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/datetimetotimestamp.md
Title: DateTimeToTimestamp in Azure Cosmos DB query language
-description: Learn about SQL system function DateTimeToTimestamp in Azure Cosmos DB.
-
+ Title: DateTimeToTimestamp
+
+description: An Azure Cosmos DB for NoSQL system function that returns a numeric timestamp that represents the milliseconds since the Unix epoch.
+++ - Previously updated : 08/18/2020---+ Last updated : 07/19/2023+
-# DateTimeToTimestamp (Azure Cosmos DB)
+
+# DateTimeToTimestamp (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
-Converts the specified DateTime to a timestamp.
-
+Converts the specified date and time to a numeric timestamp. The timestamp is a signed numeric integer that measures the milliseconds since the Unix epoch.
+ ## Syntax
-
+ ```sql
-DateTimeToTimestamp (<DateTime>)
+DateTimeToTimestamp(<date_time>)
``` ## Arguments
-*DateTime*
- UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
-
-|Format|Description|
-|-|-|
-|YYYY|four-digit year|
-|MM|two-digit month (01 = January, etc.)|
-|DD|two-digit day of month (01 through 31)|
-|T|signifier for beginning of time elements|
-|hh|two-digit hour (00 through 23)|
-|mm|two-digit minutes (00 through 59)|
-|ss|two-digit seconds (00 through 59)|
-|.fffffff|seven-digit fractional seconds|
-|Z|UTC (Coordinated Universal Time) designator|
-
- For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
+| | Description |
+| | |
+| **`date_time`** | A Coordinated Universal Time (UTC) date and time string in the ISO 8601 format `YYYY-MM-DDThh:mm:ss.fffffffZ`. |
-## Return types
+> [!NOTE]
+> For more information on the ISO 8601 format, see [ISO 8601](https://wikipedia.org/wiki/ISO_8601).
-Returns a signed numeric value, the current number of milliseconds that have elapsed since the Unix epoch i.e. the number of milliseconds that have elapsed since 00:00:00 Thursday, 1 January 1970.
+## Return types
-## Remarks
+Returns a signed numeric value, the current number of milliseconds that have elapsed since the Unix epoch (January 1, 1970).
-DateTimeToTimestamp will return `undefined` if the DateTime value specified is invalid
+> [!NOTE]
+> For more information on the Unix epoch, see [Unix time](https://wikipedia.org/wiki/unix_time).
## Examples
-
-The following example converts the DateTime to a timestamp:
-```sql
-SELECT DateTimeToTimestamp("2020-07-09T23:20:13.4575530Z") AS Timestamp
-```
+The following example converts the date and time **May 19, 2015 12:00 UTC** to a timestamp.
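A minimal sketch of that conversion follows; the resulting millisecond value isn't reproduced here.

```sql
SELECT VALUE {
    -- milliseconds elapsed between the Unix epoch and May 19, 2015 12:00 UTC
    timestamp: DateTimeToTimestamp("2015-05-19T12:00:00.0000000Z")
}
```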
-```json
-[
- {
- "Timestamp": 1594336813457
- }
-]
-```
-Here's another example:
-```sql
-SELECT DateTimeToTimestamp("2020-07-09") AS Timestamp
-```
+## Remarks
-```json
-[
- {
- "Timestamp": 1594252800000
- }
-]
-```
+- This function returns `undefined` if the date and time isn't a valid ISO 8601 date and time string.
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`DateTimeToTicks`](datetimetoticks.md)
cosmos-db Degrees https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/degrees.md
Title: DEGREES in Azure Cosmos DB query language
-description: Learn about the DEGREES SQL system function in Azure Cosmos DB to return the corresponding angle in degrees for an angle specified in radians
-
+ Title: DEGREES
+
+description: An Azure Cosmos DB for NoSQL system function that returns the angle in degrees for a radian value.
+++ - Previously updated : 03/03/2020--+ Last updated : 07/19/2023+
-# DEGREES (Azure Cosmos DB)
+
+# DEGREES (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
- Returns the corresponding angle in degrees for an angle specified in radians.
-
+Returns the corresponding angle in degrees for an angle specified in radians.
+ ## Syntax
-
+ ```sql
-DEGREES (<numeric_expr>)
+DEGREES(<numeric_expr>)
```
-
+ ## Arguments
-
-*numeric_expr*
- Is a numeric expression.
-
+
+| | Description |
+| | |
+| **`numeric_expr`** | A numeric expression. |
+ ## Return types
-
- Returns a numeric expression.
-
+
+Returns a numeric expression.
+ ## Examples
-
- The following example returns the number of degrees in an angle of PI/2 radians.
-
-```sql
-SELECT DEGREES(PI()/2) AS degrees
-```
-
- Here is the result set.
-
-```json
-[{"degrees": 90}]
-```
+
+The following example returns the degrees for various radian values.
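A sketch of such a query follows; the chosen radian inputs are illustrative.

```sql
SELECT VALUE {
    -- half a turn in radians is 180 degrees, a quarter turn is 90 degrees
    degreesOfHalfPi: DEGREES(PI() / 2),
    degreesOfPi: DEGREES(PI()),
    degreesOfTwoPi: DEGREES(2 * PI())
}
```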
++ ## Remarks
-This system function will not utilize the index.
+- This function doesn't use the index.
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`RADIANS`](radians.md)
cosmos-db Documentid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/documentid.md
Previously updated : 07/01/2023 Last updated : 07/19/2023
Integer identifying an item within a physical partition.
This example illustrates using this function to extract and return the integer identifier relative to a physical partition.
-```json
-[
- {
- "id": "63700",
- "name": "Joltage Kid's Vest"
- }
-]
-```
-```sql
-SELECT
- p.id,
- p._rid,
- DOCUMENTID(p) AS documentId
-FROM
- product p
-```
-```json
-[
- {
- "id": "63700",
- "_rid": "36ZyAPW+uN8NAAAAAAAAAA==",
- "documentId": 13
- }
-]
-```
This function can also be used as a filter.
-```sql
-SELECT
- p.id,
- DOCUMENTID(p) AS documentId
-FROM
- product p
-WHERE
- DOCUMENTID(p) >= 5 AND
- DOCUMENTID(p) <= 15
-```
-```json
-[
- {
- "id": "63700",
- "documentId": 13
- }
-]
-```
+ ## Remarks
cosmos-db Endswith https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/endswith.md
Previously updated : 07/01/2023 Last updated : 07/19/2023
Returns a boolean value indicating whether the first string expression ends with
## Syntax ```sql
-ENDSWITH(<str_expr_1>, <str_expr_2> [, <bool_expr>])
+ENDSWITH(<string_expr_1>, <string_expr_2> [, <bool_expr>])
``` ## Arguments | | Description | | | |
-| **`str_expr_1`** | A string expression. |
-| **`str_expr_2`** | A string expression to be compared to the end of `str_expr_1`. |
+| **`string_expr_1`** | A string expression. |
+| **`string_expr_2`** | A string expression to be compared to the end of `string_expr_1`. |
| **`bool_expr`** *(Optional)* | Optional value for ignoring case. When set to `true`, `ENDSWITH` does a case-insensitive search. When unspecified, this default value is `false`. | ## Return types
Returns a boolean expression.
The following example checks if the string `abc` ends with `b` or `bC`.
-```sql
-SELECT VALUE {
- endsWithWrongSuffix: ENDSWITH("abc", "b"),
- endsWithCorrectSuffix: ENDSWITH("abc", "bc"),
- endsWithSuffixWrongCase: ENDSWITH("abc", "bC"),
- endsWithSuffixCaseInsensitive: ENDSWITH("abc", "bC", true)
-}
-```
-
-```json
-[
- {
- "endsWithWrongSuffix": false,
- "endsWithCorrectSuffix": true,
- "endsWithSuffixWrongCase": false,
- "endsWithSuffixCaseInsensitive": true
- }
-]
-```
+ ## Remarks
cosmos-db Exp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/exp.md
Title: EXP in Azure Cosmos DB query language
-description: Learn about the Exponent (EXP) SQL system function in Azure Cosmos DB to return the exponential value of the specified numeric expression
-
+ Title: EXP
+
+description: An Azure Cosmos DB for NoSQL system function that returns the exponential value of the specified number.
+++ - Previously updated : 09/13/2019--+ Last updated : 07/19/2023+
-# EXP (Azure Cosmos DB)
+
+# EXP (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
- Returns the exponential value of the specified numeric expression.
-
+Returns the exponential value of the specified numeric expression.
+ ## Syntax
-
+ ```sql
-EXP (<numeric_expr>)
-```
-
+EXP(<numeric_expr>)
+```
+ ## Arguments
-
-*numeric_expr*
- Is a numeric expression.
-
+
+| | Description |
+| | |
+| **`numeric_expr`** | A numeric expression. |
+ ## Return types
-
- Returns a numeric expression.
-
-## Remarks
-
- The constant **e** (2.718281…), is the base of natural logarithms.
-
- The exponent of a number is the constant **e** raised to the power of the number. For example, EXP(1.0) = e^1.0 = 2.71828182845905 and EXP(10) = e^10 = 22026.4657948067.
-
- The exponential of the natural logarithm of a number is the number itself: EXP (LOG (n)) = n. And the natural logarithm of the exponential of a number is the number itself: LOG (EXP (n)) = n.
-
+
+Returns a numeric expression.
+ ## Examples
-
- The following example declares a variable and returns the exponential value of the specified variable (10).
-
-```sql
-SELECT EXP(10) AS exp
-```
-
- Here is the result set.
-
-```json
-[{exp: 22026.465794806718}]
-```
-
- The following example returns the exponential value of the natural logarithm of 20 and the natural logarithm of the exponential of 20. Because these functions are inverse functions of one another, the return value with rounding for floating point math in both cases is 20.
-
-```sql
-SELECT EXP(LOG(20)) AS exp1, LOG(EXP(20)) AS exp2
-```
-
- Here is the result set.
-
-```json
-[{exp1: 19.999999999999996, exp2: 20}]
-```
+
+The following example returns the exponential value for various numeric inputs.
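A sketch of such a query follows; the chosen inputs are illustrative.

```sql
SELECT VALUE {
    expOne: EXP(1),
    expTen: EXP(10),
    -- EXP and LOG are inverse functions, so this returns 20 (up to floating-point rounding)
    expOfLogTwenty: EXP(LOG(20))
}
```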
+++
+## Remarks
+
+- The constant `e` (`2.718281…`) is the base of natural logarithms.
+- The exponent of a number is the constant `e` raised to the power of the number. For example, `EXP(1.0) = e^1.0 = 2.71828182845905` and `EXP(10) = e^10 = 22026.4657948067`.
+- The exponential of the natural logarithm of a number is the number itself: `EXP (LOG (n)) = n`. And the natural logarithm of the exponential of a number is the number itself: `LOG (EXP (n)) = n`.
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`LOG`](log.md)
cosmos-db Floor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/floor.md
Previously updated : 07/01/2023 Last updated : 07/19/2023
Returns a numeric expression.
The following example shows positive numeric, negative, and zero values evaluated with this function.
-```sql
-SELECT VALUE {
- floorPostiveNumber: FLOOR(62.6),
- floorNegativeNumber: FLOOR(-145.12),
- floorSmallNumber: FLOOR(0.2989),
- floorZero: FLOOR(0.0),
- floorNull: FLOOR(null)
-}
-```
-```json
-[
- {
- "floorPostiveNumber": 62,
- "floorNegativeNumber": -146,
- "floorSmallNumber": 0,
- "floorZero": 0
- }
-]
-```
## Remarks
cosmos-db Getcurrentdatetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/getcurrentdatetime.md
Title: GetCurrentDateTime in Azure Cosmos DB query language
-description: Learn about SQL system function GetCurrentDateTime in Azure Cosmos DB.
-
+ Title: GetCurrentDateTime
+
+description: An Azure Cosmos DB for NoSQL system function that returns the current UTC date and time as an ISO 8601 string value.
+++ - Previously updated : 02/03/2021---+ Last updated : 07/19/2023+
-# GetCurrentDateTime (Azure Cosmos DB)
+
+# GetCurrentDateTime (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)] Returns the current UTC (Coordinated Universal Time) date and time as an ISO 8601 string.
-
+ ## Syntax
-
+ ```sql
-GetCurrentDateTime ()
+GetCurrentDateTime()
``` ## Return types
-
-Returns the current UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
-
-|Format|Description|
-|-|-|
-|YYYY|four-digit year|
-|MM|two-digit month (01 = January, etc.)|
-|DD|two-digit day of month (01 through 31)|
-|T|signifier for beginning of time elements|
-|hh|two-digit hour (00 through 23)|
-|mm|two-digit minutes (00 through 59)|
-|ss|two-digit seconds (00 through 59)|
-|.fffffff|seven-digit fractional seconds|
-|Z|UTC (Coordinated Universal Time) designator|
-
- For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
-## Remarks
-
-GetCurrentDateTime() is a nondeterministic function. The result returned is UTC. Precision is 7 digits, with an accuracy of 100 nanoseconds.
+Returns the current UTC date and time string value in the **round-trip** (ISO 8601) format.
> [!NOTE]
-> This system function will not utilize the index. If you need to compare values to the current time, obtain the current time before query execution and use that constant string value in the `WHERE` clause.
+> For more information on the round-trip format, see [.NET round-trip format](/dotnet/standard/base-types/standard-date-and-time-format-strings#the-round-trip-o-o-format-specifier). For more information on the ISO 8601 format, see [ISO 8601](https://wikipedia.org/wiki/ISO_8601).
## Examples
-
-The following example shows how to get the current UTC Date Time using the GetCurrentDateTime() built-in function.
-
-```sql
-SELECT GetCurrentDateTime() AS currentUtcDateTime
-```
-
- Here is an example result set.
-
-```json
-[{
- "currentUtcDateTime": "2019-05-03T20:36:17.1234567Z"
-}]
-```
+
+The following example shows how to get the current UTC date and time string.
+++
+## Remarks
+
+- This function is nondeterministic.
+- The result returned is UTC (Coordinated Universal Time) with a precision of seven digits and an accuracy of 100 nanoseconds.
+- This function doesn't use the index.
+- If you need to compare values to the current time, obtain the current time before query execution and use that constant string value in the `WHERE` clause.
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`GetCurrentDateTimeStatic`](getcurrentdatetimestatic.md)
cosmos-db Getcurrentdatetimestatic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/getcurrentdatetimestatic.md
Previously updated : 07/01/2023 Last updated : 07/19/2023
GetCurrentDateTimeStatic()
## Return types
-Returns the current UTC date and time string value in the **round-trip** (ISO 8601) format. For more information on the round-trip format, see [.NET round-trip format](/dotnet/standard/base-types/standard-date-and-time-format-strings#the-round-trip-o-o-format-specifier). For more information on the ISO 8601 format, see [ISO 8601](https://wikipedia.org/wiki/ISO_8601).
+Returns the current UTC date and time string value in the **round-trip** (ISO 8601) format.
+
+> [!NOTE]
+> For more information on the round-trip format, see [.NET round-trip format](/dotnet/standard/base-types/standard-date-and-time-format-strings#the-round-trip-o-o-format-specifier). For more information on the ISO 8601 format, see [ISO 8601](https://wikipedia.org/wiki/ISO_8601).
## Examples This example uses a container with a partition key path of `/pk`. There are three items in the container with two items within the same logical partition, and one item in a different logical partition.
-```json
-[
- {
- "id": "1",
- "pk": "A"
- },
- {
- "id": "2",
- "pk": "A"
- },
- {
- "id": "3",
- "pk": "B"
- }
-]
-```
This function returns the same static date and time for items within the same partition. For comparison, the nonstatic function gets a new date and time value for each item matched by the query.
-```sql
-SELECT
- i.id,
- i.pk AS partitionKey,
- GetCurrentDateTime() AS nonStaticDateTime,
- GetCurrentDateTimeStatic() AS staticDateTime
-FROM
- items i
-```
-```json
-[
- {
- "id": "1",
- "partitionKey": "A",
- "nonStaticDateTime": "2023-06-28T18:32:12.4500994Z",
- "staticDateTime": "2023-06-28T18:32:12.4499507Z"
- },
- {
- "id": "2",
- "partitionKey": "A",
- "nonStaticDateTime": "2023-06-28T18:32:12.4501101Z",
- "staticDateTime": "2023-06-28T18:32:12.4499507Z"
- },
- {
- "id": "3",
- "partitionKey": "B",
- "nonStaticDateTime": "2023-06-28T18:32:12.4501181Z",
- "staticDateTime": "2023-06-28T18:32:12.4401181Z"
- }
-]
-```
> [!NOTE] > It's possible for items in different logical partitions to exist in the same physical partition. In this scenario, the static date and time value would be identical.
FROM
## See also - [System functions](system-functions.yml)-- [`GetCurrentDateTime` (nonstatic)](getcurrentdatetime.md)
+- [`GetCurrentDateTime`](getcurrentdatetime.md)
cosmos-db Getcurrentticks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/getcurrentticks.md
Title: GetCurrentTicks in Azure Cosmos DB query language
-description: Learn about SQL system function GetCurrentTicks in Azure Cosmos DB.
-
+ Title: GetCurrentTicks
+
+description: An Azure Cosmos DB for NoSQL system function that returns a nanosecond ticks value.
Previously updated : 02/03/2021 Last updated : 07/19/2023
-# GetCurrentTicks (Azure Cosmos DB)
+
+# GetCurrentTicks (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
-Returns the number of 100-nanosecond ticks that have elapsed since 00:00:00 Thursday, 1 January 1970.
+Returns the number of 100-nanosecond ticks that have elapsed since `00:00:00 Thursday, 1 January 1970`.
## Syntax ```sql
-GetCurrentTicks ()
+GetCurrentTicks()
``` ## Return types
-Returns a signed numeric value, the current number of 100-nanosecond ticks that have elapsed since the Unix epoch. In other words, GetCurrentTicks returns the number of 100 nanosecond ticks that have elapsed since 00:00:00 Thursday, 1 January 1970.
+Returns a signed numeric value that represents the current number of 100-nanosecond ticks that have elapsed since the Unix epoch (`00:00:00 Thursday, 1 January 1970`).
-## Remarks
-
-GetCurrentTicks() is a nondeterministic function. The result returned is UTC (Coordinated Universal Time).
+## Examples
-> [!NOTE]
-> This system function will not utilize the index. If you need to compare values to the current time, obtain the current time before query execution and use that constant string value in the `WHERE` clause.
+The following example returns the current time measured in ticks:
-## Examples
-Here's an example that returns the current time, measured in ticks:
-```sql
-SELECT GetCurrentTicks() AS CurrentTimeInTicks
-```
+## Remarks
-```json
-[
- {
- "CurrentTimeInTicks": 15973607943002652
- }
-]
-```
+- This function is nondeterministic.
+- The result returned is UTC (Coordinated Universal Time).
+- This function doesn't use the index.
+- If you need to compare values to the current time, obtain the current time before query execution and use that constant string value in the `WHERE` clause.
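For illustration only (not from the article), a small sketch that shows the 100-nanosecond unit by converting ticks to approximate milliseconds; note that each `GetCurrentTicks()` call is evaluated separately, so the two values are captured at slightly different instants:

```sql
SELECT VALUE {
    currentTicks: GetCurrentTicks(),
    approximateMilliseconds: GetCurrentTicks() / 10000
}
```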
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`GetCurrentTicksStatic`](getcurrentticksstatic.md)
cosmos-db Getcurrentticksstatic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/getcurrentticksstatic.md
Previously updated : 07/01/2023 Last updated : 07/19/2023
Returns a signed numeric value that represents the current number of 100-nanosec
This example uses a container with a partition key path of `/pk`. There are three items in the container with two items within the same logical partition, and one item in a different logical partition.
-```json
-[
- {
- "id": "1",
- "pk": "A"
- },
- {
- "id": "2",
- "pk": "A"
- },
- {
- "id": "3",
- "pk": "B"
- }
-]
-```
This function returns the same static nanosecond ticks for items within the same partition. For comparison, the nonstatic function gets a new nanosecond ticks value for each item matched by the query.
-```sql
-SELECT
- i.id,
- i.pk AS partitionKey,
- GetCurrentTicks() AS nonStaticTicks,
- GetCurrentTicksStatic() AS staticTicks
-FROM
- items i
-```
-```json
-[
- {
- "id": "1",
- "partitionKey": "A",
- "nonStaticTicks": 16879779663422236,
- "staticTicks": 16879779663415572
- },
- {
- "id": "2",
- "partitionKey": "A",
- "nonStaticTicks": 16879779663422320,
- "staticTicks": 16879779663415572
- },
- {
- "id": "3",
- "partitionKey": "B",
- "nonStaticTicks": 16879779663422380,
- "staticTicks": 16879779663421680
- }
-]
-```
> [!NOTE] > It's possible for items in different logical partitions to exist in the same physical partition. In this scenario, the static nanosecond ticks value would be identical.
FROM
## See also - [System functions](system-functions.yml)-- [`GetCurrentTicks` (nonstatic)](getcurrentticks.md)
+- [`GetCurrentTicks`](getcurrentticks.md)
cosmos-db Getcurrenttimestamp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/getcurrenttimestamp.md
Title: GetCurrentTimestamp in Azure Cosmos DB query language
-description: Learn about SQL system function GetCurrentTimestamp in Azure Cosmos DB.
-
+ Title: GetCurrentTimestamp
+
+description: An Azure Cosmos DB for NoSQL system function that returns a timestamp value.
Previously updated : 02/03/2021 Last updated : 07/19/2023
-# GetCurrentTimestamp (Azure Cosmos DB)
+
+# GetCurrentTimestamp (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
- Returns the number of milliseconds that have elapsed since 00:00:00 Thursday, 1 January 1970.
-
+Returns the number of milliseconds that have elapsed since `00:00:00 Thursday, 1 January 1970`.
+ ## Syntax
-
+ ```sql
-GetCurrentTimestamp ()
-```
-
+GetCurrentTimestamp()
+```
+ ## Return types
-
-Returns a signed numeric value, the current number of milliseconds that have elapsed since the Unix epoch i.e. the number of milliseconds that have elapsed since 00:00:00 Thursday, 1 January 1970.
-## Remarks
+Returns a signed numeric value that represents the current number of milliseconds that have elapsed since the Unix epoch (`00:00:00 Thursday, 1 January 1970`).
-GetCurrentTimestamp() is a nondeterministic function. The result returned is UTC (Coordinated Universal Time).
+## Examples
-> [!NOTE]
-> This system function will not utilize the index. If you need to compare values to the current time, obtain the current time before query execution and use that constant string value in the `WHERE` clause.
+The following example shows how to get the current timestamp.
-## Examples
-
- The following example shows how to get the current timestamp using the GetCurrentTimestamp() built-in function.
-
-```sql
-SELECT GetCurrentTimestamp() AS currentUtcTimestamp
-```
-
- Here is an example result set.
-
-```json
-[{
- "currentUtcTimestamp": 1556916469065
-}]
-```
++
+## Remarks
+
+- This function is nondeterministic.
+- The result returned is UTC (Coordinated Universal Time).
+- This function doesn't use the index.
+- If you need to compare values to the current time, obtain the current time before query execution and use that constant string value in the `WHERE` clause.
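As an illustrative sketch (not from the article), and assuming the companion `TimestampToDateTime` system function is available, the millisecond value can be converted back to an ISO 8601 string for readability:

```sql
SELECT VALUE {
    currentTimestamp: GetCurrentTimestamp(),
    asDateTime: TimestampToDateTime(GetCurrentTimestamp())
}
```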
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`GetCurrentTimestampStatic`](getcurrenttimestampstatic.md)
cosmos-db Getcurrenttimestampstatic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/getcurrenttimestampstatic.md
Previously updated : 07/01/2023 Last updated : 07/19/2023
Returns a signed numeric value that represents the current number of millisecond
This example uses a container with a partition key path of `/pk`. There are three items in the container with two items within the same logical partition, and one item in a different logical partition.
-```json
-[
- {
- "id": "1",
- "pk": "A"
- },
- {
- "id": "2",
- "pk": "A"
- },
- {
- "id": "3",
- "pk": "B"
- }
-]
-```
This function returns the same static timestamp for items within the same partition. For comparison, the nonstatic function gets a new timestamp value for each item matched by the query.
-```sql
-SELECT
- i.id,
- i.pk AS partitionKey,
- GetCurrentTimestamp() AS nonStaticTimestamp,
- GetCurrentTimestampStatic() AS staticTimestamp
-FROM
- items i
-```
-```json
-[
- {
- "id": "1",
- "partitionKey": "A",
- "nonStaticTimestamp": 1687977636235,
- "staticTimestamp": 1687977636232
- },
- {
- "id": "2",
- "partitionKey": "A",
- "nonStaticTimestamp": 1687977636235,
- "staticTimestamp": 1687977636232
- },
- {
- "id": "3",
- "partitionKey": "B",
- "nonStaticTimestamp": 1687977636238,
- "staticTimestamp": 1687977636237
- }
-]
-```
> [!NOTE] > It's possible for items in different logical partitions to exist in the same physical partition. In this scenario, the static timestamp value would be identical.
FROM
## See also - [System functions](system-functions.yml)-- [`GetCurrentTimestamp` (nonstatic)](getcurrenttimestamp.md)
+- [`GetCurrentTimestamp`](getcurrenttimestamp.md)
cosmos-db Intadd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/intadd.md
Previously updated : 07/01/2023 Last updated : 07/20/2023
IntAdd(<int_expr_1>, <int_expr_2>)
## Return types
-Returns a 64-bit integer. For more information, see [__int64](/cpp/cpp/int8-int16-int32-int64).
+Returns a 64-bit integer.
+
+> [!NOTE]
+> For more information, see [__int64](/cpp/cpp/int8-int16-int32-int64).
## Examples This example tests the function with various static values.
-```sql
-SELECT VALUE {
- addNumber: IntAdd(20, 10),
- addZero: IntAdd(20, 0),
- addDecimal: IntAdd(20, 0.10)
-}
-```
-```json
-[
- {
- "addNumber": 30,
- "addZero": 20
- }
-]
-```
## Remarks
cosmos-db Intbitand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/intbitand.md
Previously updated : 07/01/2023 Last updated : 07/20/2023
IntBitAnd(<int_expr_1>, <int_expr_2>)
## Return types
-Returns a 64-bit integer. For more information, see [__int64](/cpp/cpp/int8-int16-int32-int64).
+Returns a 64-bit integer.
+
+> [!NOTE]
+> For more information, see [__int64](/cpp/cpp/int8-int16-int32-int64).
## Examples This example tests the function with various static values.
-```sql
-SELECT VALUE {
- compareNumbers: IntBitAnd(15, 25),
- compareZero: IntBitAnd(15, 0),
- compareSameNumber: IntBitAnd(15, 15),
- compareDecimal: IntBitAnd(15, 1.5)
-}
-```
-```json
-[
- {
- "compareNumbers": 9,
- "compareZero": 0,
- "compareSameNumber": 15
- }
-]
-```
## Remarks
cosmos-db Intbitleftshift https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/intbitleftshift.md
Previously updated : 07/01/2023 Last updated : 07/20/2023
IntBitLeftShift(<int_expr_1>, <int_expr_2>)
## Return types
-Returns a 64-bit integer. For more information, see [__int64](/cpp/cpp/int8-int16-int32-int64).
+Returns a 64-bit integer.
+
+> [!NOTE]
+> For more information, see [__int64](/cpp/cpp/int8-int16-int32-int64).
## Examples This example tests the function with various static values.
-```sql
-SELECT VALUE {
- shiftInteger: IntBitLeftShift(16, 4),
- shiftDecimal: IntBitLeftShift(16, 0.4)
-}
-```
-```json
-[
- {
- "shiftInteger": 256
- }
-]
-```
## Remarks
cosmos-db Intbitnot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/intbitnot.md
Previously updated : 07/01/2023 Last updated : 07/20/2023
IntBitNot(<int_expr>)
## Return types
-Returns a 64-bit integer. For more information, see [__int64](/cpp/cpp/int8-int16-int32-int64).
+Returns a 64-bit integer.
+
+> [!NOTE]
+> For more information, see [__int64](/cpp/cpp/int8-int16-int32-int64).
## Examples This example tests the function with various static values.
-```sql
-SELECT VALUE {
- complementNumber: IntBitNot(65),
- complementZero: IntBitNot(0),
- complementDecimal: IntBitNot(0.1)
-}
-```
-```json
-[
- {
- "complementNumber": -66,
- "complementZero": -1
- }
-]
-```
## Remarks
cosmos-db Intbitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/intbitor.md
Previously updated : 07/01/2023 Last updated : 07/20/2023
IntBitOr(<int_expr_1>, <int_expr_2>)
## Return types
-Returns a 64-bit integer. For more information, see [__int64](/cpp/cpp/int8-int16-int32-int64).
+Returns a 64-bit integer.
+
+> [!NOTE]
+> For more information, see [__int64](/cpp/cpp/int8-int16-int32-int64).
## Examples This example tests the function with various static values.
-```sql
-SELECT VALUE {
- inclusiveOr: IntBitOr(56, 100),
- inclusiveOrSame: IntBitOr(56, 56),
- inclusiveOrZero: IntBitOr(56, 0),
- inclusiveOrDecimal: IntBitOr(56, 0.1)
-}
-```
-```json
-[
- {
- "inclusiveOr": 124,
- "inclusiveOrSame": 56,
- "inclusiveOrZero": 56
- }
-]
-```
## Remarks
cosmos-db Intbitrightshift https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/intbitrightshift.md
Previously updated : 07/01/2023 Last updated : 07/20/2023
IntBitRightShift(<int_expr_1>, <int_expr_2>)
## Return types
-Returns a 64-bit integer. For more information, see [__int64](/cpp/cpp/int8-int16-int32-int64).
+Returns a 64-bit integer.
+
+> [!NOTE]
+> For more information, see [__int64](/cpp/cpp/int8-int16-int32-int64).
## Examples This example tests the function with various static values.
-```sql
-SELECT VALUE {
- shiftInteger: IntBitRightShift(16, 4),
- shiftDecimal: IntBitRightShift(16, 0.4)
-}
-```
-```json
-[
- {
- "shiftInteger": 1
- }
-]
-```
## Remarks
cosmos-db Intbitxor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/intbitxor.md
Previously updated : 07/01/2023 Last updated : 07/20/2023
IntBitXor(<int_expr_1>, <int_expr_2>)
## Return types
-Returns a 64-bit integer. For more information, see [__int64](/cpp/cpp/int8-int16-int32-int64).
+Returns a 64-bit integer.
+
+> [!NOTE]
+> For more information, see [__int64](/cpp/cpp/int8-int16-int32-int64).
## Examples This example tests the function with various static values.
-```sql
-SELECT VALUE {
- exclusiveOr: IntBitXor(56, 100),
- exclusiveOrSame: IntBitXor(56, 56),
- exclusiveOrZero: IntBitXor(56, 0),
- exclusiveOrDecimal: IntBitXor(56, 0.1)
-}
-```
-```json
-[
- {
- "exclusiveOr": 92,
- "exclusiveOrSame": 0,
- "exclusiveOrZero": 56
- }
-]
-```
## Remarks
cosmos-db Intdiv https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/intdiv.md
Previously updated : 07/01/2023 Last updated : 07/20/2023
IntDiv(<int_expr_1>, <int_expr_2>)
## Return types
-Returns a 64-bit integer. For more information, see [__int64](/cpp/cpp/int8-int16-int32-int64).
+Returns a 64-bit integer.
+
+> [!NOTE]
+> For more information, see [__int64](/cpp/cpp/int8-int16-int32-int64).
## Examples This example tests the function with various static values.
-```sql
-SELECT VALUE {
- divide: IntDiv(10, 2),
- negativeResult: IntDiv(10, -2),
- positiveResult: IntDiv(-10, -2),
- resultOne: IntDiv(10, 10),
- divideZero: IntDiv(10, 0),
- divideDecimal: IntDiv(10, 0.1)
-}
-```
-```json
-[
- {
- "divide": 5,
- "negativeResult": -5,
- "positiveResult": 5,
- "resultOne": 1
- }
-]
-```
## Remarks
cosmos-db Intmod https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/intmod.md
Previously updated : 07/01/2023 Last updated : 07/20/2023
IntMod(<int_expr_1>, <int_expr_2>)
## Return types
-Returns a 64-bit integer. For more information, see [__int64](/cpp/cpp/int8-int16-int32-int64).
+Returns a 64-bit integer.
+
+> [!NOTE]
+> For more information, see [__int64](/cpp/cpp/int8-int16-int32-int64).
## Examples This example tests the function with various static values.
-```sql
-SELECT VALUE {
- mod: IntMod(12, 5),
- positiveResult: IntMod(12, -5),
- negativeResult: IntMod(-12, -5),
- resultZero: IntMod(15, 5),
- modZero: IntMod(12, 0),
- modDecimal: IntMod(12, 0.2)
-}
-```
-```json
-[
- {
- "mod": 2,
- "positiveResult": 2,
- "negativeResult": -2,
- "resultZero": 0
- }
-]
-```
## Remarks
cosmos-db Intmul https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/intmul.md
Previously updated : 07/01/2023 Last updated : 07/20/2023
IntMul(<int_expr_1>, <int_expr_2>)
## Return types
-Returns a 64-bit integer. For more information, see [__int64](/cpp/cpp/int8-int16-int32-int64).
+Returns a 64-bit integer.
+
+> [!NOTE]
+> For more information, see [__int64](/cpp/cpp/int8-int16-int32-int64).
## Examples This example tests the function with various static values.
-```sql
-SELECT VALUE {
- multiply: IntMul(5, 2),
- negativeResult: IntMul(5, -2),
- positiveResult: IntMul(-5, -2),
- square: IntMul(5, 5),
- cube: IntMul(5, IntMul(5, 5)),
- multiplyZero: IntMul(5, 0),
- multiplyDecimal: IntMul(5, 0.5)
-}
-```
-```json
-[
- {
- "multiply": 10,
- "negativeResult": -10,
- "positiveResult": 10,
- "square": 25,
- "cube": 125,
- "multiplyZero": 0
- }
-]
-```
## Remarks
cosmos-db Intsub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/intsub.md
Previously updated : 07/01/2023 Last updated : 07/20/2023
IntSub(<int_expr_1>, <int_expr_2>)
## Return types
-Returns a 64-bit integer. For more information, see [__int64](/cpp/cpp/int8-int16-int32-int64).
+Returns a 64-bit integer.
+
+> [!NOTE]
+> For more information, see [__int64](/cpp/cpp/int8-int16-int32-int64).
## Examples This example tests the function with various static values.
-```sql
-SELECT VALUE {
- negativeResult: IntSub(25, 50),
- positiveResult: IntSub(25, 15),
- subtractSameNumber: IntSub(25, 25),
- subtractZero: IntSub(25, 0),
- subtractDecimal: IntSub(25, 2.5)
-}
-```
-```json
-[
- {
- "negativeResult": -25,
- "positiveResult": 10,
- "subtractSameNumber": 0,
- "subtractZero": 25
- }
-]
-```
## Remarks
cosmos-db Is Array https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/is-array.md
Previously updated : 07/01/2023 Last updated : 07/20/2023
Returns a boolean expression.
The following example checks objects of various types using the function.
-```sql
-SELECT VALUE {
- booleanIsArray: IS_ARRAY(true),
- numberIsArray: IS_ARRAY(65),
- stringIsArray: IS_ARRAY("AdventureWorks"),
- nullIsArray: IS_ARRAY(null),
- objectIsArray: IS_ARRAY({size: "small"}),
- arrayIsArray: IS_ARRAY([25344, 82947]),
- arrayObjectPropertyIsArray: IS_ARRAY({skus: [25344, 82947], vendors: null}.skus),
- invalidObjectPropertyIsArray: IS_ARRAY({skus: [25344, 82947], vendors: null}.size),
- nullObjectPropertyIsArray: IS_ARRAY({skus: [25344, 82947], vendors: null}.vendor)
-}
-```
-
-```json
-[
- {
- "booleanIsArray": false,
- "numberIsArray": false,
- "stringIsArray": false,
- "nullIsArray": false,
- "objectIsArray": false,
- "arrayIsArray": true,
- "arrayObjectPropertyIsArray": true,
- "invalidObjectPropertyIsArray": false,
- "nullObjectPropertyIsArray": false
- }
-]
-```
+ ## Remarks -- This system function benefits from a [range index](../../index-policy.md#includeexclude-strategy).
+- This function benefits from a [range index](../../index-policy.md#includeexclude-strategy).
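A hypothetical sketch (not from the article) of the filter pattern that remark points at, assuming items with an optional `tags` array:

```sql
SELECT
    c.id
FROM
    c
WHERE
    IS_ARRAY(c.tags) AND ARRAY_CONTAINS(c.tags, "clearance")
```

The `IS_ARRAY` guard makes the intent explicit when `tags` can be missing or of a different type.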
## Next steps
cosmos-db Is Bool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/is-bool.md
Previously updated : 07/01/2023 Last updated : 07/20/2023
Returns a boolean expression.
The following example checks objects of various types using the function.
-```sql
-SELECT VALUE {
- booleanIsBool: IS_BOOL(true),
- numberIsBool: IS_BOOL(65),
- stringIsBool: IS_BOOL("AdventureWorks"),
- nullIsBool: IS_BOOL(null),
- objectIsBool: IS_BOOL({size: "small"}),
- arrayIsBool: IS_BOOL([25344, 82947]),
- arrayObjectPropertyIsBool: IS_BOOL({skus: [25344, 82947], vendors: null}.skus),
- invalidObjectPropertyIsBool: IS_BOOL({skus: [25344, 82947], vendors: null}.size),
- nullObjectPropertyIsBool: IS_BOOL({skus: [25344, 82947], vendors: null}.vendor)
-}
-```
-
-```json
-[
- {
- "booleanIsBool": true,
- "numberIsBool": false,
- "stringIsBool": false,
- "nullIsBool": false,
- "objectIsBool": false,
- "arrayIsBool": false,
- "arrayObjectPropertyIsBool": false,
- "invalidObjectPropertyIsBool": false,
- "nullObjectPropertyIsBool": false
- }
-]
-```
+ ## Remarks -- This system function benefits from a [range index](../../index-policy.md#includeexclude-strategy).
+- This function benefits from a [range index](../../index-policy.md#includeexclude-strategy).
## Next steps
cosmos-db Is Defined https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/is-defined.md
Title: IS_DEFINED in Azure Cosmos DB query language
-description: Learn about SQL system function IS_DEFINED in Azure Cosmos DB.
-
+ Title: IS_DEFINED
+
+description: An Azure Cosmos DB for NoSQL system function that returns true if the property has been assigned a value.
Previously updated : 09/13/2019 Last updated : 07/20/2023
-# IS_DEFINED (Azure Cosmos DB)
+
+# IS_DEFINED (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
- Returns a Boolean indicating if the property has been assigned a value.
-
+Returns a boolean indicating if the property has been assigned a value.
+ ## Syntax
-
+ ```sql IS_DEFINED(<expr>) ```
-
+ ## Arguments
-
-*expr*
- Is any expression.
+
+| | Description |
+| | |
+| **`expr`** | Any expression. |
## Return types
- Returns a Boolean expression.
-
+Returns a boolean expression.
+ ## Examples
-
- The following example checks for the presence of a property within the specified JSON document. The first returns true since "a" is present, but the second returns false since "b" is absent.
-
-```sql
-SELECT IS_DEFINED({ "a" : 5 }.a) AS isDefined1, IS_DEFINED({ "a" : 5 }.b) AS isDefined2
-```
-
- Here is the result set.
-
-```json
-[{"isDefined1":true,"isDefined2":false}]
-```
+
+The following example checks for the presence of a property within the specified JSON document.
++ ## Remarks
-This system function will benefit from a [range index](../../index-policy.md#includeexclude-strategy).
+- This function benefits from a [range index](../../index-policy.md#includeexclude-strategy).
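A minimal sketch (not from the article) of the kind of filter that benefits from that index, assuming a hypothetical optional `discontinuedDate` property:

```sql
SELECT
    c.id
FROM
    c
WHERE
    IS_DEFINED(c.discontinuedDate)
```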
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`IS_NULL`](is-null.md)
cosmos-db Is Finite Number https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/is-finite-number.md
Previously updated : 07/01/2023 Last updated : 07/20/2023
Returns a boolean.
This example demonstrates the function with various static values.
-```sql
-SELECT VALUE {
- finiteValue: IS_FINITE_NUMBER(1234.567),
- infiniteValue: IS_FINITE_NUMBER(8.9 / 0.0),
- nanValue: IS_FINITE_NUMBER(SQRT(-1.0))
-}
-```
-```json
-[
- {
- "finiteValue": true,
- "infiniteValue": false,
- "nanValue": false
- }
-]
-```
+
+## Remarks
+
+- This function benefits from a [range index](../../index-policy.md#includeexclude-strategy).
## See also
cosmos-db Is Integer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/is-integer.md
Previously updated : 07/01/2023 Last updated : 07/20/2023
Returns a boolean.
This example demonstrates the function with various static values.
-```sql
-SELECT VALUE {
- smallDecimalValue: IS_INTEGER(3454.123),
- integerValue: IS_INTEGER(5523432),
- minIntegerValue: IS_INTEGER(-9223372036854775808),
- maxIntegerValue: IS_INTEGER(9223372036854775807),
- outOfRangeValue: IS_INTEGER(18446744073709551615)
-}
-```
-```json
-[
- {
- "smallDecimalValue": false,
- "integerValue": true,
- "minIntegerValue": true,
- "maxIntegerValue": true,
- "outOfRangeValue": false
- }
-]
-```
+
+## Remarks
+
+- This function benefits from a [range index](../../index-policy.md#includeexclude-strategy).
## See also
cosmos-db Is Null https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/is-null.md
Previously updated : 07/01/2023 Last updated : 07/20/2023
Returns a boolean expression.
The following example checks objects of various types using the function.
-```sql
-SELECT VALUE {
- booleanIsNull: IS_NULL(true),
- numberIsNull: IS_NULL(15),
- stringIsNull: IS_NULL("AdventureWorks"),
- nullIsNull: IS_NULL(null),
- objectIsNull: IS_NULL({price: 85.23}),
- arrayIsNull: IS_NULL(["red", "blue", "yellow"]),
- populatedObjectPropertyIsNull: IS_NULL({quantity: 25, vendor: null}.quantity),
- invalidObjectPropertyIsNull: IS_NULL({quantity: 25, vendor: null}.size),
- nullObjectPropertyIsNull: IS_NULL({quantity: 25, vendor: null}.vendor)
-}
-```
-
-```json
-[
- {
- "booleanIsNull": false,
- "numberIsNull": false,
- "stringIsNull": false,
- "nullIsNull": true,
- "objectIsNull": false,
- "arrayIsNull": false,
- "populatedObjectPropertyIsNull": false,
- "invalidObjectPropertyIsNull": false,
- "nullObjectPropertyIsNull": true
- }
-]
-```
+ ## Remarks -- This system function benefits from a [range index](../../index-policy.md#includeexclude-strategy).
+- This function benefits from a [range index](../../index-policy.md#includeexclude-strategy).
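As an illustrative aside (not from the article), the difference between a property that is explicitly `null` and one that is missing can be seen by pairing `IS_NULL` with `IS_DEFINED`:

```sql
SELECT VALUE {
    nullPropertyIsNull: IS_NULL({ vendor: null }.vendor),
    nullPropertyIsDefined: IS_DEFINED({ vendor: null }.vendor),
    missingPropertyIsNull: IS_NULL({ vendor: null }.other),
    missingPropertyIsDefined: IS_DEFINED({ vendor: null }.other)
}
```

The first two expressions should return `true` and the last two `false`, because a `null` value is assigned while a missing property is undefined.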
## Next steps
cosmos-db Is Number https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/is-number.md
Title: IS_NUMBER in Azure Cosmos DB query language
-description: Learn about SQL system function IS_NUMBER in Azure Cosmos DB.
-
+ Title: IS_NUMBER
+
+description: An Azure Cosmos DB for NoSQL system function that returns true if the type of the specified expression is a number.
Previously updated : 09/13/2019 Last updated : 07/20/2023
-# IS_NUMBER (Azure Cosmos DB)
+# IS_NUMBER (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
-Returns a Boolean value indicating if the type of the specified expression is a number.
-
+Returns a boolean value indicating if the type of the specified expression is a number.
+ ## Syntax
-
+ ```sql IS_NUMBER(<expr>) ``` ## Arguments
-
-*expr*
- Is any expression.
+
+| | Description |
+| | |
+| **`expr`** | Any expression. |
## Return types
-Returns a Boolean expression.
+Returns a boolean expression.
## Examples
-
-The following example checks objects of JSON Boolean, number, string, null, object, array, and undefined types using the `IS_NUMBER` function.
-
-```sql
-SELECT
- IS_NUMBER(true) AS isBooleanANumber,
- IS_NUMBER(1) AS isNumberANumber,
- IS_NUMBER("value") AS isTextStringANumber,
- IS_NUMBER("1") AS isNumberStringANumber,
- IS_NUMBER(null) AS isNullANumber,
- IS_NUMBER({prop: "value"}) AS isObjectANumber,
- IS_NUMBER([1, 2, 3]) AS isArrayANumber,
- IS_NUMBER({stringProp: "value"}.stringProp) AS isObjectStringPropertyANumber,
- IS_NUMBER({numberProp: 1}.numberProp) AS isObjectNumberPropertyANumber
-```
-Here's the result set.
-
-```json
-[
- {
- "isBooleanANumber": false,
- "isNumberANumber": true,
- "isTextStringANumber": false,
- "isNumberStringANumber": false,
- "isNullANumber": false,
- "isObjectANumber": false,
- "isArrayANumber": false,
- "isObjectStringPropertyANumber": false,
- "isObjectNumberPropertyANumber": true
- }
-]
-```
+The following example checks various values to see if they're a number.
++ ## Remarks
-This system function will benefit from a [range index](../../index-policy.md#includeexclude-strategy).
+- This function benefits from a [range index](../../index-policy.md#includeexclude-strategy).
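A small illustrative sketch (not from the article), assuming a hypothetical `price` property that isn't guaranteed to be numeric:

```sql
SELECT
    c.id,
    c.price
FROM
    c
WHERE
    IS_NUMBER(c.price) AND c.price > 100
```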
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`IS_FINITE_NUMBER`](is-finite-number.md)
cosmos-db Is Object https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/is-object.md
Title: IS_OBJECT in Azure Cosmos DB query language
-description: Learn about SQL system function IS_OBJECT in Azure Cosmos DB.
-
+ Title: IS_OBJECT
+
+description: An Azure Cosmos DB for NoSQL system function that returns true if the type of the specified expression is a JSON object.
Previously updated : 09/13/2019 Last updated : 07/20/2023
-# IS_OBJECT (Azure Cosmos DB)
+
+# IS_OBJECT (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
- Returns a Boolean value indicating if the type of the specified expression is a JSON object.
-
+Returns a boolean value indicating if the type of the specified expression is a JSON object.
+ ## Syntax
-
+ ```sql IS_OBJECT(<expr>) ``` ## Arguments
-
-*expr*
- Is any expression.
+
+| | Description |
+| | |
+| **`expr`** | Any expression. |
## Return types
- Returns a Boolean expression.
+Returns a boolean expression.
## Examples
-
- The following example checks objects of JSON Boolean, number, string, null, object, array, and undefined types using the `IS_OBJECT` function.
-
-```sql
-SELECT
- IS_OBJECT(true) AS isObj1,
- IS_OBJECT(1) AS isObj2,
- IS_OBJECT("value") AS isObj3,
- IS_OBJECT(null) AS isObj4,
- IS_OBJECT({prop: "value"}) AS isObj5,
- IS_OBJECT([1, 2, 3]) AS isObj6,
- IS_OBJECT({prop: "value"}.prop2) AS isObj7
-```
-
- Here is the result set.
-
-```json
-[{"isObj1":false,"isObj2":false,"isObj3":false,"isObj4":false,"isObj5":true,"isObj6":false,"isObj7":false}]
-```
+
+The following example checks various values to see if they're an object.
++ ## Remarks
-This system function will benefit from a [range index](../../index-policy.md#includeexclude-strategy).
+- This function benefits from a [range index](../../index-policy.md#includeexclude-strategy).
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`IS_PRIMITIVE`](is-primitive.md)
cosmos-db Is Primitive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/is-primitive.md
Title: IS_PRIMITIVE in Azure Cosmos DB query language
-description: Learn about SQL system function IS_PRIMITIVE in Azure Cosmos DB.
-
+ Title: IS_PRIMITIVE
+
+description: An Azure Cosmos DB for NoSQL system function that returns true if the type of the specified expression is a primitive (string, boolean, numeric, or null).
Previously updated : 09/13/2019 Last updated : 07/20/2023
-# IS_PRIMITIVE (Azure Cosmos DB)
+
+# IS_PRIMITIVE (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
- Returns a Boolean value indicating if the type of the specified expression is a primitive (string, Boolean, numeric, or null).
-
+Returns a boolean value indicating if the type of the specified expression is a primitive (string, boolean, numeric, or null).
+ ## Syntax
-
+ ```sql IS_PRIMITIVE(<expr>) ``` ## Arguments
-
-*expr*
- Is any expression.
+
+| | Description |
+| | |
+| **`expr`** | Any expression. |
## Return types
- Returns a Boolean expression.
+Returns a boolean expression.
## Examples
-
- The following example checks objects of JSON Boolean, number, string, null, object, array and undefined types using the `IS_PRIMITIVE` function.
-
-```sql
-SELECT
- IS_PRIMITIVE(true) AS isPrim1,
- IS_PRIMITIVE(1) AS isPrim2,
- IS_PRIMITIVE("value") AS isPrim3,
- IS_PRIMITIVE(null) AS isPrim4,
- IS_PRIMITIVE({prop: "value"}) AS isPrim5,
- IS_PRIMITIVE([1, 2, 3]) AS isPrim6,
- IS_PRIMITIVE({prop: "value"}.prop2) AS isPrim7
-```
-
- Here is the result set.
-
-```json
-[{"isPrim1": true, "isPrim2": true, "isPrim3": true, "isPrim4": true, "isPrim5": false, "isPrim6": false, "isPrim7": false}]
-```
+
+The following example checks various values to see if they're a primitive.
++ ## Remarks
-This system function will benefit from a [range index](../../index-policy.md#includeexclude-strategy).
+- This function benefits from a [range index](../../index-policy.md#includeexclude-strategy).
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`IS_OBJECT`](is-object.md)
cosmos-db Is String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/is-string.md
Title: IS_STRING in Azure Cosmos DB query language
-description: Learn about SQL system function IS_STRING in Azure Cosmos DB.
-
+ Title: IS_STRING
+
+description: An Azure Cosmos DB for NoSQL system function that returns true if the type of the specified expression is a string.
Previously updated : 09/13/2019 Last updated : 07/20/2023
-# IS_STRING (Azure Cosmos DB)
+
+# IS_STRING (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
- Returns a Boolean value indicating if the type of the specified expression is a string.
-
+Returns a boolean value indicating if the type of the specified expression is a string.
+ ## Syntax
-
+ ```sql IS_STRING(<expr>) ``` ## Arguments
-
-*expr*
- Is any expression.
+
+| | Description |
+| | |
+| **`expr`** | Any expression. |
## Return types
- Returns a Boolean expression.
+Returns a boolean expression.
## Examples
-
- The following example checks objects of JSON Boolean, number, string, null, object, array, and undefined types using the `IS_STRING` function.
-
-```sql
-SELECT
- IS_STRING(true) AS isStr1,
- IS_STRING(1) AS isStr2,
- IS_STRING("value") AS isStr3,
- IS_STRING(null) AS isStr4,
- IS_STRING({prop: "value"}) AS isStr5,
- IS_STRING([1, 2, 3]) AS isStr6,
- IS_STRING({prop: "value"}.prop2) AS isStr7
-```
-
- Here is the result set.
-
-```json
-[{"isStr1":false,"isStr2":false,"isStr3":true,"isStr4":false,"isStr5":false,"isStr6":false,"isStr7":false}]
-```
+
+The following example checks various values to see if they're a string.
++ ## Remarks
-This system function will benefit from a [range index](../../index-policy.md#includeexclude-strategy).
+- This function benefits from a [range index](../../index-policy.md#includeexclude-strategy).
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`IS_NUMBER`](is-number.md)
cosmos-db Radians https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/radians.md
Title: RADIANS in Azure Cosmos DB query language
-description: Learn about SQL system function RADIANS in Azure Cosmos DB.
-
+ Title: RADIANS
+
+description: An Azure Cosmos DB for NoSQL system function that returns a radian value for an angle in degrees.
Previously updated : 09/13/2019 Last updated : 07/19/2023
-# RADIANS (Azure Cosmos DB)
+
+# RADIANS (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
- Returns radians when a numeric expression, in degrees, is entered.
-
+Returns the corresponding angle in radians for an angle specified in degrees.
+ ## Syntax
-
+ ```sql
-RADIANS (<numeric_expr>)
+RADIANS(<numeric_expr>)
```
-
+ ## Arguments
-
-*numeric_expr*
- Is a numeric expression.
-
+
+| | Description |
+| | |
+| **`numeric_expr`** | A numeric expression. |
+ ## Return types
-
- Returns a numeric expression.
-
+
+Returns a numeric expression.
+ ## Examples
-
- The following example takes a few angles as input and returns their corresponding radian values.
-
-```sql
-SELECT RADIANS(-45.01) AS r1, RADIANS(-181.01) AS r2, RADIANS(0) AS r3, RADIANS(0.1472738) AS r4, RADIANS(197.1099392) AS r5
-```
-
- Here is the result set.
-
-```json
-[{
- "r1": -0.7855726963226477,
- "r2": -3.1592204790349356,
- "r3": 0,
- "r4": 0.0025704127119236249,
- "r5": 3.4402174274458375
- }]
-```
+
+The following example returns the radians for various degree values.
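For a quick sanity check (illustrative only, and assuming the `PI` system function), 180 degrees should convert to π radians:

```sql
SELECT VALUE {
    halfTurnInRadians: RADIANS(180),
    pi: PI()
}
```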
++ ## Remarks
-This system function will not utilize the index.
+- This function doesn't use the index.
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`DEGREES`](degrees.md)
cosmos-db Partners Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partners-migration.md
Title: Migration and application development partners for Azure Cosmos DB
+ Title: Migration and application development partners for Azure Cosmos DB
description: Lists Microsoft partners with migration solutions that support Azure Cosmos DB. Previously updated : 08/26/2021 Last updated : 07/20/2023 # Azure Cosmos DB NoSQL migration and application development partners
From NoSQL migration to application development, you can choose from a variety o
## Systems Integrator and tooling partners
-|**Partner** |**Capabilities & experience** |**Supported countries/regions** |
-||||
-|[Striim](https://www.striim.com/) | Continuous, real-time data movement, Data migration| USA |
+| **Partner** | **Capabilities & experience** | **Supported countries/regions** |
+| | | |
+| [Striim](https://www.striim.com/) | Continuous, real-time data movement, Data migration | USA |
| [10thMagnitude](https://www.10thmagnitude.com/) | IoT, Retail (inventory), Operational Analytics (Spark), Serverless architecture, App development | USA |
-|[Altoros Development LLC](https://www.altoros.com/) | IoT, Personalization Retail (inventory), Serverless architectures NoSQL migration, App development| USA |
-|[Avanade](https://www.avanade.com/) | IoT, Retail (inventory), Serverless Architecture, App development | Austria, Germany, Switzerland, Italy, Norway, Spain, UK, Canada |
-|[Accenture](https://www.accenture.com/) | IoT, Retail (inventory), Serverless Architecture, App development |Global|
-|Capax Global LLC | IoT, Personalization, Retail (inventory), Operational Analytics (Spark), Serverless architecture, App development| USA |
-| [Capgemini](https://www.capgemini.com/) | Retail (inventory), IoT, Operational Analytics (Spark), App development | USA, France, UK, Netherlands, Finland |
-| [Cognizant](https://www.cognizant.com/) | IoT, Personalization, Retail (inventory), Operational Analytics (Spark), App development |USA, Canada, UK, Denmark, Netherlands, Switzerland, Australia, Japan |
-|[Infosys](https://www.infosys.com/) | App development | USA |
-| [Lambda3 Informatics](https://www.lambda3.com.br/) | Real-time personalization, Retail inventory, App development | Brazil|
-|[Neal Analytics](https://www.nealanalytics.com/) | Personalization, Retail (inventory), Operational Analytics (Spark), App development | USA |
-|[Pragmatic Works Software Inc](https://www.pragmaticworks.com/) | NoSQL migration | USA |
-| [Ricoh Digital Experience](https://www.ricoh-europe.com/contact-us) | IoT, Real-time personalization, Retail inventory, NoSQL migration | UK, Europe |
-|[SNP Technologies](https://www.snp.com/) | NoSQL migration| USA |
-| [Solidsoft Reply](https://www.reply.com/solidsoft-reply/) | NoSQL migration | Croatia, Sweden, Denmark, Ireland, Bulgaria, Slovenia, Cyprus, Malta, Lithuania, the Czech Republic, Iceland, and Switzerland and Liechtenstein|
-| [Spanish Point Technologies](https://www.spanishpoint.ie/) | NoSQL migration| Ireland|
-| [Syone](https://www.syone.com/) | NoSQL migration| Portugal|
-|[EY](https://www.ey.com/alliances/microsoft) | App development | USA |
-| [TCS](https://www.tcs.com/) | App development | USA, UK, France, Malaysia, Denmark, Norway, Sweden|
-|[VTeamLabs](https://www.vteamlabs.com/) | Personalization, Retail (inventory), IoT, Gaming, Operational Analytics (Spark), Serverless architecture, NoSQL Migration, App development | USA |
-| [White Duck GmbH](https://whiteduck.de/en/) |New app development, App Backend, Storage for document-based data| Germany |
-| [Xpand IT](https://www.xpand-it.com/) | New app development | Portugal, UK|
-| [Hanu](https://hanu.com/) | IoT, App development | USA|
-| [Incycle Software](https://www.incyclesoftware.com/) | NoSQL migration, Serverless architecture, App development| USA|
-| [Orion](https://www.orioninc.com/) | Personalization, Retail (inventory), Operational Analytics (Spark), IoT, App development| USA, Canada|
+| [Altoros Development LLC](https://www.altoros.com/) | IoT, Personalization Retail (inventory), Serverless architectures NoSQL migration, App development | USA |
+| [Avanade](https://www.avanade.com/) | IoT, Retail (inventory), Serverless Architecture, App development | Austria, Germany, Switzerland, Italy, Norway, Spain, UK, Canada |
+| [Accenture](https://www.accenture.com/) | IoT, Retail (inventory), Serverless Architecture, App development | Global |
+| Capax Global LLC | IoT, Personalization, Retail (inventory), Operational Analytics (Spark), Serverless architecture, App development | USA |
+| [Capgemini](https://www.capgemini.com/) | Retail (inventory), IoT, Operational Analytics (Spark), App development | USA, France, UK, Netherlands, Finland |
+| [Cognizant](https://www.cognizant.com/) | IoT, Personalization, Retail (inventory), Operational Analytics (Spark), App development | USA, Canada, UK, Denmark, Netherlands, Switzerland, Australia, Japan |
+| [Infosys](https://www.infosys.com/) | App development | USA |
+| [Lambda3 Informatics](https://www.lambda3.com.br/) | Real-time personalization, Retail inventory, App development | Brazil |
+| [Neal Analytics](https://www.nealanalytics.com/) | Personalization, Retail (inventory), Operational Analytics (Spark), App development | USA |
+| [Pragmatic Works Software Inc](https://www.pragmaticworks.com/) | NoSQL migration | USA |
+| [Ricoh Digital Experience](https://www.ricoh-europe.com/contact-us) | IoT, Real-time personalization, Retail inventory, NoSQL migration | UK, Europe |
+| [SNP Technologies](https://www.snp.com/) | NoSQL migration | USA |
+| [Solidsoft Reply](https://www.reply.com/solidsoft-reply/) | NoSQL migration | Croatia, Sweden, Denmark, Ireland, Bulgaria, Slovenia, Cyprus, Malta, Lithuania, the Czech Republic, Iceland, and Switzerland and Liechtenstein |
+| [Spanish Point Technologies](https://www.spanishpoint.ie/) | NoSQL migration | Ireland |
+| [Syone](https://www.syone.com/) | NoSQL migration | Portugal |
+| [EY](https://www.ey.com/en_gl/alliances/microsoft) | App development | USA |
+| [TCS](https://www.tcs.com/) | App development | USA, UK, France, Malaysia, Denmark, Norway, Sweden |
+| [VTeamLabs](https://www.vteamlabs.com/) | Personalization, Retail (inventory), IoT, Gaming, Operational Analytics (Spark), Serverless architecture, NoSQL Migration, App development | USA |
+| [White Duck GmbH](https://whiteduck.de/en/) | New app development, App Backend, Storage for document-based data | Germany |
+| [Xpand IT](https://www.xpand-it.com/) | New app development | Portugal, UK |
+| [Hanu](https://hanu.com/) | IoT, App development | USA |
+| [Incycle Software](https://www.incyclesoftware.com/) | NoSQL migration, Serverless architecture, App development | USA |
+| [Orion](https://www.orioninc.com/) | Personalization, Retail (inventory), Operational Analytics (Spark), IoT, App development | USA, Canada |
## Next steps To learn more about some of Microsoft's other partners, see the [Microsoft Partner site](https://partner.microsoft.com/). Trying to do capacity planning for a migration to Azure Cosmos DB?
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+- If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
+- If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cost-management-billing Understand Usage Details Fields https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/understand-usage-details-fields.md
description: This article describes the fields in the usage data files. Previously updated : 07/14/2023 Last updated : 07/19/2023
MPA accounts have all MCA terms, in addition to the MPA terms, as described in t
| AccountName | EA, pay-as-you-go | Display name of the EA enrollment account or pay-as-you-go billing account. | | AccountOwnerId┬╣ | EA, pay-as-you-go | Unique identifier for the EA enrollment account or pay-as-you-go billing account. | | AdditionalInfo┬╣ | All | Service-specific metadata. For example, an image type for a virtual machine. |
+| BenefitId┬╣ | EA, MCA | Unique identifier for the purchased savings plan instance. |
+| BenefitName | EA, MCA | Name of the purchased savings plan instance. |
| BillingAccountId┬╣ | All | Unique identifier for the root billing account. | | BillingAccountName | All | Name of the billing account. | | BillingCurrency | All | Currency associated with the billing account. |
MPA accounts have all MCA terms, in addition to the MPA terms, as described in t
| InvoiceSectionId┬╣ | EA, MCA | Unique identifier for the EA department or MCA invoice section. | | InvoiceSectionName | EA, MCA | Name of the EA department or MCA invoice section. | | IsAzureCreditEligible | All | Indicates if the charge is eligible to be paid for using Azure credits (Values: `True` or `False`). |
-| Location | MCA | Normalized location of the resource, if different resource locations are configured for the same regions. |
+| Location | MCA | Normalized location of the resource, if different resource locations are configured for the same regions. Purchases and Marketplace usage may be shown as blank or `unassigned`. |
| MeterCategory | All | Name of the classification category for the meter. For example, _Cloud services_ and _Networking_. | | MeterId┬╣ | All | The unique identifier for the meter. |
-| MeterName | All | The name of the meter. |
+| MeterName | All | The name of the meter. Purchases and Marketplace usage may be shown as blank or `unassigned`.|
| MeterRegion | All | Name of the datacenter location for services priced based on location. See Location. |
-| MeterSubCategory | All | Name of the meter subclassification category. |
+| MeterSubCategory | All | Name of the meter subclassification category. Purchases and Marketplace usage may be shown as blank or `unassigned`.|
| OfferId┬╣ | All | Name of the offer purchased. | | pay-as-you-goPrice | All | Retail price for the resource. | | PartnerEarnedCreditApplied | MPA | Indicates whether the partner earned credit has been applied. |
MPA accounts have all MCA terms, in addition to the MPA terms, as described in t
| Quantity | All | The number of units purchased or consumed. | | ResellerName | MPA | The name of the reseller associated with the subscription. | | ResellerMpnId | MPA | ID for the reseller associated with the subscription. |
-| ReservationId | EA, MCA | Unique identifier for the purchased reservation instance. |
+| ReservationId┬╣ | EA, MCA | Unique identifier for the purchased reservation instance. |
| ReservationName | EA, MCA | Name of the purchased reservation instance. | | ResourceGroup | All | Name of the [resource group](../../azure-resource-manager/management/overview.md) the resource is in. Not all charges come from resources deployed to resource groups. Charges that don't have a resource group will be shown as null or empty, **Others**, or **Not applicable**. | | ResourceId┬╣ | All | Unique identifier of the [Azure Resource Manager](/rest/api/resources/resources) resource. |
cost-management-billing Assign Access Acm Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/assign-access-acm-data.md
The department scope requires the **Department admins can view charges** (DA vie
To enable an option in the Azure portal:
-1. Sign in to the Azure portal at https://portal.azure.com with an enterprise administrator account.
+1. Sign in to the [Azure portal](https://portal.azure.com) with an enterprise administrator account.
1. Select the **Cost Management + Billing** menu item. 1. Select **Billing scopes** to view a list of available billing scopes and billing accounts. 1. Select your **Billing Account** from the list of available billing accounts.
cost-management-billing Subscription Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/subscription-transfer.md
If you have an Azure support plan and you transfer all of your Azure subscriptio
Use your account administrator credentials for your old account if the credentials differ from the ones used to access your new Microsoft Customer Agreement account.
-1. Sign in to the Azure portal at https://portal.azure.com.
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Navigate to **Cost Management + Billing**. 1. Select **Billing Scopes** in the left pane. 1. Select the billing account associated with your Microsoft support plan.
cost-management-billing Prepare Buy Reservation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepare-buy-reservation.md
Previously updated : 12/06/2022 Last updated : 07/20/2023
Azure Reservations help you save money by committing to one-year or three-years
## Who can buy a reservation
-To buy a reservation, you must have owner role or reservation purchaser role on an Azure subscription that's of type Enterprise (MS-AZR-0017P or MS-AZR-0148P) or Pay-As-You-Go (MS-AZR-0003P or MS-AZR-0023P) or Microsoft Customer Agreement. Cloud solution providers can use the Azure portal or [Partner Center](/partner-center/azure-reservations) to purchase Azure Reservations. You will not be able to purchase a reservation if you have a custom role that mimics owner role or reservation purchaser role on an Azure subscription, you must use built-in owner or built-in reservation purchaser role.
+To buy a reservation, you must have owner role or reservation purchaser role on an Azure subscription that's of type Enterprise (MS-AZR-0017P or MS-AZR-0148P) or Pay-As-You-Go (MS-AZR-0003P or MS-AZR-0023P) or Microsoft Customer Agreement. Cloud solution providers can use the Azure portal or [Partner Center](/partner-center/azure-reservations) to purchase Azure Reservations. You can't buy a reservation if you have a custom role that mimics owner role or reservation purchaser role on an Azure subscription. You must use the built-in Owner or built-in Reservation Purchaser role.
Enterprise Agreement (EA) customers can limit purchases to EA admins by disabling the **Add Reserved Instances** option in the EA Portal. Direct EA customers can now disable Reserved Instance setting in [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/BillingAccounts). Navigate to Policies menu to change settings.
You can scope a reservation to a subscription or resource groups. Setting the sc
You have four options to scope a reservation, depending on your needs: -- **Single resource group scope** - Applies the reservation discount to the matching resources in the selected resource group only.-- **Single subscription scope** - Applies the reservation discount to the matching resources in the selected subscription.-- **Shared scope** - Applies the reservation discount to matching resources in eligible subscriptions that are in the billing context. If a subscription was moved to different billing context, the benefit will no longer be applied to this subscription and will continue to apply to other subscriptions in the billing context.
+- **Single resource group scope** - Applies the reservation discount to the matching resources in the selected resource group only.
+- **Single subscription scope** - Applies the reservation discount to the matching resources in the selected subscription.
+- **Shared scope** - Applies the reservation discount to matching resources in eligible subscriptions that are in the billing context. If a subscription is moved to a different billing context, the benefit no longer applies to the subscription. It continues to apply to other subscriptions in the billing context.
- For Enterprise Agreement customers, the billing context is the enrollment. The reservation shared scope would include multiple Active Directory tenants in an enrollment. - For Microsoft Customer Agreement customers, the billing scope is the billing profile. - For individual subscriptions with pay-as-you-go rates, the billing scope is all eligible subscriptions created by the account administrator.-- **Management group** ΓÇö Applies the reservation discount to the matching resource in the list of subscriptions that are a part of both the management group and billing scope. To buy a reservation for a management group, you must have at least read permission on the management group and be a reservation owner or reservation purchaser on the billing subscription.
+- **Management group** - Applies the reservation discount to the matching resource in the list of subscriptions that are a part of both the management group and billing scope. The management group scope applies to all subscriptions throughout the entire management group hierarchy. To buy a reservation for a management group, you must have at least read permission on the management group and be a reservation owner or reservation purchaser on the billing subscription.
-While applying reservation discounts on your usage, Azure processes the reservation in the following order:
+While Azure applies reservation discounts on your usage, it processes the reservation in the following order:
1. Reservations with a single resource group scope 2. Reservations with a single subscription scope
You can purchase reservations from Azure portal, APIs, PowerShell, CLI. Read the
- [Software plans](../../virtual-machines/linux/prepay-suse-software-charges.md?toc=/azure/cost-management-billing/reservations/toc.json) - [SQL Database](/azure/azure-sql/database/reserved-capacity-overview?toc=/azure/cost-management-billing/reservations/toc.json) - [Synapse Analytics - data warehouse](prepay-sql-data-warehouse-charges.md)-- [Synapse Analytics - Pre-purchase](synapse-analytics-pre-purchase-plan.md)
+- [Synapse Analytics - Prepurchase](synapse-analytics-pre-purchase-plan.md)
- [Virtual machines](../../virtual-machines/prepay-reserved-vm-instances.md?toc=/azure/cost-management-billing/reservations/toc.json) - [Virtual machine software](buy-vm-software-reservation.md)
You can pay for reservations with monthly payments. Unlike an up-front purchase
If reservation is purchased using Microsoft customer agreement (MCA), your monthly payment amount may vary, depending on the current month's market exchange rate for your local currency.
-Monthly payments are not available for: Databricks, Synapse Analytics - Pre-purchase, SUSE Linux reservations, Red Hat Plans and Azure Red Hat OpenShift Licenses.
+Monthly payments aren't available for: Databricks, Synapse Analytics - Prepurchase, SUSE Linux reservations, Red Hat Plans and Azure Red Hat OpenShift Licenses.
### View payments made
cost-management-billing Troubleshoot No Eligible Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/troubleshoot-no-eligible-subscriptions.md
Previously updated : 12/06/2022 Last updated : 07/20/2023 # Troubleshoot no eligible subscriptions
This subscription is not eligible for reservation benefit an cannot be used to p
:::image type="content" source="./media/troubleshoot-no-eligible-subscriptions/subscription-not-eligible.png" alt-text="Example showing the Subscription not eligible for purchase error message" :::
+>[!NOTE]
+> Reservations aren't supported by the China legacy Online Service Premium Agreement (OSPA) platform. For more information, see [Azure China OSPA purchase](https://go.microsoft.com/fwlink/?linkid=2239835).
+ ### Cause 2 You must be an owner or reservation purchaser on the subscription. When you don't have sufficient permissions, you see the following error.
cost-management-billing Buy Savings Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/buy-savings-plan.md
Previously updated : 06/20/2023 Last updated : 07/20/2023
Azure savings plans help you save money by committing to an hourly spend for one
Savings plan discounts only apply to resources associated with subscriptions purchased through an Enterprise Agreement (EA), Microsoft Customer Agreement (MCA), or Microsoft Partner Agreement (MPA). You can buy a savings plan for an Azure subscription that's of type EA (MS-AZR-0017P or MS-AZR-0148P), MCA or MPA. To determine if you're eligible to buy a plan, [check your billing type](../manage/view-all-accounts.md#check-the-type-of-your-account).
+>[!NOTE]
+> Azure savings plan isn't supported for the China legacy Online Service Premium Agreement (OSPA) platform.
+ ### Enterprise Agreement customers - EA admins with write permissions can directly purchase savings plans from **Cost Management + Billing** > **Savings plan**. No subscription-specific permissions are needed.
cost-management-billing Reservation Trade In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/reservation-trade-in.md
Previously updated : 07/05/2023 Last updated : 07/20/2023
If your [Azure Virtual Machines](https://azure.microsoft.com/pricing/details/virtual-machines/windows/), [Dedicated Hosts](https://azure.microsoft.com/pricing/details/virtual-machines/dedicated-host/), or [Azure App Service](https://azure.microsoft.com/pricing/details/app-service/windows/) reservations don't provide the flexibility you need, you can trade them in for a savings plan. When you trade in a reservation and purchase a savings plan, you can select a savings plan term of either one year or three years.
-Although you can return the above offerings for a savings plan, you can't exchange a savings plan for them or for another savings plan. You can trade in up to 100 reservations as part of a savings plan purchase.
+Although you can return the above offerings for a savings plan, you can't exchange a savings plan for them or for another savings plan. Due to technical limitations, you can only trade in up to 100 reservations at a time as part of a savings plan purchase.
Apart from [Azure Virtual Machines](https://azure.microsoft.com/pricing/details/virtual-machines/windows/), [Dedicated Hosts](https://azure.microsoft.com/pricing/details/virtual-machines/dedicated-host/), or [Azure App Service](https://azure.microsoft.com/pricing/details/app-service/windows/) reservations, no other reservations or prepurchase plans are eligible for trade-in.
data-factory Connector Appfigures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-appfigures.md
Previously updated : 08/16/2022 Last updated : 07/20/2023 # Transform data in AppFigures (Preview) using Azure Data Factory or Synapse Analytics
data-factory Connector Troubleshoot Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-azure-blob-storage.md
Previously updated : 08/12/2022 Last updated : 07/20/2023
data-factory Connector Troubleshoot Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-azure-data-explorer.md
Previously updated : 08/12/2022 Last updated : 07/20/2023
data-factory Continuous Integration Delivery Improvements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-improvements.md
Previously updated : 08/15/2022 Last updated : 07/20/2023 # Automated publishing for continuous integration and delivery
data-factory Data Flow Aggregate Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-aggregate-functions.md
The following functions are only available in aggregate, pivot, unpivot, and win
| [avg](data-flow-expressions-usage.md#avg) | Gets the average of values of a column. | | [avgIf](data-flow-expressions-usage.md#avgIf) | Based on criteria, gets the average of values of a column. | | [collect](data-flow-expressions-usage.md#collect) | Collects all values of the expression in the aggregated group into an array. Structures can be collected and transformed to alternate structures during this process. The number of items will be equal to the number of rows in that group and can contain null values. The number of collected items should be small. |
-| [collectUnique](data-flow-expressions-usage.md#collectUnique) | Collects all values of the expression in the aggregated group into a unique array. Structures can be collected and transformed to alternate structures during this process.The number of items will be equal to the number of rows in that group and can contain null values. The number of collected items should be small |
+| [collectUnique](data-flow-expressions-usage.md#collectUnique) | Collects all values of the expression in the aggregated group into a unique array. Structures can be collected and transformed to alternate structures during this process. The number of items will be smaller than or equal to the number of rows in that group and can contain null values. The number of collected items should be small. |
| [count](data-flow-expressions-usage.md#count) | Gets the aggregate count of values. If the optional column(s) is specified, it ignores NULL values in the count. | | [countAll](data-flow-expressions-usage.md#countAll) | Gets the aggregate count of values including NULLs. | | [countDistinct](data-flow-expressions-usage.md#countDistinct) | Gets the aggregate count of distinct values of a set of columns. |
data-factory Data Flow Sink https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-sink.md
When using data flows in Azure Synapse workspaces, you will have an additional o
## <a name="supported-sinks"></a> Supported sink types
-Mapping data flow follows an extract, load, and transform (ELT) approach and works with *staging* datasets that are all in Azure. Currently, the following datasets can be used in a source transformation.
+Mapping data flow follows an extract, load, and transform (ELT) approach and works with *staging* datasets that are all in Azure. Currently, the following datasets can be used in a sink transformation.
| Connector | Format | Dataset/inline | | | | -- |
data-factory Data Flow Stringify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-stringify.md
Last updated 07/17/2023
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-Use the stringify transformation to turn complex data types into strings. This can be very useful when you need to store or send column data as a single string entity that may originate as a structure, map, or array type.
+Use the stringify transformation to turn complex data types into strings. This can be useful when you need to store or send column data as a single string entity that may originate as a structure, map, or array type.
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RWMTs9] ## Configuration
-In the stringify transformation configuration panel, you will first pick the type of data contained in the columns that you wish to parse inline. The parse transformation also contains the following configuration settings.
+In the stringify transformation configuration panel, you'll first pick the type of data contained in the columns that you wish to parse inline. The stringify transformation also contains the following configuration settings.
:::image type="content" source="media/data-flow/stringify.png" alt-text="Stringify settings"::: ### Column
-Similar to derived columns and aggregates, this is where you will either modify an exiting column by selecting it from the drop-down picker. Or you can type in the name of a new column here. ADF will store the stringifies source data in this column. In most cases, you will want to define a new column that stringifies the incoming complex field type.
+Similar to derived columns and aggregates, this is where you'll either modify an existing column by selecting it from the drop-down picker, or type in the name of a new column. ADF stores the stringified source data in this column. In most cases, you'll want to define a new column that stringifies the incoming complex field type.
### Expression
data-factory How To Invoke Ssis Package Stored Procedure Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-invoke-ssis-package-stored-procedure-activity.md
ms.devlang: powershell Previously updated : 08/10/2022 Last updated : 07/20/2023
data-factory How To Migrate Ssis Job Ssms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-migrate-ssis-job-ssms.md
Previously updated : 08/10/2022 Last updated : 07/20/2023 # Migrate SQL Server Agent jobs to ADF with SSMS
data-factory How To Run Self Hosted Integration Runtime In Windows Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-run-self-hosted-integration-runtime-in-windows-container.md
Title: How to run Self-Hosted Integration Runtime in Windows container description: Learn about how to run Self-Hosted Integration Runtime in Windows container.- Previously updated : 08/10/2022 Last updated : 07/20/2023 # How to run Self-Hosted Integration Runtime in Windows container
data-factory How To Send Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-send-email.md
Previously updated : 08/10/2022 Last updated : 07/20/2023 # Send an email with an Azure Data Factory or Azure Synapse pipeline
data-factory How To Send Notifications To Teams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-send-notifications-to-teams.md
Previously updated : 08/10/2022 Last updated : 07/20/2023 # Send notifications to a Microsoft Teams channel from an Azure Data Factory or Synapse Analytics pipeline
data-factory How To Sqldb To Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-sqldb-to-cosmosdb.md
Previously updated : 08/10/2022 Last updated : 07/20/2023 # Migrate normalized database schema from Azure SQL Database to Azure Cosmos DB denormalized container
data-factory How To Use Azure Key Vault Secrets Pipeline Activities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-use-azure-key-vault-secrets-pipeline-activities.md
Previously updated : 08/10/2022 Last updated : 07/20/2023 # Use Azure Key Vault secrets in pipeline activities
data-factory How To Use Sql Managed Instance With Ir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-use-sql-managed-instance-with-ir.md
Previously updated : 08/10/2022 Last updated : 07/20/2023 # Use Azure SQL Managed Instance with SQL Server Integration Services (SSIS) in Azure Data Factory or Azure Synapse Analytics
data-factory How To Use Trigger Parameterization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-use-trigger-parameterization.md
Previously updated : 08/10/2022 Last updated : 07/20/2023 # Reference trigger metadata in pipeline runs
data-factory Industry Sap Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/industry-sap-connectors.md
Previously updated : 08/11/2022 Last updated : 07/20/2023 # SAP connectors overview
data-factory Industry Sap Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/industry-sap-templates.md
Previously updated : 08/11/2022 Last updated : 07/20/2023 # SAP templates overview
data-factory Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/introduction.md
Previously updated : 08/11/2022 Last updated : 07/20/2023 # What is Azure Data Factory?
data-factory Iterative Development Debugging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/iterative-development-debugging.md
Title: Iterative development and debugging description: Learn how to develop and debug Data Factory and Synapse Analytics pipelines iteratively with the service UI. Previously updated : 08/12/2022 Last updated : 07/20/2023
data-factory Join Azure Ssis Integration Runtime Virtual Network Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/join-azure-ssis-integration-runtime-virtual-network-powershell.md
description: Learn how to join Azure-SSIS integration runtime to a virtual netwo
Previously updated : 08/11/2022 Last updated : 07/20/2023
data-factory Join Azure Ssis Integration Runtime Virtual Network Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/join-azure-ssis-integration-runtime-virtual-network-ui.md
Previously updated : 08/12/2022 Last updated : 07/20/2023
data-factory Join Azure Ssis Integration Runtime Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/join-azure-ssis-integration-runtime-virtual-network.md
description: Learn how to join Azure-SSIS integration runtime to a virtual netwo
Previously updated : 08/12/2022 Last updated : 07/20/2023
data-factory Lab Data Flow Data Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/lab-data-flow-data-share.md
Previously updated : 08/12/2022 Last updated : 07/20/2023 # Data integration using Azure Data Factory and Azure Data Share
data-factory Load Azure Data Lake Storage Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/load-azure-data-lake-storage-gen2.md
Previously updated : 08/12/2022 Last updated : 07/20/2023 # Load data into Azure Data Lake Storage Gen2 with Azure Data Factory
data-factory Load Azure Data Lake Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/load-azure-data-lake-store.md
Previously updated : 08/12/2022 Last updated : 07/20/2023 # Load data into Azure Data Lake Storage Gen1 by using Azure Data Factory
data-factory Load Azure Sql Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/load-azure-sql-data-warehouse.md
Previously updated : 08/12/2022 Last updated : 07/20/2023 # Load data into Azure Synapse Analytics using Azure Data Factory or a Synapse pipeline
data-factory Load Office 365 Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/load-office-365-data.md
Previously updated : 08/12/2022 Last updated : 07/20/2023
data-factory Load Sap Bw Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/load-sap-bw-data.md
Previously updated : 08/12/2022 Last updated : 07/20/2023 # Copy data from SAP Business Warehouse with Azure Data Factory or Synapse Analytics
data-factory Manage Azure Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/manage-azure-ssis-integration-runtime.md
Previously updated : 08/12/2022 Last updated : 07/20/2023
data-factory Monitor Configure Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-configure-diagnostics.md
Previously updated : 08/12/2022 Last updated : 07/20/2023 # Configure diagnostic settings and a workspace
data-factory Monitor Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-integration-runtime.md
description: Learn how to monitor different types of integration runtime in Azur
Previously updated : 08/12/2022 Last updated : 07/20/2023
data-factory Monitor Managed Virtual Network Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-managed-virtual-network-integration-runtime.md
By using our new enhanced monitoring feature, users can gain valuable insights i
## New metrics The introduction of the new metrics in the Managed Virtual Network Integration Runtime feature significantly enhances the visibility and monitoring capabilities within virtual network environments. These new metrics have been designed to address the pain point of limited monitoring, providing users with valuable insights into the performance and health of their data integration workflows.
+>[!NOTE]
+> These metrics are only valid when enabling Time-To-Live in managed virtual network integration runtime.
+ Azure Data Factory provides three distinct types of compute pools, each tailored to handle specific activity execution requirements. These compute pools offer flexibility and scalability to accommodate diverse workloads and ensure optimal resource allocation: - Compute for Copy activity - Compute for Pipeline activity such as Lookup
data-factory Monitor Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-programmatically.md
description: Learn how to monitor a pipeline in a data factory by using differen
Previously updated : 08/12/2022 Last updated : 07/20/2023
data-factory Parameters Data Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/parameters-data-flow.md
Previously updated : 08/18/2022 Last updated : 07/20/2023 # Parameterizing mapping data flows
data-factory Pipeline Trigger Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pipeline-trigger-troubleshoot-guide.md
Title: Troubleshoot pipeline orchestration and triggers in Azure Data Factory
description: Use different methods to troubleshoot pipeline trigger issues in Azure Data Factory. Previously updated : 08/18/2022 Last updated : 07/20/2023
data-factory Quickstart Create Data Factory Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-azure-cli.md
Previously updated : 08/18/2022 Last updated : 07/20/2023
data-factory Quickstart Create Data Factory Dot Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-dot-net.md
ms.devlang: csharp Previously updated : 08/18/2022 Last updated : 07/20/2023
data-factory Quickstart Create Data Factory Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-powershell.md
ms.devlang: powershell Previously updated : 08/18/2022 Last updated : 07/20/2023
data-factory Quickstart Create Data Factory Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-python.md
ms.devlang: python Previously updated : 08/18/2022 Last updated : 07/20/2023
data-factory Quickstart Create Data Factory Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-rest-api.md
ms.devlang: rest-api Previously updated : 08/18/2022 Last updated : 07/20/2023
data-factory Quota Increase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quota-increase.md
description: How to create a support request in the Azure portal for Azure Data
Previously updated : 08/18/2022 Last updated : 07/20/2023
data-factory Samples Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/samples-powershell.md
Previously updated : 08/18/2022 Last updated : 07/20/2023 # Azure PowerShell samples for Azure Data Factory
data-factory Sap Change Data Capture Debug Shir Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-debug-shir-logs.md
Previously updated : 08/18/2022 Last updated : 07/20/2023
data-factory Sap Change Data Capture Introduction Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-introduction-architecture.md
Previously updated : 08/18/2022 Last updated : 07/20/2023
data-factory Sap Change Data Capture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-management.md
Previously updated : 08/18/2022 Last updated : 07/20/2023
data-factory Sap Change Data Capture Prepare Linked Service Source Dataset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-prepare-linked-service-source-dataset.md
Previously updated : 08/18/2022 Last updated : 07/20/2023
data-factory Sap Change Data Capture Shir Preparation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-shir-preparation.md
Previously updated : 08/18/2022 Last updated : 07/20/2023
data-factory Scenario Dataflow Process Data Aml Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/scenario-dataflow-process-data-aml-models.md
Previously updated : 08/18/2022 Last updated : 07/20/2023 ms.co-
data-factory Scenario Ssis Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/scenario-ssis-migration-overview.md
Previously updated : 08/18/2022 Last updated : 07/20/2023 # Migrate on-premises SSIS workloads to SSIS in ADF or Synapse Pipelines
data-factory Scenario Ssis Migration Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/scenario-ssis-migration-rules.md
Previously updated : 08/18/2022 Last updated : 07/20/2023 # SSIS migration assessment rules
data-factory Self Hosted Integration Runtime Automation Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/self-hosted-integration-runtime-automation-scripts.md
Previously updated : 08/18/2022 Last updated : 07/20/2023 # Automating self-hosted integration runtime installation using local PowerShell scripts
data-factory Self Hosted Integration Runtime Diagnostic Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/self-hosted-integration-runtime-diagnostic-tool.md
Previously updated : 08/18/2022 Last updated : 07/20/2023 # Diagnostic tool for self-hosted integration runtime
data-factory Solution Template Bulk Copy From Files To Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-bulk-copy-from-files-to-database.md
Previously updated : 08/18/2022 Last updated : 07/20/2023 # Bulk copy from files to database
data-lake-analytics Data Lake Analytics Add Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-add-users.md
Last updated 01/20/2023
## Start the Add User Wizard
-1. Open your Azure Data Lake Analytics via https://portal.azure.com.
+1. Open your Azure Data Lake Analytics via the [Azure portal](https://portal.azure.com).
2. Select **Add User Wizard**. 3. In the **Select user** step, find the user you want to add. Select **Select**. 4. In the **Select role** step, pick **Data Lake Analytics Developer**. This role has the minimum set of permissions required to submit/monitor/manage U-SQL jobs. Assign this role if the group isn't intended for managing Azure services.
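If you prefer to script the same assignment instead of using the wizard, here's a minimal Azure CLI sketch. It assumes the built-in *Data Lake Analytics Developer* role named above; the user, subscription, resource group, and account names are placeholders.

```azurecli
# Hypothetical sketch: assign the Data Lake Analytics Developer role scoped to a
# specific Data Lake Analytics account. All placeholder values are illustrative.
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Data Lake Analytics Developer" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DataLakeAnalytics/accounts/<account-name>"
```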
ddos-protection Manage Ddos Ip Protection Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-ip-protection-portal.md
In this quickstart, you'll enable DDoS IP protection and link it to a public IP
## Prerequisites - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- Sign in to the Azure portal at https://portal.azure.com.
+- Sign in to the [Azure portal](https://portal.azure.com).
## Enable DDoS IP Protection on a public IP address
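As an alternative to the portal flow in this quickstart, the following is a hedged CLI sketch. It assumes a recent Azure CLI version where `az network public-ip update` exposes a `--ddos-protection-mode` parameter, and uses placeholder resource names.

```azurecli
# Hypothetical sketch: enable DDoS IP Protection on an existing Standard SKU public IP.
# Parameter availability depends on your Azure CLI version; names are placeholders.
az network public-ip update \
  --resource-group MyResourceGroup \
  --name myStandardPublicIP \
  --ddos-protection-mode Enabled
```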
ddos-protection Manage Ddos Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection.md
In this quickstart, you'll create a DDoS protection plan and link it to a virtua
## Prerequisites - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- Sign in to the Azure portal at https://portal.azure.com. Ensure that your account is assigned to the [network contributor](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json#network-contributor) role or to a [custom role](../role-based-access-control/custom-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json) that is assigned the appropriate actions listed in the how-to guide on [Permissions](manage-permissions.md).
+- Sign in to the [Azure portal](https://portal.azure.com). Ensure that your account is assigned to the [network contributor](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json#network-contributor) role or to a [custom role](../role-based-access-control/custom-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json) that is assigned the appropriate actions listed in the how-to guide on [Permissions](manage-permissions.md).
## Create a DDoS protection plan
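For readers who script their environments, here is a minimal sketch of the same steps with the Azure CLI, assuming placeholder resource names and that the new plan is then linked to an existing virtual network.

```azurecli
# Hypothetical sketch: create a DDoS protection plan, then link it to an existing VNet.
# Resource names are placeholders.
az network ddos-protection create \
  --resource-group MyResourceGroup \
  --name MyDdosProtectionPlan

az network vnet update \
  --resource-group MyResourceGroup \
  --name MyVNet \
  --ddos-protection true \
  --ddos-protection-plan MyDdosProtectionPlan
```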
ddos-protection Test Through Simulations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/test-through-simulations.md
It should now appear like this:
### Monitor and validate
-1. Log in to https://portal.azure.com and go to your subscription.
+1. Log in to the [Azure portal](https://portal.azure.com) and go to your subscription.
1. Select the Public IP address you tested the attack on. 1. Under **Monitoring**, select **Metrics**. 1. For **Metric**, select _Under DDoS attack or not_.
defender-for-cloud Concept Agentless Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-agentless-containers.md
Container vulnerability assessment powered by MDVM (Microsoft Defender Vulnerabi
- **Scanning OS packages** - container vulnerability assessment has the ability to scan vulnerabilities in packages installed by the OS package manager in Linux. See the [full list of the supported OS and their versions](support-agentless-containers-posture.md#registries-and-images). - **Language specific packages** – support for language specific packages and files, and their dependencies installed or copied without the OS package manager. See the [complete list of supported languages](support-agentless-containers-posture.md#registries-and-images). -- **Image scanning in Azure Private Link** - Azure container vulnerability assessment provides the ability to scan images in container registries that are accessible via Azure Private Links. This capability requires access to trusted services and authentication with the registry. Learn how to [connect privately to an Azure container registry using Azure Private Link](/azure/container-registry/container-registry-private-link#set-up-private-endpointportal-recommended).
+- **Image scanning with Azure Private Link** - Azure container vulnerability assessment provides the ability to scan images in container registries that are accessible via Azure Private Links. This capability requires access to trusted services and authentication with the registry. Learn how to [allow access by trusted services](/azure/container-registry/container-registry-private-link#container-registry/allow-access-trusted-services).
- **Exploitability information** - Each vulnerability report is searched through exploitability databases to assist our customers with determining actual risk associated with each reported vulnerability. - **Reporting** - Defender for Containers powered by Microsoft Defender Vulnerability Management (MDVM) reports the vulnerabilities as the following recommendations:
defender-for-cloud Concept Agentless Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-agentless-data-collection.md
Title: Learn about agentless scanning
+ Title: Learn about agentless scanning for VMs
description: Learn how Defender for Cloud can gather information about your multicloud compute resources without installing an agent on your machines.
defender-for-cloud Integration Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md
description: Learn about deploying Microsoft Defender for Endpoint from Microsof
Previously updated : 06/14/2023 Last updated : 07/20/2023 # Protect your endpoints with Defender for Cloud's integrated EDR solution: Microsoft Defender for Endpoint
Use the [Defender for Endpoint status workbook](https://aka.ms/MDEStatus) to ver
Use our [PowerShell script](https://github.com/Azure/Microsoft-Defender-for-Cloud/tree/main/Powershell%20scripts/Enable%20MDE%20Integration%20for%20Linux) from the Defender for Cloud GitHub repository to enable endpoint protection on Linux machines that are in multiple subscriptions.
+##### Manage automatic updates configuration for Linux
+
+In Windows, Defender for Endpoint version updates are provided via continuous knowledge base updates; in Linux you need to update the Defender for Endpoint package. When you use Defender for Servers with the `MDE.Linux` extension, automatic updates for Microsoft Defender for Endpoint are enabled by default. If you wish to manage the Defender for Endpoint version updates manually, you can disable automatic updates on your machines. To do so, add the following tag for machines onboarded with the `MDE.Linux` extension.
+
+- Tag name: 'ExcludeMdeAutoUpdate'
+- Tag value: 'true'
+
+This configuration is supported for Azure VMs and Azure Arc machines, where the `MDE.Linux` extension initiates auto-update.
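A minimal sketch of applying that tag to an Azure VM with the Azure CLI follows, assuming placeholder resource names; Azure Arc machines can be tagged the same way on their connected machine resource.

```azurecli
# Hypothetical sketch: add the ExcludeMdeAutoUpdate tag to an Azure VM so the
# MDE.Linux extension stops auto-updating Defender for Endpoint on that machine.
az vm update \
  --resource-group MyResourceGroup \
  --name myLinuxVm \
  --set tags.ExcludeMdeAutoUpdate=true
```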
+ ### Enable the MDE unified solution at scale You can also enable the MDE unified solution at scale through the supplied REST API version 2022-05-01. For full details, see the [API documentation](/rest/api/defenderforcloud/settings/update?tabs=HTTP).
defender-for-cloud Overview Page https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/overview-page.md
Title: Main overview page description: Learn about the features of the Defender for Cloud overview page Previously updated : 03/08/2023 Last updated : 07/20/2023
Microsoft Defender for Cloud's overview page is an interactive dashboard that pr
You can select any element on the page to get more detailed information. ## Features of the overview page
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes description: This page is updated frequently with the latest updates in Defender for Cloud. Previously updated : 07/18/2023 Last updated : 07/20/2023 # What's new in Microsoft Defender for Cloud?
Updates in July include:
|Date |Update | |||
+| July 20 | [Management of automatic updates to Defender for Endpoint for Linux](#management-of-automatic-updates-to-defender-for-endpoint-for-linux) |
| July 18 | [Agentless secret scanning for virtual machines in Defender for servers P2 & DCSPM](#agentless-secret-scanning-for-virtual-machines-in-defender-for-servers-p2--dcspm) | | July 12 | [New Security alert in Defender for Servers plan 2: Detecting Potential Attacks leveraging Azure VM GPU driver extensions](#new-security-alert-in-defender-for-servers-plan-2-detecting-potential-attacks-leveraging-azure-vm-gpu-driver-extensions) | July 9 | [Support for disabling specific vulnerability findings](#support-for-disabling-specific-vulnerability-findings) | July 1 | [Data Aware Security Posture is now Generally Available](#data-aware-security-posture-is-now-generally-available) |
+### Management of automatic updates to Defender for Endpoint for Linux
+
+July 20, 2023
+
+By default, Defender for Cloud attempts to update your Defender for Endpoint for Linux agents onboarded with the `MDE.Linux` extension. With this release, you can manage this setting and opt-out from the default configuration to manage your update cycles manually.
+
+Learn how to [manage automatic updates configuration for Linux](integration-defender-for-endpoint.md#manage-automatic-updates-configuration-for-linux).
+ ### Agentless secret scanning for virtual machines in Defender for servers P2 & DCSPM July 18, 2023
defender-for-cloud Sql Azure Vulnerability Assessment Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/sql-azure-vulnerability-assessment-overview.md
Configuration modes benefits and limitations comparison:
| Single rule scan result size | Maximum of 1 MB | Unlimited | | Email notifications | • Logic Apps | • Internal scheduler<br>• Logic Apps | | Scan export | Azure Resource Graph | Excel format, Azure Resource Graph |
+| Supported Clouds | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure China 21Vianet |
## Next steps
devtest-labs Activity Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/activity-logs.md
Last updated 07/10/2020+ # View activity logs for labs in Azure DevTest Labs
devtest-labs Add Artifact Repository https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/add-artifact-repository.md
Last updated 01/11/2022-+ # Add an artifact repository to a lab
devtest-labs Add Vm Use Shared Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/add-vm-use-shared-image.md
Last updated 06/26/2020+ # Add a VM using an image from the attached shared image gallery
devtest-labs Automate Add Lab User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/automate-add-lab-user.md
Last updated 06/26/2020 -+ # Automate adding a lab user to a lab in Azure DevTest Labs
devtest-labs Best Practices Distributive Collaborative Development Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/best-practices-distributive-collaborative-development-environment.md
Last updated 06/26/2020+ # Best practices for distributed and collaborative development of Azure DevTest Labs resources
devtest-labs Configure Lab Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/configure-lab-identity.md
Last updated 08/20/2020+ # Configure a lab identity
devtest-labs Configure Shared Image Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/configure-shared-image-gallery.md
Title: Configure a shared image gallery description: Learn how to configure a shared image gallery in Azure DevTest Labs, which enables users to access images from a shared location while creating lab resources. -+ Last updated 06/26/2020
devtest-labs Connect Environment Lab Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/connect-environment-lab-virtual-network.md
Last updated 06/26/2020+ # Connect an environment to your lab's virtual network in Azure DevTest Labs
devtest-labs Connect Linux Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/connect-linux-virtual-machine.md
Last updated 07/17/2020+ # Connect to a Linux VM in your lab (Azure DevTest Labs)
devtest-labs Connect Windows Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/connect-windows-virtual-machine.md
Last updated 07/17/2020+ # Connect to a Windows VM in your lab (Azure DevTest Labs)
devtest-labs Create Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/create-alerts.md
Last updated 07/10/2020+ # Create activity log alerts for labs in Azure DevTest Labs
devtest-labs Create Environment Service Fabric Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/create-environment-service-fabric-cluster.md
Last updated 06/26/2020+ # Create an environment with self-contained Service Fabric cluster in Azure DevTest Labs
devtest-labs Create Lab Windows Vm Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/create-lab-windows-vm-bicep.md
description: Use Bicep to create a lab that has a virtual machine in Azure DevTe
-+ Last updated 03/22/2022
devtest-labs Create Lab Windows Vm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/create-lab-windows-vm-template.md
description: Use an Azure Resource Manager (ARM) template to create a lab that h
-+ Last updated 01/03/2022
devtest-labs Deliver Proof Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/deliver-proof-concept.md
Last updated 03/22/2022+ # Deliver a proof of concept for Azure DevTest Labs enterprise deployment
devtest-labs Deploy Nested Template Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/deploy-nested-template-environments.md
Title: Deploy nested ARM template environments description: Learn how to nest Azure Resource Manager (ARM) templates to deploy Azure DevTest Labs environments. -+ Last updated 01/26/2022
devtest-labs Devtest Lab Add Claimable Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-add-claimable-vm.md
Last updated 12/21/2022+ # Create and manage claimable VMs in Azure DevTest Labs
devtest-labs Devtest Lab Add Devtest User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-add-devtest-user.md
Last updated 01/26/2022-+ # Add lab owners, contributors, and users in Azure DevTest Labs
devtest-labs Devtest Lab Add Tag https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-add-tag.md
Last updated 06/26/2020+ # Add tags to a lab in Azure DevTest Labs
devtest-labs Devtest Lab Announcements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-announcements.md
Title: Post an announcement to a lab description: Learn how to post a custom announcement in an existing lab to notify users about recent changes or additions to the lab in Azure DevTest Labs. -+ Last updated 06/26/2020
devtest-labs Devtest Lab Artifact Author https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-artifact-author.md
Last updated 01/11/2022+ # Create custom artifacts for DevTest Labs
devtest-labs Devtest Lab Comparing Vm Base Image Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-comparing-vm-base-image-types.md
Last updated 08/26/2021+ # Compare custom images and formulas in DevTest Labs
devtest-labs Devtest Lab Configure Cost Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-configure-cost-management.md
Last updated 06/26/2020+ # Track costs associated with a lab in Azure DevTest Labs
devtest-labs Devtest Lab Configure Marketplace Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-configure-marketplace-images.md
Last updated 06/26/2020+ # Configure Azure Marketplace image settings in Azure DevTest Labs
devtest-labs Devtest Lab Configure Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-configure-vnet.md
Last updated 02/15/2022+ # Add a virtual network in Azure DevTest Labs
devtest-labs Devtest Lab Create Custom Image From Vhd Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-create-custom-image-from-vhd-using-powershell.md
Last updated 12/28/2022-+ # Create a custom image from a VHD file with PowerShell
devtest-labs Devtest Lab Create Custom Image From Vm Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-create-custom-image-from-vm-using-portal.md
Last updated 02/15/2022+ # Create a custom image from a VM
devtest-labs Devtest Lab Create Environment From Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-create-environment-from-arm.md
Last updated 12/21/2022-+ # Create Azure DevTest Labs environments from ARM templates
devtest-labs Devtest Lab Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-create-template.md
Last updated 01/04/2022+ # Create a custom image for Azure DevTest Labs virtual machines from VHD files
devtest-labs Devtest Lab Delete Lab Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-delete-lab-vm.md
Last updated 03/14/2022+ # Delete labs or lab VMs in Azure DevTest Labs
devtest-labs Devtest Lab Dev Ops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-dev-ops.md
Last updated 12/28/2022+ # Integrate DevTest Labs and DevOps CI/CD pipelines
devtest-labs Devtest Lab Enable Licensed Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-enable-licensed-images.md
Last updated 06/26/2020+ # Enable a licensed image in your lab in Azure DevTest Labs
devtest-labs Devtest Lab Grant User Permissions To Specific Lab Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-grant-user-permissions-to-specific-lab-policies.md
Last updated 06/26/2020 -+ # Grant user permissions to specific lab policies
devtest-labs Devtest Lab Guidance Governance Application Migration Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-guidance-governance-application-migration-integration.md
Last updated 06/26/2020 + # Governance of Azure DevTest Labs infrastructure - Application migration and integration
devtest-labs Devtest Lab Guidance Governance Cost Ownership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-guidance-governance-cost-ownership.md
Last updated 06/26/2020 + # Governance of Azure DevTest Labs infrastructure - Manage cost and ownership
devtest-labs Devtest Lab Guidance Governance Policy Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-guidance-governance-policy-compliance.md
Last updated 06/26/2020 + # Governance of Azure DevTest Labs infrastructure - Company policy and compliance
devtest-labs Devtest Lab Guidance Governance Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-guidance-governance-resources.md
Last updated 06/26/2020 + # Governance of Azure DevTest Labs infrastructure - Resources
devtest-labs Devtest Lab Guidance Orchestrate Implementation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-guidance-orchestrate-implementation.md
Last updated 06/26/2020 + # Orchestrate the implementation of Azure DevTest Labs
devtest-labs Devtest Lab Guidance Prescriptive Adoption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-guidance-prescriptive-adoption.md
Last updated 06/26/2020 + # DevTest Labs in the enterprise
devtest-labs Devtest Lab Guidance Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-guidance-scale.md
Last updated 06/26/2020 + # Scale up your Azure DevTest Labs infrastructure
devtest-labs Devtest Lab Integrate Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-integrate-ci-cd.md
Title: Integrate Azure DevTest Labs into Azure Pipelines description: Learn how to integrate Azure DevTest Labs into Azure Pipelines continuous integration and delivery (CI/CD) pipelines. -+ Last updated 12/28/2021
devtest-labs Devtest Lab Internal Support Message https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-internal-support-message.md
Last updated 06/26/2020+ # Add an internal support statement to a lab in Azure DevTest Labs
devtest-labs Devtest Lab Manage Formulas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-manage-formulas.md
Last updated 06/26/2020+ # Manage Azure DevTest Labs formulas
devtest-labs Devtest Lab Mandatory Artifacts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-mandatory-artifacts.md
Last updated 01/12/2022+ # Specify mandatory artifacts for DevTest Labs VMs
devtest-labs Devtest Lab Redeploy Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-redeploy-vm.md
Last updated 06/26/2020+ # Redeploy a VM in a lab in Azure DevTest Labs
devtest-labs Devtest Lab Reference Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-reference-architecture.md
Last updated 03/14/2022 + # DevTest Labs enterprise reference architecture
devtest-labs Devtest Lab Restart Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-restart-vm.md
Last updated 06/26/2020+ # Restart a VM in a lab in Azure DevTest Labs
devtest-labs Devtest Lab Scale Lab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-scale-lab.md
Last updated 06/26/2020+ # Scale quotas and limits in DevTest Labs
devtest-labs Devtest Lab Set Lab Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-set-lab-policy.md
Last updated 02/14/2022+ # Manage lab policies to control costs in Azure DevTest Labs
devtest-labs Devtest Lab Shared Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-shared-ip.md
Last updated 11/08/2021+ # Understand shared IP addresses in Azure DevTest Labs
devtest-labs Devtest Lab Store Secrets In Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-store-secrets-in-key-vault.md
Title: Store secrets in a key vault description: Learn how to store secrets in an Azure Key Vault and use them while creating a VM, formula, or an environment. -+ Last updated 06/26/2020
devtest-labs Devtest Lab Upload Vhd Using Azcopy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-upload-vhd-using-azcopy.md
Last updated 12/22/2022+ # Upload a VHD file to a lab storage account by using AzCopy
devtest-labs Devtest Lab Upload Vhd Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-upload-vhd-using-powershell.md
Last updated 12/22/2022+ # Upload a VHD file to a lab storage account by using PowerShell
devtest-labs Devtest Lab Upload Vhd Using Storage Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-upload-vhd-using-storage-explorer.md
Last updated 12/23/2022+ # Upload a VHD file to a lab storage account by using Storage Explorer
devtest-labs Devtest Lab Use Arm And Powershell For Lab Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-use-arm-and-powershell-for-lab-resources.md
Title: Create and deploy labs with Azure Resource Manager (ARM) templates description: Learn how Azure DevTest Labs uses Azure Resource Manager (ARM) templates to create and configure lab virtual machines (VMs) and environments. -+ Last updated 01/11/2022
devtest-labs Devtest Lab Use Claim Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-use-claim-capabilities.md
Last updated 06/26/2020+ # Use claim capabilities in Azure DevTest Labs
devtest-labs Devtest Lab Vm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-vm-powershell.md
Last updated 03/17/2022-+ # Create DevTest Labs VMs by using Azure PowerShell
devtest-labs Devtest Lab Vmcli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-vmcli.md
Title: Create and manage virtual machines in Azure DevTest Labs with Azure CLI description: Learn how to use Azure DevTest Labs to create and manage virtual machines with Azure CLI -+ Last updated 06/26/2020
devtest-labs Enable Browser Connection Lab Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/enable-browser-connection-lab-virtual-machines.md
Last updated 12/20/2022+ # Enable browser connection to DevTest Labs VMs with Azure Bastion
devtest-labs Enable Managed Identities Lab Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/enable-managed-identities-lab-vms.md
Last updated 06/26/2020+ # Enable user-assigned managed identities on lab virtual machines in Azure DevTest Labs
devtest-labs Encrypt Disks Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/encrypt-disks-customer-managed-keys.md
Last updated 09/29/2021-+ # Encrypt disks using customer-managed keys in Azure DevTest Labs
devtest-labs Encrypt Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/encrypt-storage.md
Last updated 03/15/2022+ # Manage Azure DevTest Labs storage accounts
devtest-labs Environment Security Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/environment-security-alerts.md
Last updated 06/26/2020+ # Security alerts for environments in Azure DevTest Labs
devtest-labs Extend Devtest Labs Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/extend-devtest-labs-azure-functions.md
Last updated 06/26/2020+ # Use Azure Functions to extend DevTest Labs
devtest-labs How To Move Labs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/how-to-move-labs.md
Title: Move DevTest Labs to another region description: Shows you how to move a lab to another region. -+ Last updated 03/03/2022
devtest-labs How To Move Schedule To New Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/how-to-move-schedule-to-new-region.md
Last updated 05/09/2022+ # Move a schedule to another region
devtest-labs Image Factory Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/image-factory-create.md
Last updated 06/26/2020+ # Create a custom image factory in Azure DevTest Labs
devtest-labs Image Factory Save Distribute Custom Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/image-factory-save-distribute-custom-images.md
Last updated 06/26/2020+ # Save custom images and distribute to multiple labs
devtest-labs Image Factory Set Retention Policy Cleanup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/image-factory-set-retention-policy-cleanup.md
Last updated 06/26/2020+ # Set up retention policy in Azure DevTest Labs
devtest-labs Image Factory Set Up Devops Lab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/image-factory-set-up-devops-lab.md
Last updated 06/26/2020+ # Run an image factory from Azure DevOps
devtest-labs Import Virtual Machines From Another Lab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/import-virtual-machines-from-another-lab.md
Last updated 11/08/2021+ # Import virtual machines from one lab to another
devtest-labs Integrate Environments Devops Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/integrate-environments-devops-pipeline.md
Last updated 11/17/2021+ # Integrate DevTest Labs environments into Azure Pipelines
devtest-labs Lab Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/lab-services-overview.md
Last updated 11/15/2021+ # Compare Azure DevTest Labs and Azure Lab Services
devtest-labs Personal Data Delete Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/personal-data-delete-export.md
Last updated 06/26/2020+ # Export or delete personal data from Azure DevTest Labs
devtest-labs Create Lab Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/quickstarts/create-lab-rest.md
Last updated 10/27/2021-+ #Customer intent: As an administrator, I want to set up a lab so that my developers have a test environment.
devtest-labs Report Usage Across Multiple Labs Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/report-usage-across-multiple-labs-subscriptions.md
Title: Azure DevTest Labs usage across multiple labs and subscriptions description: Learn how to report Azure DevTest Labs usage across multiple labs and subscriptions. -+ Last updated 06/26/2020
devtest-labs Resource Group Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/resource-group-control.md
Title: Specify resource group for Azure VMs in DevTest Labs description: Learn how to specify a resource group for VMs in a lab in Azure DevTest Labs. -+ Last updated 10/18/2021
devtest-labs Samples Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/samples-cli.md
Title: Azure CLI Samples description: Learn about Azure CLI scripts. With these samples, you can create a virtual machine and then start, stop, and delete it in Azure DevTest Labs. -+ Last updated 02/02/2022
devtest-labs Samples Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/samples-powershell.md
Title: Azure PowerShell Samples description: Learn about Azure PowerShell scripts. These samples help you manage labs in Azure Lab Services. -+ Last updated 02/02/2022
devtest-labs Start Machines Use Automation Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/start-machines-use-automation-runbooks.md
Last updated 03/17/2022-+ # Define the startup order for DevTest Lab VMs with Azure Automation
devtest-labs Test App Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/test-app-azure.md
Last updated 12/22/2022-+ # Publish app for testing on an Azure DevTest Labs VM
devtest-labs Troubleshoot Vm Environment Creation Failures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/troubleshoot-vm-environment-creation-failures.md
Last updated 06/26/2020+ # Troubleshoot virtual machine (VM) and environment creation failures in Azure DevTest Labs
devtest-labs Use Devtest Labs Build Release Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/use-devtest-labs-build-release-pipelines.md
Last updated 06/26/2020+ # Use DevTest Labs in Azure Pipelines build and release pipelines
devtest-labs Use Managed Identities Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/use-managed-identities-environments.md
Last updated 06/26/2020+ # Use Azure managed identities to deploy environments in a lab
devtest-labs Use Paas Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/use-paas-services.md
Last updated 03/22/2022+ # Use PaaS services in Azure DevTest Labs
digital-twins How To Use 3D Scenes Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-3d-scenes-studio.md
description: Learn how to use all the features of 3D Scenes Studio (preview) for Azure Digital Twins. Previously updated : 02/27/2023 Last updated : 07/19/2023
To use 3D Scenes Studio, you'll need the following resources:
* A private container in the storage account. For instructions, see [Create a container](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container). * Take note of the *name* of your storage container to use later. * *Storage Blob Data Owner* or *Storage Blob Data Contributor* access to your storage resources. You can grant required roles at either the storage account level or the container level. For instructions and more information about permissions to Azure storage, see [Assign an Azure role](../storage/blobs/assign-azure-role-data-access.md?tabs=portal#assign-an-azure-role).
+* Configure CORS for your storage account (see details in the following sub-section).
-You should also configure [CORS](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) for your storage account, so that 3D Scenes Studio will be able to access your storage container. You can use the following [Azure CLI](/cli/azure/what-is-azure-cli) command to set the minimum required methods, origins, and headers. The command contains one placeholder for the name of your storage account.
+### Configure CORS
+
+You'll need to configure [CORS](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) for your storage account, so that 3D Scenes Studio will be able to access your storage container.
+
+These CORS headers are always required:
+* Authorization
+* x-ms-version
+* x-ms-blob-type
+
+These additional CORS headers are required if you're planning on using private links functionality:
+* Content-Type
+* Content-Length
+* x-ms-copy-source
+* x-ms-requires-sync
+
+Below is the [Azure CLI](/cli/azure/what-is-azure-cli) command that will set the methods, origins, and headers listed above for CORS in your storage account. The command contains one placeholder for the name of your storage account.
```azurecli
-az storage cors add --services b --methods GET OPTIONS POST PUT --origins https://explorer.digitaltwins.azure.net --allowed-headers Authorization x-ms-version x-ms-blob-type --account-name <your-storage-account>
+az storage cors add --services b --methods GET OPTIONS POST PUT --origins https://explorer.digitaltwins.azure.net --allowed-headers Authorization Content-Type Content-Length x-ms-version x-ms-blob-type x-ms-copy-source x-ms-requires-sync --account-name <your-storage-account>
``` Now you have all the necessary resources to work with scenes in 3D Scenes Studio.
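If you want to confirm that the rule was applied, you can list the CORS rules currently set on the blob service. This is a hedged usage example with a placeholder account name.

```azurecli
# Verify the CORS rules on the blob service of your storage account.
az storage cors list --services b --account-name <your-storage-account>
```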
dns Dns Delegate Domain Azure Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-delegate-domain-azure-dns.md
If you don't have an Azure subscription, create a [free account](https://azure
## Sign in to Azure
-Sign in to the Azure portal at https://portal.azure.com.
+Sign in to the [Azure portal](https://portal.azure.com).
## Create a DNS zone
dns Dns Web Sites Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-web-sites-custom-domain.md
If you don't have an Azure subscription, create a [free account](https://azure
## Sign in to Azure
-Sign in to the Azure portal at https://portal.azure.com.
+Sign in to the [Azure portal](https://portal.azure.com).
## Create the A record
dns Tutorial Alias Pip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/tutorial-alias-pip.md
If you don't have an Azure subscription, create a [free account](https://azure
## Sign in to Azure
-Sign in to the Azure portal at https://portal.azure.com.
+Sign in to the [Azure portal](https://portal.azure.com).
## Create the network infrastructure
dns Tutorial Alias Rr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/tutorial-alias-rr.md
If you don't have an Azure subscription, create a [free account](https://azure
## Sign in to Azure
-Sign in to the Azure portal at https://portal.azure.com.
+Sign in to the [Azure portal](https://portal.azure.com).
## Create an alias record
dns Tutorial Alias Tm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/tutorial-alias-tm.md
If you don't have an Azure subscription, create a [free account](https://azure
## Sign in to Azure
-Sign in to the Azure portal at https://portal.azure.com.
+Sign in to the [Azure portal](https://portal.azure.com).
## Create the network infrastructure
dns Tutorial Public Dns Zones Child https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/tutorial-public-dns-zones-child.md
There are two ways you can create your child DNS zone:
## Sign in to Azure
-Sign in to the Azure portal at https://portal.azure.com.
+Sign in to the [Azure portal](https://portal.azure.com).
## Create a child DNS zone via parent DNS zone Overview page
education-hub Find Ids https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/find-ids.md
You must have an Azure account linked with education hub.
## Sign in to Azure
-* Sign in to the Azure portal at https://portal.azure.com.
+* Sign in to the [Azure portal](https://portal.azure.com).
## Navigate to Cost Management + Billing
This section will show you how to get your Invoice Section ID.
- [Manage your Academic Grant using the Overview page](hub-overview-page.md) -- [Support options](educator-service-desk.md)
+- [Support options](educator-service-desk.md)
expressroute Expressroute Howto Erdirect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-erdirect.md
Once enrolled, verify that the **Microsoft.Network** resource provider is regist
3. List all locations where ExpressRoute Direct is supported. ```powershell
- Get-AzExpressRoutePortsLocation
+ Get-AzExpressRoutePortsLocation | format-list
``` **Example output**
Once enrolled, verify that the **Microsoft.Network** resource provider is regist
4. Determine if a location listed above has available bandwidth ```powershell
- Get-AzExpressRoutePortsLocation -LocationName "Equinix-San-Jose-SV1"
+ Get-AzExpressRoutePortsLocation -LocationName "Equinix-San-Jose-SV1" | format-list
``` **Example output**
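If you work from the Azure CLI instead of PowerShell, the same lookups are roughly the following (this is a sketch assuming the `az network express-route port location` command group; check `--help` for exact parameter names):
```azurecli
# List all peering locations that support ExpressRoute Direct
az network express-route port location list --output table

# Show available bandwidth at a specific location
az network express-route port location show --location "Equinix-San-Jose-SV1"
```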
firewall-manager Deploy Trusted Security Partner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/deploy-trusted-security-partner.md
Integrated third-party Security as a service (SECaaS) partners are now available
Skip this section if you are deploying a third-party provider into an existing hub.
-1. Sign in to the Azure portal at https://portal.azure.com.
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. In **Search**, type **Firewall Manager** and select it under **Services**. 3. Navigate to **Getting Started**. Select **View secured virtual hubs**. 4. Select **Create new secured virtual hub**.
firewall Premium Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-certificates.md
Ensure your CA certificate complies with the following requirements:
- The `CA` flag must be set to TRUE. - The Path Length must be greater than or equal to one.
+- It must be exportable.
## Azure Key Vault
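For context, the CA certificate is stored in an Azure Key Vault that the firewall policy references. A minimal Azure CLI sketch of the upload step, assuming an exportable PFX file and placeholder vault and certificate names:
```azurecli
# Import the exportable intermediate CA certificate (with private key) into Key Vault
az keyvault certificate import \
  --vault-name <your-key-vault> \
  --name <ca-cert-name> \
  --file ./intermediate-ca.pfx \
  --password <pfx-password>
```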
frontdoor Create Front Door Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-portal.md
In this example, we create two Web App instances that are deployed in two differ
Use the following steps to create two Web Apps used in this example.
-1. Sign in to the Azure portal at https://portal.azure.com.
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. On the top left-hand side of the portal, select **+ Create a resource**. Then search for **Web App**. Select **Create** to begin configuring the first Web App.
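If you prefer to script the two sample apps rather than use the portal, a minimal Azure CLI sketch follows (resource names, region, and SKU are placeholders; repeat the plan and app creation in a second region for the second instance):
```azurecli
# Resource group for the first Web App
az group create --name <resource-group> --location centralus

# App Service plan and the Web App itself
az appservice plan create --name <plan-name> --resource-group <resource-group> --sku S1

az webapp create --name <globally-unique-app-name> --resource-group <resource-group> --plan <plan-name>
```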
frontdoor Quickstart Create Front Door https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/quickstart-create-front-door.md
This quickstart requires two instances of a web application that run in differen
If you don't already have a web app, use the following steps to set up example web apps.
-1. Sign in to the Azure portal at https://portal.azure.com.
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. On the top left-hand side of the screen, select **Create a resource** > **Web App**.
governance Pciv3_2_1_2018_Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/PCIv3_2_1_2018_audit.md
- Title: Regulatory Compliance details for PCI v3.2.1 2018 PCI DSS 3.2.1
-description: Details of the PCI v3.2.1 2018 PCI DSS 3.2.1 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated : 10/18/2022---
-# Details of the PCI v3.2.1 2018 PCI DSS 3.2.1 Regulatory Compliance built-in initiative
-
-The following article details how the Azure Policy Regulatory Compliance built-in initiative
-definition maps to **compliance domains** and **controls** in PCI v3.2.1 2018 PCI DSS 3.2.1.
-For more information about this compliance standard, see
-[PCI v3.2.1 2018 PCI DSS 3.2.1](https://www.pcisecuritystandards.org/documents/PCI_DSS-QRG-v3_2_1.pdf). To understand
-_Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and
-[Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md).
-
-The following mappings are to the **PCI v3.2.1 2018 PCI DSS 3.2.1** controls. Many of the controls
-are implemented with an [Azure Policy](../overview.md) initiative definition. To review the complete
-initiative definition, open **Policy** in the Azure portal and select the **Definitions** page.
-Then, find and select the **PCI v3.2.1:2018** Regulatory Compliance built-in
-initiative definition.
-
-> [!IMPORTANT]
-> Each control below is associated with one or more [Azure Policy](../overview.md) definitions.
-> These policies may help you [assess compliance](../how-to/get-compliance-data.md) with the
-> control; however, there often is not a one-to-one or complete match between a control and one or
-> more policies. As such, **Compliant** in Azure Policy refers only to the policy definitions
-> themselves; this doesn't ensure you're fully compliant with all requirements of a control. In
-> addition, the compliance standard includes controls that aren't addressed by any Azure Policy
-> definitions at this time. Therefore, compliance in Azure Policy is only a partial view of your
-> overall compliance status. The associations between compliance domains, controls, and Azure Policy
-> definitions for this compliance standard may change over time. To view the change history, see the
-> [GitHub Commit History](https://github.com/Azure/azure-policy/commits/master/built-in-policies/policySetDefinitions/Regulatory%20Compliance/PCIv3_2_1_2018_audit.json).
-
-## Requirement 1
-
-### PCI DSS requirement 1.3.2
-
-**ID**: PCI DSS v3.2.1 1.3.2
-**Ownership**: customer
-
-|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
-|||||
-|[All network ports should be restricted on network security groups associated to your virtual machine](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9daedab3-fb2d-461e-b861-71790eead4f6) |Azure Security Center has identified some of your network security groups' inbound rules to be too permissive. Inbound rules should not allow access from 'Any' or 'Internet' ranges. This can potentially enable attackers to target your resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnprotectedEndpoints_Audit.json) |
-|[Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) |
-
-### PCI DSS requirement 1.3.4
-
-**ID**: PCI DSS v3.2.1 1.3.4
-**Ownership**: customer
-
-|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
-|||||
-|[All network ports should be restricted on network security groups associated to your virtual machine](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9daedab3-fb2d-461e-b861-71790eead4f6) |Azure Security Center has identified some of your network security groups' inbound rules to be too permissive. Inbound rules should not allow access from 'Any' or 'Internet' ranges. This can potentially enable attackers to target your resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnprotectedEndpoints_Audit.json) |
-|[Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) |
-
-### PCI DSS requirement 1.3.4
-
-**ID**: PCI DSS v3.2.1 1.3.4
-**Ownership**: customer
-
-|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
-|||||
-|[Audit diagnostic setting](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types |AuditIfNotExists |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
-|[Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) |
-|[Storage accounts should be migrated to new Azure Resource Manager resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F37e0d2fe-28a5-43d6-a273-67d37d1f5606) |Use new Azure Resource Manager for your storage accounts to provide security enhancements such as: stronger access control (RBAC), better auditing, Azure Resource Manager based deployment and governance, access to managed identities, access to key vault for secrets, Azure AD-based authentication and support for tags and resource groups for easier security management |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Classic_AuditForClassicStorages_Audit.json) |
-|[Virtual machines should be migrated to new Azure Resource Manager resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d84d5fb-01f6-4d12-ba4f-4a26081d403d) |Use new Azure Resource Manager for your virtual machines to provide security enhancements such as: stronger access control (RBAC), better auditing, Azure Resource Manager based deployment and governance, access to managed identities, access to key vault for secrets, Azure AD-based authentication and support for tags and resource groups for easier security management |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/ClassicCompute_Audit.json) |
-
-## Requirement 10
-
-### PCI DSS requirement 10.5.4
-
-**ID**: PCI DSS v3.2.1 10.5.4
-**Ownership**: shared
-
-|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
-|||||
-|[Audit diagnostic setting](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types |AuditIfNotExists |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
-|[Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) |
-|[Storage accounts should be migrated to new Azure Resource Manager resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F37e0d2fe-28a5-43d6-a273-67d37d1f5606) |Use new Azure Resource Manager for your storage accounts to provide security enhancements such as: stronger access control (RBAC), better auditing, Azure Resource Manager based deployment and governance, access to managed identities, access to key vault for secrets, Azure AD-based authentication and support for tags and resource groups for easier security management |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Classic_AuditForClassicStorages_Audit.json) |
-|[Virtual machines should be migrated to new Azure Resource Manager resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d84d5fb-01f6-4d12-ba4f-4a26081d403d) |Use new Azure Resource Manager for your virtual machines to provide security enhancements such as: stronger access control (RBAC), better auditing, Azure Resource Manager based deployment and governance, access to managed identities, access to key vault for secrets, Azure AD-based authentication and support for tags and resource groups for easier security management |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/ClassicCompute_Audit.json) |
-
-## Requirement 11
-
-### PCI DSS requirement 11.2.1
-
-**ID**: PCI DSS v3.2.1 11.2.1
-**Ownership**: shared
-
-|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
-|||||
-|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) |
-|[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingEndpointProtection_Audit.json) |
-|[SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffeedbf84-6b99-488c-acc2-71c829aa5ffc) |Monitor vulnerability assessment scan results and recommendations for how to remediate database vulnerabilities. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbVulnerabilities_Audit.json) |
-|[System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86b3d65f-7626-441e-b690-81a8b71cff60) |Missing security system updates on your servers will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdates_Audit.json) |
-|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) |
-
-## Requirement 3
-
-### PCI DSS requirement 3.2
-
-**ID**: PCI DSS v3.2.1 3.2
-**Ownership**: customer
-
-|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
-|||||
-|[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) |
-|[Audit usage of custom RBAC rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributer, Reader' instead of custom RBAC roles, which are error prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) |
-|[External accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8456c1c-aa66-4dfb-861a-25d127b775c9) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWithOwnerPermissions_Audit.json) |
-|[External accounts with read permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f76cf89-fbf2-47fd-a3f4-b891fa780b60) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsReadPermissions_Audit.json) |
-|[External accounts with write permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5c607a2e-c700-4744-8254-d77e7c9eb5e4) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWritePermissions_Audit.json) |
-|[MFA should be enabled for accounts with write permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9297c21d-2ed6-4474-b48f-163f75654ce3) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForWritePermissions_Audit.json) |
-|[MFA should be enabled on accounts with owner permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa633080-8b72-40c4-a2d7-d00c03e80bed) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForOwnerPermissions_Audit.json) |
-
-### PCI DSS requirement 3.4
-
-**ID**: PCI DSS v3.2.1 3.4
-**Ownership**: customer
-
-|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
-|||||
-|[App Service apps should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4af4a39-4135-47fb-b175-47fbdf85311d) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled, Deny |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceWebapp_AuditHTTP_Audit.json) |
-|[Automation account variables should be encrypted](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3657f5a0-770e-44a3-b44e-9431ba1e9735) |It is important to enable encryption of Automation account variable assets when storing sensitive data |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Automation/Automation_AuditUnencryptedVars_Audit.json) |
-|[Function apps should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6d555dd1-86f2-4f1c-8ed7-5abae7c6cbab) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled, Deny |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceFunctionApp_AuditHTTP_Audit.json) |
-|[Only secure connections to your Azure Cache for Redis should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22bee202-a82f-4305-9a2a-6d7f44d4dedb) |Audit enabling of only connections via SSL to Azure Cache for Redis. Use of secure connections ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_AuditSSLPort_Audit.json) |
-|[Secure transfer to storage accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F404c3081-a854-4457-ae30-26a93ef643f9) |Audit requirement of Secure transfer in your storage account. Secure transfer is an option that forces your storage account to accept requests only from secure connections (HTTPS). Use of HTTPS ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_AuditForHTTPSEnabled_Audit.json) |
-|[Service Fabric clusters should have the ClusterProtectionLevel property set to EncryptAndSign](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F617c02be-7f02-4efd-8836-3180d47b6c68) |Service Fabric provides three levels of protection (None, Sign and EncryptAndSign) for node-to-node communication using a primary cluster certificate. Set the protection level to ensure that all node-to-node messages are encrypted and digitally signed |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Fabric/ServiceFabric_AuditClusterProtectionLevel_Audit.json) |
-|[Transparent Data Encryption on SQL databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F17k78e20-9358-41c9-923c-fb736d382a12) |Transparent data encryption should be enabled to protect data-at-rest and meet compliance requirements |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlDBEncryption_Audit.json) |
-|[Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys. Temp disks, data caches and data flowing between compute and storage aren't encrypted. Disregard this recommendation if: 1. using encryption-at-host, or 2. server-side encryption on Managed Disks meets your security requirements. Learn more in: Server-side encryption of Azure Disk Storage: [https://aka.ms/disksse,](https://aka.ms/disksse,) Different disk encryption offerings: [https://aka.ms/diskencryptioncomparison](https://aka.ms/diskencryptioncomparison) |AuditIfNotExists, Disabled |[2.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) |
-
-## Requirement 4
-
-### PCI DSS requirement 4.1
-
-**ID**: PCI DSS v3.2.1 4.1
-**Ownership**: customer
-
-|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
-|||||
-|[App Service apps should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4af4a39-4135-47fb-b175-47fbdf85311d) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled, Deny |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceWebapp_AuditHTTP_Audit.json) |
-|[Automation account variables should be encrypted](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3657f5a0-770e-44a3-b44e-9431ba1e9735) |It is important to enable encryption of Automation account variable assets when storing sensitive data |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Automation/Automation_AuditUnencryptedVars_Audit.json) |
-|[Function apps should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6d555dd1-86f2-4f1c-8ed7-5abae7c6cbab) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled, Deny |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceFunctionApp_AuditHTTP_Audit.json) |
-|[Only secure connections to your Azure Cache for Redis should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22bee202-a82f-4305-9a2a-6d7f44d4dedb) |Audit enabling of only connections via SSL to Azure Cache for Redis. Use of secure connections ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_AuditSSLPort_Audit.json) |
-|[Secure transfer to storage accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F404c3081-a854-4457-ae30-26a93ef643f9) |Audit requirement of Secure transfer in your storage account. Secure transfer is an option that forces your storage account to accept requests only from secure connections (HTTPS). Use of HTTPS ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_AuditForHTTPSEnabled_Audit.json) |
-|[Service Fabric clusters should have the ClusterProtectionLevel property set to EncryptAndSign](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F617c02be-7f02-4efd-8836-3180d47b6c68) |Service Fabric provides three levels of protection (None, Sign and EncryptAndSign) for node-to-node communication using a primary cluster certificate. Set the protection level to ensure that all node-to-node messages are encrypted and digitally signed |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Fabric/ServiceFabric_AuditClusterProtectionLevel_Audit.json) |
-|[Transparent Data Encryption on SQL databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F17k78e20-9358-41c9-923c-fb736d382a12) |Transparent data encryption should be enabled to protect data-at-rest and meet compliance requirements |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlDBEncryption_Audit.json) |
-|[Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys. Temp disks, data caches and data flowing between compute and storage aren't encrypted. Disregard this recommendation if: 1. using encryption-at-host, or 2. server-side encryption on Managed Disks meets your security requirements. Learn more in: Server-side encryption of Azure Disk Storage: [https://aka.ms/disksse,](https://aka.ms/disksse,) Different disk encryption offerings: [https://aka.ms/diskencryptioncomparison](https://aka.ms/diskencryptioncomparison) |AuditIfNotExists, Disabled |[2.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) |
-
-## Requirement 5
-
-### PCI DSS requirement 5.1
-
-**ID**: PCI DSS v3.2.1 5.1
-**Ownership**: shared
-
-|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
-|||||
-|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) |
-|[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingEndpointProtection_Audit.json) |
-|[SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffeedbf84-6b99-488c-acc2-71c829aa5ffc) |Monitor vulnerability assessment scan results and recommendations for how to remediate database vulnerabilities. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbVulnerabilities_Audit.json) |
-|[System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86b3d65f-7626-441e-b690-81a8b71cff60) |Missing security system updates on your servers will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdates_Audit.json) |
-|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) |
-
-## Requirement 6
-
-### PCI DSS requirement 6.2
-
-**ID**: PCI DSS v3.2.1 6.2
-**Ownership**: shared
-
-|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
-|||||
-|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) |
-|[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingEndpointProtection_Audit.json) |
-|[SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffeedbf84-6b99-488c-acc2-71c829aa5ffc) |Monitor vulnerability assessment scan results and recommendations for how to remediate database vulnerabilities. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbVulnerabilities_Audit.json) |
-|[System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86b3d65f-7626-441e-b690-81a8b71cff60) |Missing security system updates on your servers will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdates_Audit.json) |
-|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) |
-
-### PCI DSS requirement 6.5.3
-
-**ID**: PCI DSS v3.2.1 6.5.3
-**Ownership**: shared
-
-|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
-|||||
-|[App Service apps should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4af4a39-4135-47fb-b175-47fbdf85311d) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled, Deny |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceWebapp_AuditHTTP_Audit.json) |
-|[Automation account variables should be encrypted](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3657f5a0-770e-44a3-b44e-9431ba1e9735) |It is important to enable encryption of Automation account variable assets when storing sensitive data |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Automation/Automation_AuditUnencryptedVars_Audit.json) |
-|[Function apps should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6d555dd1-86f2-4f1c-8ed7-5abae7c6cbab) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled, Deny |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceFunctionApp_AuditHTTP_Audit.json) |
-|[Only secure connections to your Azure Cache for Redis should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22bee202-a82f-4305-9a2a-6d7f44d4dedb) |Audit enabling of only connections via SSL to Azure Cache for Redis. Use of secure connections ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_AuditSSLPort_Audit.json) |
-|[Secure transfer to storage accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F404c3081-a854-4457-ae30-26a93ef643f9) |Audit requirement of Secure transfer in your storage account. Secure transfer is an option that forces your storage account to accept requests only from secure connections (HTTPS). Use of HTTPS ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_AuditForHTTPSEnabled_Audit.json) |
-|[Service Fabric clusters should have the ClusterProtectionLevel property set to EncryptAndSign](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F617c02be-7f02-4efd-8836-3180d47b6c68) |Service Fabric provides three levels of protection (None, Sign and EncryptAndSign) for node-to-node communication using a primary cluster certificate. Set the protection level to ensure that all node-to-node messages are encrypted and digitally signed |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Fabric/ServiceFabric_AuditClusterProtectionLevel_Audit.json) |
-|[Transparent Data Encryption on SQL databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F17k78e20-9358-41c9-923c-fb736d382a12) |Transparent data encryption should be enabled to protect data-at-rest and meet compliance requirements |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlDBEncryption_Audit.json) |
-|[Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys. Temp disks, data caches and data flowing between compute and storage aren't encrypted. Disregard this recommendation if: 1. using encryption-at-host, or 2. server-side encryption on Managed Disks meets your security requirements. Learn more in: Server-side encryption of Azure Disk Storage: [https://aka.ms/disksse,](https://aka.ms/disksse,) Different disk encryption offerings: [https://aka.ms/diskencryptioncomparison](https://aka.ms/diskencryptioncomparison) |AuditIfNotExists, Disabled |[2.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) |
-
-### PCI DSS requirement 6.6
-
-**ID**: PCI DSS v3.2.1 6.6
-**Ownership**: shared
-
-|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
-|||||
-|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) |
-|[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingEndpointProtection_Audit.json) |
-|[SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffeedbf84-6b99-488c-acc2-71c829aa5ffc) |Monitor vulnerability assessment scan results and recommendations for how to remediate database vulnerabilities. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbVulnerabilities_Audit.json) |
-|[System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86b3d65f-7626-441e-b690-81a8b71cff60) |Missing security system updates on your servers will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdates_Audit.json) |
-|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) |
-
-## Requirement 7
-
-### PCI DSS requirement 7.1.1
-
-**ID**: PCI DSS v3.2.1 7.1.1
-**Ownership**: customer
-
-|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
-|||||
-|[A maximum of 3 owners should be designated for your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f11b553-d42e-4e3a-89be-32ca364cad4c) |It is recommended to designate up to 3 subscription owners in order to reduce the potential for breach by a compromised owner. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateLessThanXOwners_Audit.json) |
-|[There should be more than one owner assigned to your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F09024ccc-0c5f-475e-9457-b7c0d9ed487b) |It is recommended to designate more than one subscription owner in order to have administrator access redundancy. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateMoreThanOneOwner_Audit.json) |
-
-### PCI DSS requirement 7.1.2
-
-**ID**: PCI DSS v3.2.1 7.1.2
-**Ownership**: shared
-
-|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
-|||||
-|[A maximum of 3 owners should be designated for your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f11b553-d42e-4e3a-89be-32ca364cad4c) |It is recommended to designate up to 3 subscription owners in order to reduce the potential for breach by a compromised owner. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateLessThanXOwners_Audit.json) |
-|[There should be more than one owner assigned to your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F09024ccc-0c5f-475e-9457-b7c0d9ed487b) |It is recommended to designate more than one subscription owner in order to have administrator access redundancy. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateMoreThanOneOwner_Audit.json) |
-
-### PCI DSS requirement 7.1.3
-
-**ID**: PCI DSS v3.2.1 7.1.3
-**Ownership**: customer
-
-|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
-|||||
-|[A maximum of 3 owners should be designated for your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f11b553-d42e-4e3a-89be-32ca364cad4c) |It is recommended to designate up to 3 subscription owners in order to reduce the potential for breach by a compromised owner. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateLessThanXOwners_Audit.json) |
-|[There should be more than one owner assigned to your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F09024ccc-0c5f-475e-9457-b7c0d9ed487b) |It is recommended to designate more than one subscription owner in order to have administrator access redundancy. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateMoreThanOneOwner_Audit.json) |
-
-### PCI DSS requirement 7.2.1
-
-**ID**: PCI DSS v3.2.1 7.2.1
-**Ownership**: customer
-
-|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |