Updates from: 06/20/2023 01:10:42
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Sap Successfactors Integration Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/sap-successfactors-integration-reference.md
```
https://[SuccessFactorsAPIEndpoint]/odata/v2/PerPerson/$count?$format=json&$filt
&$expand=employmentNav/userNav,employmentNav/jobInfoNav,personalInfoNav,personEmpTerminationInfoNav,phoneNav,emailNav,employmentNav/userNav/manager/empInfo,employmentNav/jobInfoNav/companyNav,employmentNav/jobInfoNav/departmentNav,employmentNav/jobInfoNav/locationNav,employmentNav/jobInfoNav/locationNav/addressNavDEFLT,employmentNav/jobInfoNav/locationNav/addressNavDEFLT/stateNav&customPageSize=100
```
+## How pre-hire processing works
+
+This section explains how the SAP SuccessFactors connector processes pre-hire records (workers with a hire date or start date in the future).
+Let's say there is a pre-hire with employeeId "1234" in SuccessFactors Employee Central with a start date of 1-June-2023. Let's further assume that this pre-hire record was first created either in Employee Central or in the Onboarding module on 15-May-2023. When the provisioning service first observes this record on 15-May-2023 (either as part of full sync or incremental sync), the record is still in the pre-hire state. Because of this, SuccessFactors doesn't send the provisioning service all attributes associated with the user (for example, `userNav/username`). Only minimal data about the user, such as `personIdExternal`, `firstname`, `lastname`, and `startDate`, is available. To process pre-hires successfully, the following prerequisites must be met:
+
+1) The `personIdExternal` attribute must be set as the primary matching identifier (joining property). If you configure a different attribute (for example, `userName`) as the joining property, the provisioning service can't retrieve the pre-hire information.
+2) The `startDate` attribute must be available, and its JSONPath must be set to either `$.employmentNav.results[0].startDate` or `$.employmentNav.results[-1:].startDate`.
+3) The pre-hire record must be in one of the following states in Employee Central: 'active' (t), 'inactive' (f), or 'active_external_suite' (e). For details about these states, refer to the [SAP support note 2736579](https://launchpad.support.sap.com/#/notes/0002736579).
+
+> [!NOTE]
+> For a pre-hire who has no history with the organization, both the [0] and [-1:] index will work for `startDate`. For a pre-hire who is a re-hire or conversion, we cannot deterministically tell the order and this may cause certain rehire/converted workers to get processed on their actual start date. This is a known limitation in the connector.
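To see why both index forms work for a simple pre-hire but not deterministically for a rehire, consider the following minimal sketch. It uses hypothetical sample data and plain Python list indexing only to illustrate what `[0]` and `[-1:]` select; the connector itself evaluates the JSONPath expressions internally.

```python
# Hypothetical employmentNav payload for a rehired worker with two employment records.
# SuccessFactors does not guarantee the order in which the records are returned.
employment_nav = {
    "results": [
        {"startDate": "2020-01-01"},  # earlier (terminated) employment
        {"startDate": "2023-06-01"},  # latest (pre-hire) employment
    ]
}

results = employment_nav["results"]
print(results[0]["startDate"])        # [0]   -> 2020-01-01 (first record returned)
print(results[-1:][0]["startDate"])   # [-1:] -> 2023-06-01 (last record returned)
```

For a worker with a single employment record, both expressions resolve to the same `startDate`.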
+
+During full sync, incremental sync, or on-demand provisioning, when the provisioning service encounters a pre-hire record, it sends the following OData query to SuccessFactors, with the `asOfDate` filter set to the `startDate` of the user (for example, asOfDate=2023-06-01).
+
+```
+https://[SuccessFactorsAPIEndpoint]/odata/v2/PerPerson?$format=json&$
+filter=(personIdExternal in '1234' and employmentNav/userNav/status in 't','f','e')&asOfDate=2023-06-01&$
+expand=employmentNav/userNav,employmentNav/jobInfoNav,personalInfoNav,personEmpTerminationInfoNav,phoneNav,emailNav,employmentNav/userNav/manager/empInfo,employmentNav/jobInfoNav/companyNav,employmentNav/jobInfoNav/costCenterNav,employmentNav/jobInfoNav/divisionNav,employmentNav/jobInfoNav/departmentNav,employmentNav/
+```
+
+If you observe issues with pre-hire processing, you can use the above OData request format to query your SuccessFactors instance, replacing the API endpoint, `personIdExternal`, and `asOfDate` filter with values that correspond to your test scenario.
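As an illustration, the following is a minimal sketch of such a test query in Python using the `requests` library. The endpoint, credentials, `personIdExternal`, and `asOfDate` values are placeholders, and the Basic Auth format (`username@companyId`) is an assumption; use whatever authentication your SuccessFactors tenant requires.

```python
import requests

# Placeholder values - replace with your own API endpoint, credentials, and test data.
API_ENDPOINT = "https://[SuccessFactorsAPIEndpoint]"
AUTH = ("apiuser@COMPANYID", "password")  # assumed Basic Auth format; adjust to your tenant

params = {
    "$format": "json",
    "$filter": "(personIdExternal in '1234' and employmentNav/userNav/status in 't','f','e')",
    "asOfDate": "2023-06-01",
    "$expand": "employmentNav/userNav,employmentNav/jobInfoNav,personalInfoNav",
}

response = requests.get(f"{API_ENDPOINT}/odata/v2/PerPerson", params=params, auth=AUTH)
response.raise_for_status()

# OData v2 JSON responses wrap the payload in a "d" object.
for person in response.json()["d"]["results"]:
    print(person["personIdExternal"])
```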
+ ## Reading attribute data

When the Azure AD provisioning service queries SuccessFactors, it retrieves a JSON result set. The JSON result set includes many attributes stored in Employee Central. By default, the provisioning schema is configured to retrieve only a subset of those attributes.
Use the steps to update your mapping to retrieve these codes.
| Provisioning Job | Account status attribute | Mapping expression |
| --- | --- | --- |
- | SuccessFactors to Active Directory User Provisioning | `accountDisabled` | `Switch(\[emplStatus\], "True", "A", "False", "U", "False", "P", "False")` |
- | SuccessFactors to Azure AD User Provisioning | `accountEnabled` | `Switch(\[emplStatus\], "False", "A", "True", "U", "True", "P", "True")` |
+ | SuccessFactors to Active Directory User Provisioning | `accountDisabled` | `Switch([emplStatus], "True", "A", "False", "U", "False", "P", "False")` |
+ | SuccessFactors to Azure AD User Provisioning | `accountEnabled` | `Switch([emplStatus], "False", "A", "True", "U", "True", "P", "True")` |
1. Save the changes.
1. Test the configuration using [provision on demand](provision-on-demand.md).
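The `Switch()` expressions in the table read as follows: the first argument is the source value, the second is the default result, and the remaining arguments are key/result pairs checked in order. The following Python sketch is illustrative only, not the provisioning service's implementation, and the non-active status code used in the second call is a hypothetical example.

```python
def switch(source, default, *pairs):
    """Illustrative equivalent of the provisioning Switch() expression:
    return the value paired with the first key that matches source, else the default."""
    for key, value in zip(pairs[0::2], pairs[1::2]):
        if source == key:
            return value
    return default

# accountDisabled mapping from the table above.
# "A", "U", "P" are the active codes; "T" below is just a hypothetical non-active code.
print(switch("A", "True", "A", "False", "U", "False", "P", "False"))  # "False" -> account not disabled
print(switch("T", "True", "A", "False", "U", "False", "P", "False"))  # "True"  -> account disabled
```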
This section describes how you can update the JSONPath settings to definitely re
| **String to find** | **String to use for replace** | **Purpose** |
| --- | --- | --- |
- | `$.employmentNav.results\[0\].<br>jobInfoNav.results\[0\].emplStatus` | `$.employmentNav..jobInfoNav..results\[?(@.emplStatusNav.externalCode == 'A' \|\| @.emplStatusNav.externalCode == 'U' \|\| @.emplStatusNav.externalCode == 'P' )\].emplStatusNav.externalCode` | With this find-replace, we're adding the ability to expand emplStatusNav OData object. |
- | `$.employmentNav.results\[0\].<br>jobInfoNav.results\[0\]` | `$.employmentNav..jobInfoNav..results\[?(@.emplStatusNav.externalCode == 'A' \|\| @.emplStatusNav.externalCode == 'U' \|\| @.emplStatusNav.externalCode == 'P')\]` | With this find-replace, we instruct the connector to always retrieve attributes associated with the active SuccessFactors EmpJobInfo record. Attributes associated with terminated/inactive records in SuccessFactors are ignored. |
- | `$.employmentNav.results\[0\]` | `$.employmentNav..results\[?(@.jobInfoNav..results\[?(@.emplStatusNav.externalCode == 'A' \|\| @.emplStatusNav.externalCode == 'U' \|\| @.emplStatusNav.externalCode == 'P')\])\]` | With this find-replace, we instruct the connector to always retrieve attributes associated with the active SuccessFactors Employment record. Attributes associated with terminated/inactive records in SuccessFactors are ignored. |
+ | `$.employmentNav.results[0].<br>jobInfoNav.results[0].emplStatus` | `$.employmentNav..jobInfoNav..results[?(@.emplStatusNav.externalCode == 'A' || @.emplStatusNav.externalCode == 'U' || @.emplStatusNav.externalCode == 'P' )].emplStatusNav.externalCode` | With this find-replace, we're adding the ability to expand emplStatusNav OData object. |
+ | `$.employmentNav.results[0].<br>jobInfoNav.results[0]` | `$.employmentNav..jobInfoNav..results[?(@.emplStatusNav.externalCode == 'A' || @.emplStatusNav.externalCode == 'U' || @.emplStatusNav.externalCode == 'P')]` | With this find-replace, we instruct the connector to always retrieve attributes associated with the active SuccessFactors EmpJobInfo record. Attributes associated with terminated/inactive records in SuccessFactors are ignored. |
+ | `$.employmentNav.results[0]` | `$.employmentNav..results[?(@.jobInfoNav..results[?(@.emplStatusNav.externalCode == 'A' || @.emplStatusNav.externalCode == 'U' || @.emplStatusNav.externalCode == 'P')])]` | With this find-replace, we instruct the connector to always retrieve attributes associated with the active SuccessFactors Employment record. Attributes associated with terminated/inactive records in SuccessFactors are ignored. |
1. Save the schema.
1. The above process updates all JSONPath expressions.
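The intent of the filtered expressions can be illustrated with a short sketch. The following Python function is illustrative only (the connector evaluates JSONPath itself; it doesn't run Python): it mimics the second find-replace by walking each employment record and keeping only `jobInfoNav` entries whose `emplStatusNav.externalCode` is 'A', 'U', or 'P'.

```python
ACTIVE_CODES = {"A", "U", "P"}

def active_job_info_records(person):
    """Mimic $.employmentNav..jobInfoNav..results[?(@.emplStatusNav.externalCode == 'A' || ...)]
    by returning only jobInfo records whose employment status code is active."""
    matches = []
    for employment in person.get("employmentNav", {}).get("results", []):
        for job_info in employment.get("jobInfoNav", {}).get("results", []):
            code = job_info.get("emplStatusNav", {}).get("externalCode")
            if code in ACTIVE_CODES:
                matches.append(job_info)
    return matches
```

Records with any other status code are skipped, which is the effect the find-replace instructs the connector to achieve for terminated or inactive records.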
This section describes how you can update the JSONPath settings to definitely re
| Provisioning Job | Account status attribute | Expression to use if account status is based on "activeEmploymentsCount" | Expression to use if account status is based on "emplStatus" value |
| --- | --- | --- | --- |
- | SuccessFactors to Active Directory User Provisioning | `accountDisabled` | `Switch(\[activeEmploymentsCount\], "False", "0", "True")` | `Switch(\[emplStatus\], "True", "A", "False", "U", "False", "P", "False")` |
- | SuccessFactors to Azure AD User Provisioning | `accountEnabled` | `Switch(\[activeEmploymentsCount\], "True", "0", "False")` | `Switch(\[emplStatus\], "False", "A", "True", "U", "True", "P", "True")` |
+ | SuccessFactors to Active Directory User Provisioning | `accountDisabled` | `Switch([activeEmploymentsCount], "False", "0", "True")` | `Switch([emplStatus], "True", "A", "False", "U", "False", "P", "False")` |
+ | SuccessFactors to Azure AD User Provisioning | `accountEnabled` | `Switch([activeEmploymentsCount], "True", "0", "False")` | `Switch([emplStatus], "False", "A", "True", "U", "True", "P", "True")` |
1. Save your changes.
1. Test the configuration using [provision on demand](provision-on-demand.md).
active-directory Concept Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication.md
The following scenarios aren't supported:
- Configuring other certificate-to-user account bindings, such as using the **Subject**, **Subject + Issuer**, or **Issuer + Serial Number**, isn't available in this release.
- Password as an authentication method can't be disabled, and the option to sign in using a password is displayed even when the Azure AD CBA method is available to the user.
+## Known limitation with Windows Hello for Business certificates
+
+- While Windows Hello For Business (WHFB) can be used for multi-factor authentication in Azure AD, WHFB isn't supported for fresh MFA. Customers may choose to enroll certificates for their users using the WHFB key pair. When properly configured, these WHFB certificates can be used for multi-factor authentication in Azure AD. WHFB certificates are compatible with Azure AD certificate-based authentication (CBA) in the Edge and Chrome browsers; however, at this time WHFB certificates aren't compatible with Azure AD CBA in non-browser scenarios (for example, Office 365 applications). The workaround is to use the "Sign in with Windows Hello or security key" option to sign in (when available), because this option doesn't use certificates for authentication and avoids the issue with Azure AD CBA; however, this option may not be available in some older applications.
+ ## Out of Scope

The following scenarios are out of scope for Azure AD CBA:
active-directory All Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/all-reports.md
Previously updated : 02/23/2022 Last updated : 06/13/2023 # View a list and description of system reports
-Permissions Management has various types of system reports that capture specific sets of data. These reports allow management, auditors, and administrators to:
+Microsoft Entra Permissions Management has various types of system reports that capture specific sets of data. These reports allow management, auditors, and administrators to:
- Make timely decisions. - Analyze trends and system/user performance.
active-directory Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/faqs.md
Title: Frequently asked questions (FAQs) about Permissions Management
-description: Frequently asked questions (FAQs) about Permissions Management.
+ Title: Frequently asked questions (FAQs) about Microsoft Entra Permissions Management
+description: Frequently asked questions (FAQs) about Microsoft Entra Permissions Management.
Previously updated : 01/25/2023 Last updated : 06/16/2023 # Frequently asked questions (FAQs)
-This article answers frequently asked questions (FAQs) about Permissions Management.
+This article answers frequently asked questions (FAQs) about Microsoft Entra Permissions Management.
-## What's Permissions Management?
+## What's Microsoft Entra Permissions Management?
-Permissions Management is a cloud infrastructure entitlement management (CIEM) solution that provides comprehensive visibility into permissions assigned to all identities. For example, over-privileged workload and user identities, actions, and resources across multicloud infrastructures in Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP). Permissions Management detects, automatically right-sizes, and continuously monitors unused and excessive permissions. It deepens the Zero Trust security strategy by augmenting the least privilege access principle.
+Microsoft Entra Permissions Management (Permissions Management) is a cloud infrastructure entitlement management (CIEM) solution that provides comprehensive visibility into permissions assigned to all identities. For example, over-privileged workload and user identities, actions, and resources across multicloud infrastructures in Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP). Permissions Management detects, automatically right-sizes, and continuously monitors unused and excessive permissions. It deepens the Zero Trust security strategy by augmenting the least privilege access principle.
## What are the prerequisites to use Permissions Management? Permissions Management supports data collection from AWS, GCP, and/or Microsoft Azure. For data collection and analysis, customers are required to have an Azure Active Directory (Azure AD) account to use Permissions Management.
-## Can a customer use Permissions Management if they have other identities with access to their IaaS platform that aren't yet in Azure AD (for example, if part of their business has Okta or AWS Identity & Access Management (IAM))?
+## Can a customer use Permissions Management if they have other identities with access to their IaaS platform that aren't yet in Azure AD?
-Yes, a customer can detect, mitigate, and monitor the risk of 'backdoor' accounts that are local to AWS IAM, GCP, or from other identity providers such as Okta or AWS IAM.
+Yes, a customer can detect, mitigate, and monitor the risk of AWS IAM or GCP accounts, as well as accounts from other identity providers such as Okta or AWS IAM.
## Where can customers access Permissions Management? Customers can access the Permissions Management interface from the [Microsoft Entra admin center](https://entra.microsoft.com/) .
-## Can non-cloud customers use Permissions Management on-premises?
+## Can noncloud customers use Permissions Management on-premises?
No, Permissions Management is a hosted cloud offering.
Yes, Permissions Management is currently for tenants hosted in the European Unio
## If I'm already using Azure AD Privileged Identity Management (PIM) for Azure, what value does Permissions Management provide?
-Permissions Management complements Azure AD PIM. Azure AD PIM provides just-in-time access for admin roles in Azure (as well as Microsoft Online Services and apps that use groups), while Permissions Management allows multicloud discovery, remediation, and monitoring of privileged access across Azure, AWS, and GCP.
+Permissions Management complements Azure AD PIM. Azure AD PIM provides just-in-time access for admin roles in Azure and Microsoft Online Services and apps that use groups. Permissions Management allows multicloud discovery, remediation, and monitoring of privileged access across Azure, AWS, and GCP.
-## What public cloud infrastructures are supported by Permissions Management?
+## What public cloud infrastructures does Permissions Management support?
Permissions Management currently supports the three major public clouds: Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure.
No, Permissions Management is currently not available in sovereign Clouds.
## How does Permissions Management collect insights about permissions usage?
-Permissions Management has a data collector that collects access permissions assigned to various identities, activity logs, and resources metadata. This gathers full visibility into permissions granted to all identities to access the resources and details on usage of granted permissions.
+Permissions Management has a data collector that collects access permissions that are assigned to various identities, activity logs, and resources metadata. The data collector provides full visibility into permissions granted to all identities to access the resources and details on usage of granted permissions.
## How does Permissions Management evaluate cloud permissions risk?
-Permissions Management offers granular visibility into all identities and their permissions granted versus used, across cloud infrastructures to uncover any action performed by any identity on any resource. This isn't limited to just user identities, but also workload identities such as virtual machines, access keys, containers, and scripts. The dashboard gives an overview of permission profile to locate the riskiest identities and resources.
+Permissions Management offers granular visibility into all identities and their permissions granted versus used, across cloud infrastructures to uncover any action performed by any identity on any resource. The visibility isn't limited to just user identities, but also workload identities such as virtual machines, access keys, containers, and scripts. The dashboard gives an overview of permission profile to locate the riskiest identities and resources.
## What is the Permissions Creep Index?
Just-in-time (JIT) access is a method used to enforce the principle of least pri
## How can customers monitor permissions usage with Permissions Management?
-Customers only need to track the evolution of their Permission Creep Index to monitor permissions usage. They can do this in the "Analytics" tab in their Permissions Management dashboard where they can see how the PCI of each identity or resource is evolving over time.
+Customers only need to track the evolution of their Permission Creep Index (PCI) to monitor permissions usage. Customers can monitor PCI in the **Analytics** tab from their Permissions Management dashboard.
## Can customers generate permissions usage reports?
We also have the ability to remove, export or modify specific data should the Gl
## Do I require a license to use Entra Permissions Management?
-Yes, as of July 1st, 2022, new customers must acquire a free 45-day trial license or a paid license to use the service. You can enable a trial here: [https://aka.ms/TryPermissionsManagement](https://aka.ms/TryPermissionsManagement) or you can directly purchase resource-based licenses here: [https://aka.ms/BuyPermissionsManagement](https://aka.ms/BuyPermissionsManagement)
+Yes, as of July 1, 2022, new customers must acquire a free 45-day trial license or a paid license to use the service. You can enable a trial here: [https://aka.ms/TryPermissionsManagement](https://aka.ms/TryPermissionsManagement) or you can directly purchase resource-based licenses here: [https://aka.ms/BuyPermissionsManagement](https://aka.ms/BuyPermissionsManagement)
## How is Permissions Management priced?
Although Permissions Management supports all resources, Microsoft only requires
## How do I figure out how many resources I have?
-To find out how many resources you have across your multicloud infrastructure, select Settings (gear icon) and view the Billable Resources tab in Permissions Management.
-
-## What do I do if I'm using Public Preview version of Entra Permissions Management?
-
-If you are using the Public Preview version of Entra Permissions Management, your current deployment(s) will continue to work through October 1st.
-
-After October 1st you will need to move over to use the newly released version of the service and enable a 45-day trial or purchase licenses to continue using the service.
+To find out how many resources you have across your multicloud infrastructure, select Settings (gear icon) and view the Billable Resources tab in Permissions Management.
## What do I do if I'm using the legacy version of the CloudKnox service?
Where xx-XX is one of the following available language parameters: 'cs-CZ', 'de-
## Resources -- [Public Preview announcement blog](https://www.aka.ms/CloudKnox-Public-Preview-Blog)
+- [Microsoft Entra (Azure AD) blog](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/bg-p/Identity)
- [Permissions Management web page](https://microsoft.com/security/business/identity-access-management/permissions-management)
- For more information about Microsoft's privacy and security terms, see [Commercial Licensing Terms](https://www.microsoft.com/licensing/terms/product/ForallOnlineServices/all).
- For more information about Microsoft's data processing and security terms when you subscribe to a product, see [Microsoft Products and Services Data Protection Addendum (DPA)](https://www.microsoft.com/licensing/docs/view/Microsoft-Products-and-Services-Data-Protection-Addendum-DPA).
Where xx-XX is one of the following available language parameters: 'cs-CZ', 'de-
## Next steps -- For an overview of Permissions Management, see [What's Permissions Management?](overview.md).
+- For an overview of Permissions Management, see [What's Microsoft Entra Permissions Management?](overview.md).
+- Deepen your learning with the [Introduction to Microsoft Entra Permissions Management](https://go.microsoft.com/fwlink/?linkid=2240016) learn module.
- For information on how to onboard Permissions Management in your organization, see [Enable Permissions Management in your organization](onboard-enable-tenant.md).
active-directory How To Add Remove Role Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-add-remove-role-task.md
Previously updated : 02/23/2022 Last updated : 06/16/2023
This article describes how you can add and remove roles and tasks for Microsoft
- For information on how to view existing roles/policies, requests, and permissions, see [View roles/policies, requests, and permission in the Remediation dashboard](ui-remediation.md).
+- To view information about roles/policies, see [View information about roles/policies](how-to-view-role-policy.md).
- For information on how to create a role/policy, see [Create a role/policy](how-to-create-role-policy.md). - For information on how to clone a role/policy, see [Clone a role/policy](how-to-clone-role-policy.md). - For information on how to delete a role/policy, see [Delete a role/policy](how-to-delete-role-policy.md).-- For information on how to modify a role/policy, see Modify a role/policy](how-to-modify-role-policy.md).-- To view information about roles/policies, see [View information about roles/policies](how-to-view-role-policy.md).-- For information on how to attach and detach permissions for Amazon Web Services (AWS) identities, see [Attach and detach policies for AWS identities](how-to-attach-detach-permissions.md).-- For information on how to revoke high-risk and unused tasks or assign read-only status for Microsoft Azure and Google Cloud Platform (GCP) identities, see [Revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities](how-to-revoke-task-readonly-status.md)
-For information on how to create or approve a request for permissions, see [Create or approve a request for permissions](how-to-create-approve-privilege-request.md).
+- For information on how to modify a role/policy, see [Modify a role/policy](how-to-modify-role-policy.md).
+- For information on how to attach and detach permissions for Amazon Web Services (AWS) identities, see [Attach and detach policies for AWS identities](how-to-attach-detach-permissions.md).
active-directory How To Add Remove User To Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-add-remove-user-to-group.md
Title: Add or remove a user in Permissions Management through the Microsoft Entra admin center
-description: How to add or remove a user in Permissions Management through Azure Active Directory (AD).
+ Title: Add or remove a user in Microsoft Entra Permissions Management through the Microsoft Entra admin center
+description: How to add or remove a user in Microsoft Entra Permissions Management through the Microsoft Entra admin center.
Previously updated : 12/28/2022 Last updated : 06/16/2023
-# Add or remove a user in Permissions Management
+# Add or remove a user in Microsoft Entra Permissions Management
This article describes how you can add or remove a new user for a group in Permissions Management.
active-directory How To Attach Detach Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-attach-detach-permissions.md
Title: Attach and detach permissions for users, roles, and groups for Amazon Web Services (AWS) identities in the Remediation dashboard in Permissions Management
+ Title: Attach and detach permissions for users, roles, and groups for Amazon Web Services (AWS) identities in the Remediation dashboard
description: How to attach and detach permissions for users, roles, and groups for Amazon Web Services (AWS) identities in the Remediation dashboard in Permissions Management.
Previously updated : 02/23/2022 Last updated : 06/16/2023
This article describes how you can attach and detach permissions for users, role
## Next steps -- For information on how to view existing roles/policies, requests, and permissions, see [View roles/policies, requests, and permission in the Remediation dashboard](ui-remediation.md).-- For information on how to create a role/policy, see [Create a role/policy](how-to-create-role-policy.md).-- For information on how to clone a role/policy, see [Clone a role/policy](how-to-clone-role-policy.md).-- For information on how to delete a role/policy, see [Delete a role/policy](how-to-delete-role-policy.md).-- For information on how to modify a role/policy, see Modify a role/policy](how-to-modify-role-policy.md).
+- To view existing roles/policies, requests, and permissions, see [View roles/policies, requests, and permission in the Remediation dashboard](ui-remediation.md).
+- To create a role/policy, see [Create a role/policy](how-to-create-role-policy.md).
+- To clone a role/policy, see [Clone a role/policy](how-to-clone-role-policy.md).
+- To delete a role/policy, see [Delete a role/policy](how-to-delete-role-policy.md).
+- To modify a role/policy, see [Modify a role/policy](how-to-modify-role-policy.md).
- To view information about roles/policies, see [View information about roles/policies](how-to-view-role-policy.md).-- For information on how to revoke high-risk and unused tasks or assign read-only status for Microsoft Azure and Google Cloud Platform (GCP) identities, see [Revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities](how-to-revoke-task-readonly-status.md)
-For information on how to create or approve a request for permissions, see [Create or approve a request for permissions](how-to-create-approve-privilege-request.md).
+- To revoke high-risk and unused tasks or assign read-only status for Microsoft Azure and Google Cloud Platform (GCP) identities, see [Revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities](how-to-revoke-task-readonly-status.md).
+- To create or approve a request for permissions, see [Create or approve a request for permissions](how-to-create-approve-privilege-request.md).
active-directory How To Audit Trail Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-audit-trail-results.md
Previously updated : 02/23/2022 Last updated : 06/16/2023
This article describes how you can generate an on-demand report from a query in
Permissions Management generates the report and exports it in comma-separated values (**CSV**) format, portable document format (**PDF**), or Microsoft Excel Open XML Spreadsheet (**XLSX**) format.
-<!
-## Create a schedule to automatically generate and share a report
-
-1. In the **Audit** tab, load the query you want to use to generate your report.
-2. Select **Settings** (the gear icon).
-3. In **Repeat on**, select on which days of the week you want the report to run.
-4. In **Date**, select the date when you want the query to run.
-5. In **hh mm** (time), select the time when you want the query to run.
-6. In **Request file format**, select the file format you want for your report.
-7. In **Share report with people**, enter email addresses for people to whom you want to send the report.
-8. Select **Schedule**.
-
- Permissions Management generates the report as set in Steps 3 to 6, and emails it to the recipients you specified in Step 7.
--
-## Delete the schedule for a report
-
-1. In the **Audit** tab, load the query whose report schedule you want to delete.
-2. Select the ellipses menu **(…)** on the far right, and then select **Delete schedule**.
-
- Permissions Management deletes the schedule for running the query. The query itself isn't deleted.
->
-- ## Next steps - For information on how to view how users access information, see [Use queries to see how users access information](ui-audit-trail.md).
active-directory How To Clone Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-clone-role-policy.md
Title: Clone a role/policy in the Remediation dashboard in Permissions Management
-description: How to clone a role/policy in the Just Enough Permissions (JEP) Controller.
+ Title: Clone a role/policy in the Remediation dashboard in Microsoft Entra Permissions Management
+description: How to clone a role/policy in Microsoft Entra Permissions Management.
Previously updated : 02/23/2022 Last updated : 06/16/2023 # Clone a role/policy in the Remediation dashboard
-This article describes how you can use the **Remediation** dashboard in Permissions Management to clone roles/policies for the Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) authorization systems.
+This article describes how you can use the **Remediation** dashboard in Microsoft Entra Permissions Management to clone roles/policies for the Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) authorization systems.
> [!NOTE] > To view the **Remediation** tab, you must have **Viewer**, **Controller**, or **Administrator** permissions. To make changes on this tab, you must have **Controller** or **Administrator** permissions. If you don't have these permissions, contact your system administrator.
This article describes how you can use the **Remediation** dashboard in Permissi
## Clone a role/policy
-1. On the Permissions Management Permissions Management home page, select the **Remediation** tab, and then select the **Role/Policies** tab.
+1. On the Permissions Management home page, select the **Remediation** tab, and then select the **Role/Policies** tab.
1. Select the role/policy you want to clone, and from the **Actions** column, select **Clone**. 1. **(AWS Only)** In the **Clone** box, the **Clone Resources** and **Clone Conditions** checkboxes are automatically selected. Deselect the boxes if the resources and conditions are different from what is displayed.
This article describes how you can use the **Remediation** dashboard in Permissi
## Next steps -- For information on how to view existing roles/policies, requests, and permissions, see [View roles/policies, requests, and permission in the Remediation dashboard](ui-remediation.md).-- For information on how to create a role/policy, see [Create a role/policy](how-to-create-role-policy.md).-- For information on how to delete a role/policy, see [Delete a role/policy](how-to-delete-role-policy.md).-- For information on how to modify a role/policy, see Modify a role/policy](how-to-modify-role-policy.md).
+- To view existing roles/policies, requests, and permissions, see [View roles/policies, requests, and permission in the Remediation dashboard](ui-remediation.md).
+- To create a role/policy, see [Create a role/policy](how-to-create-role-policy.md).
+- To delete a role/policy, see [Delete a role/policy](how-to-delete-role-policy.md).
+- To modify a role/policy, see [Modify a role/policy](how-to-modify-role-policy.md).
- To view information about roles/policies, see [View information about roles/policies](how-to-view-role-policy.md).-- For information on how to attach and detach permissions for AWS identities, see [Attach and detach policies for AWS identities](how-to-attach-detach-permissions.md).-- For information on how to revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities, see [Revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities](how-to-revoke-task-readonly-status.md)-- For information on how to create or approve a request for permissions, see [Create or approve a request for permissions](how-to-create-approve-privilege-request.md).-- For information on how to view information about roles/policies, see [View information about roles/policies](how-to-view-role-policy.md)
+- To attach and detach permissions for AWS identities, see [Attach and detach policies for AWS identities](how-to-attach-detach-permissions.md).
+- To revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities, see [Revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities](how-to-revoke-task-readonly-status.md)
+- To create or approve a request for permissions, see [Create or approve a request for permissions](how-to-create-approve-privilege-request.md).
+- To view information about roles/policies, see [View information about roles/policies](how-to-view-role-policy.md)
active-directory How To Create Alert Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-alert-trigger.md
Title: Create and view activity alerts and alert triggers in Permissions Management
-description: How to create and view activity alerts and alert triggers in Permissions Management.
+ Title: Create and view activity alerts and alert triggers in Microsoft Entra Permissions Management
+description: How to create and view activity alerts and alert triggers in Microsoft Entra Permissions Management.
Previously updated : 02/23/2022 Last updated : 06/16/2023
This article describes how you can create and view activity alerts and alert tri
1. To add another parameter, select the plus sign **(+)**, then select an operator, and then enter a value.
- To remove a parameter, select the minus sign **(-)**.
+1. To remove a parameter, select the minus sign **(-)**.
1. To add another activity type, select **Add**, and then enter your parameters. 1. To save your alert, select **Save**.
This article describes how you can create and view activity alerts and alert tri
- **Subscription**: A switch that displays if the alert is **On** or **Off**. - If the column displays **Off**, the current user isn't subscribed to that alert. Switch the toggle to **On** to subscribe to the alert.
- - The user who creates an alert trigger is automatically subscribed to the alert, and will receive emails about the alert.
+ - The user who creates an alert trigger is automatically subscribed to the alert, and receives emails about the alert.
1. To see only activated or only deactivated triggers, from the **Status** dropdown, select **Activated** or **Deactivated**, and then select **Apply**.
This article describes how you can create and view activity alerts and alert tri
- **Duplicate**: Create a duplicate of the alert called "**Copy of XXX**". - **Rename**: Enter the new name of the query, and then select **Save.**
- - **Deactivate**: The alert will still be listed, but will no longer send emails to subscribed users.
+ - **Deactivate**: The alert is listed, but no longer sends emails to subscribed users.
- **Activate**: Activate the alert trigger and start sending emails to subscribed users. - **Notification Settings**: View the **Email** of users who are subscribed to the alert trigger and their **User Status**. - **Delete**: Delete the alert.
active-directory How To Create Approve Privilege Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-approve-privilege-request.md
Title: Create or approve a request for permissions in the Remediation dashboard in Permissions Management
+ Title: Create or approve a request for permissions in the Remediation dashboard
description: How to create or approve a request for permissions in the Remediation dashboard.
Previously updated : 02/23/2022 Last updated : 06/16/2023 # Create or approve a request for permissions
-This article describes how to create or approve a request for permissions in the **Remediation** dashboard in Permissions Management. You can create and approve requests for the Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) authorization systems.
+This article describes how to create or approve a request for permissions in the **Remediation** dashboard in Microsoft Entra Permissions Management. You can create and approve requests for the Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) authorization systems.
The **Remediation** dashboard has two privilege-on-demand (POD) workflows you can use: - **New Request**: The workflow used by a user to create a request for permissions for a specified duration.
The **Remediation** dashboard has two privilege-on-demand (POD) workflows you ca
- For information on how to create a role/policy, see [Create a role/policy](how-to-create-role-policy.md). - For information on how to clone a role/policy, see [Clone a role/policy](how-to-clone-role-policy.md). - For information on how to delete a role/policy, see [Delete a role/policy](how-to-delete-role-policy.md).-- For information on how to modify a role/policy, see Modify a role/policy](how-to-modify-role-policy.md).
+- For information on how to modify a role/policy, see [Modify a role/policy](how-to-modify-role-policy.md).
- To view information about roles/policies, see [View information about roles/policies](how-to-view-role-policy.md). - For information on how to attach and detach permissions for Amazon Web Services (AWS) identities, see [Attach and detach policies for AWS identities](how-to-attach-detach-permissions.md). - For information on how to add and remove roles and tasks for Microsoft Azure and Google Cloud Platform (GCP) identities, see [Add and remove roles and tasks for Azure and GCP identities](how-to-attach-detach-permissions.md).
active-directory How To Create Custom Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-custom-queries.md
Title: Create a custom query in Permissions Management
-description: How to create a custom query in the Audit dashboard in Permissions Management.
+ Title: Create a custom query in Microsoft Entra Permissions Management
+description: How to create a custom query in the Audit dashboard in Microsoft Entra Permissions Management.
Previously updated : 02/23/2022 Last updated : 06/16/2023
active-directory How To Create Group Based Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-group-based-permissions.md
Title: Select group-based permissions settings in Permissions Management with the User management dashboard
-description: How to select group-based permissions settings in Permissions Management with the User management dashboard.
+ Title: Select group-based permissions settings with the User management dashboard
+description: How to select group-based permissions settings with the User management dashboard.
Previously updated : 02/03/2023 Last updated : 06/16/2023 # Select group-based permissions settings
-This article describes how you can create and manage group-based permissions in Permissions Management with the User management dashboard.
+This article describes how you can create and manage group-based permissions in Microsoft Entra Permissions Management with the User management dashboard.
> [!NOTE] > The Permissions Management Administrator for all authorization systems will be able to create the new group based permissions.
active-directory How To Create Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-role-policy.md
Title: Create a role/policy in the Remediation dashboard in Permissions Management
-description: How to create a role/policy in the Remediation dashboard in Permissions Management.
+ Title: Create a role/policy in the Remediation dashboard
+description: How to create a role/policy in the Remediation dashboard.
Previously updated : 02/23/2022 Last updated : 06/16/2023 # Create a role/policy in the Remediation dashboard
-This article describes how you can use the **Remediation** dashboard in Permissions Management to create roles/policies for the Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) authorization systems.
+This article describes how you can use the **Remediation** dashboard in Microsoft Entra Permissions Management to create roles/policies for the Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) authorization systems.
> [!NOTE] > To view the **Remediation** tab, you must have **Viewer**, **Controller**, or **Administrator** permissions. To make changes on this tab, you must have **Controller** or **Administrator** permissions. If you don't have these permissions, contact your system administrator.
This article describes how you can use the **Remediation** dashboard in Permissi
- For information on how to attach and detach permissions for AWS identities, see [Attach and detach policies for AWS identities](how-to-attach-detach-permissions.md). - For information on how to revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities, see [Revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities](how-to-revoke-task-readonly-status.md) - For information on how to create or approve a request for permissions, see [Create or approve a request for permissions](how-to-create-approve-privilege-request.md).-- For information on how to view information about roles/policies, see [View information about roles/policies](how-to-view-role-policy.md)
active-directory How To Create Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-rule.md
Title: Create a rule in the Autopilot dashboard in Permissions Management
-description: How to create a rule in the Autopilot dashboard in Permissions Management.
+description: How to create a rule in the Autopilot dashboard in Microsoft Entra Permissions Management.
Previously updated : 02/23/2022 Last updated : 06/16/2023
active-directory How To Delete Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-delete-role-policy.md
Title: Delete a role/policy in the Remediation dashboard in Permissions Management
-description: How to delete a role/policy in the Just Enough Permissions (JEP) Controller.
+description: How to delete a role/policy in the Microsoft Entra Permissions Management Remediation dashboard.
Previously updated : 02/23/2022 Last updated : 06/16/2023 # Delete a role/policy in the Remediation dashboard
-This article describes how you can use the **Remediation** dashboard in Permissions Management to delete roles/policies for the Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) authorization systems.
+This article describes how you can use the **Remediation** dashboard in Microsoft Entra Permissions Management to delete roles/policies for the Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) authorization systems.
> [!NOTE] > To view the **Remediation** dashboard, you must have **Viewer**, **Controller**, or **Administrator** permissions. To make changes on this tab, you must have **Controller** or **Administrator** permissions. If you don't have these permissions, contact your system administrator.
This article describes how you can use the **Remediation** dashboard in Permissi
## Next steps -- For information on how to view existing roles/policies, requests, and permissions, see [View roles/policies, requests, and permission in the Remediation dashboard](ui-remediation.md).-- For information on how to create a role/policy, see [Create a role/policy](how-to-create-role-policy.md).-- For information on how to clone a role/policy, see [Clone a role/policy](how-to-clone-role-policy.md).-- For information on how to modify a role/policy, see [Modify a role/policy](how-to-modify-role-policy.md).
+- To view existing roles/policies, requests, and permissions, see [View roles/policies, requests, and permission in the Remediation dashboard](ui-remediation.md).
+- To create a role/policy, see [Create a role/policy](how-to-create-role-policy.md).
+- To clone a role/policy, see [Clone a role/policy](how-to-clone-role-policy.md).
+- To modify a role/policy, see [Modify a role/policy](how-to-modify-role-policy.md).
- To view information about roles/policies, see [View information about roles/policies](how-to-view-role-policy.md). - For information on how to attach and detach permissions for AWS identities, see [Attach and detach policies for AWS identities](how-to-attach-detach-permissions.md).-- For information on how to revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities, see [Revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities](how-to-revoke-task-readonly-status.md)-- For information on how to create or approve a request for permissions, see [Create or approve a request for permissions](how-to-create-approve-privilege-request.md).-- For information on how to view information about roles/policies, see [View information about roles/policies](how-to-view-role-policy.md)
+- To revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities, see [Revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities](how-to-revoke-task-readonly-status.md)
+- To create or approve a request for permissions, see [Create or approve a request for permissions](how-to-create-approve-privilege-request.md).
+- To view information about roles/policies, see [View information about roles/policies](how-to-view-role-policy.md)
active-directory How To Modify Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-modify-role-policy.md
Title: Modify a role/policy in the Remediation dashboard in Permissions Management
-description: How to modify a role/policy in the Remediation dashboard in Permissions Management.
+description: How to modify a role/policy in the Remediation dashboard in Microsoft Entra Permissions Management.
Previously updated : 02/23/2022 Last updated : 06/16/2023 # Modify a role/policy in the Remediation dashboard
-This article describes how you can use the **Remediation** dashboard in Permissions Management to modify roles/policies for the Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) authorization systems.
+This article describes how you can use the **Remediation** dashboard in Microsoft Entra Permissions Management to modify roles/policies for the Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) authorization systems.
> [!NOTE] > To view the **Remediation** tab, you must have **Viewer**, **Controller**, or **Administrator** permissions. To make changes on this tab, you must have **Controller** or **Administrator** permissions. If you don't have these permissions, contact your system administrator.
active-directory How To Notifications Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-notifications-rule.md
Title: View notification settings for a rule in the Autopilot dashboard in Permissions Management
-description: How to view notification settings for a rule in the Autopilot dashboard in Permissions Management.
+description: How to view notification settings for a rule in the Autopilot dashboard in Microsoft Entra Permissions Management.
Previously updated : 02/23/2022 Last updated : 06/16/2023 # View notification settings for a rule in the Autopilot dashboard
-This article describes how to view notification settings for a rule in the Permissions Management **Autopilot** dashboard.
+This article describes how to view notification settings for a rule in the Microsoft Entra Permissions Management **Autopilot** dashboard.
> [!NOTE] > Only users with **Administrator** permissions can view and make changes on the Autopilot tab. If you don't have these permissions, contact your system administrator.
active-directory How To Recommendations Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-recommendations-rule.md
Title: Generate, view, and apply rule recommendations in the Autopilot dashboard in Permissions Management
-description: How to generate, view, and apply rule recommendations in the Autopilot dashboard in Permissions Management.
+ Title: Generate, view, and apply rule recommendations in the Microsoft Entra Permissions Management Autopilot dashboard
+description: How to generate, view, and apply rule recommendations in the Microsoft Entra Permissions Management Autopilot dashboard.
Previously updated : 02/23/2022 Last updated : 06/16/2023
active-directory How To Revoke Task Readonly Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-revoke-task-readonly-status.md
Title: Revoke access to high-risk and unused tasks or assign read-only status for Microsoft Azure and Google Cloud Platform (GCP) identities in the Remediation dashboard in Permissions Management
-description: How to revoke access to high-risk and unused tasks or assign read-only status for Microsoft Azure and Google Cloud Platform (GCP) identities in the Remediation dashboard in Permissions Management.
+ Title: Revoke access to high-risk and unused tasks or assign read-only status for Microsoft Azure and Google Cloud Platform (GCP) identities in the Remediation dashboard
+description: How to revoke access to high-risk and unused tasks or assign read-only status for Microsoft Azure and Google Cloud Platform (GCP) identities in the Remediation dashboard.
Previously updated : 02/23/2022 Last updated : 06/16/2023
This article describes how you can revoke high-risk and unused tasks or assign r
- For information on how to create a role/policy, see [Create a role/policy](how-to-create-role-policy.md). - For information on how to clone a role/policy, see [Clone a role/policy](how-to-clone-role-policy.md). - For information on how to delete a role/policy, see [Delete a role/policy](how-to-delete-role-policy.md).-- For information on how to modify a role/policy, see Modify a role/policy](how-to-modify-role-policy.md).
+- For information on how to modify a role/policy, see [Modify a role/policy](how-to-modify-role-policy.md).
- To view information about roles/policies, see [View information about roles/policies](how-to-view-role-policy.md). - For information on how to attach and detach permissions for AWS identities, see [Attach and detach policies for AWS identities](how-to-attach-detach-permissions.md). - For information on how to add and remove roles and tasks for Azure and GCP identities, see [Add and remove roles and tasks for Azure and GCP identities](how-to-attach-detach-permissions.md).
active-directory How To View Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-view-role-policy.md
Title: View information about roles/ policies in the Remediation dashboard in Permissions Management
-description: How to view and filter information about roles/ policies in the Remediation dashboard in Permissions Management.
+ Title: View information about roles/policies in the Remediation dashboard
+description: How to view and filter information about roles/policies in the Microsoft Entra Permissions Management Remediation dashboard.
Previously updated : 02/23/2022 Last updated : 06/16/2023
The **Remediation** dashboard in Permissions Management enables system administr
- The **Role Policy Details** report in CSV format. - The **Reports** dashboard where you can configure how and when you can automatically receive reports. --- ## Filter information about roles/policies 1. On the Permissions Management home page, select the **Remediation** dashboard, and then select the **Role/Policies** tab.
The **Remediation** dashboard in Permissions Management enables system administr
- For information on how to create a role/policy, see [Create a role/policy](how-to-create-role-policy.md). - For information on how to clone a role/policy, see [Clone a role/policy](how-to-clone-role-policy.md). - For information on how to delete a role/policy, see [Delete a role/policy](how-to-delete-role-policy.md).-- For information on how to modify a role/policy, see Modify a role/policy](how-to-modify-role-policy.md).
+- For information on how to modify a role/policy, see [Modify a role/policy](how-to-modify-role-policy.md).
- For information on how to attach and detach permissions AWS identities, see [Attach and detach policies for AWS identities](how-to-attach-detach-permissions.md). - For information on how to revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities, see [Revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities](how-to-revoke-task-readonly-status.md) - For information on how to create or approve a request for permissions, see [Create or approve a request for permissions](how-to-create-approve-privilege-request.md).
active-directory Integration Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/integration-api.md
Previously updated : 02/23/2022 Last updated : 06/16/2023
The **Integrations** dashboard displays the authorization systems available to y
1. Select an authorization system tile to view the following integration information: 1. To find out more about the Permissions Management API, select **Permissions Management API**, and then select documentation.
- <!Add Link: [documentation](https://developer.cloudknox.io/)>
1. To view information about service accounts, select **Integration**: - **Email**: Lists the email address of the user who created the integration.
The **Integrations** dashboard displays the authorization systems available to y
- **Action (after the key rotation period ends)**: Select **Disable Action Key** or **No Action**. 5. Click **Save**.-
-<!## Next steps>
-
-<!View integrated authorization systems](product-integrations)>
-<![Installation overview](installation.md)>
-<![Sign up and deploy FortSentry registration](fortsentry-registration.md)>
active-directory Multi Cloud Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/multi-cloud-glossary.md
Title: Permissions Management glossary
-description: Permissions Management glossary
+ Title: Microsoft Entra Permissions Management glossary
+description: Microsoft Entra Permissions Management glossary
Previously updated : 02/23/2022 Last updated : 06/16/2023
-# The Permissions Management glossary
+# The Microsoft Entra Permissions Management glossary
-This glossary provides a list of some of the commonly used cloud terms in Permissions Management. These terms will help Permissions Management users navigate through cloud-specific terms and cloud-generic terms.
+This glossary provides a list of some of the commonly used cloud terms in Microsoft Entra Permissions Management. These terms help Permissions Management users navigate through cloud-specific terms and cloud-generic terms.
-## Commonly-used acronyms and terms
+## Commonly used acronyms and terms
| Term | Definition | |--|--|
This glossary provides a list of some of the commonly used cloud terms in Permis
| JIT | Just in Time access can be seen as a way to enforce the principle of least privilege to ensure users and non-human identities are given the minimum level of privileges. It also ensures that privileged activities are conducted in accordance with an organization's Identity Access Management (IAM), IT Service Management (ITSM), and Privileged Access Management (PAM) policies, with its entitlements and workflows. JIT access strategy enables organizations to maintain a full audit trail of privileged activities so they can easily identify who or what gained access to which systems, what they did at what time, and for how long. | | Least privilege | Ensures that users only gain access to the specific tools they need to complete a task. | | Multi-tenant | A single instance of the software and its supporting infrastructure serves multiple customers. Each customer shares the software application and also shares a single database. |
-| OIDC | OpenID Connect. An authentication protocol that verifies user identity when a user is trying to access a protected HTTPs end point. OIDC is an evolutionary development of ideas implemented earlier in OAuth. |
+| OIDC | OpenID Connect. An authentication protocol that verifies user identity when a user is trying to access a protected HTTPS end point. OIDC is an evolutionary development of ideas implemented earlier in OAuth. |
| PAM | Privileged access management. Tools that offer one or more of these features: discover, manage, and govern privileged accounts on multiple systems and applications; control access to privileged accounts, including shared and emergency access; randomize, manage, and vault credentials (password, keys, etc.) for administrative, service, and application accounts; single sign-on (SSO) for privileged access to prevent credentials from being revealed; control, filter, and orchestrate privileged commands, actions, and tasks; manage and broker credentials to applications, services, and devices to avoid exposure; and monitor, record, audit, and analyze privileged access, sessions, and actions. | | PASM | Privileged accounts are protected by vaulting their credentials. Access to those accounts is then brokered for human users, services, and applications. Privileged session management (PSM) functions establish sessions with possible credential injection and full session recording. Passwords and other credentials for privileged accounts are actively managed and changed at definable intervals or upon the occurrence of specific events. PASM solutions may also provide application-to-application password management (AAPM) and zero-install remote privileged access features for IT staff and third parties that don't require a VPN. | | PEDM | Specific privileges are granted on the managed system by host-based agents to logged-in users. PEDM tools provide host-based command control (filtering); application allow, deny, and isolate controls; and/or privilege elevation. The latter is in the form of allowing particular commands to be run with a higher level of privileges. PEDM tools execute on the actual operating system at the kernel or process level. Command control through protocol filtering is explicitly excluded from this definition because the point of control is less reliable. PEDM tools may also provide file integrity monitoring features. |
This glossary provides a list of some of the commonly used cloud terms in Permis
## Next steps -- For an overview of Permissions Management, see [What's Permissions Management?](overview.md).
+- For an overview of Permissions Management, see [What's Microsoft Entra Permissions Management?](overview.md).
active-directory Onboard Add Account After Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-add-account-after-onboarding.md
Previously updated : 02/23/2022 Last updated : 06/16/2023
active-directory Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-aws.md
Title: Onboard an Amazon Web Services (AWS) account on Permissions Management
-description: How to onboard an Amazon Web Services (AWS) account on Permissions Management.
+ Title: Onboard an Amazon Web Services (AWS) account to Permissions Management
+description: How to onboard an Amazon Web Services (AWS) account to Permissions Management.
Previously updated : 04/20/2022 Last updated : 06/16/2023 # Onboard an Amazon Web Services (AWS) account
-This article describes how to onboard an Amazon Web Services (AWS) account on Permissions Management.
+This article describes how to onboard an Amazon Web Services (AWS) account in Microsoft Entra Permissions Management.
> [!NOTE]
-> A *global administrator* or *root user* (an admin for all authorization system types) can perform the tasks in this article after the global administrator has initially completed the steps provided in [Enable Permissions Management on your Azure Active Directory tenant](onboard-enable-tenant.md).
+> A *global administrator* or *super admin* (an admin for all authorization system types) can perform the tasks in this article after the global administrator has initially completed the steps provided in [Enable Microsoft Entra Permissions Management on your Azure Active Directory tenant](onboard-enable-tenant.md).
## Explanation
There are several moving parts across AWS and Azure, which are required to be co
* An AWS Cross Account role assumed by OIDC role
-<!-- diagram from gargi -->
- ## Onboard an AWS account 1. If the **Data Collectors** dashboard isn't displayed when Permissions Management launches:
active-directory Onboard Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-azure.md
Previously updated : 04/20/2022 Last updated : 06/16/2023
To view status of onboarding after saving the configuration:
- For information on how to onboard a Google Cloud Platform (GCP) project, see [Onboard a Google Cloud Platform (GCP) project](onboard-gcp.md). - For information on how to enable or disable the controller after onboarding is complete, see [Enable or disable the controller](onboard-enable-controller-after-onboarding.md). - For information on how to add an account/subscription/project after onboarding is complete, see [Add an account/subscription/project after onboarding is complete](onboard-add-account-after-onboarding.md).-- For an overview on Permissions Management, see [What's Permissions Management?](overview.md).
+- For an overview on Permissions Management, see [What's Microsoft Entra Permissions Management?](overview.md).
- For information on how to start viewing information about your authorization system in Permissions Management, see [View key statistics and data about your authorization system](ui-dashboard.md).
active-directory Onboard Enable Controller After Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-enable-controller-after-onboarding.md
Previously updated : 02/13/2023 Last updated : 06/16/2023
This article also describes how to enable the controller in Amazon Web Services
## Enable or disable the controller in Azure
-You can enable or disable the controller in Azure at the Subscription level of you Management Group(s).
+You can enable or disable the controller in Azure at the Subscription level of your Management Group(s).
1. From the Azure **Home** page, select **Management groups**. 1. Locate the group for which you want to enable or disable the controller, then select the arrow to expand the group menu and view your subscriptions. Alternatively, you can select the **Total Subscriptions** number listed for your group.
You can enable or disable the controller in Azure at the Subscription level of y
1. Execute the **gcloud auth login**. 1. Follow the instructions displayed on the screen to authorize access to your Google account.
-1. Execute the **sh mciem-workload-identity-pool.sh** to create the workload identity pool, provider, and service account.
-1. Execute the **sh mciem-member-projects.sh** to give Permissions Management permissions to access each of the member projects.
+1. Execute the ``sh mciem-workload-identity-pool.sh`` to create the workload identity pool, provider, and service account.
+1. Execute the ``sh mciem-member-projects.sh`` to give Permissions Management permissions to access each of the member projects.
- If you want to manage permissions through Permissions Management, select **Y** to **Enable controller**. - If you want to onboard your projects in read-only mode, select **N** to **Disable controller**.
-1. Optionally, execute **mciem-enable-gcp-api.sh** to enable all recommended GCP APIs.
+1. Optionally, execute ``mciem-enable-gcp-api.sh`` to enable all recommended GCP APIs.
1. Go to the Permissions Management home page, select **Settings** (the gear icon), and then select the **Data Collectors** subtab. 1. On the **Data Collectors** dashboard, select **GCP**, and then select **Create Configuration**.
active-directory Onboard Enable Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-enable-tenant.md
Title: Enable Permissions Management in your organization
-description: How to enable Permissions Management in your organization.
+ Title: Enable Microsoft Entra Permissions Management in your organization
+description: How to enable Microsoft Entra Permissions Management in your organization.
To enable Permissions Management in your organization:
1. If needed, activate the global administrator role in your Azure AD tenant. 1. In the Azure portal, select **Permissions Management**, and then select the link to purchase a license or begin a trial.
-> [!NOTE]
-> There are two ways to enable a trial or a full product license, self-service and volume licensing.
-> For self-service, navigate to the M365 portal at [https://aka.ms/TryPermissionsManagement](https://aka.ms/TryPermissionsManagement) and purchase licenses or sign up for a free trial. The second way is through Volume Licensing or Enterprise agreements. If your organization falls under a volume license or enterprise agreement scenario, please contact your Microsoft representative.
+
+## Activate a free trial or paid license
+There are two ways to activate a trial or a full product license.
+- The first way is to go to [admin.microsoft.com](https://admin.microsoft.com).
+ - Sign in with *Global Admin* or *Billing Admin* credentials for your tenant.
+ - Go to Setup and sign up for an Entra Permissions Management trial.
+ - For self-service, navigate to the [Microsoft 365 portal](https://aka.ms/TryPermissionsManagement) to sign up for a 45-day free trial or to purchase licenses.
+- The second way is through Volume Licensing or Enterprise agreements. If your organization falls under a volume license or enterprise agreement scenario, contact your Microsoft representative.
Permissions Management launches with the **Data Collectors** dashboard.
Use the **Data Collectors** dashboard in Permissions Management to configure dat
## Next steps -- For an overview of Permissions Management, see [What's Permissions Management?](overview.md)
+- For an overview of Permissions Management, see [What's Microsoft Entra Permissions Management?](overview.md)
- For a list of frequently asked questions (FAQs) about Permissions Management, see [FAQs](faqs.md).-- For information on how to start viewing information about your authorization system in Permissions Management, see [View key statistics and data about your authorization system](ui-dashboard.md).
+- To start viewing information about your authorization system in Permissions Management, see [View key statistics and data about your authorization system](ui-dashboard.md).
active-directory Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-gcp.md
Previously updated : 04/20/2022 Last updated : 06/16/2023
There are several moving parts across GCP and Azure, which are required to be co
1. On the **Permissions Management Onboarding - Azure AD OIDC App Creation** page, enter the **OIDC Azure App Name**.
- This app is used to set up an OpenID Connect (OIDC) connection to your GCP project. OIDC is an interoperable authentication protocol based on the OAuth 2.0 family of specifications. The scripts generated will create the app of this specified name in your Azure AD tenant with the right configuration.
+ This app is used to set up an OpenID Connect (OIDC) connection to your GCP project. OIDC is an interoperable authentication protocol based on the OAuth 2.0 family of specifications. The generated scripts create the app with the specified name in your Azure AD tenant with the right configuration.
1. To create the app registration, copy the script and run it in your command-line app.
There are several moving parts across GCP and Azure, which are required to be co
> 1. Return to the Permissions Management window, and in the **Permissions Management Onboarding - Azure AD OIDC App Creation**, select **Next**. ### 2. Set up a GCP OIDC project.
-1. In the **Permissions Management Onboarding - GCP OIDC Account Details & IDP Access** page, enter the **OIDC Project Number** and **OIDC Project ID**of the GCP project in which the OIDC provider and pool will be created. You can change the role name to your requirements.
+1. In the **Permissions Management Onboarding - GCP OIDC Account Details & IDP Access** page, enter the **OIDC Project Number** and **OIDC Project ID** of the GCP project in which the OIDC provider and pool are created. You can change the role name to suit your requirements.
> [!NOTE] > You can find the **Project number** and **Project ID** of your GCP project on the GCP **Dashboard** page of your project in the **Project info** panel.
There are several moving parts across GCP and Azure, which are required to be co
Optionally, specify **G-Suite IDP Secret Name** and **G-Suite IDP User Email** to enable G-Suite integration.
-1. You can either download and run the script at this point or you can run it in the Google Cloud Shell.
-1. Select **Next** after sucessfully running the setup script.
+1. You can either download and run the script at this point, or run it in the Google Cloud Shell.
-Choose from 3 options to manage GCP projects.
+1. Select **Next** after successfully running the setup script.
+
+Choose from three options to manage GCP projects.
#### Option 1: Automatically manage
To enable controller mode 'On' for any projects, add following roles to the spec
3. Select **Next**. #### Option 2: Enter authorization systems
-You have the ability to specify only certain GCP member projects to manage and monitor with MEPM (up to 100 per collector). Follow the steps below to configure these GCP member projects to be monitored:
+You can specify only certain GCP member projects to manage and monitor with MEPM (up to 100 per collector). Follow these steps to configure the GCP member projects to be monitored:
1. In the **Permissions Management Onboarding - GCP Project Ids** page, enter the **Project IDs**. You can enter up to 100 comma-separated GCP project IDs.
This option detects all projects that are accessible by the Cloud Infrastructure
## Next steps -- For information on how to onboard an Amazon Web Services (AWS) account, see [Onboard an Amazon Web Services (AWS) account](onboard-aws.md).-- For information on how to onboard a Microsoft Azure subscription, see [Onboard a Microsoft Azure subscription](onboard-azure.md).-- For information on how to enable or disable the controller after onboarding is complete, see [Enable or disable the controller](onboard-enable-controller-after-onboarding.md).-- For information on how to add an account/subscription/project after onboarding is complete, see [Add an account/subscription/project after onboarding is complete](onboard-add-account-after-onboarding.md).
+- To onboard an Amazon Web Services (AWS) account, see [Onboard an Amazon Web Services (AWS) account](onboard-aws.md).
+- To onboard a Microsoft Azure subscription, see [Onboard a Microsoft Azure subscription](onboard-azure.md).
+- To enable or disable the controller after onboarding is complete, see [Enable or disable the controller](onboard-enable-controller-after-onboarding.md).
+- To add an account/subscription/project after onboarding is complete, see [Add an account/subscription/project after onboarding is complete](onboard-add-account-after-onboarding.md).
active-directory Partner List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/partner-list.md
Title: Microsoft Entra Permissions Management partners
-description: View current Microsoft Permissions Management partners and their websites.
+description: View current Microsoft Entra Permissions Management partners and their websites.
- Previously updated : 04/24/2023+ Last updated : 06/16/2023 # Microsoft Entra Permissions Management partners
-Microsoft verified partners can help you onboard Microsoft Entra Permissions Management and run a risk assessment across your entire multicloud environment.
+Microsoft verified partners can help you onboard Microsoft Entra Permissions Management (Permissions Management) and run a risk assessment across your entire multicloud environment.
## Benefits of working with Microsoft verified partners
Select a partner from the list provided to begin your Permissions Management ris
If you're a partner and would like to be considered for the Entra Permissions Management partner list, submit a [request](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbRzw7upfFlddNq4ce6ckvEvhUNzE3V0RQNkpPWjhDSU5FNkk1U1RWUDdDTC4u).
-| EPM partner | Website |
+| Permissions Management partner | Website |
|:-|:--| |![Screenshot of edgile logo.](media/partner-list/partner-edgile.png) | [Quick Start Programs for Microsoft Cloud Security](https://edgile.com/information-security/quick-start-programs-for-microsoft-cloud-security/) | ![Screenshot of an Invoke logo.](media/partner-list/partner-invoke.png) | [Invoke's Entra PM multicloud risk assessment](https://www.invokellc.com/offers/microsoft-entra-permissions-management-multi-cloud-risk-assessment)|
If you're a partner and would like to be considered for the Entra Permissions Ma
| ![Screenshot of Mazzy Technologies logo.](media/partner-list/partner-mazzy-technologies.png) | [Mazzy Technologies Identity](https://mazzytechnologies.com/identity%3A-microsoft-entra) ## Next steps
-* For an overview of Permissions Management, see [What's Permissions Management?](overview.md)
+* For an overview of Permissions Management, see [What's Microsoft Entra Permissions Management?](overview.md)
active-directory Permissions Management Trial User Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/permissions-management-trial-user-guide.md
Title: Trial User Guide - Microsoft Entra Permissions Management
-description: How to get started with your Entra Permissions free trial
+description: How to get started with your Microsoft Entra Permissions Management free trial
Previously updated : 09/01/2022 Last updated : 06/16/2023
active-directory Product Account Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-account-explorer.md
Previously updated : 02/23/2022 Last updated : 06/16/2023
To export the data in comma-separated values (CSV) file format, select **Export*
- To view the **Role summary** for EC2 instances and Lambda functions, select the "eye" icon to the right of the identity name. - To view a graph of how the identity can access the specified account and through which role(s), select the identity name.
-1. The **Info** tab displays the **Privilege creep index** and **Service control policy (SCP)** information about the account.
+1. The **Dashboard** tab displays the **Permissions Creep Index (PCI)** and **Identity findings** information about the account.
-For more information about the **Privilege creep index** and SCP information, see [View key statistics and data about your authorization system](ui-dashboard.md).
+## Next steps
+
+For more information about the **Permissions Creep Index (PCI)** and SCP information, see [View key statistics and data about your authorization system](ui-dashboard.md).
active-directory Active Directory V2 Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-v2-protocols.md
Four parties are generally involved in an OAuth 2.0 and OpenID Connect authentic
![Diagram showing the OAuth 2.0 roles](./media/v2-flows/protocols-roles.svg)
-* **Authorization server** - The identity platform is the authorization server. Also called an *identity provider* or *IdP*, it securely handles the end-user's information, their access, and the trust relationships between the parties in the auth flow. The authorization server issues the security tokens your apps and APIs use for granting, denying, or revoking access to resources (authorization) after the user has signed in (authenticated).
+* **Authorization server** - The Microsoft identity platform is the authorization server. Also called an *identity provider* or *IdP*, it securely handles the end-user's information, their access, and the trust relationships between the parties in the auth flow. The authorization server issues the security tokens your apps and APIs use for granting, denying, or revoking access to resources (authorization) after the user has signed in (authenticated).
* **Client** - The client in an OAuth exchange is the application requesting access to a protected resource. The client could be a web app running on a server, a single-page web app running in a user's web browser, or a web API that calls another web API. You'll often see the client referred to as *client application*, *application*, or *app*.
Four parties are generally involved in an OAuth 2.0 and OpenID Connect authentic
## Tokens
-The parties in an authentication flow use **bearer tokens** to assure, verify, and authenticate a principal (user, host, or service) and to grant or deny access to protected resources (authorization). Bearer tokens in the identity platform are formatted as [JSON Web Tokens](https://tools.ietf.org/html/rfc7519) (JWT).
+The parties in an authentication flow use **bearer tokens** to assure, verify, and authenticate a principal (user, host, or service) and to grant or deny access to protected resources (authorization). Bearer tokens in the Microsoft identity platform are formatted as [JSON Web Tokens](https://tools.ietf.org/html/rfc7519) (JWT).
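Because these tokens are JWTs, you can decode them during development to inspect the claims they carry. The following C# sketch is an illustration added here (it is not part of the original article); it uses `JwtSecurityTokenHandler` from the `System.IdentityModel.Tokens.Jwt` package to decode a token without validating it. Production code should validate tokens with proper middleware rather than trusting decoded claims.

```csharp
using System;
using System.IdentityModel.Tokens.Jwt;

class JwtInspector
{
    static void Main()
    {
        // Placeholder: supply a real bearer token at runtime, for example via an environment variable.
        string rawToken = Environment.GetEnvironmentVariable("SAMPLE_JWT")
            ?? throw new InvalidOperationException("Set the SAMPLE_JWT environment variable.");

        // ReadJwtToken only decodes the token; it does NOT validate the signature,
        // issuer, audience, or expiry.
        JwtSecurityToken jwt = new JwtSecurityTokenHandler().ReadJwtToken(rawToken);

        Console.WriteLine($"Issuer:   {jwt.Issuer}");
        Console.WriteLine($"Audience: {string.Join(", ", jwt.Audiences)}");
        foreach (var claim in jwt.Claims)
        {
            Console.WriteLine($"{claim.Type}: {claim.Value}");
        }
    }
}
```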
Three types of bearer tokens are used by the identity platform as *security tokens*:
Three types of bearer tokens are used by the identity platform as *security toke
## App registration
-Your client app needs a way to trust the security tokens issued to it by the identity platform. The first step in establishing trust is by [registering your app](quickstart-register-app.md). When you register your app, the identity platform automatically assigns it some values, while others you configure based on the application's type.
+Your client app needs a way to trust the security tokens issued to it by the Microsoft identity platform. The first step in establishing trust is by [registering your app](quickstart-register-app.md). When you register your app, the identity platform automatically assigns it some values, while others you configure based on the application's type.
Two of the most commonly referenced app registration settings are:
Your app's registration also holds information about the authentication and auth
## Endpoints
-The identity platform offers authentication and authorization services using standards-compliant implementations of OAuth 2.0 and OpenID Connect (OIDC) 1.0. Standards-compliant authorization servers like the identity platform provide a set of HTTP endpoints for use by the parties in an auth flow to execute the flow.
+The Microsoft identity platform offers authentication and authorization services using standards-compliant implementations of OAuth 2.0 and OpenID Connect (OIDC) 1.0. Standards-compliant authorization servers like the identity platform provide a set of HTTP endpoints for use by the parties in an auth flow to execute the flow.
The endpoint URIs for your app are generated automatically when you register or configure your app. The endpoints you use in your app's code depend on the application's type and the identities (account types) it should support.
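Each tenant (and the `common`, `organizations`, and `consumers` aliases) also publishes its endpoints in a standard OpenID Connect discovery document. The following C# sketch is an illustration added here, not part of the original article; it fetches that metadata and prints the endpoints an app would use during an auth flow:

```csharp
using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

class MetadataProbe
{
    static async Task Main()
    {
        // "common", "organizations", "consumers", or a tenant ID/domain all work here.
        string tenant = "common";
        string metadataUrl = $"https://login.microsoftonline.com/{tenant}/v2.0/.well-known/openid-configuration";

        using var http = new HttpClient();
        string json = await http.GetStringAsync(metadataUrl);

        // The discovery document lists the endpoints used by the parties in an auth flow.
        using var doc = JsonDocument.Parse(json);
        Console.WriteLine("authorization_endpoint: " + doc.RootElement.GetProperty("authorization_endpoint").GetString());
        Console.WriteLine("token_endpoint:         " + doc.RootElement.GetProperty("token_endpoint").GetString());
        Console.WriteLine("issuer:                 " + doc.RootElement.GetProperty("issuer").GetString());
    }
}
```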
Next, learn about the OAuth 2.0 authentication flows used by each application ty
* [Authentication flows and application scenarios](authentication-flows-app-scenarios.md) * [Microsoft Authentication Library (MSAL)](msal-overview.md)
-**We strongly advise against crafting your own library or raw HTTP calls to execute authentication flows.** A [Microsoft Authentication Library](reference-v2-libraries.md) is safer and easier. However, if your scenario prevents you from using our libraries or you'd just like to learn more about the identity platform's implementation, we have protocol reference:
+**We strongly advise against crafting your own library or raw HTTP calls to execute authentication flows.** A [Microsoft Authentication Library](reference-v2-libraries.md) is safer and easier. However, if your scenario prevents you from using our libraries or you'd just like to learn more about the Microsoft identity platform's implementation, we have protocol references:
* [Authorization code grant flow](v2-oauth2-auth-code-flow.md) - Single-page apps (SPA), mobile apps, native (desktop) applications * [Client credentials flow](v2-oauth2-client-creds-grant-flow.md) - Server-side processes, scripts, daemons
active-directory App Sign In Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/app-sign-in-flow.md
For other topics covering authentication and authorization basics:
* See [Authentication vs. authorization](authentication-vs-authorization.md) to learn about the basic concepts of authentication and authorization in Microsoft identity platform. * See [Security tokens](security-tokens.md) to learn how access tokens, refresh tokens, and ID tokens are used in authentication and authorization. * See [Application model](application-model.md) to learn about the process of registering your application so it can integrate with Microsoft identity platform.
+* See [Secure applications and APIs by validating claims](./claims-validation.md) to learn about how to securely use token claims for authorization logic in your applications.
To learn more about app sign-in flow:
active-directory Application Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/application-model.md
For more information about authentication and authorization in the Microsoft ide
* To learn about the basic concepts of authentication and authorization, see [Authentication vs. authorization](authentication-vs-authorization.md). * To learn how access tokens, refresh tokens, and ID tokens are used in authentication and authorization, see [Security tokens](security-tokens.md). * To learn about the sign-in flow of web, desktop, and mobile apps, see [App sign-in flow](app-sign-in-flow.md).
+* To learn about proper authorization using token claims, see [Secure applications and APIs by validating claims](./claims-validation.md)
For more information about the application model, see the following articles:
active-directory Authentication Vs Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/authentication-vs-authorization.md
For other topics that cover authentication and authorization basics:
* To learn how access tokens, refresh tokens, and ID tokens are used in authorization and authentication, see [Security tokens](security-tokens.md). * To learn about the process of registering your application so it can integrate with the Microsoft identity platform, see [Application model](application-model.md).
+* To learn about proper authorization using token claims, see [Secure applications and APIs by validating claims](./claims-validation.md)
active-directory Authorization Basics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/authorization-basics.md
One method for achieving ABAC with Azure Active Directory is using [dynamic grou
Authorization logic is often implemented within the applications or solutions where access control is required. In many cases, application development platforms offer middleware or other API solutions that simplify the implementation of authorization. Examples include use of the [AuthorizeAttribute](/aspnet/core/security/authorization/simple?view=aspnetcore-5.0&preserve-view=true) in ASP.NET or [Route Guards](./scenario-spa-sign-in.md?tabs=angular2#sign-in-with-a-pop-up-window) in Angular.
-For authorization approaches that rely on information about the authenticated entity, an application evaluates information exchanged during authentication. For example, by using the information that was provided within a [security token](./security-tokens.md). For information not contained in a security token, an application might make extra calls to external resources.
+For authorization approaches that rely on information about the authenticated entity, an application evaluates information exchanged during authentication, for example, by using the information that was provided within a [security token](./security-tokens.md). If you plan to use information from tokens for authorization, we recommend following [this guidance on properly securing apps through claims validation](./claims-validation.md). For information not contained in a security token, an application might make extra calls to external resources.
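As an illustrative sketch (not the linked guidance verbatim), an ASP.NET Core web API might gate a request on the delegated scopes (`scp`) or app roles (`roles`) carried in the already-validated token; the `Data.Read` scope and `Data.Read.All` app role names are hypothetical:

```csharp
using System.Linq;
using System.Security.Claims;

static class ClaimsAuthorization
{
    // Checks that the caller's validated token carries either a required delegated
    // scope (scp) or an app role (roles) before the API does any work.
    public static bool CallerCanReadData(ClaimsPrincipal caller)
    {
        // Delegated (user) tokens carry space-separated scopes in the "scp" claim.
        string scopes = caller.FindFirst("scp")?.Value
            ?? caller.FindFirst("http://schemas.microsoft.com/identity/claims/scope")?.Value
            ?? string.Empty;
        bool hasScope = scopes
            .Split(' ', System.StringSplitOptions.RemoveEmptyEntries)
            .Contains("Data.Read");

        // App-only tokens (and role-assigned users) carry app roles in "roles" claims.
        bool hasAppRole = caller.Claims.Any(c => c.Type == "roles" && c.Value == "Data.Read.All");

        return hasScope || hasAppRole;
    }
}
```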
It's not strictly necessary for developers to embed authorization logic entirely within their applications. Instead, dedicated authorization services can be used to centralize authorization implementation and management.
It's not strictly necessary for developers to embed authorization logic entirely
- To learn about custom role-based access control implementation in applications, see [Role-based access control for application developers](./custom-rbac-for-developers.md). - To learn about the process of registering your application so it can integrate with the Microsoft identity platform, see [Application model](./application-model.md). - For an example of configuring simple authentication-based authorization, see [Configure your App Service or Azure Functions app to use Azure AD login](../../app-service/configure-authentication-provider-aad.md).
+- To learn about proper authorization using token claims, see [Secure applications and APIs by validating claims](./claims-validation.md)
active-directory Custom Rbac For Developers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/custom-rbac-for-developers.md
Although either app roles or groups can be used for authorization, key differenc
## Next steps - [Azure Identity Management and access control security best practices](../../security/fundamentals/identity-management-best-practices.md)
+- To learn about proper authorization using token claims, see [Secure applications and APIs by validating claims](./claims-validation.md)
active-directory Howto Add App Roles In Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-add-app-roles-in-apps.md
Another approach is to use Azure Active Directory (Azure AD) groups and group cl
## Declare roles for an application
-You define app roles by using the [Azure portal](https://portal.azure.com) during the [app registration process](quickstart-register-app.md). App roles are defined on an application registration representing a service, app or API. When a user signs in to the application, Azure AD emits a `roles` claim for each role that the user or service principal has been granted. This can be used to implement claim-based authorization. App roles can be assigned [to a user or a group of users](../manage-apps/add-application-portal-assign-users.md). App roles can also be assigned to the service principal for another application, or [to the service principal for a managed identity](../managed-identities-azure-resources/how-to-assign-app-role-managed-identity-powershell.md).
+You define app roles by using the [Azure portal](https://portal.azure.com) during the [app registration process](quickstart-register-app.md). App roles are defined on an application registration representing a service, app or API. When a user signs in to the application, Azure AD emits a `roles` claim for each role that the user or service principal has been granted. This can be used to implement [claim-based authorization](./claims-validation.md). App roles can be assigned [to a user or a group of users](../manage-apps/add-application-portal-assign-users.md). App roles can also be assigned to the service principal for another application, or [to the service principal for a managed identity](../managed-identities-azure-resources/how-to-assign-app-role-managed-identity-powershell.md).
Currently, if you add a service principal to a group, and then assign an app role to that group, Azure AD doesn't add the `roles` claim to tokens it issues.
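As an illustration (the `Orders.Read` role name is hypothetical), an ASP.NET Core controller can authorize on the emitted `roles` claim, provided the authentication middleware treats `roles` as the role claim type, which Microsoft.Identity.Web configures by default:

```csharp
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/orders")]
public class OrdersController : ControllerBase
{
    // Azure AD emits a "roles" claim for each app role granted to the signed-in user
    // or calling service principal; [Authorize(Roles = ...)] checks those claims.
    [HttpGet]
    [Authorize(Roles = "Orders.Read")]
    public IActionResult GetOrders()
    {
        // User.IsInRole performs the same check imperatively.
        bool canRead = User.IsInRole("Orders.Read");
        return Ok(new { canRead });
    }
}
```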
active-directory Id Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/id-tokens.md
The authorization server issues ID tokens that contain claims that carry information about the user. They can be sent alongside or instead of an access token. Information in ID tokens enables the client to verify that a user is who they claim to be.
-Third-party applications are intended to understand ID tokens. ID tokens shouldn't be used for authorization purposes. Access tokens are used for authorization. The claims provided by ID tokens can be used for UX inside your application, as keys in a database, and providing access to the client application. For more information about the claims used in an ID token, see the [ID token claims reference](id-token-claims-reference.md).
+Third-party applications are intended to understand ID tokens. ID tokens shouldn't be used for authorization purposes. Access tokens are used for authorization. The claims provided by ID tokens can be used for UX inside your application, as keys in a database, and providing access to the client application. For more information about the claims used in an ID token, see the [ID token claims reference](id-token-claims-reference.md). For more information about claims-based authorization, see [Secure applications and APIs by validating claims](./claims-validation.md).
## Token formats
active-directory Microsoft Identity Web https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/microsoft-identity-web.md
Microsoft Identity Web is available on NuGet as a set of packages that provide m
- [Microsoft.Identity.Web](https://www.nuget.org/packages/Microsoft.Identity.Web) - The main package. Required by all apps that use Microsoft Identity Web. - [Microsoft.Identity.Web.UI](https://www.nuget.org/packages/Microsoft.Identity.Web.UI) - Optional. Adds UI for user sign-in and sign-out and an associated controller for web apps.-- [Microsoft.Identity.Web.MicrosoftGraph](https://www.nuget.org/packages/Microsoft.Identity.Web.MicrosoftGraph) - Optional. Provides simplified interaction with the Microsoft Graph API.-- [Microsoft.Identity.Web.MicrosoftGraphBeta](https://www.nuget.org/packages/Microsoft.Identity.Web.MicrosoftGraphBeta) - Optional. Provides simplified interaction with the Microsoft Graph API [beta endpoint](/graph/api/overview?view=graph-rest-beta&preserve-view=true).
+- [Microsoft.Identity.Web.GraphServiceClient](https://www.nuget.org/packages/Microsoft.Identity.Web.GraphServiceClient) - Optional. Provides simplified interaction with the Microsoft Graph API.
+- [Microsoft.Identity.Web.GraphServiceClientBeta](https://www.nuget.org/packages/Microsoft.Identity.Web.GraphServiceClientBeta) - Optional. Provides simplified interaction with the Microsoft Graph API [beta endpoint](/graph/api/overview?view=graph-rest-beta&preserve-view=true).
## Install by using a Visual Studio project template
active-directory Migrate Off Email Claim Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/migrate-off-email-claim-authorization.md
+
+ Title: Migrate away from using email claims for user identification or authorization
+description: Learn how to migrate your application away from using insecure claims, such as email, for authorization purposes.
+++++++ Last updated : 05/11/2023++++++
+# Migrate away from using email claims for user identification or authorization
+
+This article provides guidance to developers whose applications currently use an insecure pattern in which the email claim is used for authorization, which can lead to full account takeover by another user. Continue reading to learn whether your application is impacted and the steps for remediation.
+
+## How do I know if my application is impacted?
+
+Microsoft recommends reviewing application source code and determining whether the following patterns are present:
+
+- A mutable claim, such as `email`, is used for the purposes of uniquely identifying a user
+- A mutable claim, such as `email`, is used for the purposes of authorizing a user's access to resources
+
+These patterns are considered insecure, as users without a provisioned mailbox can have any email address set for their Mail (Primary SMTP) attribute. **This attribute is not guaranteed to come from a verified email address**. When an email claim with an unverified domain owner is used for authorization, any user without a provisioned mailbox has the potential to gain unauthorized access by changing their Mail attribute to impersonate another user.
+
+An email is considered to be domain-owner verified if:
+
+- The domain belongs to the tenant where the user account resides, and the tenant admin has done verification of the domain
+- The email is from a Microsoft Account (MSA)
+- The email is from a Google account
+- The email was used for authentication using the one-time passcode (OTP) flow
+
+It should also be noted that Facebook and SAML/WS-Fed accounts don't have verified domains.
+
+This risk of unauthorized access has only been found in multi-tenant apps, as a user from one tenant could escalate their privileges to access resources from another tenant through modification of their Mail attribute.
+
+## How do I protect my application immediately?
+
+To secure applications from mistakes with unverified email addresses, all new multi-tenant applications are automatically opted in to a new default behavior that removes email addresses with unverified domain owners from tokens as of June 2023. This behavior isn't enabled for single-tenant applications, or for multi-tenant applications with previous sign-in activity that used domain-owner-unverified email addresses.
+
+Depending on your scenario, you may determine that your application's tokens should continue receiving unverified emails. While not recommended for most applications, you may disable the default behavior by setting the `removeUnverifiedEmailClaim` property in the [Authentication Behaviors Microsoft Graph API](/graph/api/resources/authenticationbehaviors).
+
+By setting `removeUnverifiedEmailClaim` to `false`, your application receives `email` claims that are potentially unverified, which subjects users to account takeover risk. If you're disabling this behavior to avoid breaking user login flows, it's highly recommended that you migrate to a uniquely identifying token claim mapping as soon as possible, as described in the following guidance.
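If you do opt out, the setting is changed on the application object through the Authentication Behaviors API. The following C# sketch is illustrative only: the beta endpoint shape and payload used here are assumptions, so confirm the exact URL and schema in the linked API reference before using it.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class AuthenticationBehaviorsPatch
{
    static async Task Main()
    {
        // Placeholders: a Microsoft Graph access token with sufficient application
        // permissions, and the object ID of the app registration to update.
        string graphToken = Environment.GetEnvironmentVariable("GRAPH_TOKEN")
            ?? throw new InvalidOperationException("Set the GRAPH_TOKEN environment variable.");
        string appObjectId = "00000000-0000-0000-0000-000000000000";

        // Assumed beta endpoint for the authenticationBehaviors resource; check the
        // Authentication Behaviors API reference for the current URL and schema.
        string url = $"https://graph.microsoft.com/beta/applications/{appObjectId}/authenticationBehaviors";
        string body = "{ \"removeUnverifiedEmailClaim\": false }";

        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", graphToken);

        using var request = new HttpRequestMessage(new HttpMethod("PATCH"), url)
        {
            Content = new StringContent(body, Encoding.UTF8, "application/json")
        };
        HttpResponseMessage response = await http.SendAsync(request);
        Console.WriteLine($"{(int)response.StatusCode} {response.ReasonPhrase}");
    }
}
```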
+
+## Identifying insecure configurations and performing database migration
+
+You should never use mutable claims (such as `email`, `preferred_username`, etc.) as identifiers to perform authorization checks or index users in a database. These values are reusable and could expose your application to privilege escalation attacks.
+
+The following pseudocode sample helps illustrate the insecure pattern of user identification / authorization:
+
+```
+ // Your relying party (RP) using the insecure email claim for user identification (or authorization)
+ MyRPUsesInsecurePattern()
+ {
+ // grab data for the user based on the email (or other mutable) attribute
+ data = GetUserData(token.email)
+
+ // Create new record if no data present (This is the anti-pattern!)
+ if (data == null)
+ {
+ data = WriteNewRecords(token.email)
+ }
+
+ insecureAccess = data.show // this is how an unverified user can escalate their privileges via an arbitrarily set email
+ }
+```
+
+Once you've determined that your application is relying on this insecure attribute, you need to update business logic to reindex users on a globally unique identifier (GUID).
+
+Multi-tenant applications should index on a mapping of two uniquely identifying claims, `tid` + `oid`. This segments tenants by `tid` and users by `oid`.
+
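A minimal C# sketch of that mapping, assuming the application reads the validated token's claims from a `ClaimsPrincipal` (both the short `tid`/`oid` claim names and their long URI forms are checked, since either can appear depending on claim mapping):

```csharp
using System.Security.Claims;

static class UserKey
{
    // Builds the stable, non-reusable key "{tenantId}.{objectId}" used to index users.
    public static string FromClaims(ClaimsPrincipal user)
    {
        string tenantId = user.FindFirst("tid")?.Value
            ?? user.FindFirst("http://schemas.microsoft.com/identity/claims/tenantid")?.Value;
        string objectId = user.FindFirst("oid")?.Value
            ?? user.FindFirst("http://schemas.microsoft.com/identity/claims/objectidentifier")?.Value;

        if (string.IsNullOrEmpty(tenantId) || string.IsNullOrEmpty(objectId))
        {
            // Never fall back to mutable claims such as email.
            throw new System.InvalidOperationException("Token is missing the tid or oid claim.");
        }

        return $"{tenantId}.{objectId}";
    }
}
```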
+### Using the `xms_edov` optional claim to determine email verification status and migrate users
+
+To assist developers in the migration process, we have introduced an optional claim, `xms_edov`, a Boolean property that indicates whether or not the email domain owner has been verified.
+
+`xms_edov` can be used to help verify a user's email before migrating their primary key to unique identifiers, such as `oid`. The following pseudocode example illustrates how this claim may be used as part of your migration.
+
+```
+// Verify email and migrate users by performing lookups on tid+oid, email, and xms_edov claims
+MyRPUsesSecurePattern()
+{
+ // grab the data for a user based on the secure tid + oid mapping
+ data = GetUserData(token.tid + token.oid)
+
+ // address case where users are still indexed by email
+ if (data == null)
+ {
+ data = GetUserData(token.email)
+
+ // if still indexed by email, update user's key to GUID
+ if (data != null)
+ {
+
+ // check if email domain owner is verified
+ if (token.xms_edov == false)
+ {
+ yourEmailVerificationLogic()
+ }
+
+ // migrate primary key to unique identifier mapping (tid + oid)
+ data.UpdateKeyTo(token.tid + token.oid)
+ }
+
+ // new user, create new record with the correct (secure) key
+ data = WriteNewRecord(token.sub)
+ }
+
+ secureAccess = data.show
+}
+```
+
+Migrating to a globally unique mapping ensures that each user is primarily indexed with a value that can't be reused, or abused to impersonate another user. Once your users are indexed on a globally unique identifier, you're ready to fix any potential authorization logic that uses the `email` claim.
++
+## Update authorization logic with proper claims validation
+
+If your application uses `email` (or any other mutable claim) for authorization purposes, you should read through [Secure applications and APIs by validating claims](claims-validation.md) and implement the appropriate checks.
++
+## Next steps
+
+- To learn more about using claims-based authorization securely, see [Secure applications and APIs by validating claims](claims-validation.md)
+- For more information about optional claims, see the [optional claims reference](./optional-claims-reference.md)
active-directory Multi Service Web App Access Microsoft Graph As App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/multi-service-web-app-access-microsoft-graph-as-app.md
The [ChainedTokenCredential](/dotnet/api/azure.identity.chainedtokencredential),
To see this code as part of a sample application, see the [sample on GitHub](https://github.com/Azure-Samples/ms-identity-easyauth-dotnet-storage-graphapi/tree/main/3-WebApp-graphapi-managed-identity).
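As a rough sketch of that credential chain (not the article's exact wiring, which uses Microsoft.Identity.Web), a `ChainedTokenCredential` can also be handed directly to the Microsoft Graph SDK; the fallback credential chosen here is an assumption:

```csharp
using System;
using Azure.Identity;
using Microsoft.Graph;

// Try the app's managed identity first; fall back to environment-variable credentials
// (for example, when running locally).
var credential = new ChainedTokenCredential(
    new ManagedIdentityCredential(),
    new EnvironmentCredential());

// App-only Microsoft Graph access uses the .default scope.
var graphClient = new GraphServiceClient(
    credential,
    new[] { "https://graph.microsoft.com/.default" });

var users = await graphClient.Users.GetAsync();
Console.WriteLine($"Found {users?.Value?.Count} users.");
```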
-### Install the Microsoft.Identity.Web.MicrosoftGraph client library package
+### Install the Microsoft.Identity.Web.GraphServiceClient client library package
-Install the [Microsoft.Identity.Web.MicrosoftGraph NuGet package](https://www.nuget.org/packages/Microsoft.Identity.Web.MicrosoftGraph) in your project by using the .NET Core command-line interface or the Package Manager Console in Visual Studio.
+Install the [Microsoft.Identity.Web.GraphServiceClient NuGet package](https://www.nuget.org/packages/Microsoft.Identity.Web.GraphServiceClient) in your project by using the .NET Core command-line interface or the Package Manager Console in Visual Studio.
#### .NET Core command-line
Open a command line, and switch to the directory that contains your project file
Run the install commands. ```dotnetcli
-dotnet add package Microsoft.Identity.Web.MicrosoftGraph
+dotnet add package Microsoft.Identity.Web.GraphServiceClient
dotnet add package Microsoft.Graph ```
Open the project/solution in Visual Studio, and open the console by using the **
Run the install commands. ```powershell
-Install-Package Microsoft.Identity.Web.MicrosoftGraph
+Install-Package Microsoft.Identity.Web.GraphServiceClient
Install-Package Microsoft.Graph ```
public async Task OnGetAsync()
List<MSGraphUser> msGraphUsers = new List<MSGraphUser>(); try {
- //var users = await graphServiceClient.Users.Request().GetAsync();
var users = await graphServiceClient.Users.GetAsync(); foreach (var u in users.Value) {
active-directory Multi Service Web App Access Microsoft Graph As User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/multi-service-web-app-access-microsoft-graph-as-user.md
To see this code as part of a sample application, see the [sample on GitHub](htt
### Install client library packages
-Install the [Microsoft.Identity.Web](https://www.nuget.org/packages/Microsoft.Identity.Web/) and [Microsoft.Identity.Web.MicrosoftGraph](https://www.nuget.org/packages/Microsoft.Identity.Web.MicrosoftGraph) NuGet packages in your project by using the .NET Core command-line interface or the Package Manager Console in Visual Studio.
+Install the [Microsoft.Identity.Web](https://www.nuget.org/packages/Microsoft.Identity.Web/) and [Microsoft.Identity.Web.GraphServiceClient](https://www.nuget.org/packages/Microsoft.Identity.Web.GraphServiceClient) NuGet packages in your project by using the .NET Core command-line interface or the Package Manager Console in Visual Studio.
#### .NET Core command line
Open a command line, and switch to the directory that contains your project file
Run the install commands. ```dotnetcli
-dotnet add package Microsoft.Identity.Web.MicrosoftGraph
+dotnet add package Microsoft.Identity.Web.GraphServiceClient
dotnet add package Microsoft.Identity.Web ```
Open the project/solution in Visual Studio, and open the console by using the **
Run the install commands. ```powershell
-Install-Package Microsoft.Identity.Web.MicrosoftGraph
+Install-Package Microsoft.Identity.Web.GraphServiceClient
Install-Package Microsoft.Identity.Web ```
public class IndexModel : PageModel
{ try {
- var user = await _graphServiceClient.Me.Request().GetAsync();
+ var user = await _graphServiceClient.Me.GetAsync();
ViewData["Me"] = user; ViewData["name"] = user.DisplayName;
- using (var photoStream = await _graphServiceClient.Me.Photo.Content.Request().GetAsync())
+ using (var photoStream = await _graphServiceClient.Me.Photo.Content.GetAsync())
{ byte[] photoByte = ((MemoryStream)photoStream).ToArray(); ViewData["photo"] = Convert.ToBase64String(photoByte);
active-directory Optional Claims Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/optional-claims-reference.md
The following table lists the v1.0 and v2.0 optional claim set.
| `acct` | Users account status in tenant | JWT, SAML | | If the user is a member of the tenant, the value is `0`. If they're a guest, the value is `1`. | | `auth_time` | Time when the user last authenticated. | JWT | | | | `ctry` | User's country/region | JWT | | This claim is returned if it's present and the value of the field is a standard two-letter country/region code, such as FR, JP, SZ, and so on. |
-| `email` | The reported email address for this user | JWT, SAML | MSA, Azure AD | This value is included if the user is a guest in the tenant. For managed users (the users inside the tenant), it must be requested through this optional claim or, on v2.0 only, with the OpenID scope. This value isn't guaranteed to be correct, and is mutable over time - never use it for authorization or to save data for a user. For more information, see [Secure applications and APIs by validating claims](claims-validation.md). If you require an addressable email address in your app, request this data from the user directly, using this claim as a suggestion or prefill in your UX. |
+| `email` | The reported email address for this user | JWT, SAML | MSA, Azure AD | This value is included by default if the user is a guest in the tenant. For managed users (the users inside the tenant), it must be requested through this optional claim or, on v2.0 only, with the OpenID scope. This value isn't guaranteed to be correct, and is mutable over time - never use it for authorization or to save data for a user. For more information, see [Validate the user has permission to access this data](access-tokens.md). If you are using the email claim for authorization, we recommend [performing a migration to move to a more secure claim](./migrate-off-email-claim-authorization.md). If you require an addressable email address in your app, request this data from the user directly, using this claim as a suggestion or prefill in your UX. |
| `fwd` | IP address | JWT | | Adds the original IPv4 address of the requesting client (when inside a VNET). | | `groups` | Optional formatting for group claims | JWT, SAML | | The `groups` claim is used with the GroupMembershipClaims setting in the [application manifest](reference-app-manifest.md), which must be set as well. | | `idtyp` | Token type | JWT access tokens | Special: only in app-only access tokens | The value is `app` when the token is an app-only token. This claim is the most accurate way for an API to determine if a token is an app token or an app+user token. |
The following table lists the v1.0 and v2.0 optional claim set.
| `verified_secondary_email` | Sourced from the user's SecondaryAuthoritativeEmail | JWT | | | | `vnet` | VNET specifier information. | JWT | | | | `xms_cc` | Client Capabilities | JWT | Azure AD | Indicates whether the client application that acquired the token is capable of handling claims challenges. Service applications (resource servers) can make use of this claim to authorize access to protected resources. This claim is commonly used in Conditional Access and Continuous Access Evaluation scenarios. The service application that issues the token controls the presence of the claim in it. This optional claim should be configured as part of the service app's registration. For more information, see [Claims challenges, claims requests and client capabilities](claims-challenge.md?tabs=dotnet). |
+| `xms_edov` | Boolean value indicating if the user's email domain owner has been verified. | JWT | | An email is considered to be domain verified if: the domain belongs to the tenant where the user account resides and the tenant admin has done verification of the domain, the email is from a Microsoft account (MSA), the email is from a Google account, or the email was used for authentication using the one-time passcode (OTP) flow. It should also be noted the Facebook and SAML/WS-Fed accounts **do not** have verified domains.|
| `xms_pdl` | Preferred data location | JWT | | For Multi-Geo tenants, the preferred data location is the three-letter code showing the geographic region the user is in. For more information, see the [Azure AD Connect documentation about preferred data location](../hybrid/how-to-connect-sync-feature-preferreddatalocation.md). | | `xms_pl` | User preferred language | JWT | | The user's preferred language, if set. Sourced from their home tenant, in guest access scenarios. Formatted LL-CC ("en-us"). | | `xms_tpl` | Tenant preferred language| JWT | | The resource tenant's preferred language, if set. Formatted LL ("en"). |
active-directory Quickstart V2 Aspnet Core Webapp Calls Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-aspnet-core-webapp-calls-graph.md
> [AuthorizeForScopes(ScopeKeySection = "DownstreamApi:Scopes")] > public async Task<IActionResult> Index() > {
-> var user = await _graphServiceClient.Me.Request().GetAsync();
-> ViewData["ApiResult"] = user.DisplayName;
+> var user = await _graphServiceClient.Me.GetAsync();
+> ViewData["ApiResult"] = user?.DisplayName;
> > return View(); > }
active-directory Quickstart V2 Netcore Daemon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-netcore-daemon.md
> > ![Diagram that shows how the sample app generated by this quickstart works.](media/quickstart-v2-netcore-daemon/> netcore-daemon-intro.svg) >
-> ### Microsoft.Identity.Web.MicrosoftGraph
+> ### Microsoft.Identity.Web.GraphServiceClient
>
-> Microsoft Identity Web (in the [Microsoft.Identity.Web.TokenAcquisition](https://www.nuget.org/packages/Microsoft.Identity.Web.TokenAcquisition) package) is the library that's used to request tokens for accessing an API protected by the Microsoft identity platform. This quickstart requests tokens by using the application's own identity instead of delegated permissions. The authentication flow in this case is known as a [client credentials OAuth flow](v2-oauth2-client-creds-grant-flow.md). For more information on how to use MSAL.NET with a client credentials flow, see [this article](https://aka.ms/msal-net-client-credentials). Given the daemon app in this quickstart calls Microsoft Graph, you install the [Microsoft.Identity.Web.MicrosoftGraph](https://www.nuget.org/packages/Microsoft.Identity.Web.MicrosoftGraph) package, which handles automatically authenticated requests to Microsoft Graph (and references itself Microsoft.Identity.Web.TokenAcquisition)
+> Microsoft Identity Web (in the [Microsoft.Identity.Web.TokenAcquisition](https://www.nuget.org/packages/Microsoft.Identity.Web.TokenAcquisition) package) is the library that's used to request tokens for accessing an API protected by the Microsoft identity platform. This quickstart requests tokens by using the application's own identity instead of delegated permissions. The authentication flow in this case is known as a [client credentials OAuth flow](v2-oauth2-client-creds-grant-flow.md). For more information on how to use MSAL.NET with a client credentials flow, see [this article](https://aka.ms/msal-net-client-credentials). Given that the daemon app in this quickstart calls Microsoft Graph, you install the [Microsoft.Identity.Web.GraphServiceClient](https://www.nuget.org/packages/Microsoft.Identity.Web.GraphServiceClient) package, which automatically handles authenticated requests to Microsoft Graph (and itself references Microsoft.Identity.Web.TokenAcquisition)
>
-> Microsoft.Identity.Web.MicrosoftGraph can be installed by running the following command in the Visual Studio Package Manager Console:
+> Microsoft.Identity.Web.GraphServiceClient can be installed by running the following command in the Visual Studio Package Manager Console:
> > ```dotnetcli
-> dotnet add package Microsoft.Identity.Web.MicrosoftGraph
+> dotnet add package Microsoft.Identity.Web.GraphServiceClient
> ``` > > ### Application initialization
> ```csharp > GraphServiceClient graphServiceClient = serviceProvider.GetRequiredService<GraphServiceClient>(); > var users = await graphServiceClient.Users
-> .Request()
-> .WithAppOnly()
-> .GetAsync();
+> .GetAsync(r => r.Options.WithAppOnly());
> ``` > > [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
active-directory Scenario Daemon App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-daemon-app-configuration.md
Here's an example of defining the configuration in an [*appsettings.json*](https
```
-You provide a certificate instead of the client secret, or [workload identity federation](/azure/active-directory/workload-identities/workload-identity-federation.md) credentials.
+> You can provide a certificate or [workload identity federation](../workload-identities/workload-identity-federation.md) credentials instead of the client secret.
# [Java](#tab/java)
Reference the MSAL package in your application code.
# [.NET](#tab/idweb) Add the [Microsoft.Identity.Web.TokenAcquisition](https://www.nuget.org/packages/Microsoft.Identity.Web.TokenAcquisition) NuGet package to your application.
-Alternatively, if you want to call Microsoft Graph, add the [Microsoft.Identity.Web.MicrosoftGraph](https://www.nuget.org/packages/Microsoft.Identity.Web.MicrosoftGraph) package.
+Alternatively, if you want to call Microsoft Graph, add the [Microsoft.Identity.Web.GraphServiceClient](https://www.nuget.org/packages/Microsoft.Identity.Web.GraphServiceClient) package.
Your project could be as follows. The *appsettings.json* file needs to be copied to the output directory. ```xml
Your project could be as follows. The *appsettings.json* file needs to be copied
</PropertyGroup> <ItemGroup>
- <PackageReference Include="Microsoft.Identity.Web.MicrosoftGraph" Version="2.6.1" />
+ <PackageReference Include="Microsoft.Identity.Web.GraphServiceClient" Version="2.12.2" />
</ItemGroup> <ItemGroup>
app = ConfidentialClientApplicationBuilder.Create(config.ClientId)
# [.NET](#tab/idweb) Instead of a client secret or a certificate, the confidential client application can also prove its identity by using client assertions. See
-[CredentialDescription](/dotnet/api/microsoft.identity.abstractions.credentialdescription?view=msal-model-dotnet-latest) for details.
+[CredentialDescription](/dotnet/api/microsoft.identity.abstractions.credentialdescription?view=msal-model-dotnet-latest&preserve-view=true) for details.
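As one illustration (a sketch only; the Key Vault URL and certificate name are placeholders, not values from this article), a credential can be described in code like this. The same `CredentialDescription` type also covers signed client assertions:

```csharp
using Microsoft.Identity.Abstractions;

// Hedged sketch: describing a certificate credential held in Azure Key Vault.
// The vault URL and certificate name are placeholders chosen for illustration.
var credential = new CredentialDescription
{
    SourceType = CredentialSource.KeyVault,
    KeyVaultUrl = "https://contoso.vault.azure.net",
    KeyVaultCertificateName = "MyCertificate"
};
```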
# [Java](#tab/java)
active-directory Scenario Daemon Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-daemon-call-api.md
try
{ GraphServiceClient graphServiceClient = serviceProvider.GetRequiredService<GraphServiceClient>(); var users = await graphServiceClient.Users
- .Request()
- .WithAppOnly()
- .GetAsync();
+ .GetAsync(r => r.Options.WithAppOnly());
Console.WriteLine($"{users.Count} users"); Console.ReadKey(); }
active-directory Scenario Web Api Call Api App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-api-call-api-app-configuration.md
To call Microsoft Graph, *Microsoft.Identity.Web* enables you to directly use th
To expose Microsoft Graph:
-1. Add the [Microsoft.Identity.Web.MicrosoftGraph](https://www.nuget.org/packages/Microsoft.Identity.Web.MicrosoftGraph) NuGet package to the project.
+1. Add the [Microsoft.Identity.Web.GraphServiceClient](https://www.nuget.org/packages/Microsoft.Identity.Web.GraphServiceClient) NuGet package to the project.
1. Add `.AddMicrosoftGraph()` after `.EnableTokenAcquisitionToCallDownstreamApi()` in *Program.cs*. `.AddMicrosoftGraph()` has several overrides. Using the override that takes a configuration section as a parameter, the code becomes: ```csharp
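// A hedged sketch, not the article's exact listing: the "AzureAd" and "MicrosoftGraph"
// configuration section names below are placeholders chosen for illustration.
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.Identity.Web;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApi(builder.Configuration.GetSection("AzureAd"))
    .EnableTokenAcquisitionToCallDownstreamApi()
    .AddMicrosoftGraph(builder.Configuration.GetSection("MicrosoftGraph"))
    .AddInMemoryTokenCaches();
```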
Your web app needs to acquire a token for the downstream API, *Microsoft.Identit
If you want to call Microsoft Graph, *Microsoft.Identity.Web* enables you to directly use the `GraphServiceClient` (exposed by the Microsoft Graph SDK) in your API actions. To expose Microsoft Graph:
-1. Add the [Microsoft.Identity.Web.MicrosoftGraph](https://www.nuget.org/packages/Microsoft.Identity.Web.MicrosoftGraph) NuGet package to your project.
+1. Add the [Microsoft.Identity.Web.GraphServiceClient](https://www.nuget.org/packages/Microsoft.Identity.Web.GraphServiceClient) NuGet package to your project.
1. Add `.AddMicrosoftGraph()` to the service collection in the *Startup.Auth.cs* file. `.AddMicrosoftGraph()` has several overrides. Using the override that takes a configuration section as a parameter, the code becomes: ```csharp
active-directory Scenario Web Api Call Api Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-api-call-api-call-api.md
When you use *Microsoft.Identity.Web*, you have three usage scenarios:
#### Option 1: Call Microsoft Graph with the SDK
-In this scenario, you've added `.AddMicrosoftGraph()` in *Startup.cs* as specified in [Code configuration](scenario-web-api-call-api-app-configuration.md#option-1-call-microsoft-graph), and you can directly inject the `GraphServiceClient` in your controller or page constructor for use in the actions. The following example Razor page displays the photo of the signed-in user.
+In this scenario, you've added the **Microsoft.Identity.Web.GraphServiceClient** NuGet package and added `.AddMicrosoftGraph()` in *Startup.cs* as specified in [Code configuration](scenario-web-api-call-api-app-configuration.md#option-1-call-microsoft-graph), and you can directly inject the `GraphServiceClient` in your controller or page constructor for use in the actions. The following example Razor page displays the photo of the signed-in user.
```csharp [Authorize]
In this scenario, you've added `.AddMicrosoftGraph()` in *Startup.cs* as specifi
public async Task OnGet() {
- var user = await _graphServiceClient.Me.Request().GetAsync();
+ var user = await _graphServiceClient.Me.GetAsync();
try {
- using (var photoStream = await _graphServiceClient.Me.Photo.Content.Request().GetAsync())
+ using (var photoStream = await _graphServiceClient.Me.Photo.Content.GetAsync())
{ byte[] photoByte = ((MemoryStream)photoStream).ToArray(); ViewData["photo"] = Convert.ToBase64String(photoByte);
public class HomeController : Controller
public async Task GetIndex() { var graphServiceClient = this.GetGraphServiceClient();
- var user = await graphServiceClient.Me.Request().GetAsync();
+ var user = await graphServiceClient.Me.GetAsync();
try {
- using (var photoStream = await graphServiceClient.Me.Photo.Content.Request().GetAsync())
+ using (var photoStream = await graphServiceClient.Me.Photo.Content.GetAsync())
{ byte[] photoByte = ((MemoryStream)photoStream).ToArray(); ViewData["photo"] = Convert.ToBase64String(photoByte);
active-directory Scenario Web App Call Api App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-app-configuration.md
The scopes passed to `EnableTokenAcquisitionToCallDownstreamApi` are optional, a
If you want to call Microsoft Graph, *Microsoft.Identity.Web* enables you to directly use the `GraphServiceClient` (exposed by the Microsoft Graph SDK) in your API actions. To expose Microsoft Graph:
-1. Add the [Microsoft.Identity.Web.MicrosoftGraph](https://www.nuget.org/packages/Microsoft.Identity.Web.MicrosoftGraph) NuGet package to your project.
+1. Add the [Microsoft.Identity.Web.GraphServiceClient](https://www.nuget.org/packages/Microsoft.Identity.Web.GraphServiceClient) NuGet package to your project.
1. Add `.AddMicrosoftGraph()` after `.EnableTokenAcquisitionToCallDownstreamApi()` in the *Startup.cs* file. `.AddMicrosoftGraph()` has several overrides. Using the override that takes a configuration section as a parameter, the code becomes: ```csharp
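// A hedged sketch, not the article's exact listing: the "DownstreamApi" section name
// and the initial scope below are placeholders chosen for illustration.
public void ConfigureServices(IServiceCollection services)
{
    services.AddMicrosoftIdentityWebAppAuthentication(Configuration)
            .EnableTokenAcquisitionToCallDownstreamApi(new[] { "user.read" })
            .AddMicrosoftGraph(Configuration.GetSection("DownstreamApi"))
            .AddInMemoryTokenCaches();
}
```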
Your web app needs to acquire a token for the downstream API, *Microsoft.Identit
If you want to call Microsoft Graph, *Microsoft.Identity.Web* enables you to directly use the `GraphServiceClient` (exposed by the Microsoft Graph SDK) in your API actions. To expose Microsoft Graph:
-1. Add the [Microsoft.Identity.Web.MicrosoftGraph](https://www.nuget.org/packages/Microsoft.Identity.Web.MicrosoftGraph) NuGet package to your project.
+1. Add the [Microsoft.Identity.Web.GraphServiceClient](https://www.nuget.org/packages/Microsoft.Identity.Web.GraphServiceClient) NuGet package to your project.
1. Add `.AddMicrosoftGraph()` to the service collection in the *Startup.Auth.cs* file. `.AddMicrosoftGraph()` has several overrides. Using the override that takes a configuration section as a parameter, the code becomes: ```csharp
active-directory Scenario Web App Call Api Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-call-api.md
When you use *Microsoft.Identity.Web*, you have three usage options for calling
#### Option 1: Call Microsoft Graph with the SDK
-You want to call Microsoft Graph. In this scenario, you've added `AddMicrosoftGraph` in *Startup.cs* as specified in [Code configuration](scenario-web-app-call-api-app-configuration.md#option-1-call-microsoft-graph), and you can directly inject the `GraphServiceClient` in your controller or page constructor for use in the actions. The following example Razor page displays the photo of the signed-in user.
+You want to call Microsoft Graph. In this scenario, you've added the **Microsoft.Identity.Web.GraphServiceClient** NuGet package and added `.AddMicrosoftGraph()` in *Startup.cs* as specified in [Code configuration](scenario-web-app-call-api-app-configuration.md#option-1-call-microsoft-graph), and you can directly inject the `GraphServiceClient` in your controller or page constructor for use in the actions. The following example Razor page displays the photo of the signed-in user.
```csharp [Authorize]
public class IndexModel : PageModel
public async Task OnGet() {
- var user = await _graphServiceClient.Me.Request().GetAsync();
+ var user = await _graphServiceClient.Me.GetAsync();
try {
- using (var photoStream = await _graphServiceClient.Me.Photo.Content.Request().GetAsync())
+ using (var photoStream = await _graphServiceClient.Me.Photo.Content.GetAsync())
{ byte[] photoByte = ((MemoryStream)photoStream).ToArray(); ViewData["photo"] = Convert.ToBase64String(photoByte);
public class HomeController : Controller
public async Task GetIndex() { var graphServiceClient = this.GetGraphServiceClient();
- var user = await graphServiceClient.Me.Request().GetAsync();
+ var user = await graphServiceClient.Me.GetAsync();
try {
- using (var photoStream = await graphServiceClient.Me.Photo.Content.Request().GetAsync())
+ using (var photoStream = await graphServiceClient.Me.Photo.Content.GetAsync())
{ byte[] photoByte = ((MemoryStream)photoStream).ToArray(); ViewData["photo"] = Convert.ToBase64String(photoByte);
active-directory Concept Planning Your Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/concept-planning-your-solution.md
Customer accounts have a [default set of permissions](reference-user-permissions
Before your applications can interact with Azure AD for customers, you need to register them in your customer tenant. Azure AD performs identity and access management only for registered applications. [Registering your app](how-to-register-ciam-app.md) establishes a trust relationship and allows you to integrate your app with Azure Active Directory for customers.
+Then, to complete the trust relationship between Azure AD and your app, you update your application source code with the values assigned during app registration, such as the application (client) ID, directory (tenant) subdomain, and client secret.
+ We provide code sample guides and in-depth integration guides for several app types and languages. Depending on the type of app you want to register, you can find guidance on our [Samples by app type and language page](samples-ciam-all.md). ### How to register your application
active-directory How To User Flow Add Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-user-flow-add-application.md
If you already registered your application in your customer tenant, you can add
1. Choose **Select**.
-## Update the application code with your tenant information
-
-Now you need to update your application code configuration with the application ID from the application registration, your customer tenant name, and a client secret value.
-
-We have several samples and how-to guides that can help you update your application to integrate with a user flow, based on app type, platform, and language. See [Samples for customer identity and access management (CIAM) in Azure Active Directory](samples-ciam-all.md).
- ## Next steps - If you selected email with password sign-in, [enable password reset](how-to-enable-password-reset-customers.md).
active-directory Nice Cxone Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/nice-cxone-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
| **Identifier** | |-|
- | `https://cxone.niceincontact,com/<guid>` |
+ | `https://cxone.niceincontact.com/<guid>` |
| `https://cxone-gov.niceincontact.com/<guid>` | b. In the **Reply URL** textbox, type a URL using one of the following patterns:
aks Outbound Rules Control Egress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/outbound-rules-control-egress.md
description: Learn what ports and addresses are required to control egress traff
Previously updated : 03/10/2023 Last updated : 06/13/2023 #Customer intent: As an cluster operator, I want to learn the network and FQDNs rules to control egress traffic and improve security for my AKS clusters.
If you choose to block/not allow these FQDNs, the nodes will only receive OS upd
#### Required FQDN / application rules
-| FQDN | Port | Use |
-|--|--|-|
-| **`login.microsoftonline.com`** | **`HTTPS:443`** | Required for Active Directory Authentication. |
-| **`*.ods.opinsights.azure.com`** | **`HTTPS:443`** | Required for Microsoft Defender to upload security events to the cloud.|
-| **`*.oms.opinsights.azure.com`** | **`HTTPS:443`** | Required to Authenticate with LogAnalytics workspaces.|
+| FQDN | Port | Use |
+|--|--|--|
+| **`login.microsoftonline.com`** <br/> **`login.microsoftonline.us`** (Azure Government) <br/> **`login.microsoftonline.cn`** (Azure China 21Vianet) | **`HTTPS:443`** | Required for Active Directory Authentication. |
+| **`*.ods.opinsights.azure.com`** <br/> **`*.ods.opinsights.azure.us`** (Azure Government) <br/> **`*.ods.opinsights.azure.cn`** (Azure China 21Vianet)| **`HTTPS:443`** | Required for Microsoft Defender to upload security events to the cloud.|
+| **`*.oms.opinsights.azure.com`** <br/> **`*.oms.opinsights.azure.us`** (Azure Government) <br/> **`*.oms.opinsights.azure.cn`** (Azure China 21Vianet)| **`HTTPS:443`** | Required to Authenticate with LogAnalytics workspaces.|
### CSI Secret Store
aks Tutorial Kubernetes Prepare App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-prepare-app.md
azure-voting-app-redis
The following command uses the sample `docker-compose.yaml` file to create the container image, download the Redis image, and start the application. ```console
-docker-compose up -d
+docker compose up -d
``` When completed, use the [`docker images`][docker-images] command to see the created images. Two images are downloaded or created. The *azure-vote-front* image contains the front-end application. The *redis* image is used to start a Redis instance.
Now that the application's functionality has been validated, the running contain
To stop and remove the container instances and resources, use the [`docker-compose down`][docker-compose-down] command. ```console
-docker-compose down
+docker compose down
``` When the local application has been removed, you have a Docker image that contains the Azure Vote application, *azure-vote-front*, to use in the next tutorial.
aks Use Kms Etcd Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-kms-etcd-encryption.md
Use the following command to update all secrets. Otherwise, the old secrets will
kubectl get secrets --all-namespaces -o json | kubectl replace -f - ```
-## KMS V2 support
+## KMS v2 support
-Since AKS version 1.27 and above, enabling the KMS feature configures KMS V2. With KMS V2, you aren't limited to the 2,000 secrets support. For more information, you can refer to the [KMS V2 Improvements](https://kubernetes.io/blog/2023/05/16/kms-v2-moves-to-beta/).
+Starting with AKS version 1.27, enabling the KMS feature configures KMS v2. With KMS v2, you're no longer limited to 2,000 secrets as you are with KMS v1. For more information, review [KMS V2 Improvements](https://kubernetes.io/blog/2023/05/16/kms-v2-moves-to-beta/).
### Migration to KMS v2
-If your cluster version is less than 1.27 and you already enabled KMS, use the following steps to migrate to KMS V2:
+If your cluster version is less than 1.27 and you already enabled KMS, use the following steps to migrate to KMS v2:
1. Disable KMS on the cluster. 2. Perform the storage migration. 3. Upgrade the cluster to version 1.27 or higher. 4. Re-enable KMS on the cluster.
-5. Perform the storage migration
+5. Perform the storage migration.
#### Disable KMS
-Disable KMS on an existing cluster using the `az aks update` command with the `--disable-azure-keyvault-kms` flag.
+To disable KMS on an existing cluster, use the `az aks update` command with the `--disable-azure-keyvault-kms` argument.
```azurecli-interactive az aks update --name myAKSCluster --resource-group MyResourceGroup --disable-azure-keyvault-kms
az aks update --name myAKSCluster --resource-group MyResourceGroup --disable-azu
#### Storage migration
-Update all secrets using the `kubectl get secrets` command with the `--all-namespaces` flag.
+To update all secrets, use the `kubectl get secrets` command with the `--all-namespaces` argument.
```azurecli-interactive kubectl get secrets --all-namespaces -o json | kubectl replace -f -
kubectl get secrets --all-namespaces -o json | kubectl replace -f -
#### Upgrade AKS cluster
-Upgrade the AKS cluster using the `az aks upgrade` command and specify your desired version as `1.27.x` or higher for `--kubernetes-version`.
+To upgrade an AKS cluster, use the `az aks upgrade` command and specify the desired version as `1.27.x` or higher with the `--kubernetes-version` argument.
```azurecli-interactive az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version <AKS version> ```
-Example:
+For example:
```azurecli-interactive az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.27.1
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes
#### Re-enable KMS
-You can reenable the KMS feature on the cluster to encrypt the secrets. After that, the AKS cluster uses KMS V2.
-If you donΓÇÖt want to do the KMS v2 migration, you can create a new 1.27+ cluster with KMS enabled.
+You can reenable the KMS feature on the cluster to encrypt the secrets. Afterwards, the AKS cluster uses KMS v2.
+If you don't want to do the KMS v2 migration, you can instead create a new cluster on version 1.27 or higher with KMS enabled.
#### Storage migration
-Re-encrypt all secrets under KMS V2 using the `kubectl get secrets` command with the `--all-namespaces` flag.
+To re-encrypt all secrets under KMS v2, use the `kubectl get secrets` command with the `--all-namespaces` argument.
```azurecli-interactive kubectl get secrets --all-namespaces -o json | kubectl replace -f -
api-management Howto Protect Backend Frontend Azure Ad B2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/howto-protect-backend-frontend-azure-ad-b2c.md
For defense in depth, we then use EasyAuth to validate the token again inside th
To follow the steps in this article, you must have: * An Azure (StorageV2) General Purpose V2 Storage Account to host the frontend JS Single Page App.
-* An Azure API Management instance (Any tier will work, including 'Consumption', however certain features applicable to the full scenario are not available in this tier (rate-limit-by-key and dedicated Virtual IP), these restrictions are called out below in the article where appropriate).
+* An Azure API Management instance. Any tier works, including Consumption; however, certain features used in the full scenario (rate-limit-by-key and a dedicated virtual IP) aren't available in that tier. These restrictions are called out later in the article where appropriate.
* An empty Azure Function app (running the V3.1 .NET Core runtime, on a Consumption Plan) to host the called API * An Azure AD B2C tenant, linked to a subscription.
Open the Azure AD B2C blade in the portal and do the following steps.
1. Select the **App Registrations** tab 1. Click the 'New Registration' button. 1. Choose 'Web' from the Redirect URI selection box.
-1. Now set the Display Name, choose something unique and relevant to the service being created. In this example, we will use the name "Backend Application".
+1. Now set the Display Name. Choose something unique and relevant to the service being created. In this example, we'll use the name "Backend Application".
1. Use placeholders for the reply urls, like 'https://jwt.ms' (A Microsoft owned token decoding site), we'll update those urls later. 1. Ensure you have selected the "Accounts in any identity provider or organizational directory (for authenticating users with user flows)" option 1. For this sample, uncheck the "Grant admin consent" box, as we won't require offline_access permissions today.
Open the Azure AD B2C blade in the portal and do the following steps.
1. Give the policy a name and record it for later. For this example, you can use "Frontendapp_signupandsignin", note that this will be prefixed with "B2C_1_" to make "B2C_1_Frontendapp_signupandsignin" 1. Under 'Identity providers' and "Local accounts", check 'Email sign up' (or 'User ID sign up' depending on the config of your B2C tenant) and click OK. This configuration is because we'll be registering local B2C accounts, not deferring to another identity provider (like a social identity provider) to use a user's existing social media account. 1. Leave the MFA and conditional access settings at their defaults.
-1. Under 'User Attributes and claims', click 'Show More...' then choose the claim options that you want your users to enter and have returned in the token. Check at least 'Display Name' and 'Email Address' to collect, with 'Display Name' and 'Email Addresses' to return (pay careful attention to the fact that you are collecting emailaddress, singular, and asking to return email addresses, multiple), and click 'OK', then click 'Create'.
+1. Under 'User Attributes and claims', click 'Show More...' then choose the claim options that you want your users to enter and have returned in the token. Check at least 'Display Name' and 'Email Address' to collect, with 'Display Name' and 'Email Addresses' to return (pay careful attention to the fact that you're collecting emailaddress, singular, and asking to return email addresses, multiple), and click 'OK', then click 'Create'.
1. Click on the user flow that you created in the list, then click the 'Run user flow' button. 1. This action will open the run user flow blade, select the frontend application, copy the user flow endpoint and save it for later. 1. Copy and store the link at the top, recording as the 'well-known openid configuration endpoint' for later use.
Open the Azure AD B2C blade in the portal and do the following steps.
1. Click 'Save' (at the top left of the blade). > [!IMPORTANT]
- > Now your Function API is deployed and should throw 401 responses if the correct JWT is not supplied as an Authorization: Bearer header, and should return data when a valid request is presented.
+ > Now your Function API is deployed and should throw 401 responses if the correct JWT isn't supplied as an Authorization: Bearer header, and should return data when a valid request is presented.
> You added additional defense-in-depth security in EasyAuth by configuring the 'Login With Azure AD' option to handle unauthenticated requests. > > We still have no IP security applied, if you have a valid key and OAuth2 token, anyone can call this from anywhere - ideally we want to force all requests to come via API Management.
You'll need to add CIDR formatted blocks of addresses to the IP restrictions pan
1. Click Browse, choose the function app you're hosting the API inside, and click select. Next, click select again. 1. Give the API a name and description for API Management's internal use and add it to the 'unlimited' Product. 1. Copy and record the API's 'base URL' and click 'create'.
-1. Click the 'settings' tab, then under subscription - switch off the 'Subscription Required' checkbox as we will use the Oauth JWT token in this case to rate limit. Note that if you are using the consumption tier, this would still be required in a production environment.
+1. Click the 'settings' tab, then under Subscription, switch off the 'Subscription Required' checkbox as we'll use the OAuth JWT token in this case to rate limit. Note that if you're using the consumption tier, this would still be required in a production environment.
> [!TIP] > If using the consumption tier of APIM the unlimited product won't be available as an out of the box. Instead, navigate to "Products" under "APIs" and hit "Add".
You'll need to add CIDR formatted blocks of addresses to the IP restrictions pan
1. Edit the inbound section and paste the below xml so it reads like the following. 1. Replace the following parameters in the Policy 1. {PrimaryStorageEndpoint} (The 'Primary Storage Endpoint' you copied in the previous section), {b2cpolicy-well-known-openid} (The 'well-known openid configuration endpoint' you copied earlier) and {backend-api-application-client-id} (The B2C Application / Client ID for the **backend API**) with the correct values saved earlier.
-1. If you're using the Consumption tier of API Management, then you should remove both rate-limit-by-key policy as this policy is not available when using the Consumption tier of Azure API Management.
+1. If you're using the Consumption tier of API Management, you should remove both rate-limit-by-key policies, because this policy isn't available in the Consumption tier of Azure API Management.
```xml <inbound>
You'll need to add CIDR formatted blocks of addresses to the IP restrictions pan
> Congratulations, you now have Azure AD B2C, API Management and Azure Functions working together to publish, secure AND consume an API! > [!TIP]
- > If you're using the API Management consumption tier then instead of rate limiting by the JWT subject or incoming IP Address (Limit call rate by key policy is not supported today for the "Consumption" tier), you can Limit by call rate quota see [here](rate-limit-policy.md).
+ > If you're using the API Management Consumption tier, then instead of rate limiting by the JWT subject or incoming IP address (the Limit call rate by key policy isn't supported today for the Consumption tier), you can limit by call rate quota; see the [rate-limit policy](rate-limit-policy.md).
> As this example is a JavaScript Single Page Application, we use the API Management Key only for rate-limiting and billing calls. The actual Authorization and Authentication is handled by Azure AD B2C, and is encapsulated in the JWT, which gets validated twice, once by API Management, and then by the backend Azure Function. ## Upload the JavaScript SPA sample to static storage
You'll need to add CIDR formatted blocks of addresses to the IP restrictions pan
}, api: { scopes: ["{BACKENDAPISCOPE}"], // The scope that we request for the API from B2C, this should be the backend API scope, with the full URI.
- backend: "{APIBASEURL}/hello" // The location that we will call for the backend api, this should be hosted in API Management, suffixed with the name of the API operation (in the sample this is '/hello').
+ backend: "{APIBASEURL}/hello" // The location that we'll call for the backend api, this should be hosted in API Management, suffixed with the name of the API operation (in the sample this is '/hello').
} } document.getElementById("callapibtn").hidden = true;
You'll need to add CIDR formatted blocks of addresses to the IP restrictions pan
1. Select https://docsupdatetracker.net/index.html blob from the list 1. Click 'Edit' 1. Update the auth values in the msal config section to match your *front-end* application you registered in B2C earlier. Use the code comments for hints on how the config values should look.
-The *authority* value needs to be in the format:- https://{b2ctenantname}.b2clogin.com/tfp/{b2ctenantname}.onmicrosoft.com}/{signupandsigninpolicyname}, if you have used our sample names and your b2c tenant is called 'contoso' then you would expect the authority to be 'https://contoso.b2clogin.com/tfp/contoso.onmicrosoft.com}/Frontendapp_signupandsignin'.
+The *authority* value needs to be in the format: https://{b2ctenantname}.b2clogin.com/tfp/{b2ctenantname}.onmicrosoft.com/{signupandsigninpolicyname}. If you used our sample names and your B2C tenant is called 'contoso', then you would expect the authority to be 'https://contoso.b2clogin.com/tfp/contoso.onmicrosoft.com/Frontendapp_signupandsignin'.
1. Set the api values to match your backend address (The API Base Url you recorded earlier, and the 'b2cScopes' values were recorded earlier for the *backend application*). 1. Click Save
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
Title: Migrate to App Service Environment v3 by using the migration feature
description: Overview of the migration feature for migration to App Service Environment v3 Previously updated : 06/08/2023 Last updated : 06/19/2023
The migration feature doesn't support the following scenarios. See the [manual m
The App Service platform reviews your App Service Environment to confirm migration support. If your scenario doesn't pass all validation checks, you can't migrate at this time using the migration feature. If your environment is in an unhealthy or suspended state, you can't migrate until you make the needed updates.
+> [!NOTE]
+> App Service Environment v3 doesn't support IP SSL. If you use IP SSL, you must remove all IP SSL bindings before migrating to App Service Environment v3. The migration feature will support your environment once all IP SSL bindings are removed.
+>
+ ### Troubleshooting If your App Service Environment doesn't pass the validation checks or you try to perform a migration step in the incorrect order, you may see one of the following error messages:
For more scenarios on cost changes and savings opportunities with App Service En
The migration feature supports this [migration scenario](#supported-scenarios). You can migrate using a manual method if you don't want to use the migration feature. You can configure your [custom domain suffix](./how-to-custom-domain-suffix.md) when creating your App Service Environment v3 or any time after. - **What if my App Service Environment is zone pinned?** Zone pinned App Service Environment is currently not a supported scenario for migration using the migration feature. App Service Environment v3 doesn't support zone pinning. To migrate to App Service Environment v3, see the [manual migration options](migration-alternatives.md).
+- **What if my App Service Environment has IP SSL addresses?**
+ IP SSL isn't supported on App Service Environment v3. You must remove all IP SSL bindings before migrating using the migration feature or one of the manual options. If you intend to use the migration feature, once you remove all IP SSL bindings, you'll pass that validation check and can proceed with the automated migration.
- **What properties of my App Service Environment will change?** You're on App Service Environment v3 so be sure to review the [features and feature differences](overview.md#feature-differences) compared to previous versions. For ILB App Service Environment, you keep the same ILB IP address. For internet facing App Service Environment, the public IP address and the outbound IP address change. Note for ELB App Service Environment, previously there was a single IP for both inbound and outbound. For App Service Environment v3, they're separate. For more information, see [App Service Environment v3 networking](networking.md#addresses). For a full comparison of the App Service Environment versions, see [App Service Environment version comparison](version-comparison.md). - **What happens if migration fails or there is an unexpected issue during the migration?**
app-service Migration Alternatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migration-alternatives.md
Title: Migrate to App Service Environment v3
description: How to migrate your applications to App Service Environment v3 Previously updated : 10/20/2022 Last updated : 06/19/2023 # Migrate to App Service Environment v3
> The App Service Environment v3 [migration feature](migrate.md) is now available for a set of supported environment configurations in certain regions. Consider that feature which provides an automated migration path to [App Service Environment v3](overview.md). >
-If you're currently using App Service Environment v1 or v2, you have the opportunity to migrate your workloads to [App Service Environment v3](overview.md). App Service Environment v3 has [advantages and feature differences](overview.md#feature-differences) that provide enhanced support for your workloads and can reduce overall costs. Consider using the [migration feature](migrate.md) if your environment falls into one of the [supported scenarios](migrate.md#supported-scenarios). If your environment isn't currently supported by the migration feature, you can wait for support if your scenario is listed in the [upcoming supported scenarios](migrate.md#migration-feature-limitations). Otherwise, you can choose to use one of the manual migration options given in this article.
+If you're currently using App Service Environment v1 or v2, you have the opportunity to migrate your workloads to [App Service Environment v3](overview.md). App Service Environment v3 has [advantages and feature differences](overview.md#feature-differences) that provide enhanced support for your workloads and can reduce overall costs. Consider using the [migration feature](migrate.md) if your environment falls into one of the [supported scenarios](migrate.md#supported-scenarios).
-If your App Service Environment [won't be supported for migration](migrate.md#migration-feature-limitations) with the migration feature, you must use one of the manual methods to migrate to App Service Environment v3.
+If your App Service Environment [isn't supported for migration](migrate.md#migration-feature-limitations) with the migration feature, you must use one of the manual methods to migrate to App Service Environment v3.
## Prerequisites Scenario: An existing app running on an App Service Environment v1 or App Service Environment v2 and you need that app to run on an App Service Environment v3.
-For any migration method that doesn't use the [migration feature](migrate.md), you'll need to [create the App Service Environment v3](creation.md) and a new subnet using the method of your choice. There are [feature differences](overview.md#feature-differences) between App Service Environment v1/v2 and App Service Environment v3 as well as [networking changes](networking.md) that will involve new (and for internet-facing environments, additional) IP addresses. You'll need to update any infrastructure that relies on these IPs.
+For any migration method that doesn't use the [migration feature](migrate.md), you need to [create the App Service Environment v3](creation.md) and a new subnet using the method of your choice. There are [feature differences](overview.md#feature-differences) between App Service Environment v1/v2 and App Service Environment v3 as well as [networking changes](networking.md) that involve new (and for internet-facing environments, additional) IP addresses. You need to update any infrastructure that relies on these IPs.
-Note that multiple App Service Environments can't exist in a single subnet. If you need to use your existing subnet for your new App Service Environment v3, you'll need to delete the existing App Service Environment before you create a new one. For this scenario, the recommended migration method is to [back up your apps and then restore them](#back-up-and-restore) in the new environment after it gets created and configured. There will be application downtime during this process because of the time it takes to delete the old environment, create the new App Service Environment v3, configure any infrastructure and connected resources to work with the new environment, and deploy your apps onto the new environment.
+Multiple App Service Environments can't exist in a single subnet. If you need to use your existing subnet for your new App Service Environment v3, you need to delete the existing App Service Environment before you create a new one. For this scenario, the recommended migration method is to [back up your apps and then restore them](#back-up-and-restore) in the new environment after it gets created and configured. There is application downtime during this process because of the time it takes to delete the old environment, create the new App Service Environment v3, configure any infrastructure and connected resources to work with the new environment, and deploy your apps onto the new environment.
### Checklist before migrating apps
Note that multiple App Service Environments can't exist in a single subnet. If y
## Isolated v2 App Service plans
-App Service Environment v3 uses Isolated v2 App Service plans that are priced and sized differently than those from Isolated plans. Review the [SKU details](https://azure.microsoft.com/pricing/details/app-service/windows/) to understand how you're new environment will need to be sized and scaled to ensure appropriate capacity. There's no difference in how you create App Service plans for App Service Environment v3 compared to previous versions.
+App Service Environment v3 uses Isolated v2 App Service plans that are priced and sized differently than those from Isolated plans. Review the [SKU details](https://azure.microsoft.com/pricing/details/app-service/windows/) to understand how your new environment needs to be sized and scaled to ensure appropriate capacity. There's no difference in how you create App Service plans for App Service Environment v3 compared to previous versions.
## Back up and restore
The [back up and restore](../manage-backup.md) feature allows you to keep your a
:::image type="content" source="./media/migration/configure-custom-backup.png" alt-text="Screenshot that shows how to configure custom backup for an App Service app."::: >
-You can select a custom backup and restore it to an App Service in your App Service Environment v3. You must create the App Service you will restore to before restoring the app. You can choose to restore the backup to the production slot, an existing slot, or a newly created slot that you can create during the restoration process.
+You can select a custom backup and restore it to an App Service in your App Service Environment v3. You must create the App Service you'll restore to before restoring the app. You can choose to restore the backup to the production slot, an existing slot, or a newly created slot that you can create during the restoration process.
:::image type="content" source="./media/migration/back-up-restore-sample.png" alt-text="Screenshot that shows how to use backup to restore App Service app in App Service Environment v3.":::
You can select a custom backup and restore it to an App Service in your App Serv
> Cloning apps is supported on Windows App Service only. >
-This solution is recommended for users that are using Windows App Service and can't migrate using the [migration feature](migrate.md). You'll need to set up your new App Service Environment v3 before cloning any apps. Cloning an app can take up to 30 minutes to complete. Cloning can be done using PowerShell as described in the [documentation](../app-service-web-app-cloning.md#cloning-an-existing-app-to-an-app-service-environment) or using the Azure portal.
+This solution is recommended for users that are using Windows App Service and can't migrate using the [migration feature](migrate.md). You need to set up your new App Service Environment v3 before cloning any apps. Cloning an app can take up to 30 minutes to complete. Cloning can be done using PowerShell as described in the [documentation](../app-service-web-app-cloning.md#cloning-an-existing-app-to-an-app-service-environment) or using the Azure portal.
To clone an app using the [Azure portal](https://www.portal.azure.com), navigate to your existing App Service and select **Clone App** under **Development Tools**. Fill in the required fields using the details for your new App Service Environment v3. 1. Select an existing or create a new **Resource Group**.
-1. Give your app a **Name**. This name can be the same as the old app, but note the site's default URL using the new environment will be different. You'll need to update any custom DNS or connected resources to point to the new URL.
+1. Give your app a **Name**. This name can be the same as the old app, but note the site's default URL using the new environment will be different. You need to update any custom DNS or connected resources to point to the new URL.
1. Use your App Service Environment v3 name for **Region**. 1. Choose whether or not to clone your deployment source.
-1. You can use an existing Windows **App Service plan** from your new environment if you created one already, or create a new one. The available Windows App Service plans in your new App Service Environment v3, if any, will be listed in the dropdown.
+1. You can use an existing Windows **App Service plan** from your new environment if you created one already, or create a new one. The available Windows App Service plans in your new App Service Environment v3, if any, are listed in the dropdown.
1. Modify **SKU and size** as needed using one of the Isolated v2 options if creating a new App Service plan. Note App Service Environment v3 uses Isolated v2 plans, which have more memory and CPU per corresponding instance size compared to the Isolated plan. For more information, see [App Service Environment v3 SKU details](overview.md#pricing). :::image type="content" source="./media/migration/portal-clone-sample.png" alt-text="Screenshot that shows how to clone an app to App Service Environment v3 using the portal.":::
The following initial changes to your Azure Resource Manager templates are requi
- Update App Service plan (serverfarm) parameter the app is to be deployed into to the plan associated with the App Service Environment v3 - Update hosting environment profile (hostingEnvironmentProfile) parameter to the new App Service Environment v3 resource ID-- An Azure Resource Manager template export includes all properties exposed by the resource providers for the given resources. Remove all non-required properties such as those which point to the domain of the old app. For example, you `sites` resource could be simplified to the following sample:
+- An Azure Resource Manager template export includes all properties exposed by the resource providers for the given resources. Remove all nonrequired properties such as those that point to the domain of the old app. For example, your `sites` resource could be simplified to the following sample:
```json "type": "Microsoft.Web/sites",
The [migration feature](migrate.md) automates the migration to App Service Envir
You can distribute traffic between your old and new environment using an [Application Gateway](../networking/app-gateway-with-service-endpoints.md). If you're using an Internal Load Balancer (ILB) App Service Environment, see the [considerations](../networking/app-gateway-with-service-endpoints.md#considerations-for-ilb-ase) and [create an Azure Application Gateway](integrate-with-application-gateway.md) with an extra backend pool to distribute traffic between your environments. For internet facing App Service Environments, see these [considerations](../networking/app-gateway-with-service-endpoints.md#considerations-for-external-ase). You can also use services like [Azure Front Door](../../frontdoor/quickstart-create-front-door.md), [Azure Content Delivery Network (CDN)](../../cdn/cdn-add-to-web-app.md), and [Azure Traffic Manager](../../cdn/cdn-traffic-manager.md) to distribute traffic between environments. Using these services allows for testing of your new environment in a controlled manner and allows you to move to your new environment at your own pace.
-Once your migration and any testing with your new environment is complete, delete your old App Service Environment, the apps that are on it, and any supporting resources that you no longer need. You'll continue to be charged for any resources that haven't been deleted.
+Once your migration and any testing with your new environment is complete, delete your old App Service Environment, the apps that are on it, and any supporting resources that you no longer need. You continue to be charged for any resources that haven't been deleted.
## Frequently asked questions - **Will I experience downtime during the migration?**
- Downtime is dependent on your migration process. If you have a different App Service Environment that you can point traffic to while you migrate or if you can use a different subnet to create your new environment, you won't have downtime. However, if you must use the same subnet, there will be downtime resulting from the time it takes to delete the old environment, create the App Service Environment v3, create the new App Service plans, re-create the apps, and update any resources that need to know about the new IP addresses.
+ Downtime is dependent on your migration process. If you have a different App Service Environment that you can point traffic to while you migrate or if you can use a different subnet to create your new environment, you won't have downtime. However, if you must use the same subnet, there is downtime resulting from the time it takes to delete the old environment, create the App Service Environment v3, create the new App Service plans, re-create the apps, and update any resources that need to know about the new IP addresses.
- **Do I need to change anything about my apps to get them to run on App Service Environment v3?**
- No, apps that run on App Service Environment v1 and v2 shouldn't need any modifications to run on App Service Environment v3.
+ No, apps that run on App Service Environment v1 and v2 shouldn't need any modifications to run on App Service Environment v3. If you're using IP SSL, you must remove the IP SSL bindings before migrating.
- **What if my App Service Environment has a custom domain suffix?**
- The migration feature supports this [migration scenario](./migrate.md#supported-scenarios). You can migrate using a manual method if you don't want to use the migration feature. You can configure your [custom domain suffix](./how-to-custom-domain-suffix.md) when creating your App Service Environment v3 or any time after that.
+ The migration feature supports this [migration scenario](./migrate.md#supported-scenarios). You can migrate using a manual method if you don't want to use the migration feature. You can configure your [custom domain suffix](./how-to-custom-domain-suffix.md) when creating your App Service Environment v3 or any time after.
- **What if my App Service Environment is zone pinned?** Zone pinning isn't a supported feature on App Service Environment v3. - **What properties of my App Service Environment will change?**
- You'll now be on App Service Environment v3 so be sure to review the [features and feature differences](overview.md#feature-differences) compared to previous versions. For ILB App Service Environment, you'll keep the same ILB IP address. For internet facing App Service Environment, the public IP address and the outbound IP address will change. Note for internet facing App Service Environment, previously there was a single IP for both inbound and outbound. For App Service Environment v3, they're separate. For more information, see [App Service Environment v3 networking](networking.md#addresses).
+ You'll now be on App Service Environment v3 so be sure to review the [features and feature differences](overview.md#feature-differences) compared to previous versions. For ILB App Service Environment, you keep the same ILB IP address. For internet facing App Service Environment, the public IP address and the outbound IP address change. Note for internet facing App Service Environment, previously there was a single IP for both inbound and outbound. For App Service Environment v3, they're separate. For more information, see [App Service Environment v3 networking](networking.md#addresses).
- **Is backup and restore supported for moving apps from App Service Environment v2 to v3?** The [back up and restore](../manage-backup.md) feature supports restoring apps between App Service Environment versions as long as a custom backup is used for the restoration. Automatic backup doesn't support restoration to different App Service Environment versions. - **What will happen to my App Service Environment v1/v2 resources after 31 August 2024?**
app-service Operating System Functionality https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/operating-system-functionality.md
Because App Service supports a seamless scaling experience between different tie
App Service pricing tiers control the amount of compute resources (CPU, disk storage, memory, and network egress) available to apps. However, the breadth of framework functionality available to apps remains the same regardless of the scaling tiers. App Service supports a variety of development frameworks, including ASP.NET, classic ASP, Node.js, PHP, and Python.
-In order to simplify and normalize security configuration, App Service apps typically run the various development frameworks with their default settings. The frameworks and runtime components provided by the platform are updated regularly to satisfy security and compliance requirements, for this reason we do not guarantee specific minor/patch versions and recommend customers target major version as needed.
+In order to simplify and normalize security configuration, App Service apps typically run the various development frameworks with their default settings. The frameworks and runtime components provided by the platform are updated regularly to satisfy security and compliance requirements. For this reason, we don't guarantee specific minor/patch versions and recommend that customers target the major version as needed.
The following sections summarize the general kinds of operating system functionality available to App Service apps.
At its core, App Service is a service running on top of the Azure PaaS (platform
- An operating system drive (`%SystemDrive%`), whose size varies depending on the size of the VM. - A resource drive (`%ResourceDrive%`) used by App Service internally.
-A best practice is to always use the environment variables `%SystemDrive%` and `%ResourceDrive%` instead of hard-coded file paths. The root path returned from these two environment variables has shifted over time from `d:\` to `c:\`. However, older applications hard-coded with file path references to `d:\` will continue to work because the App Service platform automatically remaps `d:\` to instead point at `c:\`. As noted above, it is highly recommended to always use the environment variables when building file paths and avoid confusion over platform changes to the default root file path.
+A best practice is to always use the environment variables `%SystemDrive%` and `%ResourceDrive%` instead of hard-coded file paths. The root path returned from these two environment variables has shifted over time from `d:\` to `c:\`. However, older applications hard-coded with file path references to `d:\` will continue to work because the App Service platform automatically remaps `d:\` to instead point at `c:\`. As noted above, it's highly recommended to always use the environment variables when building file paths and avoid confusion over platform changes to the default root file path.
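To make that concrete, here's a small sketch (the file and folder names are illustrative, not prescribed by the platform) of building paths from the environment variables instead of a hard-coded drive letter:

```csharp
using System;
using System.IO;

// Build paths from the platform-defined environment variables rather than a
// hard-coded drive letter; the file and folder names here are illustrative.
string home = Environment.GetEnvironmentVariable("HOME");               // persistent content share
string systemDrive = Environment.GetEnvironmentVariable("SystemDrive"); // for example, "C:"

string logFile = Path.Combine(home, "site", "wwwroot", "App_Data", "app.log");
string scratchDir = Path.Combine(systemDrive + Path.DirectorySeparatorChar, "local", "Temp");

Console.WriteLine($"Persistent path: {logFile}");
Console.WriteLine($"Temporary path: {scratchDir}");
```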
-It is important to monitor your disk utilization as your application grows. If the disk quota is reached, it can have adverse effects to your application. For example:
+It's important to monitor your disk utilization as your application grows. If the disk quota is reached, it can have adverse effects on your application. For example:
- The app may throw an error indicating not enough space on the disk. - You may see disk errors when browsing to the Kudu console.
One of the unique aspects of App Service that makes app deployment and maintenan
Within App Service, there is a number of UNC shares created in each data center. A percentage of the user content for all customers in each data center is allocated to each UNC share. Each customer's subscription has a reserved directory structure on a specific UNC share within a data center. A customer may have multiple apps created within a specific data center, so all of the directories belonging to a single customer subscription are created on the same UNC share.
-Due to how Azure services work, the specific virtual machine responsible for hosting a UNC share will change over time. It is guaranteed that UNC shares will be mounted by different virtual machines as they are brought up and down during the normal course of Azure operations. For this reason, apps should never make hard-coded assumptions that the machine information in a UNC file path will remain stable over time. Instead, they should use the convenient *faux* absolute path `%HOME%\site` that App Service provides. This faux absolute path provides a portable, app-and-user-agnostic method for referring to one's own app. By using `%HOME%\site`, one can transfer shared files from app to app without having to configure a new absolute path for each transfer.
+Due to how Azure services work, the specific virtual machine responsible for hosting a UNC share will change over time. It is guaranteed that UNC shares will be mounted by different virtual machines as they're brought up and down during the normal course of Azure operations. For this reason, apps should never make hard-coded assumptions that the machine information in a UNC file path will remain stable over time. Instead, they should use the convenient *faux* absolute path `%HOME%\site` that App Service provides. This faux absolute path provides a portable, app-and-user-agnostic method for referring to one's own app. By using `%HOME%\site`, one can transfer shared files from app to app without having to configure a new absolute path for each transfer.
<a id="TypesOfFileAccess"></a> ### Types of file access granted to an app
-The `%HOME%` directory in an app maps to a content share in Azure Storage dedicated for that app, and its size is defined by your [pricing tier](https://azure.microsoft.com/pricing/details/app-service/). It may include directories such as those for content, error and diagnostic logs, and earlier versions of the app created by source control. These directories are available to the app's application code at runtime for read and write access. Because the files are not stored locally, they are persistent across app restarts.
+The `%HOME%` directory in an app maps to a content share in Azure Storage dedicated for that app, and its size is defined by your [pricing tier](https://azure.microsoft.com/pricing/details/app-service/). It may include directories such as those for content, error and diagnostic logs, and earlier versions of the app created by source control. These directories are available to the app's application code at runtime for read and write access. Because the files aren't stored locally, they're persistent across app restarts.
On the system drive, App Service reserves `%SystemDrive%\local` for app-specific temporary local storage. Changes to files in this directory are *not* persistent across app restarts. Although an app has full read/write access to its own temporary local storage, that storage really isn't intended to be used directly by the application code. Rather, the intent is to provide temporary file storage for IIS and web application frameworks. App Service also limits the amount of storage in `%SystemDrive%\local` for each app to prevent individual apps from consuming excessive amounts of local file storage. For **Free**, **Shared**, and **Consumption** (Azure Functions) tiers, the limit is 500 MB. See the following table for other tiers:
-| SKU Family | B1/S1/etc. | B2/S2/etc. | B3/S3/etc. |
-| - | - | - | - |
-|Basic, Standard, Premium | 11 GB | 15 GB | 58 GB |
-| PremiumV2, PremiumV3, Isolated | 21 GB | 61 GB | 140 GB |
+| SKU | Local file storage |
+| - | - |
+| B1/S1/P1 | 11 GB |
+| B2/S2/P2 | 15 GB |
+| B3/S3/P3 | 58 GB |
+| P0v3 | 11 GB |
+| P1v2/P1v3/P1mv3/Isolated1/Isolated1v2 | 21 GB |
+| P2v2/P2v3/P2mv3/Isolated2/Isolated2v2 | 61 GB |
+| P3v2/P3v3/P3mv3/Isolated3/Isolated3v2 | 140 GB |
+| Isolated4v2 | 276 GB |
+| P4mv3 | 280 GB |
+| Isolated5v2 | 552 GB |
+| P5mv3 | 560 GB |
+| Isolated6v2 | 1104 GB |
Two examples of how App Service uses temporary local storage are the directory for temporary ASP.NET files and the directory for IIS compressed files. The ASP.NET compilation system uses the `%SystemDrive%\local\Temporary ASP.NET Files` directory as a temporary compilation cache location. IIS uses the `%SystemDrive%\local\IIS Temporary Compressed Files` directory to store compressed response output. Both of these types of file usage (as well as others) are remapped in App Service to per-app temporary local storage. This remapping ensures that functionality continues as expected.
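As a rough illustration (a sketch, not a documented contract), a script or application component can treat this location purely as scratch space. The example assumes the process's `%TEMP%` variable resolves under the per-app `%SystemDrive%\local` area, which is how App Service typically remaps it:

```powershell
# Write scratch data to per-app temporary local storage and treat it as disposable.
$scratchDir = Join-Path $env:TEMP 'image-cache'   # assumed to resolve under %SystemDrive%\local
New-Item -ItemType Directory -Path $scratchDir -Force | Out-Null
Set-Content -Path (Join-Path $scratchDir 'work.tmp') -Value 'intermediate data'
# Nothing written here survives an app restart, so regenerate this content on demand.
```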
The temporary local storage (`%SystemDrive%\local`) directory is not shared betw
## Network access Application code can use TCP/IP and UDP-based protocols to make outbound network connections to Internet accessible endpoints that expose external services. Apps can use these same protocols to connect to services within Azure, for example, by establishing HTTPS connections to SQL Database.
-There is also a limited capability for apps to establish one local loopback connection, and have an app listen on that local loopback socket. This feature exists primarily to enable apps that listen on local loopback sockets as part of their functionality. Each app sees a "private" loopback connection. App "A" cannot listen to a local loopback socket established by app "B".
+There's also a limited capability for apps to establish one local loopback connection, and have an app listen on that local loopback socket. This feature exists primarily to enable apps that listen on local loopback sockets as part of their functionality. Each app sees a "private" loopback connection. App "A" cannot listen to a local loopback socket established by app "B".
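As an illustrative sketch only (the port number is arbitrary), an app's own processes can open and use a private loopback listener like this:

```powershell
# Open a listener on the app's private loopback connection.
$listener = [System.Net.Sockets.TcpListener]::new([System.Net.IPAddress]::Loopback, 8081)
$listener.Start()
# Only processes belonging to this app can connect to 127.0.0.1:8081;
# another app on the same virtual machine sees its own, separate loopback.
$listener.Stop()
```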
Named pipes are also supported as an inter-process communication (IPC) mechanism between different processes that collectively run an app. For example, the IIS FastCGI module relies on named pipes to coordinate the individual processes that run PHP pages. <a id="Code"></a> ## Code execution, processes, and memory
-As noted earlier, apps run inside of low-privileged worker processes using a random application pool identity. Application code has access to the memory space associated with the worker process, as well as any child processes that may be spawned by CGI processes or other applications. However, one app cannot access the memory or data of another app even if it is on the same virtual machine.
+As noted earlier, apps run inside of low-privileged worker processes using a random application pool identity. Application code has access to the memory space associated with the worker process, as well as any child processes that may be spawned by CGI processes or other applications. However, one app cannot access the memory or data of another app even if it's on the same virtual machine.
Apps can run scripts or pages written with supported web development frameworks. App Service doesn't configure any web framework settings to more restricted modes. For example, ASP.NET apps running on App Service run in "full" trust as opposed to a more restricted trust mode. Web frameworks, including both classic ASP and ASP.NET, can call in-process COM components (but not out of process COM components) like ADO (ActiveX Data Objects) that are registered by default on the Windows operating system.
-Apps can spawn and run arbitrary code. It is allowable for an app to do things like spawn a command shell or run a PowerShell script. However, even though arbitrary code and processes can be spawned from an app, executable programs and scripts are still restricted to the privileges granted to the parent application pool. For example, an app can spawn an executable that makes an outbound HTTP call, but that same executable cannot attempt to unbind the IP address of a virtual machine from its NIC. Making an outbound network call is allowed to low-privileged code, but attempting to reconfigure network settings on a virtual machine requires administrative privileges.
+Apps can spawn and run arbitrary code. It's allowable for an app to do things like spawn a command shell or run a PowerShell script. However, even though arbitrary code and processes can be spawned from an app, executable programs and scripts are still restricted to the privileges granted to the parent application pool. For example, an app can spawn an executable that makes an outbound HTTP call, but that same executable cannot attempt to unbind the IP address of a virtual machine from its NIC. Making an outbound network call is allowed to low-privileged code, but attempting to reconfigure network settings on a virtual machine requires administrative privileges.
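A minimal sketch of that distinction follows; it assumes PowerShell is available on the worker (which it is for Windows apps) and uses a commented-out privileged command purely as an illustration:

```powershell
# A spawned child process inherits the application pool's restricted privileges.
Start-Process -FilePath 'powershell.exe' -NoNewWindow -Wait -ArgumentList @(
    '-NoProfile', '-Command',
    'Invoke-WebRequest https://example.com -UseBasicParsing | Out-Null'   # outbound call: allowed
)

# Reconfiguring the virtual machine's network settings requires administrative
# rights, so an attempt like the following is expected to be denied:
# netsh interface ip set address "Ethernet" dhcp
```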
<a id="Diagnostics"></a>
Areas of diagnostics logging and tracing that aren't available to apps are Windo
<a id="RegistryAccess"></a> ## Registry access
-Apps have read-only access to much (though not all) of the registry of the virtual machine they are running on. In practice, this means registry keys that allow read-only access to the local Users group are accessible by apps. One area of the registry that is currently not supported for either read or write access is the HKEY\_CURRENT\_USER hive.
+Apps have read-only access to much (though not all) of the registry of the virtual machine they're running on. In practice, this means registry keys that allow read-only access to the local Users group are accessible by apps. One area of the registry that is currently not supported for either read or write access is the HKEY\_CURRENT\_USER hive.
Write-access to the registry is blocked, including access to any per-user registry keys. From the app's perspective, write access to the registry should never be relied upon in the Azure environment since apps can (and do) get migrated across different virtual machines. The only persistent writeable storage that can be depended on by an app is the per-app content directory structure stored on the App Service UNC shares.
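For example (a sketch, not an exhaustive statement of which keys are readable), a read of a machine-wide key that grants read access to the local Users group succeeds, while write attempts are blocked:

```powershell
# Read-only access to a machine-level key works from the app's sandbox.
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion' -Name ProductName

# Write access is blocked, so an attempt like the following is expected to fail:
# New-Item -Path 'HKLM:\SOFTWARE\ContosoApp' -Force
```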
application-gateway Migrate V1 V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/migrate-v1-v2.md
To run the script:
``` You can pass in `$mySslCert1, $mySslCert2` (comma-separated) in the previous example as values for this parameter in the script.+
+ * **sslCertificates from Keyvault: Optional**. You can download the certificates stored in Azure Key Vault and pass them to the migration script. To download a certificate as a PFX file, run the following commands. These commands access the SecretId, and then save the content as a PFX file.
+
+ ```azurepowershell
+ $vaultName = "<kv-name>"
+ $certificateName = "<cert-name>"
+ $password = "<password>"
+
+ $pfxSecret = Get-AzKeyVaultSecret -VaultName $vaultName -Name $certificateName -AsPlainText
+ $secretByte = [Convert]::FromBase64String($pfxSecret)
+ $x509Cert = New-Object Security.Cryptography.X509Certificates.X509Certificate2
+ $x509Cert.Import($secretByte, $null, [Security.Cryptography.X509Certificates.X509KeyStorageFlags]::Exportable)
+ $pfxFileByte = $x509Cert.Export([Security.Cryptography.X509Certificates.X509ContentType]::Pkcs12, $password)
+
+ # Write to a file
+ [IO.File]::WriteAllBytes("KeyVaultcertificate.pfx", $pfxFileByte)
+ ```
+ For each certificate downloaded from the Key Vault, you can create a new PSApplicationGatewaySslCertificate object by using the New-AzApplicationGatewaySslCertificate command shown here. You need the path to your TLS/SSL certificate file and the password.
+
+ ```azurepowershell
+ # Convert the downloaded certificate to an SSL certificate object
+ $password = ConvertTo-SecureString "<password>" -AsPlainText -Force
+ $cert = New-AzApplicationGatewaySslCertificate -Name "<certname>" -CertificateFile "<Cert-File-Path-1>" -Password $password
+ ```
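If you downloaded more than one certificate from the Key Vault, repeat the conversion for each PFX file and pass the resulting objects as a comma-separated list, as in the `$mySslCert1, $mySslCert2` example earlier. The names below are placeholders:

```azurepowershell
# Placeholder names: one SSL certificate object per downloaded PFX file
$password = ConvertTo-SecureString "<password>" -AsPlainText -Force
$kvCert1 = New-AzApplicationGatewaySslCertificate -Name "kvcert1" -CertificateFile ".\KeyVaultcertificate1.pfx" -Password $password
$kvCert2 = New-AzApplicationGatewaySslCertificate -Name "kvcert2" -CertificateFile ".\KeyVaultcertificate2.pfx" -Password $password
# Pass $kvCert1, $kvCert2 (comma-separated) as the sslCertificates value when running the migration script.
```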
+
* **trustedRootCertificates: [PSApplicationGatewayTrustedRootCertificate]: Optional**. A comma-separated list of PSApplicationGatewayTrustedRootCertificate objects that you create to represent the [Trusted Root certificates](ssl-overview.md) for authentication of your backend instances from your v2 gateway. ```azurepowershell
application-gateway Retirement Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/retirement-faq.md
On April 28,2023 we announced retirement of Application gateway V1 on 28 April 2
### What is the official date Application Gateway V1 is cut off from creation?
-New Customers are not allowed to create v1 from 1 July 2023. However, any existing V1 customers can continue to create resources until August 2024 and manage V1 resources until the retirement date of 28 April 2026.
+New customers aren't allowed to create V1 gateways from 1 July 2023. However, any existing V1 customers can continue to create resources until August 2024 and manage V1 resources until the retirement date of 28 April 2026.
### What happens to existing Application Gateway V1 after 28 April 2026?
-Once the deadline arrives V1 gateways are not supported. Any V1 SKU resources that are still active are stopped, and force deleted.
+Once the deadline arrives, V1 gateways are no longer supported. Any V1 SKU resources that are still active are stopped and force deleted.
### What is the definition of a new customer on Application Gateway V1 SKU?
-Customers who did not have Application Gateway V1 SKU in their subscriptions in the month of June 2023 are considered new customers. These customers won't be able to create new V1 gateways from 1 July 2023.
+Customers who didn't have Application Gateway V1 SKU in their subscriptions in the month of June 2023 are considered new customers. These customers won't be able to create new V1 gateways from 1 July 2023.
### What is the definition of an existing customer on Application Gateway V1 SKU?
If you have an Application Gateway V1, [Migration from v1 to v2](./migrate-v1-v2
### Can Microsoft migrate this data for me?
-No, Microsoft cannot migrate user's data on their behalf. Users must do the migration themselves by using the self-serve options provided.
-Application Gateway v1 is built on legacy components and customers have deployed the gateways in many different ways in their architecture ,due to which customer involvement is required for migration.This also allows users to plan the migration during a maintenance window, which can help to ensure that the migration is successful with minimal downtime for the user's applications.
+No, Microsoft can't migrate users' data on their behalf. Users must do the migration themselves by using the self-serve options provided.
+Application Gateway v1 is built on legacy components, and customers have deployed the gateways in many different ways in their architecture, so customer involvement is required for migration. This also allows users to plan the migration during a maintenance window, which can help to ensure that the migration is successful with minimal downtime for the user's applications.
### What is the time required for migration?
No. The Azure PowerShell script only migrates the configuration. Actual traffic
### Is the new v2 gateway created by the Azure PowerShell script sized appropriately to handle all of the traffic that is currently served by my v1 gateway?
-The Azure PowerShell script creates a new v2 gateway with an appropriate size to handle the traffic on your existing v1 gateway. Autoscaling is disabled by default, but you can enable AutoScaling when you run the script.
+The Azure PowerShell script creates a new v2 gateway with an appropriate size to handle the traffic on your existing v1 gateway. Autoscaling is disabled by default, but you can enable autoscaling when you run the script.
### I configured my v1 gateway to send logs to Azure storage. Does the script replicate this configuration for v2 as well? No. The script doesn't replicate this configuration for v2. You must add the log configuration separately to the migrated v2 gateway.
-### Does this script support certificates uploaded to Azure Key Vault ?
+### Does this script support certificates uploaded to Azure Key Vault?
-No. Currently the script doesn't support certificates in Key Vault.
+Yes. You can download the certificate from Key Vault and provide it as input to the migration script.
### I ran into some issues with using this script. How can I get help?
applied-ai-services Create Sas Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/create-sas-tokens.md
Previously updated : 10/26/2022 Last updated : 06/19/2023 monikerRange: '>=form-recog-2.1.0'
monikerRange: '>=form-recog-2.1.0'
In this article, learn how to create user delegation, shared access signature (SAS) tokens, using the Azure portal or Azure Storage Explorer. User delegation SAS tokens are secured with Azure AD credentials. SAS tokens provide secure, delegated access to resources in your Azure storage account. + At a high level, here's how SAS tokens work: * Your application submits the SAS token to Azure Storage as part of a REST API request.
automation Automation Runbook Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-runbook-types.md
Title: Azure Automation runbook types
description: This article describes the types of runbooks that you can use in Azure Automation and considerations for determining which type to use. Previously updated : 05/08/2023 Last updated : 06/18/2023
The following are the current limitations and known issues with PowerShell runbo
if($item) { write-output "File Created" } ``` 1. You can also upgrade your runbooks to PowerShell 7.1 or PowerShell 7.2 where the same runbook will work as expected.-
+* Make sure to import the **Newtonsoft.Json** v10 module explicitly if PowerShell 5.1 runbooks have a dependency on this version of the module.
# [PowerShell 7.1 (preview)](#tab/lps71)
The following are the current limitations and known issues with PowerShell runbo
``` - Avoid importing the `Az.Accounts` module version 2.4.0 for the PowerShell 7 runtime, as there can be unexpected behavior when using this version in Azure Automation. - You might encounter formatting problems with error output streams for jobs running in the PowerShell 7 runtime. - When you import a PowerShell 7.1 module that's dependent on other modules, you may find that the import button is gray even when the PowerShell 7.1 version of the dependent module is installed. For example, the Az.Compute module version 4.20.0 has a dependency on Az.Accounts being >= 2.6.0. This issue occurs when an equivalent dependent module in PowerShell 5.1 doesn't meet the version requirements. For example, the PowerShell 5.1 version of Az.Accounts was < 2.6.0. - When you start a PowerShell 7 runbook using a webhook, it auto-converts the webhook input parameter to invalid JSON. - We recommend that you use the [ExchangeOnlineManagement](https://learn.microsoft.com/powershell/exchange/exchange-online-powershell?view=exchange-ps) module version: 3.0.0 or lower because version: 3.0.0 or higher may lead to job failures.
+- Make sure to import the **Newtonsoft.Json** v10 module explicitly if PowerShell 7.1 runbooks have a dependency on this version of the module.
# [PowerShell 7.2 (preview)](#tab/lps72)
azure-arc Version Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/version-log.md
This article identifies the component versions with each release of Azure Arc-en
|`telemetrycollectors.arcdata.microsoft.com`| v1beta1 through v1beta5| |`telemetryrouters.arcdata.microsoft.com`| v1beta1 through v1beta5| |Azure Resource Manager (ARM) API version|2023-01-15-preview|
-|`arcdata` Azure CLI extension version|1.5.1 ([Download](https://aka.ms/az-cli-arcdata-ext))|
+|`arcdata` Azure CLI extension version|1.5.2 ([Download](https://aka.ms/az-cli-arcdata-ext))|
|Arc-enabled Kubernetes helm chart extension version|1.20.0| |Azure Arc Extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.8.0 ([Download](https://aka.ms/ads-arcdata-ext))</br>1.8.0 ([Download](https://aka.ms/ads-azcli-ext))| |SQL Database version | 957 |
This release introduces general availability for Azure Arc-enabled SQL Managed I
|Arc enabled Kubernetes helm chart extension version | 1.0.16701001, release train: stable | |Arc Data extension for Azure Data Studio | 0.9.5 | +
azure-arc Conceptual Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-gitops-flux2.md
If you use `azblob` source, here are the blob-specific command arguments.
| `--sp_client_cert_send_chain` | String | Specifies whether to include x5c header in client claims when acquiring a token to enable subject name / issuer based authentication for the client certificate | | `--account_key` | String | The Azure Blob Shared Key for authentication | | `--sas_token` | String | The Azure Blob SAS Token for authentication |
-| `--mi_client_id` | String | The client ID of the managed identity for authentication with Azure Blob |
+| `--managed-identity-client-id` | String | The client ID of the managed identity for authentication with Azure Blob |
> [!IMPORTANT] > When using managed identity authentication for AKS clusters and `azblob` source, the managed identity must be assigned at minimum the [Storage Blob Data Reader](/azure/role-based-access-control/built-in-roles#storage-blob-data-reader) role. Authentication using a managed identity is not yet available for Azure Arc-enabled Kubernetes clusters.
azure-arc Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/network-requirements.md
Last updated 01/30/2023
This article describes the networking requirements for deploying Azure Arc resource bridge (preview) in your enterprise.
-## Configuration requirements
-
-### Static Configuration
-
-Static configuration is recommended for Arc resource bridge because the resource bridge needs three static IPs in the same subnet for the control plane, appliance VM, and reserved appliance VM (for upgrade). The control plane corresponds to the `controlplaneendpoint` parameter, the appliance VM IP to `k8snodeippoolstart`, and reserved appliance VM to `k8snodeippoolend` in the `createconfig` command that creates the resource bridge configuration files. If using DHCP, reserve those IP addresses, ensuring the IPs are outside of the assignable DHCP range of IPs (i.e. the control plane IP should be treated as a reserved/static IP that no other machine on the network will use or receive from DHCP).
-
-### IP Address Prefix
-
-The subnet of the IP addresses for Arc resource bridge must lie in the IP address prefix that is passed in the `ipaddressprefix` parameter during the configuration creation. The IP address prefix is the IP prefix that is exposed by the network to which Arc resource bridge is connected. It is entered as the subnet's IP address range for the virtual network and subnet mask (IP Mask) in CIDR notation, for example `192.168.7.1/24`. The minimum IP prefix is /29. The IP address prefix should have enough available IP addresses for Gateway IP, Control Plane IP, appliance VM IP, and reserved appliance VM IP. Consult your system or network administrator to obtain the IP address prefix in CIDR notation. An IP Subnet CIDR calculator may be used to obtain this value.
-
-### DNS Server IPs
-
-DNS Server must have internal and external endpoint resolution. The appliance VM and control plane need to resolve the management machine and vice versa. All three must be able to reach the required URLs for deployment.
--
-### Gateway IP
-
-The gateway IP should be an IP from within the subnet designated in the IP address prefix.
-
-### Example minimum configuration for static IP deployment
-
-Below is an example of valid configuration values that can be passed during configuration file creation for Arc resource bridge. It is strongly recommended to use static IP addresses when deploying Arc resource bridge. Notice that the IP addresses for the Gateway, Control Plane, appliance VM and DNS server IP (for internal resolution) are within the IP prefix - this key detail ensures the successful deployment of the appliance VM.
-
-IP address Prefix (CIDR format): 192.168.0.0/29
-
-Gateway (IP format): 192.168.0.1
-
-VM IP Pool Start (IP format): 192.168.0.2
-
-VM IP Pool End (IP format): 192.168.0.3
-
-Control Plane IP (IP format): 192.168.0.4
-
-DNS servers (IP list format): 192.168.0.1, 10.0.0.5, 10.0.0.6
- ## General network requirements [!INCLUDE [network-requirement-principles](../includes/network-requirement-principles.md)]
In addition, resource bridge (preview) requires connectivity to the [Arc-enabled
## SSL proxy configuration
-If using a proxy, Arc resource bridge must be configured for proxy so that it can connect to the Azure services. To configure the Arc resource bridge with proxy, provide the proxy certificate file path during creation of the configuration files. Only pass the single proxy certificate. If a certificate bundle is passed then the deployment will fail. The proxy server endpoint can't be a .local domain. The proxy server has to also be routable/reachable from IPs within the IP prefix. Proxy configuration of the management machine isn't configured by Arc resource bridge.
+If using a proxy, Arc resource bridge must be configured for proxy so that it can connect to the Azure services.
+
+- To configure the Arc resource bridge with proxy, provide the proxy certificate file path during creation of the configuration files.
-There are only two certificates that should be relevant when deploying the Arc resource bridge behind an SSL proxy: the SSL certificate for your SSL proxy (so that the management machine and on-premises appliance VM trust your proxy FQDN and can establish an SSL connection to it), and the SSL certificate of the Microsoft download servers. This certificate must be trusted by your proxy server itself, as the proxy is the one establishing the final connection and needs to trust the endpoint. Non-Windows machines may not trust this second certificate by default, so you may need to ensure that it's trusted.
+- The format of the certificate file is *Base-64 encoded X.509 (.CER)*.
-In order to deploy Arc resource bridge, images need to be downloaded to the management machine and then uploaded to the on-premises private cloud gallery. If your proxy server throttles download speed, this may impact your ability to download the required images (~3 GB) within the allotted time (90 min).
+- Only pass the single proxy certificate. If a certificate bundle is passed then the deployment will fail.
+- The proxy server endpoint can't be a .local domain.
+
+- The proxy server has to be reachable from all IPs within the IP address prefix, including the control plane and appliance VM IPs.
+
+There are only two certificates that should be relevant when deploying the Arc resource bridge behind an SSL proxy:
+
+- SSL certificate for your SSL proxy (so that the management machine and appliance VM trust your proxy FQDN and can establish an SSL connection to it)
+
+- SSL certificate of the Microsoft download servers. This certificate must be trusted by your proxy server itself, as the proxy is the one establishing the final connection and needs to trust the endpoint. Non-Windows machines may not trust this second certificate by default, so you may need to ensure that it's trusted.
+
+In order to deploy Arc resource bridge, images need to be downloaded to the management machine and then uploaded to the on-premises private cloud gallery. If your proxy server throttles download speed, this may impact your ability to download the required images (~3.5 GB) within the allotted time (90 min).
## Exclusion list for no proxy
The default value for `noProxy` is `localhost,127.0.0.1,.svc,10.0.0.0/8,172.16.0
+
azure-cache-for-redis Cache Best Practices Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-performance.md
Fortunately, several tools exist to make benchmarking Redis easier. Two of the m
1. Install open source Redis server to a client VM you can use for testing. The redis-benchmark utility is built into the open source Redis distribution. Follow the [Redis documentation](https://redis.io/docs/getting-started/#install-redis) for instructions on how to install the open source image.
-1. The client VM used for testing should be *in the same region* as your Azure Cache for Redis instance.
+1. The client VM used for testing should be _in the same region_ as your Azure Cache for Redis instance.
-1. Make sure the client VM you use has *at least as much compute and bandwidth* as the cache instance being tested.
+1. Make sure the client VM you use has _at least as much compute and bandwidth_ as the cache instance being tested.
-1. Configure your [network isolation](cache-network-isolation.md) and [firewall](cache-configure.md#firewall) settings to ensure that the client VM is able to access your Azure Cache for Redis instance.
+1. Configure your [network isolation](cache-network-isolation.md) and [firewall](cache-configure.md#firewall) settings to ensure that the client VM is able to access your Azure Cache for Redis instance.
-1. If you're using TLS/SSL on your cache instance, you need to add the `--tls` parameter to your redis-benchmark command or use a proxy like [stunnel](https://www.stunnel.org/https://docsupdatetracker.net/index.html).
+1. If you're using TLS/SSL on your cache instance, you need to add the `--tls` parameter to your redis-benchmark command or use a proxy like [stunnel](https://www.stunnel.org/https://docsupdatetracker.net/index.html).
-1. `Redis-benchmark` uses port 6379 by default. Use the `-p` parameter to override this setting. You need to do use `-p`, if you're using the SSL/TLS (port 6380) or are using the Enterprise tier (port 10000).
+1. `Redis-benchmark` uses port 6379 by default. Use the `-p` parameter to override this setting. You need to use `-p` if you're using SSL/TLS (port 6380) or the Enterprise tier (port 10000).
-1. If you're using an Azure Cache for Redis instance that uses [clustering](cache-how-to-scale.md), you need to add the `--cluster` parameter to your `redis-benchmark` command. Enterprise tier caches using the [Enterprise clustering policy](cache-best-practices-enterprise-tiers.md#clustering-on-enterprise) can be treated as nonclustered caches and don't need this setting.
+1. If you're using an Azure Cache for Redis instance that uses [clustering](cache-how-to-scale.md), you need to add the `--cluster` parameter to your `redis-benchmark` command. Enterprise tier caches using the [Enterprise clustering policy](cache-best-practices-enterprise-tiers.md#clustering-on-enterprise) can be treated as nonclustered caches and don't need this setting.
-1. Launch `redis-benchmark` from the CLI or shell of the VM. For instructions on how to configure and run the tool, see the [redis-benchmark documentation](https://redis.io/docs/management/optimization/benchmarks/) and the [redis-benchmark examples](#redis-benchmark-examples) sections.
+1. Launch `redis-benchmark` from the CLI or shell of the VM. For instructions on how to configure and run the tool, see the [redis-benchmark documentation](https://redis.io/docs/management/optimization/benchmarks/) and the [redis-benchmark examples](#redis-benchmark-examples) sections.
## Benchmarking recommendations -- It's important to not only test the performance of your cache under steady state conditions. *Test under failover conditions too*, and measure the CPU/Server Load on your cache during that time. You can start a failover by [rebooting the primary node](cache-administration.md#reboot). Testing under failover conditions allows you to see the throughput and latency of your application during failover conditions. Failover can happen during updates or during an unplanned event. Ideally, you don't want to see CPU/Server Load peak to more than say 80% even during a failover as that can affect performance.
+- It's important to not only test the performance of your cache under steady state conditions. _Test under failover conditions too_, and measure the CPU/Server Load on your cache during that time. You can start a failover by [rebooting the primary node](cache-administration.md#reboot). Testing under failover conditions allows you to see the throughput and latency of your application during failover conditions. Failover can happen during updates or during an unplanned event. Ideally, you don't want to see CPU/Server Load peak to more than say 80% even during a failover as that can affect performance.
- Consider using Enterprise and Premium tier Azure Cache for Redis instances. These cache sizes have better network latency and throughput because they're running on better hardware. - The Enterprise tier generally has the best performance, as Redis Enterprise allows the core Redis process to utilize multiple vCPUs. Tiers based on open source Redis, such as Standard and Premium, are only able to utilize one vCPU for the Redis process per shard. -- Benchmarking the Enterprise Flash tier can be difficult because some keys are stored on DRAM whiles some are stored on a NVMe flash disk. The keys on DRAM benchmark almost as fast as an Enterprise tier instance, but the keys on the NVMe flash disk are slower. Since the Enterprise Flash tier intelligently places the most-used keys into DRAM, ensure that your benchmark configuration matches the actual usage you expect. Consider using the `-r` parameter to randomize which keys are accessed.
+- Benchmarking the Enterprise Flash tier can be difficult because some keys are stored on DRAM while some are stored on an NVMe flash disk. The keys on DRAM benchmark almost as fast as an Enterprise tier instance, but the keys on the NVMe flash disk are slower. Since the Enterprise Flash tier intelligently places the most-used keys into DRAM, ensure that your benchmark configuration matches the actual usage you expect. Consider using the `-r` parameter to randomize which keys are accessed.
-- Using TLS/SSL decreases throughput performance, which can be seen clearly in the example benchmarking data in the following tables.
+- Using TLS/SSL decreases throughput performance, which can be seen clearly in the example benchmarking data in the following tables.
- Even though a Redis server is single-threaded, scaling up tends to improve throughput performance. System processes can use the extra vCPUs instead of sharing the vCPU being used by the Redis process. Scaling up is especially helpful on the Enterprise and Enterprise Flash tiers because Redis Enterprise isn't limited to a single thread. For more information, see [Enterprise tier best practices](cache-best-practices-enterprise-tiers.md#scaling). -- On the Premium tier, scaling out, clustering, is typically recommended before scaling up. Clustering allows Redis server to use more vCPUs by sharding data. Throughput should increase roughly linearly when adding shards in this case.
+- On the Premium tier, scaling out, clustering, is typically recommended before scaling up. Clustering allows Redis server to use more vCPUs by sharding data. Throughput should increase roughly linearly when adding shards in this case.
## Redis-benchmark examples
redis-benchmark -h yourcache.redis.cache.windows.net -a yourAccesskey -t GET -n
**To test throughput of a Basic, Standard, or Premium tier cache using TLS:** Pipelined GET requests with 1k payload:+ ```dos redis-benchmark -h yourcache.redis.cache.windows.net -p 6380 -a yourAccesskey -t GET -n 1000000 -d 1024 -P 50 -c 50 --tls ``` **To test throughput of an Enterprise or Enterprise Flash cache without TLS using OSS Cluster Mode:** Pipelined GET requests with 1k payload:+ ```dos redis-benchmark -h yourcache.region.redisenterprise.cache.azure.net -p 10000 -a yourAccesskey -t GET -n 1000000 -d 1024 -P 50 -c 50 --cluster ```
The following configuration was used to benchmark throughput:
redis-benchmark -h yourcache.redis.cache.windows.net -a yourAccesskey -t GET -n 1000000 -d 1024 -P 50 -c 50 ```
->[!CAUTION]
+>[!CAUTION]
>These values aren't guaranteed and there's no SLA for these numbers. We strongly recommend that you should [perform your own performance testing](cache-best-practices-performance.md) to determine the right cache size for your application. >These numbers might change as we post newer results periodically. >
-
+ ### Standard tier | Instance | Size | vCPUs | Expected network bandwidth (Mbps)| GET requests per second without SSL (1-kB value size) | GET requests per second with SSL (1-kB value size) |
-| | | | | | |
+| | :| :| :| :| :|
| C0 | 250 MB | Shared | 100 | 15,000 | 7,500 | | C1 | 1 GB | 1 | 500 | 38,000 | 20,720 | | C2 | 2.5 GB | 2 | 500 | 41,000 | 37,000 |
redis-benchmark -h yourcache.redis.cache.windows.net -a yourAccesskey -t GET -n
| C6 | 53 GB | 8 | 2,000 | 126,000 | 120,000 | ### Premium tier+ | Instance | Size | vCPUs | Expected network bandwidth (Mbps)| GET requests per second without SSL (1-kB value size) | GET requests per second with SSL (1-kB value size) |
-| | | | | | |
+| | | :|:| :| :|
| P1 | 6 GB | 2 | 1,500 | 180,000 | 172,000 | | P2 | 13 GB | 4 | 3,000 | 350,000 | 341,000 | | P3 | 26 GB | 4 | 3,000 | 350,000 | 341,000 | | P4 | 53 GB | 8 | 6,000 | 400,000 | 373,000 | | P5 | 120 GB | 32 | 6,000 | 400,000 | 373,000 |
-
-> [!Important]
-> P5 instances in the China East and China North regions use 20 cores, not 32 cores.
+
+> [!IMPORTANT]
+> P5 instances in the China East and China North regions use 20 cores, not 32 cores.
### Enterprise & Enterprise Flash tiers The Enterprise and Enterprise Flash tiers offer a choice of cluster policy: _Enterprise_ and _OSS_. Enterprise cluster policy is a simpler configuration that doesn't require the client to support clustering. OSS cluster policy, on the other hand, uses the [Redis cluster protocol](https://redis.io/docs/management/scaling) to support higher throughputs. We recommend using OSS cluster policy in most cases. For more information, see [Clustering on Enterprise](cache-best-practices-enterprise-tiers.md#clustering-on-enterprise). Benchmarks for both cluster policies are shown in the following tables.
-**Enterprise Cluster Policy**
+#### Enterprise Cluster Policy
| Instance | Size | vCPUs | Expected network bandwidth (Mbps)| GET requests per second without SSL (1-kB value size) | GET requests per second with SSL (1-kB value size) |
-| | | | | | |
+|::| | :|:| :| :|
| E10 | 12 GB | 4 | 4,000 | 300,000 | 200,000 | | E20 | 25 GB | 4 | 4,000 | 550,000 | 390,000 | | E50 | 50 GB | 8 | 8,000 | 950,000 | 530,000 |
The Enterprise and Enterprise Flash tiers offer a choice of cluster policy: _Ent
| F700 | 715 GB | 16 | 6,400 | 650,000 | 350,000 | | F1500 | 1455 GB | 32 | 12,800 | 650,000 | 360,000 |
-**OSS Cluster Policy**
+#### OSS Cluster Policy
| Instance | Size | vCPUs | Expected network bandwidth (Mbps)| GET requests per second without SSL (1-kB value size) | GET requests per second with SSL (1-kB value size) |
-| | | | | | |
+|::| | :|:| :| :|
| E10 | 12 GB | 4 | 4,000 | 1,300,000 | 800,000 | | E20 | 25 GB | 4 | 4,000 | 1,000,000 | 710,000 | | E50 | 50 GB | 8 | 8,000 | 2,000,000 | 950,000 |
The Enterprise and Enterprise Flash tiers offer a choice of cluster policy: _Ent
### Enterprise & Enterprise Flash Tiers - Scaled Out
-In addition to scaling up by moving to larger cache size, you can boost performance by [scaling out](cache-how-to-scale.md#how-to-scale-up-and-outenterprise-and-enterprise-flash-tiers). In the Enterprise tiers, scaling out is called increasing the _capacity_ of the cache instance. A cache instance by default has capacity of two--meaning a primary and replica node. An Enterprise cache instance with a capacity of four indicates that the instance was scaled out by a factor of two. Scaling out provides access to more memory and vCPUs. Details on how many vCPUs are used by the core Redis process at each cache size and capacity can be found at the [Enterprise tiers best practices page](cache-best-practices-enterprise-tiers.md#sharding-and-cpu-utilization). Scaling out is most effective when using the OSS cluster policy.
+In addition to scaling up by moving to larger cache size, you can boost performance by [scaling out](cache-how-to-scale.md#how-to-scale-up-and-outenterprise-and-enterprise-flash-tiers). In the Enterprise tiers, scaling out is called increasing the _capacity_ of the cache instance. A cache instance by default has capacity of two--meaning a primary and replica node. An Enterprise cache instance with a capacity of four indicates that the instance was scaled out by a factor of two. Scaling out provides access to more memory and vCPUs. Details on how many vCPUs are used by the core Redis process at each cache size and capacity can be found at the [Enterprise tiers best practices page](cache-best-practices-enterprise-tiers.md#sharding-and-cpu-utilization). Scaling out is most effective when using the OSS cluster policy.
The following tables show the GET requests per second at different capacities, using SSL and a 1-kB value size.
-**Scaling out - Enterprise cluster policy**
+#### Scaling out - Enterprise cluster policy
| Instance | Capacity 2 | Capacity 4 | Capacity 6 |
-| | | | |
+|::| :| :| :|
| E10 | 200,000 | 530,000 | 570,000 | | E20 | 390,000 | 520,000 | 580,000 | | E50 | 530,000 | 580,000 | 580,000 | | E100 | 580,000 | 580,000 | 580,000 | | Instance | Capacity 3 | Capacity 9 |
-| | | |
+|::| :| :|
| F300 | 310,000 | 530,000 | | F700 | 350,000 | 550,000 | | F1500 | 360,000 | 550,000 |
-**Scaling out - OSS cluster policy**
+#### Scaling out - OSS cluster policy
| Instance | Capacity 2 | Capacity 4 | Capacity 6 |
-| | | | |
+|::| :| :| :|
| E10 | 800,000 | 720,000 | 1,280,000 | | E20 | 710,000 | 950,000 | 1,250,000 | | E50 | 950,000 | 1,260,000 | 1,300,000 | | E100 | 960,000 | 1,840,000 | 1,930,000| | Instance | Capacity 3 | Capacity 9 |
-| | | |
-| F300 | 610,000 | 970,000 |
+|::| :| :|
+| F300 | 610,000 | 970,000 |
| F700 | 680,000 | 1,280,000 | | F1500 | 620,000 | 1,850,000 |
azure-maps Release Notes Indoor Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-indoor-module.md
This document contains information about new features and other changes to the Azure Maps Indoor Module.
+## [0.2.2]
+
+### Changes (0.2.2)
+
+- Performance improvements in dynamic styling updates.
+
+### Bug fixes (0.2.2)
+
+- Fix incorrect feature IDs usage in dynamic styling for [drawing package 2.0] derived tilesets.
+ ## [0.2.1] ### New features (0.2.1)
Stay up to date on Azure Maps:
> [Azure Maps Blog] [drawing package 2.0]: ./drawing-package-guide.md
+[0.2.2]: https://www.npmjs.com/package/azure-maps-indoor/v/0.2.2
[0.2.1]: https://www.npmjs.com/package/azure-maps-indoor/v/0.2.1 [0.2.0]: https://www.npmjs.com/package/azure-maps-indoor/v/0.2.0 [Azure Maps Creator Samples]: https://samples.azuremaps.com/?search=creator
azure-monitor Alerts Create New Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md
Alerts triggered by these alert rules contain a payload that uses the [common al
1. On the **Actions** tab, select or create the required [action groups](./action-groups.md).
-1. (Optional) In the <a name="custom-props">**Custom properties**</a> section, if you've configured action groups for this alert rule, you can add custom properties in key:value pairs to the alert notification payload to add more information to it. Add the property **Name** and **Value** for the custom property you want included in the payload.
+1. <a name="custom-props"></a>(Optional) In the **Custom properties** section, if you've configured action groups for this alert rule, you can add your own properties to include in the alert notification payload. You can use these properties in the actions called by the action group, such as webhook, Azure function or logic app actions.
- You can also use custom properties to extract and manipulate data from alert payloads that use the [common schema](alerts-common-schema.md). You can use those values in the action group webhook or logic app.
-
- > [!NOTE]
- > In this phase the custom properties are not part of the e-mail template
+ The custom properties are specified as key:value pairs, using either static text, a dynamic value extracted from the alert payload, or a combination of both.
+
+    The format for extracting a dynamic value from the alert payload is: `${<path to schema field>}`. For example: `${data.essentials.monitorCondition}`.
- The format for extracting values from the common schema, use a "$", and then the path of the [Common alert schema](alerts-common-schema.md) field inside curly brackets. For example: `${data.essentials.monitorCondition}`.
+ Use the [common alert schema](alerts-common-schema.md) format to specify the field in the payload, whether or not the action groups configured for the alert rule use the common schema.
- In the following examples, values in the **custom properties** are used to utilize data from the payload:
+ In the following examples, values in the **custom properties** are used to utilize data from a payload that uses the common alert schema:
**Example 1**
Alerts triggered by these alert rules contain a payload that uses the [common al
1. Select the **Severity**. 1. Enter values for the **Alert rule name** and the **Alert rule description**. 1. Select the **Region**.
- 1. In the <a name="managed-id">**Identity**</a> section, select which identity is used by the log alert rule to send the log query. This identity is used for authentication when the alert rule executes the log query.
+ 1. <a name="managed-id"></a>In the **Identity** section, select which identity is used by the log alert rule to send the log query. This identity is used for authentication when the alert rule executes the log query.
Keep these things in mind when selecting an identity: - A managed identity is required if you're sending a query to Azure Data Explorer.
azure-monitor Alerts Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-logic-apps.md
This example creates a logic app that uses the [common alerts schema](./alerts-c
1. Select **+** > **Add an action** to insert a new step. 1. In the **Search** field, search for and select **Initialize variable**.
- 1. In the **Name** field, enter the name of the variable, such as **AffectedResources**.
+ 1. In the **Name** field, enter the name of the variable, such as **AffectedResource**.
1. In the **Type** field, select **Array**. 1. In the **Value** field, select **Add dynamic Content**. Select the **Expression** tab and enter the string `split(triggerBody()?['data']?['essentials']?['alertTargetIDs'][0], '/')`.
This example creates a logic app that uses the [common alerts schema](./alerts-c
1. Select **+** > **Add an action** to insert another step. 1. In the **Search** field, search for and select **Azure Resource Manager** > **Read a resource**.
- 1. Populate the fields of the **Read a resource** action with the array values from the `AffectedResources` variable. In each of the fields, select the field and scroll down to **Enter a custom value**. Select **Add dynamic content**, and then select the **Expression** tab. Enter the strings from this table:
+ 1. Populate the fields of the **Read a resource** action with the array values from the `AffectedResource` variable. In each of the fields, select the field and scroll down to **Enter a custom value**. Select **Add dynamic content**, and then select the **Expression** tab. Enter the strings from this table:
|Field|String value| |||
To trigger your logic app, create an action group. Then create an alert that use
## Next steps * [Learn more about action groups](./action-groups.md)
-* [Learn more about the common alert schema](./alerts-common-schema.md)
+* [Learn more about the common alert schema](./alerts-common-schema.md)
azure-monitor Alerts Non Common Schema Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-non-common-schema-definitions.md
Title: Non-common alert schema definitions in Azure Monitor for test action grou
description: Understanding the non-common alert schema definitions for Azure Monitor for the test action group feature. Previously updated : 01/25/2022- Last updated : 06/19/2023+ # Non-common alert schema definitions for test action group (preview)
azure-monitor Alerts Processing Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-processing-rules.md
Title: Alert processing rules for Azure Monitor alerts
description: Understand Azure Monitor alert processing rules and how to configure and manage them. Previously updated : 2/23/2022- Last updated : 6/19/2023+ # Alert processing rules
azure-monitor It Service Management Connector Secure Webhook Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/it-service-management-connector-secure-webhook-connections.md
Title: 'IT Service Management Connector: Secure Webhook in Azure Monitor' description: This article shows you how to connect your IT Service Management products and services with Secure Webhook in Azure Monitor to centrally monitor and manage ITSM work items. Previously updated : 03/30/2022 Last updated : 06/19/2023 ms. reviewer: nolavime
The main benefits of the integration are:
## Next steps
-[Create ITSM work items from Azure alerts](./itsmc-overview.md)
+[Create ITSM work items from Azure alerts](./itsmc-overview.md)
azure-monitor Itsm Connector Secure Webhook Connections Azure Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsm-connector-secure-webhook-connections-azure-configuration.md
Title: 'IT Service Management Connector: Secure Webhook in Azure Monitor - Azure configurations' description: This article shows you how to configure Azure to connect your ITSM products or services with Secure Webhook in Azure Monitor to centrally monitor and manage ITSM work items. Previously updated : 04/28/2022 Last updated : 06/19/2023
azure-monitor Itsmc Connections Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-connections-servicenow.md
Title: Connect ServiceNow with IT Service Management Connector description: Learn how to connect ServiceNow with the IT Service Management Connector (ITSMC) in Azure Monitor to centrally monitor and manage ITSM work items. Previously updated : 2/23/2022 Last updated : 6/19/2023
azure-monitor Itsmc Connector Deletion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-connector-deletion.md
Title: Delete unused ITSM connectors description: This article provides an explanation of how to delete ITSM connectors and the action groups that are associated with it. Previously updated : 2/23/2022 Last updated : 06/19/2023
azure-monitor Itsmc Dashboard Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-dashboard-errors.md
Title: Connector status errors in the ITSMC dashboard description: Learn about common errors that exist in the IT Service Management Connector dashboard. Previously updated : 2/23/2022 Last updated : 06/19/2023
azure-monitor Itsmc Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-dashboard.md
Title: Investigate errors by using the ITSMC dashboard description: Learn how to use the IT Service Management Connector dashboard to investigate errors. Previously updated : 2/23/2022 Last updated : 06/19/2022
azure-monitor Itsmc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-overview.md
Title: IT Service Management integration description: This article provides an overview of the ways you can integrate with an IT Service Management product. Previously updated : 04/28/2022- Last updated : 06/19/2023+ # IT Service Management integration
azure-monitor Itsmc Resync Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-resync-servicenow.md
Title: How to manually fix ServiceNow sync problems description: Reset the connection to ServiceNow so alerts in Microsoft Azure can again call ServiceNow Previously updated : 03/30/2022 Last updated : 06/19/2023
azure-monitor Itsmc Secure Webhook Connections Bmc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-secure-webhook-connections-bmc.md
Title: 'IT Service Management Connector: Secure Webhook in Azure Monitor - Configuration with BMC' description: This article shows you how to connect your ITSM products or services with BMC on Secure Webhook in Azure Monitor. Previously updated : 03/30/2022 Last updated : 06/19/2023
azure-monitor Itsmc Secure Webhook Connections Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-secure-webhook-connections-servicenow.md
Title: 'ITSM Connector: Configure ServiceNow for Secure Webhook' description: This article shows you how to connect your IT Service Management products and services with ServiceNow and Secure Webhook in Azure Monitor. Previously updated : 03/30/2022 Last updated : 06/19/2023
azure-monitor Itsmc Synced Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-synced-data.md
Title: Data synced from your ITSM product to LA Workspace description: This article provides an overview of data synced from your ITSM product to LA Workspace. Previously updated : 2/23/2022 Last updated : 06/19/2023
azure-monitor Itsmc Troubleshoot Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-troubleshoot-overview.md
Title: Troubleshoot problems in ITSMC description: Learn how to resolve common problems in IT Service Management Connector. Previously updated : 2/23/2022 Last updated : 06/19/2023
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
Starting from version 3.2.0, if you want to capture controller "InProc" dependen
## Telemetry processors (preview)
-Yu can use telemetry processors to configure rules that are applied to request, dependency, and trace telemetry. For example, you can:
+You can use telemetry processors to configure rules that are applied to request, dependency, and trace telemetry. For example, you can:
* Mask sensitive data. * Conditionally add custom dimensions.
azure-monitor Container Insights Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-authentication.md
Click on the relevant tab for instructions to enable Managed identity authentica
When creating a new cluster from the Azure portal: On the **Integrations** tab, first check the box for *Enable Container Logs*, then check the box for *Use managed identity*. + For existing clusters, you can switch to Managed Identity authentication from the *Monitor settings* panel: Navigate to your AKS cluster, scroll through the menu on the left till you see the **Monitoring** section, there click on the **Insights** tab. In the Insights tab, click on the *Monitor Settings* option and check the box for *Use managed identity*
-If you don't see the *Use managed identity* option, you are using an SPN clusters. In that case, you must use command line tools to migrate. See other tabs for migration instructions and templates.
+
+If you don't see the *Use managed identity* option, you are using an SPN cluster. In that case, you must use command line tools to migrate. See other tabs for migration instructions and templates.
## [Azure CLI](#tab/cli)
az account set --subscription "Subscription Name"
az deployment group create --resource-group <ClusterResourceGroupName> --template-file ./existingClusterOnboarding.bicep --parameters ./existingClusterParam.json ```
-For new aks cluster:
+For a new AKS cluster:
Replace and use the managed cluster resources in this [guide](https://learn.microsoft.com/azure/aks/learn/quick-kubernetes-deploy-bicep?tabs=azure-cli) ## [Terraform](#tab/terraform)
-**Enable Monitoring with MSI without syslog for new aks cluster**
+**Enable Monitoring with MSI without syslog for new AKS cluster**
1. Download the Terraform template to enable monitoring with MSI without syslog: https://aka.ms/enable-monitoring-msi-terraform
https://aka.ms/enable-monitoring-msi-terraform
5. Run `terraform plan -out main.tfplan` to initialize the Terraform deployment. 6. Run `terraform apply main.tfplan` to apply the execution plan to your cloud infrastructure.
-**Enable Monitoring with MSI with syslog for new aks cluster**
+**Enable Monitoring with MSI with syslog for new AKS cluster**
1. Download Terraform template for enable monitoring msi with syslog enabled: https://aka.ms/enable-monitoring-msi-syslog-terraform 2. Adjust the azurerm_kubernetes_cluster resource in main.tf based on what cluster settings you're going to have
https://aka.ms/enable-monitoring-msi-syslog-terraform
5. Run `terraform plan -out main.tfplan` to initialize the Terraform deployment. 6. Run `terraform apply main.tfplan` to apply the execution plan to your cloud infrastructure.
-**Enable Monitoring with MSI for existing aks cluster:**
+**Enable Monitoring with MSI for existing AKS cluster:**
1. Import the existing cluster resource first with this command: ` terraform import azurerm_kubernetes_cluster.k8s <aksResourceId>` 2. Add the oms_agent add-on profile to the existing azurerm_kubernetes_cluster resource. ```
azure-monitor Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/data-sources.md
In addition to the standard tiers of an application, you may need to monitor oth
| Destination | Method | Description | Reference | |:|:|:|:|
-| Azure Monitor Logs | Logs ingestion API | Collect log data from any REST client and store in Log Analytics workspace using a data collection rule. | [Logs ingestion API in Azure Monitor (preview)](logs/logs-ingestion-api-overview.md) |
+| Azure Monitor Logs | Logs ingestion API | Collect log data from any REST client and store in Log Analytics workspace using a data collection rule. | [Logs ingestion API in Azure Monitor](logs/logs-ingestion-api-overview.md) |
| | Data Collector API | Collect log data from any REST client and store in Log Analytics workspace. | [Send log data to Azure Monitor with the HTTP Data Collector API (preview)](logs/data-collector-api.md) | | Azure Monitor Metrics | Custom Metrics API | Collect metric data from any REST client and store in Azure Monitor metrics database. | [Send custom metrics for an Azure resource to the Azure Monitor metric store by using a REST API](essentials/metrics-store-custom-rest-api.md) |
azure-monitor Prometheus Remote Write Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-remote-write-managed-identity.md
This step isn't required if you're using an AKS identity since it will already h
```yml prometheus: prometheusSpec:
- externalLabels:
- cluster: <AKS-CLUSTER-NAME>
-
- ## https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write
- remoteWrite:
- - url: 'http://localhost:8081/api/v1/write'
-
- ## Azure Managed Prometheus currently exports some default mixins in Grafana.
- ## These mixins are compatible with Azure Monitor agent on your Azure Kubernetes Service cluster.
- ## However, these mixins aren't compatible with Prometheus metrics scraped by the Kube Prometheus stack.
- ## In order to make these mixins compatible, uncomment remote write relabel configuration below:
-
-
- ## writeRelabelConfigs:
- ## - sourceLabels: [metrics_path]
- ## regex: /metrics/cadvisor
- ## targetLabel: job
- ## replacement: cadvisor
- ## action: replace
- ## - sourceLabels: [job]
- ## regex: 'node-exporter'
- ## targetLabel: job
- ## replacement: node
- ## action: replace
- containers:
- - name: prom-remotewrite
- image: <CONTAINER-IMAGE-VERSION>
- imagePullPolicy: Always
- ports:
- - name: rw-port
- containerPort: 8081
- livenessProbe:
- httpGet:
- path: /health
- port: rw-port
- initialDelaySeconds: 10
- timeoutSeconds: 10
- readinessProbe:
- httpGet:
- path: /ready
- port: rw-port
- initialDelaySeconds: 10
- timeoutSeconds: 10
- env:
- - name: INGESTION_URL
- value: <INGESTION_URL>
- - name: LISTENING_PORT
- value: '8081'
- - name: IDENTITY_TYPE
- value: userAssigned
- - name: AZURE_CLIENT_ID
- value: <MANAGED-IDENTITY-CLIENT-ID>
- # Optional parameter
- - name: CLUSTER
- value: <CLUSTER-NAME>
+ externalLabels:
+ cluster: <AKS-CLUSTER-NAME>
+
+ ## https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write
+ remoteWrite:
+ - url: 'http://localhost:8081/api/v1/write'
+
+ ## Azure Managed Prometheus currently exports some default mixins in Grafana.
+ ## These mixins are compatible with Azure Monitor agent on your Azure Kubernetes Service cluster.
+ ## However, these mixins aren't compatible with Prometheus metrics scraped by the Kube Prometheus stack.
+ ## In order to make these mixins compatible, uncomment remote write relabel configuration below:
+
+ ## writeRelabelConfigs:
+ ## - sourceLabels: [metrics_path]
+ ## regex: /metrics/cadvisor
+ ## targetLabel: job
+ ## replacement: cadvisor
+ ## action: replace
+ ## - sourceLabels: [job]
+ ## regex: 'node-exporter'
+ ## targetLabel: job
+ ## replacement: node
+ ## action: replace
+
+ containers:
+ - name: prom-remotewrite
+ image: <CONTAINER-IMAGE-VERSION>
+ imagePullPolicy: Always
+ ports:
+ - name: rw-port
+ containerPort: 8081
+ livenessProbe:
+ httpGet:
+ path: /health
+ port: rw-port
+ initialDelaySeconds: 10
+ timeoutSeconds: 10
+ readinessProbe:
+ httpGet:
+ path: /ready
+ port: rw-port
+ initialDelaySeconds: 10
+ timeoutSeconds: 10
+ env:
+ - name: INGESTION_URL
+ value: <INGESTION_URL>
+ - name: LISTENING_PORT
+ value: '8081'
+ - name: IDENTITY_TYPE
+ value: userAssigned
+ - name: AZURE_CLIENT_ID
+ value: <MANAGED-IDENTITY-CLIENT-ID>
+ # Optional parameter
+ - name: CLUSTER
+ value: <CLUSTER-NAME>
```
azure-monitor Azure Data Explorer Query Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/azure-data-explorer-query-storage.md
- Title: Query exported data from Azure Monitor by using Azure Data Explorer
-description: Use Azure Data Explorer to query data that was exported from your Log Analytics workspace to an Azure Storage account.
-- Previously updated : 03/22/2022--
-# Query exported data from Azure Monitor by using Azure Data Explorer
-Exporting data from Azure Monitor to an Azure Storage account enables low-cost retention and the ability to reallocate logs to different regions. Use Azure Data Explorer to query data that was exported from your Log Analytics workspaces. After configuration, supported tables that are sent from your workspaces to a storage account will be available as a data source for Azure Data Explorer.
-
-The process flow is to:
-
-1. Export data from the Log Analytics workspace to the storage account.
-1. Create an external table in your Azure Data Explorer cluster and mapping for the data types.
-1. Query data from Azure Data Explorer.
--
-## Send data to Azure Storage
-Azure Monitor logs can be exported to a storage account by using any of the following options:
--- Export all data from your Log Analytics workspace to a storage account or event hub. Use the Log Analytics workspace data export feature of Azure Monitor Logs. For more information, see [Log Analytics workspace data export in Azure Monitor](./logs-data-export.md).-- Scheduled export from a log query by using a logic app workflow. This method is similar to the data export feature but allows you to send filtered or aggregated data to Azure Storage. This method is subject to [log query limits](../service-limits.md#log-analytics-workspaces). For more information, see [Archive data from a Log Analytics workspace to Azure Storage by using Azure Logic Apps](./logs-export-logic-app.md).-- One-time export by using a logic app workflow. For more information, see [Azure Monitor Logs connector for Azure Logic Apps](../../connectors/connectors-azure-monitor-logs.md).-- One-time export to a local machine by using a PowerShell script. For more information, see [Invoke-AzOperationalInsightsQueryExport](https://www.powershellgallery.com/packages/Invoke-AzOperationalInsightsQueryExport).-
-> [!TIP]
-> You can use an existing Azure Data Explorer cluster or create a new dedicated cluster with the needed configurations.
-
-## Create an external table located in Azure Blob Storage
-Use [external tables](/azure/data-explorer/kusto/query/schema-entities/externaltables) to link Azure Data Explorer to a storage account. An external table is a Kusto schema entity that references data stored outside a Kusto database. Like tables, an external table has a well-defined schema. Unlike tables, data is stored and managed outside of a Kusto cluster. The exported data from the previous section is saved in JSON lines.
-
-To create a reference, you require the schema of the exported table. Use the [getschema](/azure/data-explorer/kusto/query/getschemaoperator) operator from Log Analytics to retrieve this information, which includes the table's columns and their data types.
--
-You can now use the output to create the Kusto query for building the external table.
-Follow the guidance in [Create and alter external tables in Azure Storage or Azure Data Lake](/azure/data-explorer/kusto/management/external-tables-azurestorage-azuredatalake) to create an external table in a JSON format. Then run the query from your Azure Data Explorer database.
-
->[!NOTE]
->The external table creation is built from two processes. The first process is to create the external table. The second process is to create the mapping.
-
-The following PowerShell script creates the [create](/azure/data-explorer/kusto/management/external-tables-azurestorage-azuredatalake#create-external-table-mapping) commands for the table and the mapping:
-
-```powershell
-PARAM(
- $resourcegroupname, #The name of the Azure resource group
- $TableName, # The Log Analytics table you want to convert to an external table
- $MapName, # The name of the map
- $subscriptionId, # The ID of the subscription
- $WorkspaceId, # The Log Analytics WorkspaceId
- $WorkspaceName, # The Log Analytics workspace name
- $BlobURL, # The Blob URL where the data is saved
- $ContainerAccessKey, # The blob container Access Key (option to add an SAS URL)
- $ExternalTableName = $null # The External Table name, null to use the same name
-)
-
-if($null -eq $ExternalTableName) {
- $ExternalTableName = $TableName
-}
-
-$query = $TableName + ' | getschema | project ColumnName, DataType'
-
-$output = (Invoke-AzOperationalInsightsQuery -WorkspaceId $WorkspaceId -Query $query).Results
-
-$FirstCommand = @()
-$SecondCommand = @()
-
-foreach ($record in $output) {
- if ($record.DataType -eq 'System.DateTime') {
- $dataType = 'datetime'
- } elseif ($record.DataType -eq 'System.Int32') {
- $dataType = 'int32'
- } elseif ($record.DataType -eq 'System.Double') {
- $dataType = 'double'
- } else {
- $dataType = 'string'
- }
- $FirstCommand += $record.ColumnName + ":" + "$dataType" + ","
- $SecondCommand += "{`"column`":" + "`"" + $record.ColumnName + "`"," + "`"datatype`":`"$dataType`",`"path`":`"$." + $record.ColumnName + "`"},"
-}
-$schema = ($FirstCommand -join '') -replace ',$'
-$mapping = ($SecondCommand -join '') -replace ',$'
-
-$CreateExternal = @'
-.create external table {0} ({1})
-kind=blob
-partition by (TimeGeneratedPartition:datetime = bin(TimeGenerated, 1min))
-pathformat = (datetime_pattern("'y='yyyy'/m='MM'/d='dd'/h='HH'/m='mm", TimeGeneratedPartition))
-dataformat=multijson
-(
- h@'{2}/WorkspaceResourceId=/subscriptions/{4}/resourcegroups/{6}/providers/microsoft.operationalinsights/workspaces/{5};{3}'
-)
-with
-(
- docstring = "Docs",
- folder = "ExternalTables"
-)
-'@ -f $TableName, $schema, $BlobURL, $ContainerAccessKey, $subscriptionId, $WorkspaceName.ToLower(), $resourcegroupname.ToLower(),$WorkspaceId
-
-$createMapping = @'
-.create external table {0} json mapping "{1}" '[{2}]'
-'@ -f $ExternalTableName, $MapName, $mapping
-
-Write-Host -ForegroundColor Red 'Copy and run the following commands (one by one), on your Azure Data Explorer cluster query window to create the external table and mappings:'
-write-host -ForegroundColor Green $CreateExternal
-Write-Host -ForegroundColor Green $createMapping
-```
-
-The following image shows an example of the output:
--
->[!TIP]
->* Copy, paste, and then run the output of the script in your Azure Data Explorer client tool to create the table and mapping.
->* To use all the data inside the container, alter the script and change the URL to be `https://your.blob.core.windows.net/containername;SecKey`.
-
-## Query the exported data from Azure Data Explorer
-
-After you configure the mapping, you can query the exported data from Azure Data Explorer. Your query requires the [external_table](/azure/data-explorer/kusto/query/externaltablefunction) function, as shown in the following example:
-
-```kusto
-external_table("HBTest","map") | take 10000
-```
-
-[![Screenshot that shows the Query Log Analytics exported data.](media/azure-data-explorer-query-storage/external-table-query.png)](media/azure-data-explorer-query-storage/external-table-query.png#lightbox)
-
-## Next steps
-
-Learn to [write queries in Azure Data Explorer](/azure/data-explorer/write-queries).
azure-monitor Basic Logs Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-query.md
For more information, see [Set a table's log data plan](basic-logs-configure.md)
> [!NOTE]
-> Other tools that use the Azure API for querying - for example, Grafana and Power BI - cannot access Basic Logs.
+> Other tools that use the Azure API for querying - for example, Grafana and Power BI - cannot access Basic Logs.
+
+> [!NOTE]
+> Billing of queries on Basic Logs is not yet enabled. You can query Basic Logs for free until early 2023.
## Limitations Queries with Basic Logs are subject to the following limitations:
azure-monitor Cost Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md
When you link workspaces to a cluster, the pricing tier is changed to cluster, a
If your linked workspace is using the legacy Per Node pricing tier, it will be billed based on data ingested against the cluster's commitment tier, and no longer Per Node. Per-node data allocations from Microsoft Defender for Cloud will continue to be applied.
-If a cluster is deleted, billing for the cluster will stop even if the cluster is within it's 31-day commitment period.
+If a cluster is deleted, billing for the cluster will stop even if the cluster is within its 31-day commitment period.
For more information on how to create a dedicated cluster and specify its billing type, see [Create a dedicated cluster](logs-dedicated-clusters.md#create-a-dedicated-cluster).
For situations in which older or archived logs must be intensively queried with
Because [workspace-based Application Insights resources](../app/create-workspace-resource.md) store their data in a Log Analytics workspace, the billing for data ingestion and retention is done by the workspace where the Application Insights data is located. For this reason, you can use all options of the Log Analytics pricing model, including [commitment tiers](#commitment-tiers), along with pay-as-you-go. > [!TIP]
-> Looking to adjust retention settings on your Application Insights tables? The table names have changed for workspace based components, see [Application Insights Table Structure](https://learn.microsoft.com/azure/azure-monitor/app/convert-classic-resource#table-structure)
+> Looking to adjust retention settings on your Application Insights tables? The table names have changed for workspace-based components. See [Application Insights Table Structure](/azure/azure-monitor/app/convert-classic-resource#table-structure).
Data ingestion and data retention for a [classic Application Insights resource](/previous-versions/azure/azure-monitor/app/create-new-resource) follow the same pay-as-you-go pricing as workspace-based resources, but they can't use commitment tiers.
In some scenarios, combining this data can result in cost savings. Typically, th
- [LinuxAuditLog](/azure/azure-monitor/reference/tables/linuxauditlog) - [SysmonEvent](/azure/azure-monitor/reference/tables/sysmonevent) - [ProtectionStatus](/azure/azure-monitor/reference/tables/protectionstatus)-- [Update](/azure/azure-monitor/reference/tables/update) and [UpdateSummary](/azure/azure-monitor/reference/tables/updatesummary) when the Update Management solution isn't running in the workspace or solution targeting is enabled. See [What data types are included in the 500-MB data daily allowance?](../../defender-for-cloud/plan-defender-for-servers-data-workspace.md#log-analytics-pricing-faq).
+- [Update](/azure/azure-monitor/reference/tables/update) and [UpdateSummary](/azure/azure-monitor/reference/tables/updatesummary) when the Update Management solution isn't running in the workspace or solution targeting is enabled. See [What data types are included in the 500-MB data daily allowance?](../../defender-for-cloud/faq-defender-for-servers.yml).
The count of monitored servers is calculated on an hourly granularity. The daily data allocation contributions from each monitored server are aggregated at the workspace level. If the workspace is in the legacy Per Node pricing tier, the Microsoft Defender for Cloud and Log Analytics allocations are combined and applied jointly to all billable ingested data.
On your bill, the service will be **Insight and Analytics** for Log Analytics us
### Standard and Premium pricing tiers
-Workspaces cannot be created in or moved to the **Standard** or **Premium** pricing tiers since October 1, 2016. Workspaces already in these pricing tiers can continue to use them, but if a workspace is moved out of these tiers, it can't be moved back. The Standard and Preium pricing tiers have fixed data retention of 30 days and 365 days, respectively. Workspaces in these pricing tiers don't support the use of [Basic Logs](basic-logs-configure.md) and Data Archive. Data ingestion meters on your Azure bill for these legacy tiers are called "Data Analyzed."
+Workspaces cannot be created in or moved to the **Standard** or **Premium** pricing tiers since October 1, 2016. Workspaces already in these pricing tiers can continue to use them, but if a workspace is moved out of these tiers, it can't be moved back. The Standard and Premium pricing tiers have fixed data retention of 30 days and 365 days, respectively. Workspaces in these pricing tiers don't support the use of [Basic Logs](basic-logs-configure.md) and Data Archive. Data ingestion meters on your Azure bill for these legacy tiers are called "Data Analyzed."
### Microsoft Defender for Cloud with legacy pricing tiers
azure-monitor Custom Logs Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/custom-logs-migrate.md
Title: Migrate from Data Collector API and custom fields-enabled tables to DCR-based custom log collection
-description: Steps that you must perform when migrating from Data Collector API and custom fields-enabled tables to DCR-based custom log collection.
- Previously updated : 01/06/2022
+ Title: Migrate from the HTTP Data Collector API to the Log Ingestion API
+description: Migrate from the legacy Azure Monitor Data Collector API to the Log Ingestion API, which provides more processing power and greater flexibility.
++++ Last updated : 05/23/2023
-# Migrate from Data Collector API and custom fields-enabled tables to DCR-based custom log collection
-This article describes how to migrate from [Data Collector API](data-collector-api.md) or [custom fields](custom-fields.md) in Azure Monitor to [DCR-based custom log collection](../essentials/data-collection-rule-overview.md). It includes configuration required for custom tables created in your Log Analytics workspace so that they can be used by [Logs ingestion API](logs-ingestion-api-overview.md) and [workspace transformations](../essentials/data-collection-transformations.md#workspace-transformation-dcr).
+# Migrate from the HTTP Data Collector API to the Log Ingestion API to send data to Azure Monitor Logs
-> [!IMPORTANT]
-> You do not need to follow this article if you are configuring your DCR-based custom logs [using the Azure Portal](tutorial-workspace-transformations-portal.md) since the configuration will be performed for you. This article only applies if you're configuring using Resource Manager templates APIs.
+The Azure Monitor [Log Ingestion API](../logs/logs-ingestion-api-overview.md) provides more processing power and greater flexibility in ingesting logs and [managing tables](../logs/manage-logs-tables.md) than the legacy [HTTP Data Collector API](../logs/data-collector-api.md). This article describes the differences between the Data Collector API and the Log Ingestion API and provides guidance and best practices for migrating to the new Log Ingestion API.
+
+> [!NOTE]
+> As a Microsoft MVP, [Morten Waltorp Knudsen](https://mortenknudsen.net/) contributed to and provided material feedback for this article. For an example of how you can automate the setup and ongoing use of the Log Ingestion API, see Morten's publicly available [AzLogDcrIngestPS PowerShell module](https://github.com/KnudsenMorten/AzLogDcrIngestPS).
+
+## Advantages of the Log Ingestion API
+
+The Log Ingestion API provides the following advantages over the Data Collector API:
+
+- Supports [transformations](../essentials/data-collection-transformations.md), which enable you to modify the data before it's ingested into the destination table, including filtering and data manipulation.
+- Lets you send data to multiple destinations.
+- Enables you to manage the destination table schema, including column names, and whether to add new columns to the destination table when the source data schema changes.
+## Prerequisites
+
+The migration procedure described in this article assumes you have:
+
+- A Log Analytics workspace where you have at least [contributor rights](manage-access.md#azure-rbac).
+- [Permissions to create data collection rules](../essentials/data-collection-rule-overview.md#permissions) in the Log Analytics workspace.
+- [An Azure AD application to authenticate API calls](../logs/tutorial-logs-ingestion-portal.md#create-azure-ad-application) or any other Resource Manager authentication scheme.
+
+## Create new resources required for the Log Ingestion API
+
+The Log Ingestion API requires you to create two new types of resources, which the HTTP Data Collector API doesn't require:
-## Background
-To use a table with the [Logs ingestion API](logs-ingestion-api-overview.md) or with a [workspace transformation](../essentials/data-collection-transformations.md#workspace-transformation-dcr), it must be configured to support new features. When you complete the process described in this article, the following actions are taken:
+- [Data collection endpoints](../essentials/data-collection-endpoint-overview.md), from which the data you collect is ingested into the pipeline for processing.
+- [Data collection rules](../essentials/data-collection-rule-overview.md), which define [data transformations](../essentials/data-collection-transformations.md) and the destination table to which the data is ingested.
-- The table is reconfigured to enable all DCR-based custom logs features. This includes DCR and DCE support and management with the new **Tables** control plane.-- Any previously defined custom fields will stop populating.-- The Data Collector API will continue to work but won't create any new columns. Data will only populate into any columns that was created prior to migration.-- The schema and historic data is preserved and can be accessed the same way it was previously.
+## Migrate existing custom tables or create new tables
-## Applicable scenarios
-This article is only applicable if all of the following criteria apply:
+If you have an existing custom table to which you currently send data using the Data Collector API, you can:
+
+- Migrate the table to continue ingesting data into the same table using the Log Ingestion API.
+- Maintain the existing table and data and set up a new table into which you ingest data using the Log Ingestion API. You can then delete the old table when you're ready.
+
+ This is the preferred option, especially if you need to make changes to the existing table. Changes to existing data types and multiple schema changes to existing Data Collector API custom tables can lead to errors.
+
+This table summarizes considerations to keep in mind for each option:
+
+||Table migration|Side-by-side implementation|
+|-|-|-|
+|**Table and column naming**|Reuse existing table name.<br>Column naming options: <br>- Use new column names and define a transformation to direct incoming data to the newly named column.<br>- Continue using old names.|Set the new table name freely.<br>Need to adjust integrations, dashboards, and alerts before switching to the new table.|
+|**Migration procedure**|One-off table migration. Not possible to roll back a migrated table. |Migration can be done gradually, per table.|
+|**Post-migration**|You can continue to ingest data using the HTTP Data Collector API with existing columns, except custom columns.<br>Ingest data into new columns using the Log Ingestion API only.| Data in the old table is available until the end of the retention period.<br>When you first set up a new table or make schema changes, it can take 10-15 minutes for the data changes to start appearing in the destination table.|
+
+To convert a table that uses the Data Collector API to data collection rules and the Log Ingestion API, issue this API call against the table:
+
+```rest
+POST https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/{tableName}/migrate?api-version=2021-12-01-preview
+```
+This call is idempotent, so it has no effect if the table has already been converted.
+
+The API call enables all DCR-based custom logs features on the table. The Data Collector API will continue to ingest data into existing columns, but won't create any new columns. Any previously defined [custom fields](../logs/custom-fields.md) won't continue to be populated. Another way to migrate an existing table to data collection rules, though not necessarily to the Log Ingestion API, is to apply a [workspace transformation](../logs/tutorial-workspace-transformations-portal.md) to the table.
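
As a rough illustration, here's a minimal Python sketch of issuing the migrate call with an Azure Resource Manager token; the subscription, resource group, workspace, and table names are placeholders, and any other Resource Manager authentication scheme works equally well.

```python
# Minimal sketch: convert a Data Collector API custom table to DCR-based custom logs
# by calling the migrate endpoint shown above. All names below are placeholders.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
workspace_name = "<workspace-name>"
table_name = "MyTable_CL"

# Acquire an Azure Resource Manager token.
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    "https://management.azure.com"
    f"/subscriptions/{subscription_id}"
    f"/resourcegroups/{resource_group}"
    "/providers/Microsoft.OperationalInsights"
    f"/workspaces/{workspace_name}"
    f"/tables/{table_name}/migrate?api-version=2021-12-01-preview"
)

response = requests.post(url, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()  # Idempotent: rerunning against an already-migrated table is harmless.
```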
+
+> [!IMPORTANT]
+> - Column names must start with a letter and can consist of up to 45 alphanumeric characters and the characters `_` and `-`.
+> - The following are reserved column names: `Type`, `TenantId`, `resource`, `resourceid`, `resourcename`, `resourcetype`, `subscriptionid`, `tenanted`.
+> - Custom columns you add to an Azure table must have the suffix `_CF`.
+> - If you update the table schema in your Log Analytics workspace, you must also update the input stream definition in the data collection rule to ingest data into new or modified columns.
-- You're going to send data to the table using the [Logs ingestion API](logs-ingestion-api-overview.md) or configure a transformation for the table in the [workspace transformation DCR](../essentials/data-collection-transformations.md#workspace-transformation-dcr), preserving both schema and historical data in that table.-- The table was either created using the Data Collector API, or has custom fields defined in it. -- You want to migrate using the APIs instead of the Azure portal as described in [Send custom logs to Azure Monitor Logs using the Azure portal](tutorial-logs-ingestion-portal.md) or [Add transformation in workspace data collection rule using the Azure portal](tutorial-workspace-transformations-portal.md).
+## Call the Log Ingestion API
-If all of these conditions aren't true, then you can use DCR-based log collection without following the procedure described here.
+The Log Ingestion API lets you send up to 1 MB of compressed or uncompressed data per call. If you need to send more than 1 MB of data, you can send multiple calls in parallel. This is a change from the Data Collector API, which lets you send up to 32 MB of data per call.
-## Migration procedure
-If the table that you're targeting with DCR-based log collection fits the criteria above, then you must perform the following steps:
+For information about how to call the Log Ingestion API, see [Log Ingestion REST API call](../logs/logs-ingestion-api-overview.md#rest-api-call).
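
To make the 1-MB limit concrete, here's a hedged Python sketch that splits records into sub-1-MB batches and posts them in parallel. The data collection endpoint, DCR immutable ID, stream name, api-version, and token scope are assumptions to replace with your own values.

```python
# Sketch only: batch records so each Log Ingestion API call stays under 1 MB,
# then send the batches in parallel. Endpoint, DCR ID, and stream name are placeholders.
import json
from concurrent.futures import ThreadPoolExecutor

import requests
from azure.identity import DefaultAzureCredential

ENDPOINT = "https://<dce-name>.<region>.ingest.monitor.azure.com"
DCR_IMMUTABLE_ID = "dcr-00000000000000000000000000000000"
STREAM_NAME = "Custom-MyTable_CL"
URL = f"{ENDPOINT}/dataCollectionRules/{DCR_IMMUTABLE_ID}/streams/{STREAM_NAME}?api-version=2023-01-01"

token = DefaultAzureCredential().get_token("https://monitor.azure.com/.default").token
HEADERS = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}

MAX_BYTES = 1_000_000  # keep each request comfortably under the 1-MB limit


def batches(records):
    """Yield lists of records whose serialized JSON stays under MAX_BYTES."""
    batch, size = [], 2  # 2 bytes for the enclosing "[]"
    for record in records:
        item = len(json.dumps(record)) + 1  # +1 for the separating comma
        if batch and size + item > MAX_BYTES:
            yield batch
            batch, size = [], 2
        batch.append(record)
        size += item
    if batch:
        yield batch


def send(batch):
    response = requests.post(URL, headers=HEADERS, data=json.dumps(batch))
    response.raise_for_status()


records = [{"TimeGenerated": "2023-06-01T00:00:00Z", "RawData": "example"}]  # your data here
with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(send, batches(records)))
```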
-1. Configure your data collection rule (DCR) following procedures at [Send data to Azure Monitor using Logs ingestion API (Resource Manager templates)](tutorial-logs-ingestion-api.md) or [Add transformation in workspace data collection rule to Azure Monitor using Resource Manager templates](tutorial-workspace-transformations-api.md).
+## Modify table schemas and data collection rules based on changes to source data object
-1. If using the Logs ingestion API, also [configure the data collection endpoint (DCE)](tutorial-logs-ingestion-api.md#create-data-collection-endpoint) and the agent or component that will be sending data to the API.
+While the Data Collector API automatically adjusts the destination table schema when the source data object schema changes, the Log Ingestion API doesn't. This ensures that you don't collect new data into columns that you didn't intend to create.
-1. Issue the following API call against your table. This call is idempotent, so there will be no effect if the table has already been migrated.
+When the source data schema changes, you can:
- ```rest
- POST https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/{tableName}/migrate?api-version=2021-12-01-preview
- ```
+- [Modify destination table schemas](../logs/create-custom-table.md) and [data collection rules](../essentials/data-collection-rule-edit.md) to align with source data schema changes.
+- [Define a transformation](../essentials/data-collection-transformations.md) in the data collection rule to send the new data into existing columns in the destination table.
+- Leave the destination table and data collection rule unchanged. In this case, you won't ingest the new data.
-1. Discontinue use of the Data Collector API and start using the new Logs ingestion API.
+> [!NOTE]
+> You can't reuse a column name with a data type that's different from the original data type defined for the column.
## Next steps
azure-portal Set Preferences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/set-preferences.md
The theme that you choose affects the background and font colors that appear in
Alternatively, you can choose a theme from the **High contrast theme** section. These themes can make the Azure portal easier to read, especially if you have a visual impairment. Selecting either the white or black high-contrast theme will override any other theme selections.
-### Focus navigation
-
-Choose whether or not to enable focus navigation.
-
-If enabled, only one screen at a time will be visible as you step through a process in the portal. If disabled, as you move through the steps of a process, you'll be able to move between them through a horizontal scroll bar.
- ### Startup page Choose one of the following options for the page you'll see when you first sign in to the Azure portal.
backup Backup Azure Monitoring Built In Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-monitoring-built-in-monitor.md
You can monitor all your backup items via a Recovery Services vault. Navigating
>For items backed-up to Azure using DPM, the list will show all the data sources protected (both disk and online) using the DPM server. If the protection is stopped for the datasource with backup data retained, the datasource will still be listed in the portal. You can go to the details of the data source to see if the recovery points are present in disk, online or both. Also, for datasources whose online protection is stopped but data is retained, billing for the online recovery points continues until the data is completely deleted. > > The DPM version must be DPM 1807 (5.1.378.0) or DPM 2019 (version 10.19.58.0 or above), for the backup items to be visible in the Recovery Services vault portal.
+>
+>For DPM, MABS and MARS, the Backup Item (VM name, cluster name, host name, volume or folder name) and Protection Group cannot include '<', '>', '%', '&', ':', '\', '?', '/', '#' or any control characters.
## Backup Jobs in Backup center
backup Sql Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sql-support-matrix.md
Title: Azure Backup support matrix for SQL Server Backup in Azure VMs description: Provides a summary of support settings and limitations when backing up SQL Server in Azure VMs with the Azure Backup service. Previously updated : 05/23/2022 Last updated : 06/19/2023
_*The database size limit depends on the data transfer rate that we support and
* TDE-enabled database backup is supported. To restore a TDE-encrypted database to another SQL Server, you need to first [restore the certificate to the destination server](/sql/relational-databases/security/encryption/move-a-tde-protected-database-to-another-sql-server). The backup compression for TDE-enabled databases for SQL Server 2016 and newer versions is available, but at lower transfer size as explained [here](https://techcommunity.microsoft.com/t5/sql-server/backup-compression-for-tde-enabled-databases-important-fixes-in/ba-p/385593). * The backup and restore operations for mirror databases and database snapshots aren't supported. * SQL Server **Failover Cluster Instance (FCI)** isn't supported.
-* Back up of databases with extensions in their names aren't supported. This is because the IIS server performs the [file extension request filtering](/iis/configuration/system.webserver/security/requestfiltering/fileextensions). However, note that we have allowlisted _.ad_, _.cs_, and _.master_ that can be used in the database names.
+* Backup of databases with extensions in their names isn't supported. This is because the IIS server performs the [file extension request filtering](/iis/configuration/system.webserver/security/requestfiltering/fileextensions). However, note that we've allowlisted `.ad`, `.cs`, and `.master`, which can be used in the database names.
## Backup throughput performance
Azure Backup supports a consistent data transfer rate of 350 MBps for full and d
- The backup schedules are spread across a subset of databases. Multiple backups running concurrently on a VM shares the network consumption rate between the backups. [Learn more](faq-backup-sql-server.yml#can-i-control-how-many-concurrent-backups-run-on-the-sql-server-) about how to control the number of concurrent backups. >[!NOTE]
-> [Download the detailed Resource Planner](https://download.microsoft.com/download/A/B/5/AB5D86F0-DCB7-4DC3-9872-6155C96DE500/SQL%20Server%20in%20Azure%20VM%20Backup%20Scale%20Calculator.xlsx) to calculate the approximate number of protected databases that are recommended per server based on the VM resources, bandwidth and the backup policy.
+>- The higher throughput is automatically throttled when the following conditions are met:
+> - All the databases should be above the size of *4 TB*.
+> - The databases should be hosted on Azure VMs that have *maximum uncached disk throughput metric greater than 800 MBps*.
+>- [Download the detailed Resource Planner](https://download.microsoft.com/download/A/B/5/AB5D86F0-DCB7-4DC3-9872-6155C96DE500/SQL%20Server%20in%20Azure%20VM%20Backup%20Scale%20Calculator.xlsx) to calculate the approximate number of protected databases that are recommended per server based on the VM resources, bandwidth and the backup policy.
## Next steps
bastion Native Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/native-client.md
description: Learn how to configure Bastion for native client connections.
Previously updated : 06/12/2023 Last updated : 06/19/2023
This article helps you configure your Bastion deployment to accept connections f
:::image type="content" source="./media/native-client/native-client-architecture.png" alt-text="Diagram shows a connection via native client." lightbox="./media/native-client/native-client-architecture.png":::
-* Your capabilities on the VM when connecting via native client are dependent on what is enabled on the native client.
-* You can configure this feature by either modifying an existing Bastion deployment, or you can deploy Bastion with the feature configuration already specified.
+You can configure this feature by modifying an existing Bastion deployment, or you can deploy Bastion with the feature configuration already specified. Your capabilities on the VM when connecting via native client are dependent on what is enabled on the native client.
-> [!IMPORTANT]
-> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
->
+>[!NOTE]
+>[!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
## Deploy Bastion with the native client feature
cognitive-services Postman https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/How-to/postman.md
Title: How to run Multivariate Anomaly Detection API (GA version) in Postman?
+ Title: How to run Multivariate Anomaly Detector API (GA version) in Postman?
description: Learn how to detect anomalies in your data either as a batch, or on streaming data with Postman.
Last updated 12/20/2022
-# How to run Multivariate Anomaly Detection API in Postman?
+# How to run Multivariate Anomaly Detector API in Postman?
This article will walk you through the process of using Postman to access the Multivariate Anomaly Detection REST API.
Select this button to fork the API collection in Postman and follow the steps in
[![Run in Postman](../media/postman/button.svg)](https://app.getpostman.com/run-collection/18763802-b90da6d8-0f98-4200-976f-546342abcade?action=collection%2Ffork&collection-url=entityId%3D18763802-b90da6d8-0f98-4200-976f-546342abcade%26entityType%3Dcollection%26workspaceId%3De1370b45-5076-4885-884f-e9a97136ddbc#?env%5BMVAD%5D=W3sia2V5IjoibW9kZWxJZCIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZSwidHlwZSI6ImRlZmF1bHQiLCJzZXNzaW9uVmFsdWUiOiJlNjQxZTJlYy01Mzg5LTExZWQtYTkyMC01MjcyNGM4YTZkZmEiLCJzZXNzaW9uSW5kZXgiOjB9LHsia2V5IjoicmVzdWx0SWQiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWUsInR5cGUiOiJkZWZhdWx0Iiwic2Vzc2lvblZhbHVlIjoiOGZkZTAwNDItNTM4YS0xMWVkLTlhNDEtMGUxMGNkOTEwZmZhIiwic2Vzc2lvbkluZGV4IjoxfSx7ImtleSI6Ik9jcC1BcGltLVN1YnNjcmlwdGlvbi1LZXkiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWUsInR5cGUiOiJzZWNyZXQiLCJzZXNzaW9uVmFsdWUiOiJjNzNjMGRhMzlhOTA0MjgzODA4ZjBmY2E0Zjc3MTFkOCIsInNlc3Npb25JbmRleCI6Mn0seyJrZXkiOiJlbmRwb2ludCIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZSwidHlwZSI6ImRlZmF1bHQiLCJzZXNzaW9uVmFsdWUiOiJodHRwczovL211bHRpLWFkLXRlc3QtdXNjeC5jb2duaXRpdmVzZXJ2aWNlcy5henVyZS5jb20vIiwic2Vzc2lvbkluZGV4IjozfSx7ImtleSI6ImRhdGFTb3VyY2UiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWUsInR5cGUiOiJkZWZhdWx0Iiwic2Vzc2lvblZhbHVlIjoiaHR0cHM6Ly9tdmFkZGF0YXNldC5ibG9iLmNvcmUud2luZG93cy5uZXQvc2FtcGxlLW9uZXRhYmxlL3NhbXBsZV9kYXRhXzVfMzAwMC5jc3YiLCJzZXNzaW9uSW5kZXgiOjR9XQ==)
-## Multivariate Anomaly Detection API
+## Multivariate Anomaly Detector API
1. Select environment as **MVAD**.
cognitive-services Best Practices Multivariate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/concepts/best-practices-multivariate.md
keywords: anomaly detection, machine learning, algorithms
-# Best practices for using the Multivariate Anomaly Detection API
+# Best practices for using the Multivariate Anomaly Detector API
-This article will provide guidance around recommended practices to follow when using the multivariate Anomaly Detection (MVAD) APIs.
+This article provides guidance around recommended practices to follow when using the multivariate Anomaly Detector (MVAD) APIs.
In this tutorial, you'll: > [!div class="checklist"]
In this tutorial, you'll:
## API usage
-Follow the instructions in this section to avoid errors while using MVAD. If you still get errors, please refer to the [full list of error codes](./troubleshoot.md) for explanations and actions to take.
+Follow the instructions in this section to avoid errors while using MVAD. If you still get errors, refer to the [full list of error codes](./troubleshoot.md) for explanations and actions to take.
[!INCLUDE [mvad-input-params](../includes/mvad-input-params.md)]
Now you're able to run your code with MVAD APIs without any error. What could be
* The underlying model of MVAD has millions of parameters. It needs a minimum number of data points to learn an optimal set of parameters. The empirical rule is that you need to provide **5,000 or more data points (timestamps) per variable** to train the model for good accuracy. In general, the more training data, the better the accuracy. However, in cases when you're not able to accrue that much data, we still encourage you to experiment with less data and see if the compromised accuracy is still acceptable. * Every time you call the inference API, you need to ensure that the source data file contains just enough data points. That is normally `slidingWindow` + number of data points that **really** need inference results. For example, in a streaming case where each time you want to run inference on **ONE** new timestamp, the data file could contain only the leading `slidingWindow` plus **ONE** data point; then you could move on and create another zip file with the same number of data points (`slidingWindow` + 1) but moving ONE step to the "right" side and submit for another inference job.
- Anything beyond that or "before" the leading sliding window won't impact the inference result at all and may only cause performance downgrade.Anything below that may lead to an `NotEnoughInput` error.
+ Anything beyond that or "before" the leading sliding window won't impact the inference result at all and may only cause performance downgrade. Anything below that may lead to a `NotEnoughInput` error.
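
As a small illustration of the sliding-window sizing described above, the following pandas sketch trims a streaming data file to `slidingWindow` + 1 rows before an inference call; the window size, column name, and file names are placeholders.

```python
# Sketch: keep only slidingWindow + 1 trailing rows for a streaming inference call.
import pandas as pd

SLIDING_WINDOW = 200   # must match the slidingWindow used when the model was trained
POINTS_TO_INFER = 1    # number of new timestamps you want results for

df = pd.read_csv("variable_1.csv", parse_dates=["timestamp"]).sort_values("timestamp")

required = SLIDING_WINDOW + POINTS_TO_INFER
if len(df) < required:
    raise ValueError(f"Need at least {required} rows, got {len(df)}; this would cause NotEnoughInput.")

# Rows before the leading sliding window don't change the result, so drop them.
df.tail(required).to_csv("variable_1_window.csv", index=False)
```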
### Timestamp round-up
In a group of variables (time series), each variable may be collected from an in
| 12:01:34 | 1.7 |
| 12:02:04 | 2.0 |
-We have two variables collected from two sensors which send one data point every 30 seconds. However, the sensors aren't sending data points at a strict even frequency, but sometimes earlier and sometimes later. Because MVAD will take into consideration correlations between different variables, timestamps must be properly aligned so that the metrics can correctly reflect the condition of the system. In the above example, timestamps of variable 1 and variable 2 must be properly 'rounded' to their frequency before alignment.
+We have two variables collected from two sensors which send one data point every 30 seconds. However, the sensors aren't sending data points at a strict even frequency, but sometimes earlier and sometimes later. Because MVAD takes into consideration correlations between different variables, timestamps must be properly aligned so that the metrics can correctly reflect the condition of the system. In the above example, timestamps of variable 1 and variable 2 must be properly 'rounded' to their frequency before alignment.
-Let's see what happens if they're not pre-processed. If we set `alignMode` to be `Outer` (which means union of two sets), the merged table will be
+Let's see what happens if they're not pre-processed. If we set `alignMode` to be `Outer` (which means union of two sets), the merged table is:
| timestamp | Variable-1 | Variable-2 |
| --------- | ---------- | ---------- |
Let's see what happens if they're not pre-processed. If we set `alignMode` to be
| 12:02:04 | `nan` | 2.0 |
| 12:02:08 | 1.3 | `nan` |
-`nan` indicates missing values. Obviously, the merged table isn't what you might have expected. Variable 1 and variable 2 interleave, and the MVAD model can't extract information about correlations between them. If we set `alignMode` to `Inner`, the merged table will be empty as there's no common timestamp in variable 1 and variable 2.
+`nan` indicates missing values. Obviously, the merged table isn't what you might have expected. Variable 1 and variable 2 interleave, and the MVAD model can't extract information about correlations between them. If we set `alignMode` to `Inner`, the merged table is empty as there's no common timestamp in variable 1 and variable 2.
-Therefore, the timestamps of variable 1 and variable 2 should be pre-processed (rounded to the nearest 30-second timestamps) and the new time series are
+Therefore, the timestamps of variable 1 and variable 2 should be pre-processed (rounded to the nearest 30-second timestamps) and the new time series are:
*Variable-1*
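
A minimal pandas sketch of this round-up step, assuming each variable lives in its own CSV file with a `timestamp` column (file and column names are placeholders):

```python
# Sketch: round each sensor's timestamps to its 30-second collection frequency
# before merging, so the variables align instead of interleaving with nan gaps.
import pandas as pd

var1 = pd.read_csv("variable_1.csv", parse_dates=["timestamp"])
var2 = pd.read_csv("variable_2.csv", parse_dates=["timestamp"])

for df in (var1, var2):
    df["timestamp"] = df["timestamp"].dt.round("30s")

merged = (
    var1.merge(var2, on="timestamp", how="outer", suffixes=("_var1", "_var2"))
        .sort_values("timestamp")
)
print(merged.head())
```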
There are some limitations in both the training and inference APIs, you should b
## Model quality ### How to deal with false positive and false negative in real scenarios?
-We have provided severity which indicates the significance of anomalies. False positives may be filtered out by setting up a threshold on the severity. Sometimes too many false positives may appear when there are pattern shifts in the inference data. In such cases a model may need to be retrained on new data. If the training data contains too many anomalies, there could be false negatives in the detection results. This is because the model learns patterns from the training data and anomalies may bring bias to the model. Thus proper data cleaning may help reduce false negatives.
+We have provided a severity field that indicates the significance of anomalies. False positives may be filtered out by setting up a threshold on the severity. Sometimes too many false positives may appear when there are pattern shifts in the inference data. In such cases a model may need to be retrained on new data. If the training data contains too many anomalies, there could be false negatives in the detection results. This is because the model learns patterns from the training data and anomalies may bring bias to the model. Thus proper data cleaning may help reduce false negatives.
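
For example, a small helper along these lines can drop low-severity detections; the response shape (per-timestamp `value.severity` and `value.isAnomaly` fields) and the threshold value are assumptions to adjust for your scenario.

```python
# Sketch: keep only anomalies whose severity exceeds a threshold, to cut false positives.
def significant_anomalies(inference_response: dict, threshold: float = 0.25) -> list:
    return [
        item["timestamp"]
        for item in inference_response.get("results", [])
        if item.get("value", {}).get("isAnomaly")
        and item["value"].get("severity", 0.0) >= threshold
    ]
```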
### How to estimate which model is best to use according to training loss and validation loss? Generally speaking, it's hard to decide which model is the best without a labeled dataset. However, we can leverage the training and validation losses to have a rough estimation and discard those bad models. First, we need to observe whether training losses converge. Divergent losses often indicate poor quality of the model. Second, loss values may help identify whether underfitting or overfitting occurs. Models that are underfitting or overfitting may not have desired performance. Third, although the definition of the loss function doesn't reflect the detection performance directly, loss values may be an auxiliary tool to estimate model quality. Low loss value is a necessary condition for a good model, thus we may discard models with high loss values.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/overview.md
With Anomaly Detector, you can either detect anomalies in one variable using Uni
### Univariate Anomaly Detection
-The Univariate Anomaly Detection API enables you to monitor and detect abnormalities in your time series data without having to know machine learning. The algorithms adapt by automatically identifying and applying the best-fitting models to your data, regardless of industry, scenario, or data volume. Using your time series data, the API determines boundaries for anomaly detection, expected values, and which data points are anomalies.
+The Univariate Anomaly Detector API enables you to monitor and detect abnormalities in your time series data without having to know machine learning. The algorithms adapt by automatically identifying and applying the best-fitting models to your data, regardless of industry, scenario, or data volume. Using your time series data, the API determines boundaries for anomaly detection, expected values, and which data points are anomalies.
![Line graph of detect pattern changes in service requests.](./media/anomaly_detection2.png)
cognitive-services Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/tutorials/azure-data-explorer.md
The [Anomaly Detector API](/azure/cognitive-services/anomaly-detector/overview-m
### Function 1: series_uv_anomalies_fl()
-The function **[series_uv_anomalies_fl()](/azure/data-explorer/kusto/functions-library/series-uv-anomalies-fl?tabs=adhoc)** detects anomalies in time series by calling the [Univariate Anomaly Detection API](../overview.md). The function accepts a limited set of time series as numerical dynamic arrays and the required anomaly detection sensitivity level. Each time series is converted into the required JSON (JavaScript Object Notation) format and posts it to the Anomaly Detector service endpoint. The service response has dynamic arrays of high/low/all anomalies, the modeled baseline time series, its normal high/low boundaries (a value above or below the high/low boundary is an anomaly) and the detected seasonality.
+The function **[series_uv_anomalies_fl()](/azure/data-explorer/kusto/functions-library/series-uv-anomalies-fl?tabs=adhoc)** detects anomalies in time series by calling the [Univariate Anomaly Detector API](../overview.md). The function accepts a limited set of time series as numerical dynamic arrays and the required anomaly detection sensitivity level. Each time series is converted into the required JSON (JavaScript Object Notation) format and posts it to the Anomaly Detector service endpoint. The service response has dynamic arrays of high/low/all anomalies, the modeled baseline time series, its normal high/low boundaries (a value above or below the high/low boundary is an anomaly) and the detected seasonality.
### Function 2: series_uv_change_points_fl()
-The function **[series_uv_change_points_fl()](/azure/data-explorer/kusto/functions-library/series-uv-change-points-fl?tabs=adhoc)** finds change points in time series by calling the Univariate Anomaly Detection API. The function accepts a limited set of time series as numerical dynamic arrays, the change point detection threshold, and the minimum size of the stable trend window. Each time series is converted into the required JSON format and posts it to the Anomaly Detector service endpoint. The service response has dynamic arrays of change points, their respective confidence, and the detected seasonality.
+The function **[series_uv_change_points_fl()](/azure/data-explorer/kusto/functions-library/series-uv-change-points-fl?tabs=adhoc)** finds change points in time series by calling the Univariate Anomaly Detector API. The function accepts a limited set of time series as numerical dynamic arrays, the change point detection threshold, and the minimum size of the stable trend window. Each time series is converted into the required JSON format and posts it to the Anomaly Detector service endpoint. The service response has dynamic arrays of change points, their respective confidence, and the detected seasonality.
These two functions are user-defined [tabular functions](/azure/data-explorer/kusto/query/functions/user-defined-functions#tabular-function) applied using the [invoke operator](/azure/data-explorer/kusto/query/invokeoperator). You can either embed its code in your query or you can define it as a stored function in your database.
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/whats-new.md
We have also added links to some user-generated content. Those items will be mar
* Multivariate Anomaly Detection will begin charging as of January 10th, 2023. For pricing details, see the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/anomaly-detector/). ### Dec 2022
-* Multivariate Anomaly Detection SDK is updated to match with GA API for four languages.
+* The following SDKs for Multivariate Anomaly Detection are updated to match with the generally available REST API.
|SDK Package |Sample Code |
|---|---|
We have also added links to some user-generated content. Those items will be mar
### September 2021 * Anomaly Detector (univariate) available in Jio India West.
-* Multivariate anomaly detection APIs deployed in five more regions: East Asia, West US, Central India, Korea Central, Germany West Central.
+* Multivariate anomaly detector APIs deployed in five more regions: East Asia, West US, Central India, Korea Central, Germany West Central.
### August 2021
-* Multivariate anomaly detection APIs deployed in five more regions: West US 3, Japan East, Brazil South, Central US, Norway East. Now in total 15 regions are supported.
+* Multivariate anomaly detector APIs deployed in five more regions: West US 3, Japan East, Brazil South, Central US, Norway East. Now in total 15 regions are supported.
### July 2021
-* Multivariate anomaly detection APIs deployed in four more regions: Australia East, Canada Central, North Europe, and Southeast Asia. Now in total 10 regions are supported.
+* Multivariate anomaly detector APIs deployed in four more regions: Australia East, Canada Central, North Europe, and Southeast Asia. Now in total 10 regions are supported.
* Anomaly Detector (univariate) available in West US 3 and Norway East. ### June 2021
-* Multivariate anomaly detection APIs available in more regions: West US 2, West Europe, East US 2, South Central US, East US, and UK South.
+* Multivariate anomaly detector APIs available in more regions: West US 2, West Europe, East US 2, South Central US, East US, and UK South.
* Anomaly Detector (univariate) available in Azure cloud for US Government. * Anomaly Detector (univariate) available in Azure China (China North 2).
We have also added links to some user-generated content. Those items will be mar
* [IoT Edge module](https://azuremarketplace.microsoft.com/marketplace/apps/azure-cognitive-service.edge-anomaly-detector) (univariate) published. * Anomaly Detector (univariate) available in Azure China (China East 2).
-* Multivariate anomaly detection APIs preview in selected regions (West US 2, West Europe).
+* Multivariate anomaly detector APIs preview in selected regions (West US 2, West Europe).
### September 2020
We have also added links to some user-generated content. Those items will be mar
## Videos * Nov 12, 2022 AI Show: [Multivariate Anomaly Detection is GA](/shows/ai-show/ep-70-the-latest-from-azure-multivariate-anomaly-detection) (Seth with Louise Han).
-* May 7, 2021 [New to Anomaly Detector: Multivariate Capabilities](/shows/AI-Show/New-to-Anomaly-Detector-Multivariate-Capabilities) - AI Show on the new multivariate anomaly detection APIs with Tony Xing and Seth Juarez
+* May 7, 2021 [New to Anomaly Detector: Multivariate Capabilities](/shows/AI-Show/New-to-Anomaly-Detector-Multivariate-Capabilities) - AI Show on the new multivariate anomaly detector APIs with Tony Xing and Seth Juarez
* April 20, 2021 AI Show Live | Episode 11| New to Anomaly Detector: Multivariate Capabilities - AI Show live recording with Tony Xing and Seth Juarez * May 18, 2020 [Inside Anomaly Detector](/shows/AI-Show/Inside-Anomaly-Detector) - AI Show with Qun Ying and Seth Juarez * September 19, 2019 **[UGC]** [Detect Anomalies in Your Data with the Anomaly Detector](https://www.youtube.com/watch?v=gfb63wvjnYQ) - Video by Jon Wood
cognitive-services Call Center Telephony Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/call-center-telephony-integration.md
To support real-time scenarios, like Virtual Agent and Agent Assist in Call Centers, an integration with the Call Centers telephony system is required.
-Typically, the integration with Microsoft Speech Services is handled by a telephony client connected to the customers SIP/RTP processor, for example, to a Session Border Controller (SBC).
+Typically, integration with the Speech service is handled by a telephony client connected to the customer's SIP/RTP processor, for example, to a Session Border Controller (SBC).
Usually the telephony client handles the incoming audio stream from the SIP/RTP processor, converts it to PCM, and connects the streams using continuous recognition. It also triages the processing of the results, for example, analyzing speech transcripts for Agent Assist or connecting with a dialog processing engine (for example, Azure Bot Framework or Power Virtual Agents) for Virtual Agent.
-For easier integration the Speech Service also supports "ALAW in WAV container" and "MULAW in WAV container" for audio streaming.
+For easier integration the Speech service also supports "ALAW in WAV container" and "MULAW in WAV container" for audio streaming.
To build this integration we recommend using the [Speech SDK](./speech-sdk.md).
To build this integration we recommend using the [Speech SDK](./speech-sdk.md).
> [!TIP] > For guidance on reducing Text to speech latency check out the **[How to lower speech synthesis latency](./how-to-lower-speech-synthesis-latency.md?pivots=programming-language-csharp)** guide. >
-> In addition, consider implementing a Text to speech cache to store all synthesized audio and playback from the cache in case a string has previously been synthesized.
+> In addition, consider implementing a text to speech cache to store all synthesized audio and playback from the cache in case a string has previously been synthesized.
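
One generic way to do that, not tied to any particular SDK, is to memoize synthesis results by text and voice; `synthesize` below is a placeholder for your own text to speech call.

```python
# Sketch of a text to speech cache: repeated (text, voice) pairs are served from
# memory instead of triggering another synthesis request.
from functools import lru_cache
from typing import Callable

def make_cached_tts(synthesize: Callable[[str, str], bytes], maxsize: int = 1024):
    @lru_cache(maxsize=maxsize)
    def cached(text: str, voice: str) -> bytes:
        return synthesize(text, voice)  # only called on a cache miss
    return cached

# Usage (placeholders): cached_tts = make_cached_tts(my_tts_call)
# audio = cached_tts("Thanks for calling.", "en-US-JennyNeural")
```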
## Next steps
cognitive-services Custom Commands Encryption Of Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-commands-encryption-of-data-at-rest.md
By default, your subscription uses Microsoft-managed encryption keys. However, y
> [!IMPORTANT]
-> Customer-managed keys are only available resources created after 27 June, 2020. To use CMK with Speech Services, you will need to create a new Speech resource. Once the resource is created, you can use Azure Key Vault to set up your managed identity.
+> Customer-managed keys are only available for resources created after 27 June, 2020. To use CMK with the Speech service, you will need to create a new Speech resource. Once the resource is created, you can use Azure Key Vault to set up your managed identity.
-To request the ability to use customer-managed keys, fill out and submit Customer-Managed Key Request Form. It will take approximately 3-5 business days to hear back on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once approved for using CMK with Speech Services, you'll need to create a new Speech resource from the Azure portal.
+To request the ability to use customer-managed keys, fill out and submit the Customer-Managed Key Request Form. It will take approximately 3-5 business days to hear back on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once approved for using CMK with the Speech service, you'll need to create a new Speech resource from the Azure portal.
> [!NOTE] > **Customer-managed keys (CMK) are supported only for Custom Commands.** >
cognitive-services Custom Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-commands.md
Good candidates for Custom Commands have a fixed vocabulary with well-defined se
## Getting started with Custom Commands
-Our goal with Custom Commands is to reduce your cognitive load to learn all the different technologies and focus building your voice commanding app. First step for using Custom Commands to <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" target="_blank">create an Azure Speech resource </a>. You can author your Custom Commands app on the Speech Studio and publish it, after which an on-device application can communicate with it using the Speech SDK.
+Our goal with Custom Commands is to reduce your cognitive load of learning all the different technologies and let you focus on building your voice commanding app. The first step for using Custom Commands is to <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" target="_blank">create a Speech resource</a>. You can author your Custom Commands app in Speech Studio and publish it, after which an on-device application can communicate with it using the Speech SDK.
#### Authoring flow for Custom Commands ![Authoring flow for Custom Commands](media/voice-assistants/custom-commands-flow.png "The Custom Commands authoring flow")
cognitive-services Custom Speech Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-speech-overview.md
Title: Custom Speech overview - Speech service
-description: Custom Speech is a set of online tools that allows you to evaluate and improve the Microsoft speech to text accuracy for your applications, tools, and products.
+description: Custom Speech is a set of online tools that allows you to evaluate and improve the speech to text accuracy for your applications, tools, and products.
With Custom Speech, you can upload your own data, test and train a custom model,
Here's more information about the sequence of steps shown in the previous diagram: 1. [Create a project](how-to-custom-speech-create-project.md) and choose a model. Use a <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech resource" target="_blank">Speech resource</a> that you create in the Azure portal. If you will train a custom model with audio data, choose a Speech resource region with dedicated hardware for training audio data. See footnotes in the [regions](regions.md#speech-service) table for more information.
-1. [Upload test data](./how-to-custom-speech-upload-data.md). Upload test data to evaluate the Microsoft speech to text offering for your applications, tools, and products.
+1. [Upload test data](./how-to-custom-speech-upload-data.md). Upload test data to evaluate the speech to text offering for your applications, tools, and products.
1. [Test recognition quality](how-to-custom-speech-inspect-data.md). Use the [Speech Studio](https://aka.ms/speechstudio/customspeech) to play back uploaded audio and inspect the speech recognition quality of your test data. 1. [Test model quantitatively](how-to-custom-speech-evaluate-data.md). Evaluate and improve the accuracy of the speech to text model. The Speech service provides a quantitative word error rate (WER), which you can use to determine if additional training is required. 1. [Train a model](how-to-custom-speech-train-model.md). Provide written transcripts and related text, along with the corresponding audio data. Testing a model before and after training is optional but recommended.
cognitive-services Devices Sdk Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/devices-sdk-release-notes.md
The following sections list changes in the most recent releases.
## Speech Devices SDK 0.2.12733: 2018-May release
-The first public preview release of the Cognitive Services Speech Devices SDK.
+The first public preview release of the Speech Devices SDK.
cognitive-services Embedded Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/embedded-speech.md
zone_pivot_groups: programming-languages-set-thirteen
Embedded Speech is designed for on-device [speech to text](speech-to-text.md) and [text to speech](text-to-speech.md) scenarios where cloud connectivity is intermittent or unavailable. For example, you can use embedded speech in industrial equipment, a voice enabled air conditioning unit, or a car that might travel out of range. You can also develop hybrid cloud and offline solutions. For scenarios where your devices must be in a secure environment like a bank or government entity, you should first consider [disconnected containers](../containers/disconnected-containers.md). > [!IMPORTANT]
-> Microsoft limits access to embedded speech. You can apply for access through the Azure Cognitive Services [embedded speech limited access review](https://aka.ms/csgate-embedded-speech). For more information, see [Limited access for embedded speech](/legal/cognitive-services/speech-service/embedded-speech/limited-access-embedded-speech?context=/azure/cognitive-services/speech-service/context/context).
+> Microsoft limits access to embedded speech. You can apply for access through the Azure Cognitive Services Speech [embedded speech limited access review](https://aka.ms/csgate-embedded-speech). For more information, see [Limited access for embedded speech](/legal/cognitive-services/speech-service/embedded-speech/limited-access-embedded-speech?context=/azure/cognitive-services/speech-service/context/context).
## Platform requirements
cognitive-services How To Configure Openssl Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-configure-openssl-linux.md
export SSL_CERT_FILE=/etc/pki/tls/certs/ca-bundle.crt
## Certificate revocation checks
-When the Speech SDK connects to the Speech Service, it checks the Transport Layer Security (TLS/SSL) certificate. The Speech SDK verifies that the certificate reported by the remote endpoint is trusted and hasn't been revoked. This verification provides a layer of protection against attacks involving spoofing and other related vectors. The check is accomplished by retrieving a certificate revocation list (CRL) from a certificate authority (CA) used by Azure. A list of Azure CA download locations for updated TLS CRLs can be found in [this document](../../security/fundamentals/tls-certificate-changes.md).
+When the Speech SDK connects to the Speech service, it checks the Transport Layer Security (TLS/SSL) certificate. The Speech SDK verifies that the certificate reported by the remote endpoint is trusted and hasn't been revoked. This verification provides a layer of protection against attacks involving spoofing and other related vectors. The check is accomplished by retrieving a certificate revocation list (CRL) from a certificate authority (CA) used by Azure. A list of Azure CA download locations for updated TLS CRLs can be found in [this document](../../security/fundamentals/tls-certificate-changes.md).
-If a destination posing as the Speech Service reports a certificate that's been revoked in a retrieved CRL, the SDK will terminate the connection and report an error via a `Canceled` event. The authenticity of a reported certificate can't be checked without an updated CRL. Therefore, the Speech SDK will also treat a failure to download a CRL from an Azure CA location as an error.
+If a destination posing as the Speech service reports a certificate that's been revoked in a retrieved CRL, the SDK will terminate the connection and report an error via a `Canceled` event. The authenticity of a reported certificate can't be checked without an updated CRL. Therefore, the Speech SDK will also treat a failure to download a CRL from an Azure CA location as an error.
> [!WARNING] > If your solution uses a proxy or firewall, it should be configured to allow access to all certificate revocation list URLs used by Azure. Note that many of these URLs are outside the `microsoft.com` domain, so allowing access to `*.microsoft.com` is not enough. See [this document](../../security/fundamentals/tls-certificate-changes.md) for details. In exceptional cases you may ignore CRL failures (see [the corresponding section](#bypassing-or-ignoring-crl-failures)), but such a configuration is strongly discouraged, especially for production scenarios.
If a destination posing as the Speech Service reports a certificate that's been
One cause of CRL-related failures is the use of large CRL files. This class of error is typically only applicable to special environments with extended CA chains. Standard public endpoints shouldn't encounter this class of issue.
-The default maximum CRL size used by the Speech SDK (10 MB) can be adjusted per config object. The property key for this adjustment is `CONFIG_MAX_CRL_SIZE_KB` and the value, specified as a string, is by default "10000" (10 MB). For example, when creating a `SpeechRecognizer` object (that manages a connection to the Speech Service), you can set this property in its `SpeechConfig`. In the snippet below, the configuration is adjusted to permit a CRL file size up to 15 MB.
+The default maximum CRL size used by the Speech SDK (10 MB) can be adjusted per config object. The property key for this adjustment is `CONFIG_MAX_CRL_SIZE_KB` and the value, specified as a string, is by default "10000" (10 MB). For example, when creating a `SpeechRecognizer` object (that manages a connection to the Speech service), you can set this property in its `SpeechConfig`. In the snippet below, the configuration is adjusted to permit a CRL file size up to 15 MB.
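A minimal C# sketch of that adjustment (the subscription key and region are placeholders):

```csharp
using Microsoft.CognitiveServices.Speech;

// Placeholders; use your own key and region.
var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");

// The value is expressed in KB as a string, so "15000" permits CRL files up to 15 MB.
speechConfig.SetProperty("CONFIG_MAX_CRL_SIZE_KB", "15000");

// A recognizer created from this config uses the adjusted limit for its connection.
var recognizer = new SpeechRecognizer(speechConfig);
```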
::: zone pivot="programming-language-csharp"
speechConfig.properties.SetPropertyByString("OPENSSL_CONTINUE_ON_CRL_DOWNLOAD_FA
::: zone-end
-To turn off certificate revocation checks, set the property `"OPENSSL_DISABLE_CRL_CHECK"` to `"true"`. Then, while connecting to the Speech Service, there will be no attempt to check or download a CRL and no automatic verification of a reported TLS/SSL certificate.
+To turn off certificate revocation checks, set the property `"OPENSSL_DISABLE_CRL_CHECK"` to `"true"`. Then, while connecting to the Speech service, there will be no attempt to check or download a CRL and no automatic verification of a reported TLS/SSL certificate.
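A hedged C# sketch of that setting, again with placeholder key and region. As noted, this disables an important security check and isn't recommended for production use.

```csharp
using Microsoft.CognitiveServices.Speech;

var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");

// Skips CRL download and revocation checking for connections made with this config.
speechConfig.SetProperty("OPENSSL_DISABLE_CRL_CHECK", "true");
```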
::: zone pivot="programming-language-csharp"
cognitive-services How To Control Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-control-connections.md
Title: Service connectivity how-to - Speech SDK
-description: Learn how to monitor for connection status and manually pre-connect or disconnect from the Speech Service.
+description: Learn how to monitor for connection status and manually connect or disconnect from the Speech service.
# How to monitor and control service connections with the Speech SDK
-`SpeechRecognizer` and other objects in the Speech SDK automatically connect to the Speech Service when it's appropriate. Sometimes, you may either want additional control over when connections begin and end or want more information about when the Speech SDK establishes or loses its connection. The supporting `Connection` class provides this capability.
+`SpeechRecognizer` and other objects in the Speech SDK automatically connect to the Speech service when it's appropriate. Sometimes, you may either want extra control over when connections begin and end or want more information about when the Speech SDK establishes or loses its connection. The supporting `Connection` class provides this capability.
## Retrieve a Connection object
-A `Connection` can be obtained from most top-level Speech SDK objects via a static `From...` factory method, e.g. `Connection::FromRecognizer(recognizer)` for `SpeechRecognizer`.
+A `Connection` can be obtained from most top-level Speech SDK objects via a static `From...` factory method, for example, `Connection::FromRecognizer(recognizer)` for `SpeechRecognizer`.
::: zone pivot="programming-language-csharp"
Connection connection = Connection.fromRecognizer(recognizer);
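For comparison, a minimal C# sketch, assuming an existing `SpeechRecognizer` named `recognizer`:

```csharp
using Microsoft.CognitiveServices.Speech;

// Obtain the Connection object associated with an existing recognizer.
using var connection = Connection.FromRecognizer(recognizer);
```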
## Monitor for connections and disconnections
-A `Connection` raises `Connected` and `Disconnected` events when the corresponding status change happens in the Speech SDK's connection to the Speech Service. You can listen to these events to know the latest connection state.
+A `Connection` raises `Connected` and `Disconnected` events when the corresponding status change happens in the Speech SDK's connection to the Speech service. You can listen to these events to know the latest connection state.
::: zone pivot="programming-language-csharp"
connection.disconnected.addEventListener((s, connectionEventArgs) -> {
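A short C# sketch of the same pattern, assuming a `Connection` object named `connection` obtained as shown earlier:

```csharp
using System;
using Microsoft.CognitiveServices.Speech;

// Track the latest connection state by subscribing to both events.
connection.Connected += (sender, connectionEventArgs) =>
{
    Console.WriteLine($"Connected. Session: {connectionEventArgs.SessionId}");
};

connection.Disconnected += (sender, connectionEventArgs) =>
{
    Console.WriteLine($"Disconnected. Session: {connectionEventArgs.SessionId}");
};
```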
## Connect and disconnect
-`Connection` has explicit methods to start or end a connection to the Speech Service. Reasons you may want to use these include:
+`Connection` has explicit methods to start or end a connection to the Speech service. Reasons you may want to control the connection include:
-- "Pre-connecting" to the Speech Service to allow the first interaction to start as quickly as possible
+- Preconnecting to the Speech service to allow the first interaction to start as quickly as possible
- Establishing connection at a specific time in your application's logic to gracefully and predictably handle initial connection failures - Disconnecting to clear an idle connection when you don't expect immediate reconnection but also don't want to destroy the object Some important notes on the behavior when manually modifying connection state: - Trying to connect when already connected will do nothing. It will not generate an error. Monitor the `Connected` and `Disconnected` events if you want to know the current state of the connection.-- A failure to connect that originates from a problem that has no involvement with the Speech Service -- such as attempting to do so from an invalid state -- will throw or return an error as appropriate to the programming language. Failures that require network resolution -- such as authentication failures -- will not throw or return an error but instead generate a `Canceled` event on the top-level object the `Connection` was created from.-- Manually disconnecting from the Speech Service during an ongoing interaction will result in a connection error and loss of data for that interaction. Connection errors are surfaced on the appropriate top-level object's `Canceled` event.
+- A failure to connect that originates from a problem that has no involvement with the Speech service--such as attempting to do so from an invalid state--will throw or return an error as appropriate to the programming language. Failures that require network resolution--such as authentication failures--will not throw or return an error but instead generate a `Canceled` event on the top-level object the `Connection` was created from.
+- Manually disconnecting from the Speech service during an ongoing interaction results in a connection error and loss of data for that interaction. Connection errors are surfaced on the appropriate top-level object's `Canceled` event.
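As a sketch of the explicit control described above (C#, assuming an existing `Connection` named `connection`):

```csharp
// Preconnect so the first recognition starts without waiting for connection setup.
// Pass true if the connection will be used for continuous recognition,
// false for single-shot recognition.
connection.Open(forContinuousRecognition: false);

// ... perform recognitions ...

// Later, release an idle connection without disposing the recognizer itself.
connection.Close();
```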
::: zone pivot="programming-language-csharp"
cognitive-services How To Custom Commands Deploy Cicd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-deploy-cicd.md
In this article, you learn how to set up continuous deployment for your Custom C
## Export/Import/Publish
-The scripts are hosted at [Cognitive Services Voice Assistant - Custom Commands](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/tree/master/custom-commands). Clone the scripts in the bash directory to your repository. Make sure you maintain the same path.
+The scripts are hosted at [Voice Assistant - Custom Commands](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/tree/master/custom-commands). Clone the scripts in the bash directory to your repository. Make sure you maintain the same path.
### Set up a pipeline
The scripts are hosted at [Cognitive Services Voice Assistant - Custom Commands]
In case you want to keep the definition of your application in a repository, we provide the scripts for deployments from source code. Since the scripts are in bash, if you're using Windows you'll need to install the [Linux subsystem](/windows/wsl/install-win10).
-The scripts are hosted at [Cognitive Services Voice Assistant - Custom Commands](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/tree/master/custom-commands). Clone the scripts in the bash directory to your repository. Make sure you maintain the same path.
+The scripts are hosted at [Voice Assistant - Custom Commands](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/tree/master/custom-commands). Clone the scripts in the bash directory to your repository. Make sure you maintain the same path.
### Prepare your repository
cognitive-services How To Custom Commands Developer Flow Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-developer-flow-test.md
To set up the client, checkout [Windows Voice Assistant Client](https://github.c
> [!div class="mx-imgBorder"] > ![WVAC Create profile](media/custom-commands/conversation.png)
-## Test programatically with the Cognitive Services Voice Assistant Test Tool
+## Test programmatically with the Voice Assistant Test Tool
The Voice Assistant Test Tool is a configurable .NET Core C# console application for end-to-end functional regression tests for your Microsoft Voice Assistant.
cognitive-services How To Custom Speech Continuous Integration Continuous Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-continuous-integration-continuous-deployment.md
Most teams require a manual review and approval process for deployment to a prod
Use the following tools for CI/CD automation workflows for Custom Speech: - [Azure CLI](/cli/azure/) to create an Azure service principal authentication, query Azure subscriptions, and store test results in Azure Blob.-- [Azure Speech CLI](spx-overview.md) to interact with the Speech Service from the command line or an automated workflow.
+- [Azure AI Speech CLI](spx-overview.md) to interact with the Speech service from the command line or an automated workflow.
## DevOps solution for Custom Speech using GitHub Actions
cognitive-services How To Custom Speech Evaluate Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-evaluate-data.md
no-loc: [$$, '\times', '\over']
# Test accuracy of a Custom Speech model
-In this article, you learn how to quantitatively measure and improve the accuracy of the Microsoft speech to text model or your own custom models. [Audio + human-labeled transcript](how-to-custom-speech-test-and-train.md#audio--human-labeled-transcript-data-for-training-or-testing) data is required to test accuracy. You should provide from 30 minutes to 5 hours of representative audio.
+In this article, you learn how to quantitatively measure and improve the accuracy of the base speech to text model or your own custom models. [Audio + human-labeled transcript](how-to-custom-speech-test-and-train.md#audio--human-labeled-transcript-data-for-training-or-testing) data is required to test accuracy. You should provide from 30 minutes to 5 hours of representative audio.
[!INCLUDE [service-pricing-advisory](includes/service-pricing-advisory.md)] ## Create a test
-You can test the accuracy of your custom model by creating a test. A test requires a collection of audio files and their corresponding transcriptions. You can compare a custom model's accuracy with a Microsoft speech to text base model or another custom model. After you [get](#get-test-results) the test results, [evaluate](#evaluate-word-error-rate) the word error rate (WER) compared to speech recognition results.
+You can test the accuracy of your custom model by creating a test. A test requires a collection of audio files and their corresponding transcriptions. You can compare a custom model's accuracy with a speech to text base model or another custom model. After you [get](#get-test-results) the test results, [evaluate](#evaluate-word-error-rate) the word error rate (WER) compared to speech recognition results.
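For reference, WER is conventionally computed from the insertions ($I$), deletions ($D$), and substitutions ($S$) counted against the $N$ words of the human-labeled transcript:

$$
WER = {I + D + S \over N} \times 100\%
$$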
::: zone pivot="speech-studio"
cognitive-services How To Custom Speech Upload Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-upload-data.md
zone_pivot_groups: speech-studio-cli-rest
# Upload training and testing datasets for Custom Speech
-You need audio or text data for testing the accuracy of Microsoft speech recognition or training your custom models. For information about the data types supported for testing or training your model, see [Training and testing datasets](how-to-custom-speech-test-and-train.md).
+You need audio or text data for testing the accuracy of speech recognition or training your custom models. For information about the data types supported for testing or training your model, see [Training and testing datasets](how-to-custom-speech-test-and-train.md).
> [!TIP] > You can also use the [online transcription editor](how-to-custom-speech-transcription-editor.md) to create and refine labeled audio datasets.
To upload your own datasets in Speech Studio, follow these steps:
1. Select **Custom Speech** > Your project name > **Speech datasets** > **Upload data**. 1. Select the **Training data** or **Testing data** tab. 1. Select a dataset type, and then select **Next**.
-1. Specify the dataset location, and then select **Next**. You can choose a local file or enter a remote location such as Azure Blob URL. If you select remote location, and you don't use trusted Azure services security mechanism (see next Note), then the remote location should be an URL that can be retrieved with a simple anonymous GET request. For example, a [SAS URL](/azure/storage/common/storage-sas-overview) or a publicly accessible URL. URLs that require extra authorization, or expect user interaction are not supported.
+1. Specify the dataset location, and then select **Next**. You can choose a local file or enter a remote location such as an Azure Blob URL. If you select a remote location and don't use the trusted Azure services security mechanism (see the next note), then the remote location should be a URL that can be retrieved with a simple anonymous GET request. For example, a [SAS URL](/azure/storage/common/storage-sas-overview) or a publicly accessible URL. URLs that require extra authorization or expect user interaction aren't supported.
> [!NOTE] > If you use Azure Blob URL, you can ensure maximum security of your dataset files by using trusted Azure services security mechanism. You will use the same techniques as for Batch transcription and plain Storage Account URLs for your dataset files. See details [here](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism).
To create a dataset and connect it to an existing project, use the `spx csr data
- Set the `project` parameter to the ID of an existing project. This is recommended so that you can also view and manage the dataset in Speech Studio. You can run the `spx csr project list` command to get available projects. - Set the required `kind` parameter. The possible set of values for dataset kind are: Language, Acoustic, Pronunciation, and AudioFiles.-- Set the required `contentUrl` parameter. This is the location of the dataset. If you don't use trusted Azure services security mechanism (see next Note), then the `contentUrl` parameter should be an URL that can be retrieved with a simple anonymous GET request. For example, a [SAS URL](/azure/storage/common/storage-sas-overview) or a publicly accessible URL. URLs that require extra authorization, or expect user interaction are not supported.
+- Set the required `contentUrl` parameter. This is the location of the dataset. If you don't use the trusted Azure services security mechanism (see the next note), then the `contentUrl` parameter should be a URL that can be retrieved with a simple anonymous GET request. For example, a [SAS URL](/azure/storage/common/storage-sas-overview) or a publicly accessible URL. URLs that require extra authorization or expect user interaction aren't supported.
> [!NOTE] > If you use Azure Blob URL, you can ensure maximum security of your dataset files by using trusted Azure services security mechanism. You will use the same techniques as for Batch transcription and plain Storage Account URLs for your dataset files. See details [here](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism).
To create a dataset and connect it to an existing project, use the [Datasets_Cre
- Set the `project` property to the URI of an existing project. This is recommended so that you can also view and manage the dataset in Speech Studio. You can make a [Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_List) request to get available projects. - Set the required `kind` property. The possible set of values for dataset kind are: Language, Acoustic, Pronunciation, and AudioFiles.-- Set the required `contentUrl` property. This is the location of the dataset. If you don't use trusted Azure services security mechanism (see next Note), then the `contentUrl` parameter should be an URL that can be retrieved with a simple anonymous GET request. For example, a [SAS URL](/azure/storage/common/storage-sas-overview) or a publicly accessible URL. URLs that require extra authorization, or expect user interaction are not supported.
+- Set the required `contentUrl` property. This is the location of the dataset. If you don't use the trusted Azure services security mechanism (see the next note), then the `contentUrl` parameter should be a URL that can be retrieved with a simple anonymous GET request. For example, a [SAS URL](/azure/storage/common/storage-sas-overview) or a publicly accessible URL. URLs that require extra authorization or expect user interaction aren't supported. An example request sketch follows the note below.
> [!NOTE] > If you use Azure Blob URL, you can ensure maximum security of your dataset files by using trusted Azure services security mechanism. You will use the same techniques as for Batch transcription and plain Storage Account URLs for your dataset files. See details [here](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism).
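As an illustration only, here's a minimal C# sketch of such a request. The region, API version path, project ID, dataset location, and body values shown here are assumptions and placeholders; confirm the exact endpoint and schema against the Datasets_Create reference linked above.

```csharp
using System;
using System.Net.Http;
using System.Text;

// Placeholders throughout; verify the endpoint, API version, and body schema before use.
var client = new HttpClient();
client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "YourSubscriptionKey");

var body = """
{
  "displayName": "My acoustic dataset",
  "kind": "Acoustic",
  "locale": "en-US",
  "contentUrl": "https://contoso.blob.core.windows.net/datasets/audio.zip",
  "project": { "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/YourProjectId" }
}
""";

// Assumed endpoint format for the Speech to text REST API v3.1 in the eastus region.
var response = await client.PostAsync(
    "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets",
    new StringContent(body, Encoding.UTF8, "application/json"));

Console.WriteLine(await response.Content.ReadAsStringAsync());
```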
cognitive-services How To Track Speech Sdk Memory Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-track-speech-sdk-memory-usage.md
Title: How to track Speech SDK memory usage - Speech service
-description: The Speech Service SDK supports numerous programming languages for speech to text and text to speech conversion, along with speech translation. This article discusses memory management tooling built into the SDK.
+description: The Speech SDK supports numerous programming languages for speech to text and text to speech conversion, along with speech translation. This article discusses memory management tooling built into the SDK.
cognitive-services How To Windows Voice Assistants Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-windows-voice-assistants-get-started.md
# Get started with voice assistants on Windows
-This guide will take you through the steps to begin developing a voice assistant on Windows.
+This guide takes you through the steps to begin developing a voice assistant on Windows.
## Set up your development environment
-To start developing a voice assistant for Windows, you will need to make sure you have the proper development environment.
+To start developing a voice assistant for Windows, you'll need to make sure you have the proper development environment.
-- **Visual Studio:** You will need to install [Microsoft Visual Studio 2017](https://visualstudio.microsoft.com/), Community Edition or higher
+- **Visual Studio:** You'll need to install [Microsoft Visual Studio 2017](https://visualstudio.microsoft.com/), Community Edition or higher
- **Windows version**: A PC with a Windows Insider fast ring build of Windows and the Windows Insider version of the Windows SDK. This sample code is verified as working on Windows Insider Release Build 19025.vb_release_analog.191112-1600 using Windows SDK 19018. Any Build or SDK above the specified versions should be compatible. - **UWP development tools**: The Universal Windows Platform development workload in Visual Studio. See the UWP [Get set up](/windows/uwp/get-started/get-set-up) page to get your machine ready for developing UWP Applications. - **A working microphone and audio output** ## Obtain resources from Microsoft
-Some resources necessary for a completely customized voice agent on Windows will require resources from Microsoft. The [UWP Voice Assistant Sample](windows-voice-assistants-faq.yml#the-uwp-voice-assistant-sample) provides sample versions of these resources for initial development and testing, so this section is unnecessary for initial development.
+Some resources necessary for a customized voice agent on Windows must be obtained from Microsoft. The [UWP Voice Assistant Sample](windows-voice-assistants-faq.yml#the-uwp-voice-assistant-sample) provides sample versions of these resources for initial development and testing, so this section is unnecessary for initial development.
- **Keyword model:** Voice activation requires a keyword model from Microsoft in the form of a .bin file. The .bin file provided in the UWP Voice Assistant Sample is trained on the keyword *Contoso*.-- **Limited Access Feature Token:** Since the ConversationalAgent APIs provide access to microphone audio, they are protected under Limited Access Feature restrictions. To use a Limited Access Feature, you will need to obtain a Limited Access Feature token connected to the package identity of your application from Microsoft.
+- **Limited Access Feature Token:** Since the ConversationalAgent APIs provide access to microphone audio, they're protected under Limited Access Feature restrictions. To use a Limited Access Feature, you'll need to obtain a Limited Access Feature token connected to the package identity of your application from Microsoft.
## Establish a dialog service
For a complete voice assistant experience, the application will need a dialog se
- Provide the text to a bot - Translate the text response of the bot to an audio output
-These are the requirements to create a basic dialog service using Direct Line Speech.
+Here are the requirements to create a basic dialog service using Direct Line Speech.
-- **Speech resource:** A resource for Cognitive Speech Services for speech to text and text to speech conversions. Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a new Azure Cognitive Services resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource).
+- **Speech resource:** An Azure resource for Speech features such as speech to text and text to speech. Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a new Azure Cognitive Services resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource).
- **Bot Framework bot:** A bot created using Bot Framework version 4.2 or above that's subscribed to [Direct Line Speech](./direct-line-speech.md) to enable voice input and output. [This guide](./tutorial-voice-enable-your-bot-speech-sdk.md) contains step-by-step instructions to make an "echo bot" and subscribe it to Direct Line Speech. You can also go [here](https://blog.botframework.com/2018/05/07/build-a-microsoft-bot-framework-bot-with-the-bot-builder-sdk-v4/) for steps on how to create a customized bot, then follow the same steps [here](./tutorial-voice-enable-your-bot-speech-sdk.md) to subscribe it to Direct Line Speech, but with your new bot rather than the "echo bot". ## Try out the sample app
cognitive-services Ingestion Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/ingestion-client.md
Title: Ingestion Client - Speech service
-description: In this article we describe a tool released on GitHub that enables customers push audio files to Speech Service easily and quickly
+description: In this article, we describe a tool released on GitHub that enables customers to push audio files to the Speech service easily and quickly
cognitive-services Keyword Recognition Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/keyword-recognition-overview.md
Running keyword verification and speech to text in parallel yields the following
### Keyword verification responses and latency considerations
-For each request to the service, keyword verification returns one of two responses: accepted or rejected. The processing latency varies depending on the length of the keyword and the length of the audio segment expected to contain the keyword. Processing latency doesn't include network cost between the client and Azure Speech services.
+For each request to the service, keyword verification returns one of two responses: accepted or rejected. The processing latency varies depending on the length of the keyword and the length of the audio segment expected to contain the keyword. Processing latency doesn't include network cost between the client and Speech services.
| Keyword verification response | Description | | -- | -- |
The Speech SDK enables easy use of personalized on-device keyword recognition mo
| Scenario | Description | Samples | | -- | -- | - |
-| End-to-end keyword recognition with speech to text | Best suited for products that will use a customized on-device keyword model from custom keyword with Azure Speech keyword verification and speech to text. This scenario is the most common. | <ul><li>[Voice assistant sample code](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant)</li><li>[Tutorial: Voice enable your assistant built using Azure Bot Service with the C# Speech SDK](./tutorial-voice-enable-your-bot-speech-sdk.md)</li><li>[Tutorial: Create a custom commands application with simple voice commands](./how-to-develop-custom-commands-application.md)</li></ul> |
+| End-to-end keyword recognition with speech to text | Best suited for products that will use a customized on-device keyword model from custom keyword with keyword verification and speech to text. This scenario is the most common. | <ul><li>[Voice assistant sample code](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant)</li><li>[Tutorial: Voice enable your assistant built using Azure Bot Service with the C# Speech SDK](./tutorial-voice-enable-your-bot-speech-sdk.md)</li><li>[Tutorial: Create a custom commands application with simple voice commands](./how-to-develop-custom-commands-application.md)</li></ul> |
| Offline keyword recognition | Best suited for products without network connectivity that will use a customized on-device keyword model from custom keyword. | <ul><li>[C# on Windows UWP sample](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/csharp/uwp/keyword-recognizer)</li><li>[Java on Android sample](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/java/android/keyword-recognizer)</li></ul> ## Next steps
cognitive-services Multi Device Conversation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/multi-device-conversation.md
Title: Multi-device Conversation overview - Speech Service
+ Title: Multi-device Conversation overview - Speech service
description: Multi-device conversation makes it easy to create a speech or text conversation between multiple clients and coordinate the messages that are sent between them.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/overview.md
# What is the Speech service?
-The Speech service provides speech to text and text to speech capabilities with an [Azure Speech resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource). You can transcribe speech to text with high accuracy, produce natural-sounding text to speech voices, translate spoken audio, and use speaker recognition during conversations.
+The Speech service provides speech to text and text to speech capabilities with a [Speech resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource). You can transcribe speech to text with high accuracy, produce natural-sounding text to speech voices, translate spoken audio, and use speaker recognition during conversations.
:::image type="content" border="false" source="media/overview/speech-features-highlight.png" alt-text="Image of tiles that highlight some Speech service features.":::
cognitive-services Quickstart Custom Commands Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/quickstart-custom-commands-application.md
At this time, Custom Commands supports speech resources created in regions that
## Prerequisites > [!div class="checklist"]
-> * <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" target="_blank">Create an Azure Speech resource in a region that supports Custom Commands.</a> Refer to the **Region Availability** section above for list of supported regions.
+> * <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" target="_blank">Create a Speech resource in a region that supports Custom Commands.</a> Refer to the **Region Availability** section above for a list of supported regions.
> * Download the sample [Smart Room Lite](https://aka.ms/speech/cc-quickstart) json file. > * Download the latest version of [Windows Voice Assistant Client](https://aka.ms/speech/va-samples-wvac).
cognitive-services Multi Device Conversation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/quickstarts/multi-device-conversation.md
Title: 'Quickstart: Multi-device Conversation - Speech Service'
+ Title: 'Quickstart: Multi-device Conversation - Speech service'
description: In this quickstart, you'll learn how to create and join clients to a multi-device conversation by using the Speech SDK.
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/releasenotes.md
Title: What's new - Speech Service
+ Title: What's new - Speech service
description: Find out about new releases and features for the Azure Cognitive Service for Speech.
cognitive-services Resiliency And Recovery Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/resiliency-and-recovery-plan.md
Follow these steps to configure your client to monitor for errors:
1. Find the [list of regionally available endpoints in our documentation](./rest-speech-to-text.md). 2. Select a primary and one or more secondary/backup regions from the list.
-3. From Azure portal, create Speech Service resources for each region.
+3. From the Azure portal, create Speech service resources for each region.
- If you have set a specific quota, you may also consider setting the same quota in the backup regions. See details in [Speech service Quotas and Limits](./speech-services-quotas-and-limits.md). 4. Each region has its own STS token service. For the primary region and any backup regions your client configuration file needs to know the:
The recovery from regional failures for this usage type can be instantaneous and
Data assets, models or deployments in one region can't be made visible or accessible in any other region.
-You should create Speech Service resources in both a main and a secondary region by following the same steps as used for default endpoints.
+You should create Speech service resources in both a main and a secondary region by following the same steps as used for default endpoints.
### Custom Speech
-Custom Speech Service doesn't support automatic failover. We suggest the following steps to prepare for manual or automatic failover implemented in your client code. In these steps, you replicate custom models in a secondary region. With this preparation, your client code can switch to a secondary region when the primary region fails.
+Custom Speech doesn't support automatic failover. We suggest the following steps to prepare for manual or automatic failover implemented in your client code. In these steps, you replicate custom models in a secondary region. With this preparation, your client code can switch to a secondary region when the primary region fails.
1. Create your custom model in one main region (Primary). 2. Run the [Models_CopyTo](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_CopyTo) operation to replicate the custom model to all prepared regions (Secondary).
cognitive-services Rest Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-text-to-speech.md
User-Agent: <Your application name>
<speak version='1.0' xml:lang='en-US'><voice xml:lang='en-US' xml:gender='Male' name='en-US-ChristopherNeural'>
- Microsoft Speech Service Text to speech API
+ I'm excited to try text to speech!
</voice></speak> ``` <sup>*</sup> For the Content-Length, you should use your own content length. In most cases, this value is calculated automatically.
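To make the request shape concrete, here's a minimal C# sketch that sends the SSML body above. The region, key, application name, and output format are placeholders; the endpoint and header names follow the pattern described in this article.

```csharp
using System;
using System.Net.Http;
using System.Text;

// Placeholder region and key; substitute your own values.
var client = new HttpClient();
client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "YourSubscriptionKey");
client.DefaultRequestHeaders.Add("X-Microsoft-OutputFormat", "audio-16khz-128kbitrate-mono-mp3");
client.DefaultRequestHeaders.Add("User-Agent", "YourApplicationName");

var ssml = "<speak version='1.0' xml:lang='en-US'>" +
           "<voice xml:lang='en-US' xml:gender='Male' name='en-US-ChristopherNeural'>" +
           "I'm excited to try text to speech!" +
           "</voice></speak>";

// Content-Type is set to application/ssml+xml via the StringContent media type.
var response = await client.PostAsync(
    "https://eastus.tts.speech.microsoft.com/cognitiveservices/v1",
    new StringContent(ssml, Encoding.UTF8, "application/ssml+xml"));

var audio = await response.Content.ReadAsByteArrayAsync();
Console.WriteLine($"Received {audio.Length} bytes of audio.");
```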
cognitive-services Sovereign Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/sovereign-clouds.md
Last updated 05/10/2022
-# Speech Services in sovereign clouds
+# Speech service in sovereign clouds
## Azure Government (United States)
Available to US government entities and their partners only. See more informatio
### Endpoint information
-This section contains Speech Services endpoint information for the usage with [Speech SDK](speech-sdk.md), [Speech to text REST API](rest-speech-to-text.md), and [Text to speech REST API](rest-text-to-speech.md).
+This section contains Speech service endpoint information for the usage with [Speech SDK](speech-sdk.md), [Speech to text REST API](rest-speech-to-text.md), and [Text to speech REST API](rest-text-to-speech.md).
-#### Speech Services REST API
+#### Speech service REST API
-Speech Services REST API endpoints in Azure Government have the following format:
+Speech service REST API endpoints in Azure Government have the following format:
| REST API type / operation | Endpoint format | |--|--|
Available to organizations with a business presence in China. See more informati
### Endpoint information
-This section contains Speech Services endpoint information for the usage with [Speech SDK](speech-sdk.md), [Speech to text REST API](rest-speech-to-text.md), and [Text to speech REST API](rest-text-to-speech.md).
+This section contains Speech service endpoint information for the usage with [Speech SDK](speech-sdk.md), [Speech to text REST API](rest-speech-to-text.md), and [Text to speech REST API](rest-text-to-speech.md).
-#### Speech Services REST API
+#### Speech service REST API
-Speech Services REST API endpoints in Azure China have the following format:
+Speech service REST API endpoints in Azure China have the following format:
| REST API type / operation | Endpoint format | |--|--|
cognitive-services Speech Container Cstt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-cstt.md
If you have been approved to run the container disconnected from the internet, t
In order to prepare and configure a disconnected custom speech to text container you will need two separate speech resources: -- A regular Azure Speech Service resource which is either configured to use a "**S0 - Standard**" pricing tier or a "**Speech to Text (Custom)**" commitment tier pricing plan. This is used to train, download, and configure your custom speech models for use in your container.-- An Azure Speech Service resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan. This is used to download your disconnected container license file required to run the container in disconnected mode.
+- A regular Azure Cognitive Services Speech resource that's configured to use either an "**S0 - Standard**" pricing tier or a "**Speech to Text (Custom)**" commitment tier pricing plan. This resource is used to train, download, and configure your custom speech models for use in your container.
+- An Azure Cognitive Services Speech resource that's configured to use the "**DC0 Commitment (Disconnected)**" pricing plan. This resource is used to download the disconnected container license file required to run the container in disconnected mode.
Follow these steps to download and run the container in disconnected environments.
-1. [Download a model for the disconnected container](#download-a-model-for-the-disconnected-container). For this step, use a regular Azure Speech Service resource which is either configured to use a "**S0 - Standard**" pricing tier or a "**Speech to Text (Custom)**" commitment tier pricing plan.
-1. [Download the disconnected container license](#download-the-disconnected-container-license). For this step, use an Azure Speech Service resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan.
-1. [Run the disconnected container for service](#run-the-disconnected-container). For this step, use an Azure Speech Service resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan.
+1. [Download a model for the disconnected container](#download-a-model-for-the-disconnected-container). For this step, use a regular Azure Cognitive Services Speech resource that's configured to use either an "**S0 - Standard**" pricing tier or a "**Speech to Text (Custom)**" commitment tier pricing plan.
+1. [Download the disconnected container license](#download-the-disconnected-container-license). For this step, use an Azure Cognitive Services Speech resource that's configured to use the "**DC0 Commitment (Disconnected)**" pricing plan.
+1. [Run the disconnected container for service](#run-the-disconnected-container). For this step, use an Azure Cognitive Services Speech resource that's configured to use the "**DC0 Commitment (Disconnected)**" pricing plan.
### Download a model for the disconnected container
-For this step, use a regular Azure Speech Service resource which is either configured to use a "**S0 - Standard**" pricing tier or a "**Speech to Text (Custom)**" commitment tier pricing plan.
+For this step, use a regular Azure Cognitive Services Speech resource that's configured to use either an "**S0 - Standard**" pricing tier or a "**Speech to Text (Custom)**" commitment tier pricing plan.
[!INCLUDE [Custom speech container run](includes/containers-cstt-common-run.md)]
You can only use a license file with the appropriate container and model that yo
| `{API_KEY}` | The key for your Speech resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. | | `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem.<br/><br/>For example: `/path/to/license/directory` |
-For this step, use an Azure Speech Service resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan.
+For this step, use an Azure Cognitive Services Speech resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan.
```bash docker run --rm -it -p 5000:5000 \
Wherever the container is run, the license file must be mounted to the container
| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem.<br/><br/>For example: `/path/to/license/directory` | | `{CONTAINER_OUTPUT_DIRECTORY}` | Location of the output folder on the container's local filesystem.<br/><br/>For example: `/path/to/output/directory` |
-For this step, use an Azure Speech Service resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan.
+For this step, use an Azure Cognitive Services Speech resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan.
```bash docker run --rm -it -p 5000:5000 --memory {MEMORY_SIZE} --cpus {NUMBER_CPUS} \
cognitive-services Speech Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-howto.md
keywords: on-premises, Docker, container
By using containers, you can use a subset of the Speech service features in your own environment. In this article, you'll learn how to download, install, and run a Speech container. > [!NOTE]
-> Disconnected container pricing and commitment tiers vary from standard containers. For more information, see [Speech Services pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+> Disconnected container pricing and commitment tiers vary from standard containers. For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
## Prerequisites
cognitive-services Speech Container Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-overview.md
While you're waiting for approval, you can [setup the prerequisites](speech-cont
The Speech containers send billing information to Azure by using a Speech resource on your Azure account. > [!NOTE]
-> Connected and disconnected container pricing and commitment tiers vary. For more information, see [Speech Services pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+> Connected and disconnected container pricing and commitment tiers vary. For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
Speech containers aren't licensed to run without being connected to Azure for metering. You must configure your container to communicate billing information with the metering service at all times. For more information, see [billing arguments](speech-container-howto.md#billing-arguments).
cognitive-services Speech Encryption Of Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-encryption-of-data-at-rest.md
# Speech service encryption of data at rest
-Speech Service automatically encrypts your data when it is persisted it to the cloud. Speech service encryption protects your data and to help you to meet your organizational security and compliance commitments.
+Speech service automatically encrypts your data when it's persisted to the cloud. Speech service encryption protects your data and helps you meet your organizational security and compliance commitments.
## About Cognitive Services encryption
cognitive-services Speech Services Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-services-private-link.md
[Azure Private Link](../../private-link/private-link-overview.md) lets you connect to services in Azure by using a [private endpoint](../../private-link/private-endpoint-overview.md). A private endpoint is a private IP address that's accessible only within a specific [virtual network](../../virtual-network/virtual-networks-overview.md) and subnet.
-This article explains how to set up and use Private Link and private endpoints with Speech Services in Azure Cognitive Services.
-This article then describes how to remove private endpoints later, but still use the Speech resource.
+This article explains how to set up and use Private Link and private endpoints with the Speech service. This article then describes how to remove private endpoints later, but still use the Speech resource.
> [!NOTE] > Before you proceed, review [how to use virtual networks with Cognitive Services](../cognitive-services-virtual-networks.md). - Setting up a Speech resource for the private endpoint scenarios requires performing the following tasks: 1. [Create a custom domain name](#create-a-custom-domain-name) 1. [Turn on private endpoints](#turn-on-private-endpoints)
If you plan to access the resource by using only a private endpoint, you can ski
``` > [!NOTE]
-> The resolved IP address points to a virtual network proxy endpoint, which dispatches the network traffic to the private endpoint for the Cognitive Services resource. The behavior will be different for a resource with a custom domain name but *without* private endpoints. See [this section](#dns-configuration) for details.
+> The resolved IP address points to a virtual network proxy endpoint, which dispatches the network traffic to the private endpoint for the Speech resource. The behavior will be different for a resource with a custom domain name but *without* private endpoints. See [this section](#dns-configuration) for details.
## Adjust an application to use a Speech resource with a private endpoint
-A Speech resource with a custom domain interacts with Speech Services in a different way.
+A Speech resource with a custom domain interacts with the Speech service in a different way.
This is true for a custom-domain-enabled Speech resource both with and without private endpoints. Information in this section applies to both scenarios. Follow instructions in this section to adjust existing applications and solutions to use a Speech resource with a custom domain name and a private endpoint turned on.
-A Speech resource with a custom domain name and a private endpoint turned on uses a different way to interact with Speech Services. This section explains how to use such a resource with the Speech Services REST APIs and the [Speech SDK](speech-sdk.md).
+A Speech resource with a custom domain name and a private endpoint turned on uses a different way to interact with the Speech service. This section explains how to use such a resource with the Speech service REST APIs and the [Speech SDK](speech-sdk.md).
> [!NOTE]
-> A Speech resource without private endpoints that uses a custom domain name also has a special way of interacting with Speech Services.
+> A Speech resource without private endpoints that uses a custom domain name also has a special way of interacting with the Speech service.
> This way differs from the scenario of a Speech resource that uses a private endpoint. > This is important to consider because you may decide to remove private endpoints later. > See [Adjust an application to use a Speech resource without private endpoints](#adjust-an-application-to-use-a-speech-resource-without-private-endpoints) later in this article.
Follow these steps to modify your code:
1. Determine the application endpoint URL: - [Turn on logging for your application](how-to-use-logging.md) and run it to log activity.
- - In the log file, search for `SPEECH-ConnectionUrl`. In matching lines, the `value` parameter contains the full URL that your application used to reach Speech Services.
+ - In the log file, search for `SPEECH-ConnectionUrl`. In matching lines, the `value` parameter contains the full URL that your application used to reach the Speech service.
Example:
After this modification, your application should work with the private-endpoint-
In this article, we've pointed out several times that enabling a custom domain for a Speech resource is *irreversible*. Such a resource will use a different way of communicating with the Speech service, compared to the ones that are using [regional endpoint names](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints).
-This section explains how to use a Speech resource with a custom domain name but *without* any private endpoints with the Speech Services REST APIs and [Speech SDK](speech-sdk.md). This might be a resource that was once used in a private endpoint scenario, but then had its private endpoints deleted.
+This section explains how to use a Speech resource with a custom domain name but *without* any private endpoints with the Speech service REST APIs and [Speech SDK](speech-sdk.md). This might be a resource that was once used in a private endpoint scenario, but then had its private endpoints deleted.
### DNS configuration
cognitive-services Speech Studio Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-studio-overview.md
Title: "Speech Studio overview - Speech service"
-description: Speech Studio is a set of UI-based tools for building and integrating features from Azure Speech service in your applications.
+description: Speech Studio is a set of UI-based tools for building and integrating features from the Speech service into your applications.
cognitive-services Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-to-text.md
Title: Speech to text overview - Speech service
-description: Get an overview of the benefits and capabilities of the speech to text feature of the Speech Service.
+description: Get an overview of the benefits and capabilities of the speech to text feature of the Speech service.
cognitive-services Spx Basics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/spx-basics.md
Title: "Quickstart: The Speech CLI - Speech service"
-description: In this Azure Speech CLI quickstart, you interact with speech to text, text to speech, and speech translation without having to write code.
+description: In this Azure AI Speech CLI quickstart, you interact with speech to text, text to speech, and speech translation without having to write code.
-# Quickstart: Get started with the Azure Speech CLI
+# Quickstart: Get started with the Azure AI Speech CLI
-In this article, you'll learn how to use the Azure Speech CLI (also called SPX) to access Speech services such as speech to text, text to speech, and speech translation, without having to write any code. The Speech CLI is production ready, and you can use it to automate simple workflows in the Speech service by using `.bat` or shell scripts.
+In this article, you'll learn how to use the Azure AI Speech CLI (also called SPX) to access Speech services such as speech to text, text to speech, and speech translation, without having to write any code. The Speech CLI is production ready, and you can use it to automate simple workflows in the Speech service by using `.bat` or shell scripts.
This article assumes that you have working knowledge of the Command Prompt window, terminal, or PowerShell.
cognitive-services Spx Batch Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/spx-batch-operations.md
# Run batch operations with the Speech CLI
-Common tasks when using Azure Speech services, are batch operations. In this article, you'll learn how to do batch speech to text (speech recognition), batch text to speech (speech synthesis) with the Speech CLI. Specifically, you'll learn how to:
+Batch operations are common tasks when using the Speech service. In this article, you'll learn how to do batch speech to text (speech recognition) and batch text to speech (speech synthesis) with the Speech CLI. Specifically, you'll learn how to:
* Run batch speech recognition on a directory of audio files
* Run batch speech synthesis by iterating over a `.tsv` file
cognitive-services Spx Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/spx-overview.md
Title: The Azure Speech CLI
+ Title: The Azure AI Speech CLI
description: In this article, you learn about the Speech CLI, a command-line tool for using Speech service without having to write any code.
To get started with the Speech CLI, see the [quickstart](spx-basics.md). This ar
## Next steps
-- [Get started with the Azure Speech CLI](spx-basics.md)
+- [Get started with the Azure AI Speech CLI](spx-basics.md)
- [Speech CLI configuration options](./spx-data-store-configuration.md)
- [Speech CLI batch operations](./spx-batch-operations.md)
cognitive-services Swagger Documentation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/swagger-documentation.md
Speech service offers a Swagger specification to interact with a handful of REST
> [!NOTE]
> Speech service has several REST APIs for [Speech to text](rest-speech-to-text.md) and [Text to speech](rest-text-to-speech.md).
>
-> However only [Speech to text REST API](rest-speech-to-text.md) is documented in the Swagger specification. See the documents referenced in the previous paragraph for the information on all other Speech Services REST APIs.
+> However, only the [Speech to text REST API](rest-speech-to-text.md) is documented in the Swagger specification. See the documents referenced in the previous paragraph for information on all other Speech service REST APIs.
## Generating code from the Swagger specification
cognitive-services Windows Voice Assistants Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/windows-voice-assistants-overview.md
Title: Voice Assistants on Windows overview - Speech Service
+ Title: Voice Assistants on Windows overview - Speech service
description: An overview of the voice assistants on Windows, including capabilities and development resources available.
cognitive-services Create Sas Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/how-to-guides/create-sas-tokens.md
Previously updated : 03/24/2023 Last updated : 06/19/2023
# Create SAS tokens for your storage containers
In this article, you learn how to create user delegation shared access signature (SAS) tokens by using the Azure portal or Azure Storage Explorer. User delegation SAS tokens are secured with Azure AD credentials. SAS tokens provide secure, delegated access to resources in your Azure storage account.
+>[!TIP]
+>
+> [Managed identities](create-use-managed-identities.md) provide an alternate method for you to grant access to your storage data without the need to include SAS tokens with your HTTP requests. *See* [Managed identities for Document Translation](create-use-managed-identities.md).
cognitive-services Document Translation Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/quickstarts/document-translation-sdk.md
Previously updated : 06/14/2023 Last updated : 06/19/2023 zone_pivot_groups: programming-languages-document-sdk
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/whats-new.md
Translator is a language service that enables users to translate text and docume
Translator service supports language translation for more than 100 languages. If your language community is interested in partnering with Microsoft to add your language to Translator, contact us via the [Translator community partner onboarding form](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR-riVR3Xj0tOnIRdZOALbM9UOU1aMlNaWFJOOE5YODhRR1FWVzY0QzU1OS4u).
+## June 2023
+
+**Documentation updates**
+* The [Document Translation SDK overview](document-translation/document-sdk-overview.md) is now available to provide guidance and resources for the .NET/C# and Python SDKs.
+* The [Document Translation SDK quickstart](document-translation/quickstarts/document-translation-sdk.md) is now available for the C# and Python programming languages.
+
## May 2023
**Announcing new releases for Build 2023**
cognitive-services Conversation Summarization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/how-to/conversation-summarization.md
-# How to use conversation summarization (preview)
+# How to use conversation summarization
[!INCLUDE [availability](../includes/regional-availability.md)]
-> [!IMPORTANT]
-> The conversation summarization feature is a preview capability provided "AS IS" and "WITH ALL FAULTS." As such, Conversation Summarization (preview) should not be implemented or deployed in any production use. The customer is solely responsible for any use of conversation summarization.
-
## Conversation summarization types
- Chapter title and narrative (general conversation) are designed to summarize a conversation into chapter titles, and a summarization of the conversation's contents. This summarization type works on conversations with any number of parties.
cognitive-services Document Summarization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/how-to/document-summarization.md
-# How to use document summarization (preview)
-
-> [!IMPORTANT]
-> The summarization features described in this documentation are preview capabilities provided "AS IS" and "WITH ALL FAULTS." As such, document summarization (preview) should not be implemented or deployed in any production use. The customer is solely responsible for any use of document summarization.
+# How to use document summarization
Document summarization is designed to shorten content that users consider too long to read. Both extractive and abstractive summarization condense articles, papers, or documents to key sentences.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/language-support.md
Use this article to learn which natural languages are supported by document and
| Spanish | `es` | |
| Portuguese | `pt` | |
-# [Conversation summarization (preview)](#tab/conversation-summarization)
+# [Conversation summarization](#tab/conversation-summarization)
-## Languages supported by conversation summarization (preview)
+## Languages supported by conversation summarization
Conversation summarization supports the following languages:
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/overview.md
-# What is document and conversation summarization (preview)?
+# What is document and conversation summarization?
[!INCLUDE [availability](includes/regional-availability.md)]
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/quickstart.md
Title: "Quickstart: Use Document Summarization (preview)"
+ Title: "Quickstart: Use Document Summarization"
description: Use this quickstart to start using Document Summarization.
zone_pivot_groups: programming-languages-text-analytics
-# Quickstart: using document summarization and conversation summarization (preview)
+# Quickstart: using document summarization and conversation summarization
[!INCLUDE [availability](includes/regional-availability.md)]
cognitive-services Region Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/region-support.md
# Regional availability
-Use this article to learn which regions are supported by all summarization features. More regions will be added to this list as they become available.
+Some summarization features are only available in limited regions. More regions will be added to this list as they become available.
## Regional availability table
|Region|Document abstractive summarization|Conversation issue and resolution summarization|Conversation narrative summarization with chapters|Custom summarization|
|--|--|--|--|--|
|North Europe|&#9989;|&#9989;|&#9989;|&#10060;|
-|East US|&#9989;|&#9989;|&#9989;|&#10060;|
+|East US|&#9989;|&#9989;|&#9989;|&#9989;|
|UK South|&#9989;|&#9989;|&#9989;|&#10060;|
-|Southeast Asia|&#10060;|&#10060;|&#10060;|&#9989;|
+|Southeast Asia|&#9989;|&#9989;|&#9989;|&#10060;|
## Next steps
cognitive-services Abuse Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/abuse-monitoring.md
+
+ Title: Azure OpenAI Service abuse monitoring
+
+description: Learn about the abuse monitoring capabilities of Azure OpenAI Service
++++ Last updated : 06/16/2023++
+keywords:
++
+# Abuse Monitoring
+
+Azure OpenAI Service detects and mitigates instances of recurring content and/or behaviors that suggest use of the service in a manner that may violate the [Code of Conduct](/legal/cognitive-services/openai/code-of-conduct?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext) or other applicable product terms. Details on how data is handled can be found on the [Data, Privacy and Security page](/legal/cognitive-services/openai/data-privacy?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext).
+
+## Components of abuse monitoring
+
+There are several components to abuse monitoring:
+
+- **Content Classification**: Classifier models detect harmful language and/or images in user prompts (inputs) and completions (outputs). The system looks for categories of harms as defined in the [Content Requirements](/legal/cognitive-services/openai/code-of-conduct?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext), and assigns severity levels as described in more detail on the [Content Filtering page](/azure/cognitive-services/openai/concepts/content-filter).
+
+- **Abuse Pattern Capture**: Azure OpenAI Service's abuse monitoring looks at customer usage patterns and employs algorithms and heuristics to detect indicators of potential abuse. Detected patterns consider, for example, the frequency and severity at which harmful content is detected in a customer's prompts and completions.
+
+- **Human Review and Decision**: When prompts and/or completions are flagged through content classification and abuse pattern capture as described above, authorized Microsoft employees may assess the flagged content, and either confirm or correct the classification or determination based on predefined guidelines and policies. Data can be accessed for human review <u>only</u> by authorized Microsoft employees via Secure Access Workstations (SAWs) with Just-In-Time (JIT) request approval granted by team managers. For Azure OpenAI Service resources deployed in the European Economic Area, the authorized Microsoft employees are located in the European Economic Area.
+
+- **Notification and Action**: When a threshold of abusive behavior has been confirmed based on the preceding three steps, the customer is informed of the determination by email. Except in cases of severe or recurring abuse, customers typically are given an opportunity to explain or remediate the abusive behavior and to implement mechanisms to prevent its recurrence. Failure to address the behavior, or recurring or severe abuse, may result in suspension or termination of the customer's access to Azure OpenAI resources and/or capabilities.
+
+## Next steps
+
+- Learn more about the [underlying models that power Azure OpenAI](../concepts/models.md).
+- Learn more about understanding and mitigating risks associated with your application: [Overview of Responsible AI practices for Azure OpenAI models](/legal/cognitive-services/openai/overview?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext).
+- Learn more about how data is processed in connection with content filtering and abuse monitoring: [Data, privacy, and security for Azure OpenAI Service](/legal/cognitive-services/openai/data-privacy?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext#preventing-abuse-and-harmful-content-generation).
cognitive-services Use Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/use-your-data.md
+
+ Title: 'Using your data with Azure OpenAI Service'
+
+description: Use this article to learn about using your data for better text generation in Azure OpenAI.
+++++++ Last updated : 06/01/2023
+recommendations: false
++
+# Azure OpenAI on your data (preview)
+
+Azure OpenAI on your data enables you to run supported chat models such as ChatGPT and GPT-4 on your data without needing to train or fine-tune models. Running models on your data enables you to chat on top of your data and analyze it with greater accuracy and speed. By doing so, you can unlock valuable insights that can help you make better business decisions, identify trends and patterns, and optimize your operations. One of the key benefits of Azure OpenAI on your data is its ability to tailor the content of conversational AI.
+
+To get started, [connect your data source](../use-your-data-quickstart.md) using [Azure OpenAI Studio](https://oai.azure.com/) and start asking questions and chatting on your data.
+
+Because the model has access to, and can reference specific sources to support its responses, answers are not only based on its pretrained knowledge but also on the latest information available in the designated data source. This grounding data also helps the model avoid generating responses based on outdated or incorrect information.
+
+> [!NOTE]
+> To get started, you need to already have been approved for [Azure OpenAI access](../overview.md#how-do-i-get-access-to-azure-openai) and have an [Azure OpenAI Service resource](../how-to/create-resource.md) with either the gpt-35-turbo or the gpt-4 models deployed.
+
+## What is Azure OpenAI on your data
+
+Azure OpenAI on your data works with OpenAI's powerful ChatGPT (gpt-35-turbo) and GPT-4 language models, enabling them to provide responses based on your data. You can access Azure OpenAI on your data using a REST API or the web-based interface in the [Azure OpenAI Studio](https://oai.azure.com/) to create a solution that connects to your data to enable an enhanced chat experience.
+
+One of the key features of Azure OpenAI on your data is its ability to retrieve and utilize data in a way that enhances the model's output. Azure OpenAI on your data, together with Azure Cognitive Search, determines what data to retrieve from the designated data source based on the user input and provided conversation history. This data is then augmented and resubmitted as a prompt to the OpenAI model, with retrieved information being appended to the original prompt. Although retrieved data is being appended to the prompt, the resulting input is still processed by the model like any other prompt. Once the data has been retrieved and the prompt has been submitted to the model, the model uses this information to provide a completion. See the [Data, privacy, and security for Azure OpenAI Service](/legal/cognitive-services/openai/data-privacy?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext) article for more information.
+
+## Data source options
+
+Azure OpenAI on your data uses an [Azure Cognitive Search](/azure/search/search-what-is-azure-search) index to determine what data to retrieve based on user inputs and provided conversation history. We recommend using Azure OpenAI Studio to create your index from a blob storage or local files. See the [quickstart article](../use-your-data-quickstart.md?pivots=programming-language-studio) for more information.
+
+You can optionally use an existing Azure Cognitive Search index as a data source. If you use an existing service, you'll get better quality if your data is broken down into smaller chunks so that the model can use only the most relevant portions when composing a response. You can also use the available [data preparation script](https://github.com/microsoft/sample-app-aoai-chatGPT/tree/main/scripts) to create an index that you can use with Azure OpenAI, with your documents broken down into manageable chunks.
+
+## Data formats and file types
+
+Azure OpenAI on your data supports the following file types:
+
+* `.txt`
+* `.md`
+* `.html`
+* Microsoft Word files
+* Microsoft PowerPoint files
+* PDF
+
+There are some caveats about document structure and how it might affect the quality of responses from the model:
+
+* The model provides the best citation titles from markdown (`.md`) files.
+
+* If a document is a PDF file, the text contents are extracted as a preprocessing step (unless you're connecting your own Azure Cognitive Search index). If your document contains images, graphs, or other visual content, the model's response quality depends on the quality of the text that can be extracted from them.
+
+* If you're converting data from an unsupported format into a supported format, make sure the conversion:
+
+ * Doesn't lead to significant data loss.
+ * Doesn't add unexpected noise to your data.
+
+ This will impact the quality of Azure Cognitive Search and the model response.
+
+## Recommended settings
+
+Use the following sections to help you configure Azure OpenAI on your data for optimal results.
+
+### System message
+
+Give the model instructions about how it should behave and any context it should reference when generating a response. You can describe the assistant’s personality, what it should and shouldn’t answer, and how to format responses. There’s no token limit for the system message, but it is included with every API call and counted against the overall token limit. The system message will be truncated if it's greater than 200 tokens.
+
+For example, if you're creating a chatbot where the data consists of transcriptions of quarterly financial earnings calls, you might use the following system message:
+
+*"You are a financial chatbot useful for answering questions from financial reports. You are given excerpts from the earnings call. Please answer the questions by parsing through all dialogue."*
+
+This system message can help improve the quality of the response by specifying the domain (in this case finance) and mentioning that the data consists of call transcriptions. It helps set the necessary context for the model to respond appropriately.
+
+> [!NOTE]
+> The system message is only guidance. The model might not adhere to every instruction specified because it has been primed with certain behaviors, such as objectivity and avoiding controversial statements. Unexpected behavior may occur if the system message contradicts these behaviors.
+
+### Maximum response
+
+Set a limit on the number of tokens per model response. The upper limit for Azure OpenAI on Your Data is 1500. This is equivalent to setting the `max_tokens` parameter in the API.
+
+### Limit responses to your data
+
+This option encourages the model to respond using your data only, and is selected by default. If you unselect this option, the model may more readily apply its internal knowledge to respond. Determine the correct selection based on your use case and scenario.
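+
+If you use the API instead of Azure OpenAI Studio, this option corresponds to the `inScope` data source parameter (see the API reference for your version). The following is a minimal sketch, reusing the placeholder values from the other examples in this article; setting `inScope` to `false` lets the model draw on its internal knowledge in addition to your data:
+
+```json
+{
+    "dataSources": [
+        {
+            "type": "AzureCognitiveSearch",
+            "parameters": {
+                "endpoint": "'$SearchEndpoint'",
+                "key": "'$SearchKey'",
+                "indexName": "'$SearchIndex'",
+                "inScope": false
+            }
+        }
+    ],
+    "messages": [
+        {
+            "role": "user",
+            "content": "What do our support documents say about password resets?"
+        }
+    ]
+}
+```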
+
+### Semantic search
+
+> [!IMPORTANT]
+> * Semantic search is subject to [additional pricing](/azure/search/semantic-search-overview#availability-and-pricing)
+> * Currently Azure OpenAI on your data supports semantic search for English data only. Only enable semantic search if both your documents and use case are in English.
+
+If [semantic search](/azure/search/semantic-search-overview) is enabled for your Azure Cognitive Search service, retrieval of your data is likely to improve, which can lead to better response and citation quality.
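+
+When you connect through the API, enabling semantic search maps to the `queryType` and `semanticConfiguration` data source parameters (see the API reference). The following sketch shows only the data source entry; the configuration name is a placeholder and must match a semantic configuration defined on your search index:
+
+```json
+{
+    "type": "AzureCognitiveSearch",
+    "parameters": {
+        "endpoint": "'$SearchEndpoint'",
+        "key": "'$SearchKey'",
+        "indexName": "'$SearchIndex'",
+        "queryType": "semantic",
+        "semanticConfiguration": "my-semantic-config"
+    }
+}
+```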
+
+### Index field mapping
+
+If you're using your own index, you will be prompted in the Azure OpenAI Studio to define which fields you want to map for answering questions when you add your data source. You can provide multiple fields for *Content data*, and should include all fields that have text pertaining to your use case.
++
+In this example, the fields mapped to **Content data** and **Title** provide information to the model to answer questions. **Title** is also used to title citation text. The field mapped to **File name** generates the citation names in the response.
+
+Mapping these fields correctly helps ensure the model has better response and citation quality.
+
+### Interacting with the model
+
+Use the following practices for best results when chatting with the model.
+
+**Conversation history**
+
+* Before starting a new conversation (or asking a question that is not related to the previous ones), clear the chat history.
+* Getting different responses for the same question between the first conversational turn and subsequent turns can be expected because the conversation history changes the current state of the model. If you receive incorrect answers, report them as a quality bug.
+
+**Model response**
+
+* If you are not satisfied with the model response for a specific question, try either making the question more specific or more generic to see how the model responds, and reframe your question accordingly.
+
+* [Chain-of-thought prompting](advanced-prompt-engineering.md?pivots=programming-language-chat-completions#chain-of-thought-prompting) has been shown to be effective in getting the model to produce desired outputs for complex questions/tasks.
+
+**Question length**
+
+Avoid asking long questions and break them down into multiple questions if possible. The GPT models have limits on the number of tokens they can accept. The following count toward the token limit: the user question, the system message, the retrieved search documents (chunks), internal prompts, the conversation history (if any), and the response. If the question exceeds the token limit, it will be truncated.
+
+**Multi-lingual support**
+
+* Azure OpenAI on your data supports queries that are in the same language as the documents. For example, if your data is in Japanese, then queries need to be in Japanese too.
+
+* Currently Azure OpenAI on your data supports [semantic search](/azure/search/semantic-search-overview) for English data only. Don't enable semantic search if your data is in other languages.
+
+* We recommend using a system message to inform the model that your data is in another language. For example:
+
+   *"You are an AI assistant that helps people find information. You retrieve Japanese documents, and you should read them carefully in Japanese and answer in Japanese."*
+
+* If you have documents in multiple languages, we recommend building a new index for each language and connecting them separately to Azure OpenAI.
+
+### Using the API
+
+When using the API, consider setting the following parameters even though they're optional.
++
+|Parameter |Recommendation |
+|||
+|`fieldsMapping` | Explicitly set the title and content fields of your index. This impacts the search retrieval quality of Azure Cognitive Search, which impacts the overall response and citation quality. |
+|`roleInformation` | Corresponds to the "System Message" in the Azure OpenAI Studio. See the [System message](#system-message) section above for recommendations. |
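+
+For example, a data source entry that sets both parameters might look like the following sketch. The key names inside `fieldsMapping` (such as `contentFields`, `titleField`, and `filepathField`) are shown for illustration only; check the Swagger specification for your API version and match them to the columns in your own index:
+
+```json
+{
+    "type": "AzureCognitiveSearch",
+    "parameters": {
+        "endpoint": "'$SearchEndpoint'",
+        "key": "'$SearchKey'",
+        "indexName": "'$SearchIndex'",
+        "fieldsMapping": {
+            "contentFields": [ "content" ],
+            "titleField": "title",
+            "filepathField": "filepath"
+        },
+        "roleInformation": "You are a financial chatbot useful for answering questions from financial reports."
+    }
+}
+```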
+
+#### Streaming data
+
+You can send a streaming request using the `stream` parameter, allowing data to be sent and received incrementally, without waiting for the entire API response. This can improve performance and user experience, especially for large or dynamic data.
+
+```json
+{
+ "stream": true,
+ "dataSources": [
+ {
+ "type": "AzureCognitiveSearch",
+ "parameters": {
+ "endpoint": "'$SearchEndpoint'",
+ "key": "'$SearchKey'",
+ "indexName": "'$SearchIndex'"
+ }
+ }
+ ],
+ "messages": [
+ {
+ "role": "user",
+ "content": "What are the differences between Azure Machine Learning and Azure Cognitive Services?"
+ }
+ ]
+}
+```
+
+#### Conversation history for better results
+
+When chatting with a model, providing a history of the chat will help the model return higher quality results.
+
+```json
+{
+ "dataSources": [
+ {
+ "type": "AzureCognitiveSearch",
+ "parameters": {
+ "endpoint": "'$SearchEndpoint'",
+ "key": "'$SearchKey'",
+ "indexName": "'$SearchIndex'"
+ }
+ }
+ ],
+ "messages": [
+ {
+ "role": "user",
+ "content": "What are the differences between Azure Machine Learning and Azure Cognitive Services?"
+ },
+ {
+ "role": "tool",
+ "content": "{\"citations\": [{\"content\": \" Title: Cognitive Services and Machine Learning\\n
+ },
+ {
+ "role": "assistant",
+ "content": " \nAzure Machine Learning is a product and service tailored for data scientists to build, train, and deploy machine learning models [doc1]..."
+ },
+ {
+ "role": "user",
+ "content": "How do I use Azure machine learning?"
+ }
+ ]
+}
+```
++
+## Next steps
+* [Get started using your data with Azure OpenAI](../use-your-data-quickstart.md)
+* [Introduction to prompt engineering](./prompt-engineering.md)
cognitive-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/reference.md
Output formatting adjusted for ease of reading, actual output is a single block
| ```logit_bias``` | object | Optional | null | Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.|
| ```user``` | string | Optional | | A unique identifier representing your end-user, which can help Azure OpenAI to monitor and detect abuse.|
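
As an illustrative sketch, a completions request body that bans one token and tags the end user might look like the following; the token ID and values shown are examples only and depend on the tokenizer used by your model:

```json
{
    "prompt": "Write a one-sentence summary of the meeting notes.",
    "max_tokens": 60,
    "logit_bias": { "50256": -100 },
    "user": "end-user-0001"
}
```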
+## Completions extensions
+Extensions for chat completions, for example Azure OpenAI on your data.
+
+**Use chat completions extensions**
+
+```http
+POST {your-resource-name}/openai/deployments/{deployment-id}/extensions/chat/completions?api-version={api-version}
+```
+
+**Path parameters**
+
+| Parameter | Type | Required? | Description |
+|--|--|--|--|
+| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
+| ```deployment-id``` | string | Required | The name of your model deployment. You're required to first deploy a model before you can make calls. |
+| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD format. |
+
+**Supported versions**
+- `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
+
+#### Example request
+
+```Console
+curl -i -X POST YOUR_RESOURCE_NAME/openai/deployments/YOUR_DEPLOYMENT_NAME/extensions/chat/completions?api-version=2023-06-01-preview \
+-H "Content-Type: application/json" \
+-H "api-key: YOUR_API_KEY" \
+-H "chatgpt_url: YOUR_RESOURCE_URL" \
+-H "chatgpt_key: YOUR_API_KEY" \
+-d \
+'
+{
+ "dataSources": [
+ {
+ "type": "AzureCognitiveSearch",
+ "parameters": {
+ "endpoint": "'YOUR_AZURE_COGNITIVE_SEARCH_ENDPOINT'",
+ "key": "'YOUR_AZURE_COGNITIVE_SEARCH_KEY'",
+ "indexName": "'YOUR_AZURE_COGNITIVE_SEARCH_INDEX_NAME'"
+ }
+ }
+ ],
+ "messages": [
+ {
+ "role": "user",
+ "content": "What are the differences between Azure Machine Learning and Azure Cognitive Services?"
+ }
+ ]
+}
+'
+```
+
+#### Example response
+
+```json
+{
+ "id": "12345678-1a2b-3c4e5f-a123-12345678abcd",
+ "model": "",
+ "created": 1684304924,
+ "object": "chat.completion",
+ "choices": [
+ {
+ "index": 0,
+ "messages": [
+ {
+ "role": "tool",
+ "content": "{\"citations\": [{\"content\": \"\\nCognitive Services are cloud-based artificial intelligence (AI) services...\", \"id\": null, \"title\": \"What is Cognitive Services\", \"filepath\": null, \"url\": null, \"metadata\": {\"chunking\": \"orignal document size=250. Scores=0.4314117431640625 and 1.72564697265625.Org Highlight count=4.\"}, \"chunk_id\": \"0\"}], \"intent\": \"[\\\"Learn about Azure Cognitive Services.\\\"]\"}",
+ "end_turn": false
+ },
+ {
+ "role": "assistant",
+ "content": " \nAzure Cognitive Services are cloud-based artificial intelligence (AI) services that help developers build cognitive intelligence into applications without having direct AI or data science skills or knowledge. [doc1]. Azure Machine Learning is a cloud service for accelerating and managing the machine learning project lifecycle. [doc1].",
+ "end_turn": true
+ }
+ ]
+ }
+ ]
+}
+```
+
+| Parameters | Type | Required? | Default | Description |
+|--|--|--|--|--|
+| `messages` | array | Required | null | The messages to generate chat completions for, in the chat format. |
+| `dataSources` | array | Required | | The data sources to be used for the Azure OpenAI on your data feature. |
+| `temperature` | number | Optional | 0 | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. |
+| `top_p` | number | Optional | 1 |An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with `top_p` probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.|
+| `stream` | boolean | Optional | false | If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a message `"messages": [{"delta": {"content": "[DONE]"}, "index": 2, "end_turn": true}]` |
+| `stop` | string or array | Optional | null | Up to 2 sequences where the API will stop generating further tokens. |
+| `max_tokens` | integer | Optional | 1000 | The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return is `4096 - prompt_tokens`. |
+
+The following parameters can be used inside the `parameters` field of `dataSources`.
+
+| Parameters | Type | Required? | Default | Description |
+|--|--|--|--|--|
+| `type` | string | Required | null | The data source to be used for the Azure OpenAI on your data feature. For Azure Cognitive Search the value is `AzureCognitiveSearch`. |
+| `endpoint` | string | Required | null | The data source endpoint. |
+| `key` | string | Required | null | One of the Azure Cognitive Search admin keys for your service. |
+| `indexName` | string | Required | null | The search index to be used. |
+| `fieldsMapping` | dictionary | Optional | null | Index data column mapping. |
+| `inScope` | boolean | Optional | true | If set, this value will limit responses specific to the grounding data content. |
+| `topNDocuments` | number | Optional | 5 | Number of documents that need to be fetched for document augmentation. |
+| `queryType` | string | Optional | simple | Indicates which query option will be used for Azure Cognitive Search. |
+| `semanticConfiguration` | string | Optional | null | The semantic search configuration. Only available when `queryType` is set to `semantic`. |
+| `roleInformation` | string | Optional | null | Gives the model instructions about how it should behave and the context it should reference when generating a response. Corresponds to the "System Message" in Azure OpenAI Studio. <!--See [Using your data](./concepts/use-your-data.md#system-message) for more information.--> There's a 100 token limit, which counts towards the overall token limit.|
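+
+Putting the request-level and data source parameters together, a request body for the extensions endpoint might look like the following sketch. The endpoint, key, and index values are placeholders, and the optional values shown are examples rather than recommendations:
+
+```json
+{
+    "temperature": 0,
+    "top_p": 1.0,
+    "max_tokens": 1000,
+    "stream": false,
+    "dataSources": [
+        {
+            "type": "AzureCognitiveSearch",
+            "parameters": {
+                "endpoint": "'YOUR_AZURE_COGNITIVE_SEARCH_ENDPOINT'",
+                "key": "'YOUR_AZURE_COGNITIVE_SEARCH_KEY'",
+                "indexName": "'YOUR_AZURE_COGNITIVE_SEARCH_INDEX_NAME'",
+                "inScope": true,
+                "topNDocuments": 5,
+                "queryType": "simple",
+                "roleInformation": "You are an AI assistant that helps people find information."
+            }
+        }
+    ],
+    "messages": [
+        {
+            "role": "user",
+            "content": "What are the differences between Azure Machine Learning and Azure Cognitive Services?"
+        }
+    ]
+}
+```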
## Image generation
cognitive-services Use Your Data Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/use-your-data-quickstart.md
+
+ Title: 'Use your own data with Azure OpenAI service'
+
+description: Use this article to import and use your data in Azure OpenAI.
+++++++ Last updated : 05/04/2023
+recommendations: false
+zone_pivot_groups: openai-use-your-data
++
+# Quickstart: Chat with Azure OpenAI models using your own data
+
+In this quickstart, you can use your own data with Azure OpenAI models. Using Azure OpenAI's models on your data can provide you with a powerful conversational AI platform that enables faster and more accurate communication.
+
+## Prerequisites
+
+- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.
+- Access granted to Azure OpenAI in the desired Azure subscription.
+
+ Azure OpenAI requires registration and is currently only available to approved enterprise customers and partners. [See Limited access to Azure OpenAI Service](/legal/cognitive-services/openai/limited-access?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext) for more information. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue.
+
+- An Azure OpenAI resource with a chat model deployed (for example, GPT-3 or GPT-4). For more information about model deployment, see the [resource deployment guide](./how-to/create-resource.md).
+
+> [!div class="nextstepaction"]
+> [I ran into an issue with the prerequisites.](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=OVERVIEW&Pillar=AOAI&Product=ownData&Page=quickstart&Section=Prerequisites)
++++++++
+## Clean up resources
+
+If you want to clean up and remove an OpenAI or Azure Cognitive Search resource, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
+
+- [Cognitive Services resources](../cognitive-services-apis-create-account.md#clean-up-resources)
+- [Azure Cognitive Search resources](/azure/search/search-get-started-portal#clean-up-resources)
+- [Azure app service resources](/azure/app-service/quickstart-dotnetcore?pivots=development-environment-vs#clean-up-resources)
+
+## Next steps
+- Learn more about [using your data in Azure OpenAI Service](./concepts/use-your-data.md)
+- [Chat app sample code on GitHub](https://github.com/microsoft/sample-app-aoai-chatGPT/tree/main).
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/whats-new.md
keywords:
## June 2023
+### Use Azure OpenAI on your own data (preview)
+
+- [Azure OpenAI on your data](./concepts/use-your-data.md) is now available in preview, enabling you to chat with OpenAI models such as ChatGPT and GPT-4 and receive responses based on your data.
+ ### UK South
+- Azure OpenAI is now available in the UK South region. Check the [models page](concepts/models.md) for the latest information on model availability in each region.
communication-services Call Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/call-automation.md
Some of the common use cases that can be built using Call Automation include:
- Increase engagement by building automated customer outreach programs for marketing and customer service.
- Analyze your unmixed audio recordings in a post-call process for quality assurance purposes.
-Azure Communication Services Call Automation can be used to build calling workflows for customer service scenarios, as depicted in the high-level architecture below. You can answer inbound calls or make outbound calls. Execute actions like playing a welcome message, connecting the customer to a live agent on an ACS Calling SDK client app to answer the incoming call request. With support for ACS PSTN or Direct Routing, you can then connect this workflow back to your contact center.
+Azure Communication Services Call Automation can be used to build calling workflows for customer service scenarios, as depicted in the following high-level architecture. You can answer inbound calls or make outbound calls, and execute actions like playing a welcome message or connecting the customer to a live agent on an ACS Calling SDK client app to answer the incoming call request. With support for ACS PSTN or Direct Routing, you can then connect this workflow back to your contact center.
![Diagram of calling flow for a customer service scenario.](./media/call-automation-architecture.png)
The following list presents the set of features that are currently available in
## Architecture
-Call Automation uses a REST API interface to receive requests and provide responses to all actions performed within the service. Due to the asynchronous nature of calling, most actions will have corresponding events that are triggered when the action completes successfully or fails.
+Call Automation uses a REST API interface to receive requests and provide responses to all actions performed within the service. Due to the asynchronous nature of calling, most actions have corresponding events that are triggered when the action completes successfully or fails.
Azure Communication Services uses Event Grid to deliver the [IncomingCall event](./incoming-call-notification.md) and HTTPS Webhooks for all mid-call action callbacks.
Azure Communication Services uses Event Grid to deliver the [IncomingCall event]
These actions are performed before the destination endpoint listed in the IncomingCall event notification is connected. Webhook callback events only communicate the "answer" pre-call action, not the reject or redirect actions.
**Answer**
-Using the IncomingCall event from Event Grid and Call Automation SDK, a call can be answered by your application. This action allows for IVR scenarios where an inbound PSTN call can be answered programmatically by your application. Other scenarios include answering a call on behalf of a user.
+Using the IncomingCall event from Event Grid and Call Automation SDK, a call can be answered by your application. This action allows for IVR scenarios where your application can programmatically answer inbound PSTN calls. Other scenarios include answering a call on behalf of a user.
**Reject**
To reject a call means your application can receive the IncomingCall event and prevent the call from being connected to the destination endpoint.
**Redirect**
-Using the IncomingCall event from Event Grid, a call can be redirected to one or more endpoints creating a single or simultaneous ringing (sim-ring) scenario. This means the call isn't answered by your application, it's simply ΓÇÿredirectedΓÇÖ to another destination endpoint to be answered.
+Using the IncomingCall event from Event Grid, a call can be redirected to one or more endpoints creating a single or simultaneous ringing (sim-ring) scenario. Redirect action doesn't answer the call, the call is simply redirected or forwarded to another destination endpoint to be answered.
**Create Call**
Create Call action can be used to place outbound calls to phone numbers and to other communication users. Use cases include your application placing outbound calls to proactively inform users about an outage or notify about an order update.
When your application answers a call or places an outbound call, you can play an
After your application has played an audio prompt, you can request user input to drive business logic and navigation in your application. To learn more, view our [concepts](./recognize-action.md) and how-to guide for [Gathering user input](../../how-tos/call-automation/recognize-action.md).
**Transfer**
-When your application answers a call or places an outbound call to an endpoint, that call can be transferred to another destination endpoint. Transferring a 1:1 call will remove your application's ability to control the call using the Call Automation SDKs.
+When your application answers a call or places an outbound call to an endpoint, that call can be transferred to another destination endpoint. Transferring a 1:1 call removes your application's ability to control the call using the Call Automation SDKs.
**Record**
You decide when to start/pause/resume/stop recording based on your application business logic, or you can grant control to the end user to trigger those actions. To learn more, view our [concepts](./../voice-video-calling/call-recording.md) and [quickstart](../../quickstarts/voice-video-calling/get-started-call-recording.md).
**Hang-up**
-When your application has answered a one-to-one call, the hang-up action will remove the call leg and terminate the call with the other endpoint. If there are more than two participants in the call (group call), performing a 'hang-up' action will remove your application's endpoint from the group call.
+When your application has answered a one-to-one call, the hang-up action removes the call leg and terminates the call with the other endpoint. If there are more than two participants in the call (group call), performing a 'hang-up' action removes your application's endpoint from the group call.
**Terminate**
-Whether your application has answered a one-to-one or group call, or placed an outbound call with one or more participants, this action will remove all participants and end the call. This operation is triggered by setting `forEveryOne` property to true in Hang-Up call action.
+Whether your application has answered a one-to-one or group call, or placed an outbound call with one or more participants, this action removes all participants and ends the call. This operation is triggered by setting the `forEveryOne` property to true in the Hang-Up call action.
**Cancel media operations**
-Based on business logic your application may need to cancel ongoing and queued media operations. Depending on the media operation canceled and the ones in queue, you will received a webhook event indicating that the action has been canceled.
+Based on business logic, your application may need to cancel ongoing and queued media operations. Depending on the media operation canceled and the ones in queue, you'll receive a webhook event indicating that the action has been canceled.
## Events
-The following table outlines the current events emitted by Azure Communication Services. The two tables below show events emitted by Event Grid and from the Call Automation as webhook events.
+The following tables outline the current events emitted by Azure Communication Services: events delivered through Event Grid and events sent from Call Automation as webhook events.
### Event Grid events
The Call Automation events are sent to the web hook callback URI specified when
| CallTransferFailed | The transfer of your application's call leg failed |
| AddParticipantSucceeded| Your application added a participant |
| AddParticipantFailed | Your application was unable to add a participant |
-| RemoveParticipantSucceeded| Your application has successfuly removed a participant from the call. |
+| RemoveParticipantSucceeded| Your application has successfully removed a participant from the call. |
| RemoveParticipantFailed | Your application was unable to remove a participant from the call. |
| ParticipantsUpdated | The status of a participant changed while your application's call leg was connected to a call |
| PlayCompleted | Your application successfully played the audio file provided |
The Call Automation events are sent to the web hook callback URI specified when
| RecognizeFailed | Recognition of user input was unsuccessful <br/>*to learn more about recognize action events view our how-to guide for [gathering user input](../../how-tos/call-automation/recognize-action.md)*|
|RecordingStateChanged | Status of recording action has changed from active to inactive or vice versa. |
-To understand which events are published for different actions, refer to [this guide](../../how-tos/call-automation/actions-for-call-control.md) that provides code samples as well as sequence diagrams for various call control flows.
+To understand which events are published for different actions, refer to [this guide](../../how-tos/call-automation/actions-for-call-control.md) that provides code samples and sequence diagrams for various call control flows.
To learn how to secure the callback event delivery, refer to [this guide](../../how-tos/call-automation/secure-webhook-endpoint.md).
To learn how to secure the callback event delivery, refer to [this guide](../../
> [Get started with Call Automation](./../../quickstarts/call-automation/Callflows-for-customer-interactions.md)
Here are some articles of interest to you:
-- Understand how your resource will be [charged for various calling use cases](../pricing.md) with examples.
-- Learn how to [manage an inbound phone call](../../quickstarts/call-automation/redirect-inbound-telephony-calls.md).
+- Understand how your resource is [charged for various calling use cases](../pricing.md) with examples.
+- Try out the quickstart to [place an outbound call](../../quickstarts/call-automation/quickstart-make-an-outbound-call.md).
+- Learn about [usage and operational logs](../analytics/logs/call-automation-logs.md) published by call automation.
communication-services Play Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/play-action.md
As part of compliance requirements in various industries, vendors are expected t
- Play action isn't enabled to work with Teams Interoperability.
## Next Steps
-Check out our how-to guide to learn [how-to play custom voice prompts](../../how-tos/call-automation/play-action.md) to users.
+- Check out our how-to guide to learn [how-to play custom voice prompts](../../how-tos/call-automation/play-action.md) to users.
communication-services Telephony Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/telephony-concept.md
This option requires:
- [Plan for Azure direct routing](./direct-routing-infrastructure.md)
- [Session Border Controllers certified for Azure Communication Services direct routing](./certified-session-border-controllers.md)
- [Pricing](../pricing.md)
+- Learn about the [call automation API](../call-automation/call-automation.md) that enables you to build server-based calling workflows to control and manage calls for phone numbers and direct routing.
### Quickstarts
- [Get a phone number](../../quickstarts/telephony/get-phone-number.md)
- [Outbound call to a phone number](../../quickstarts/telephony/pstn-call.md)
-- [Redirect inbound telephony calls with Call Automation](../../quickstarts/call-automation/redirect-inbound-telephony-calls.md)
+- [Use call automation to build calling workflow that can place calls to phone numbers, play voice prompts and more](../../quickstarts/call-automation/quickstart-make-an-outbound-call.md)
communication-services Call Recording https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-recording.md
# Call Recording overview
-
> [!NOTE]
> Call Recording is not enabled for [Teams interoperability](../teams-interop.md).
Many countries/regions and states have laws and regulations that apply to call r
Regulations around the maintenance of personal data require the ability to export user data. In order to support these requirements, recording metadata files include the participantId for each call participant in the `participants` array. You can cross-reference the MRIs in the `participants` array with your internal user identities to identify participants in a call.
-## Known Issues
-
-It's possible that when a call is created using Call Automation, you won't get a value in the `serverCallId`. If that's the case, get the `serverCallId` from the `CallConnected` event method described in [Get serverCallId](../../quickstarts/call-automation/callflows-for-customer-interactions.md).
## Next steps
For more information, see the following articles:
- To learn more about Call Recording, check out the [Call Recording Quickstart](../../quickstarts/voice-video-calling/get-started-call-recording.md).
+- Learn more about call recording [Insights](../analytics/insights/call-recording-insights.md) and [Logs](../analytics/logs/recording-logs.md).
- Learn more about [Call Automation](../../quickstarts/call-automation/callflows-for-customer-interactions.md). - Learn more about [Video Calling](../../quickstarts/voice-video-calling/get-started-with-video-calling.md).
communication-services Calling Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md
For more information, see the following articles:
- Familiarize yourself with general [call flows](../call-flows.md) - Learn about [call types](../voice-video-calling/about-call-types.md)
+- Learn about [call automation API](../call-automation/call-automation.md) that enables you to build server-based calling workflows that can route and control calls with client applications.
- [Plan your PSTN solution](../telephony/plan-solution.md)+
communication-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/overview.md
In addition to REST APIs, [Azure Communication Services client libraries](./conc
Scenarios for Azure Communication Services include:
-- **Business to Consumer (B2C).** Employees and services engage external customers using voice, video, and text chat in browser and native apps. An organization can send and receive SMS messages, or [operate an interactive voice response system (IVR)](https://github.com/microsoft/botframework-telephony/blob/main/EnableTelephony.md) using a phone number you acquire through Azure. [Integration with Microsoft Teams](./quickstarts/voice-video-calling/get-started-teams-interop.md) can be used to connect consumers to Teams meetings hosted by employees; ideal for remote healthcare, banking, and product support scenarios where employees might already be familiar with Teams.
+- **Business to Consumer (B2C).** Employees and services engage external customers using voice, video, and text chat in browser and native apps. An organization can send and receive SMS messages, or [operate an interactive voice response system (IVR)](./concepts/call-automation/call-automation.md) using Call Automation and a phone number you acquire through Azure. [Integration with Microsoft Teams](./quickstarts/voice-video-calling/get-started-teams-interop.md) can be used to connect consumers to Teams meetings hosted by employees; ideal for remote healthcare, banking, and product support scenarios where employees might already be familiar with Teams.
- **Consumer to Consumer (C2C).** Build engaging consumer-to-consumer interaction with voice, video, and rich text chat. Any type of user interface can be built on Azure Communication Services SDKs, or use complete application samples and an open-source UI toolkit to help you get started quickly. To learn more, check out our [Microsoft Mechanics video](https://www.youtube.com/watch?v=apBX7ASurgM) or the resources linked next.
After creating a Communication Services resource you can start building client s
|**[Create your first user access token](./quickstarts/identity/access-tokens.md)**|User access tokens authenticate clients against your Azure Communication Services resource. These tokens are provisioned and reissued using Communication Services Identity APIs and SDKs.|
|**[Get started with voice and video calling](./quickstarts/voice-video-calling/getting-started-with-calling.md)**| Azure Communication Services allows you to add voice and video calling to your browser or native apps using the Calling SDK. |
|**[Add telephony calling to your app](./quickstarts/telephony/pstn-call.md)**|With Azure Communication Services, you can add telephony calling capabilities to your application.|
+| **[Make an outbound call from your app](./quickstarts/call-automation/quickstart-make-an-outbound-call.md)**| Azure Communication Services Call Automation allows you to make an outbound call with an interactive voice response system using Call Automation SDKs and REST APIs.|
|**[Join your calling app to a Teams meeting](./quickstarts/voice-video-calling/get-started-teams-interop.md)**|Azure Communication Services can be used to build custom meeting experiences that interact with Microsoft Teams. Users of your Communication Services solution(s) can interact with Teams participants over voice, video, chat, and screen sharing.|
|**[Get started with chat](./quickstarts/chat/get-started.md)**|The Azure Communication Services Chat SDK is used to add rich real-time text chat into your applications.|
|**[Connect a Microsoft Bot to a phone number](https://github.com/microsoft/botframework-telephony)**|Telephony channel is a channel in Microsoft Bot Framework that enables the bot to interact with users over the phone. It uses the power of Microsoft Bot Framework combined with the Azure Communication Services and the Azure Speech Services. |
communication-services Callflows For Customer Interactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/call-automation/callflows-for-customer-interactions.md
- Title: Build a customer interaction workflow using Call Automation-
-description: Quickstart on how to use Call Automation to answer a call, recognize DTMF input, and add a participant to a call.
---- Previously updated : 09/06/2022---
-zone_pivot_groups: acs-js-csharp-java-python
--
-# Build a customer interaction workflow using Call Automation
--
-In this quickstart, you'll learn how to build an application that uses the Azure Communication Services Call Automation SDK to handle the following scenario:
-- handling the `IncomingCall` event from Event Grid-- answering a call-- playing an audio file and recognizing input(DTMF) from caller-- adding a communication user to the call such as a customer service agent who uses a web application built using Calling SDKs to connect to Azure Communication Services-----
-## Subscribe to IncomingCall event
-
-IncomingCall is an Azure Event Grid event for notifying incoming calls to your Communication Services resource. To learn more about it, see [this guide](../../concepts/call-automation/incoming-call-notification.md).
-1. Navigate to your resource on Azure portal and select `Events` from the left side menu.
-2. Select `+ Event Subscription` to create a new subscription.
-3. Filter for Incoming Call event.
-4. Choose the endpoint type as webhook and provide the public URL generated for your application by ngrok. Make sure to provide the exact API route that you programmed to receive the event previously. In this case, it would be `<ngrok_url>/api/incomingCall`.
-
- ![Screenshot of portal page to create a new event subscription.](./media/event-susbcription.png)
-
 If your application doesn't send a 200 OK response back to Event Grid in time, Event Grid uses exponential backoff to retry sending the incoming call event. However, an incoming call only rings for 30 seconds, and acting on a call after that won't work. To avoid retries after a call expires, we recommend setting the retry policy on the `Additional Features` tab as follows: set Max Event Delivery Attempts to 2 and Event Time to Live to 1 minute. Learn more about retries [here](../../../event-grid/delivery-and-retry.md).
-5. Select **Create** to start creating the subscription and validating your endpoint, as mentioned previously. The subscription is ready when the provisioning status is marked as succeeded.
-
-This subscription currently has no filters, so all incoming calls are sent to your application. To filter for a specific phone number or a communication user, use the **Filters** tab.
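As a sketch of the same configuration from the command line (the subscription ID, resource group, Communication Services resource name, and ngrok URL are placeholders), the subscription, event-type filter, and retry policy might be created with the Azure CLI:

```azurecli
# Placeholder IDs and URL; replace with your Communication Services resource and webhook endpoint.
az eventgrid event-subscription create \
  --name incoming-call-subscription \
  --source-resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Communication/communicationServices/<acs-resource>" \
  --endpoint "https://<ngrok-url>/api/incomingCall" \
  --included-event-types Microsoft.Communication.IncomingCall \
  --max-delivery-attempts 2 \
  --event-ttl 1
```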
-
-## Testing the application
-
-1. Place a call to the number you acquired in the Azure portal.
-2. Your Event Grid subscription to the `IncomingCall` event should execute and call your application, which requests to answer the call.
-3. When the call is connected, a `CallConnected` event is sent to your application's callback URL. At this point, the application requests that audio be played and input be collected from the caller.
-4. From your phone, press any three number keys, or press one number key and then the # key.
-5. When the input has been received and recognized, the application makes a request to add a participant to the call.
-6. Once the added user answers, you can talk to them.
--
-## Clean up resources
-
-If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
-
-## Next steps
-- Learn more about [Call Automation](../../concepts/call-automation/call-automation.md) and its features.
-- Learn how to [redirect inbound telephony calls](../../quickstarts/call-automation/redirect-inbound-telephony-calls.md) with Call Automation.
-- Learn more about [Play action](../../concepts/call-automation/play-action.md).
-- Learn more about [Recognize action](../../concepts/call-automation/recognize-action.md).
-- Learn more about [Handle Call Automation Events with EventProcessor](../../how-tos/call-automation/handle-events-with-event-processor.md).
communication-services Redirect Inbound Telephony Calls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/call-automation/redirect-inbound-telephony-calls.md
- Title: Azure Communication Services Call Automation how-to for redirecting inbound PSTN calls-
-description: Provides a how-to for redirecting inbound telephony calls with Call Automation.
---- Previously updated : 09/06/2022---
-zone_pivot_groups: acs-csharp-java
--
-# Redirect inbound telephony calls with Call Automation
--
-Get started with Azure Communication Services by using the Call Automation SDKs to build automated calling workflows that listen for and manage inbound calls placed to a phone number or received via ACS direct routing.
---
-## Subscribe to IncomingCall event
-
-IncomingCall is an Azure Event Grid event for notifying incoming calls to your Communication Services resource. To learn more about it, see [this guide](../../concepts/call-automation/incoming-call-notification.md).
-1. Navigate to your resource on Azure portal and select `Events` from the left side menu.
-2. Select `+ Event Subscription` to create a new subscription.
-3. Filter for Incoming Call event.
-4. Choose the endpoint type as webhook and provide the public URL generated for your application by ngrok. Make sure to provide the exact API route that you programmed to receive the event previously. In this case, it would be `<ngrok_url>/api/incomingCall`.
-
- ![Screenshot of portal page to create a new event subscription.](./media/event-susbcription.png)
-
 If your application doesn't send a 200 OK response back to Event Grid in time, Event Grid uses exponential backoff to retry sending the incoming call event. However, an incoming call only rings for 30 seconds, and acting on a call after that won't work. To avoid retries after a call expires, we recommend setting the retry policy on the `Additional Features` tab as follows: set Max Event Delivery Attempts to 2 and Event Time to Live to 1 minute. Learn more about retries [here](../../../event-grid/delivery-and-retry.md).
-5. Select **Create** to start creating the subscription and validating your endpoint, as mentioned previously. The subscription is ready when the provisioning status is marked as succeeded.
-
-This subscription currently has no filters, so all incoming calls are sent to your application. To filter for a specific phone number or a communication user, use the **Filters** tab.
-
-## Testing the application
-
-1. Place a call to the number you acquired in the Azure portal (see prerequisites above).
-2. Your Event Grid subscription to the IncomingCall should execute and call your application.
-3. The call will be redirected to the endpoint(s) you specified in your application.
-
-Because this call flow redirects the call instead of answering it, pre-call webhook callbacks that notify your application that the other endpoint accepted the call aren't published.
-
-## Clean up resources
-
-If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../../quickstarts/create-communication-resource.md#clean-up-resources).
-
-## Next steps
-- Learn more about [Call Automation](../../concepts/call-automation/call-automation.md) and its features.
-- Learn about [Play action](../../concepts/call-automation/play-Action.md) to play audio in a call.
-- Learn how to build a [call workflow](../../quickstarts/call-automation/callflows-for-customer-interactions.md) for a customer support scenario.
communication-services Get Started Call Recording https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-call-recording.md
# Call Recording Quickstart

This quickstart gets you started with Call Recording for voice and video calls. To start using the Call Recording APIs, you must have a call in place. Make sure you're familiar with [Calling client SDK](get-started-with-video-calling.md) and/or [Call Automation](../call-automation/callflows-for-customer-interactions.md#build-a-customer-interaction-workflow-using-call-automation) to build the end-user calling experience.

::: zone pivot="programming-language-csharp"
communications-gateway Monitor Azure Communications Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/monitor-azure-communications-gateway.md
For more information of filtering and splitting, see [Advanced features of Azure
## Alerts
-Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data.
-
-Azure Communications Gateway doesn't currently support alerts.
+Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](/azure/azure-monitor/alerts/alerts-metric-overview) and the [activity log](/azure/azure-monitor/alerts/activity-log-alerts). Different types of alerts have benefits and drawbacks.
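For example, once you know which metrics your deployment emits (see the data reference linked below), a metric alert could be created with the Azure CLI. This is only a sketch: the scope, metric name, threshold, and action group are placeholders.

```azurecli
# Placeholder scope, metric, and action group; substitute values from the data reference for your gateway.
az monitor metrics alert create \
  --name example-gateway-alert \
  --resource-group my-resource-group \
  --scopes "<communications-gateway-resource-id>" \
  --condition "avg <metric-name> > 100" \
  --description "Example alert on an Azure Communications Gateway metric" \
  --action "<action-group-resource-id>"
```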
## Next steps
-- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
+- See [Monitoring Azure Communications Gateway data reference](monitoring-azure-communications-gateway-data-reference.md) for a reference of the metrics, logs, and other important values created by Azure Communications Gateway.
+
+- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
confidential-computing How To Create Custom Image Confidential Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/how-to-create-custom-image-confidential-vm.md
This "how to" shows you how to use the Azure Command-Line Interface (Azure CLI) to create a custom image for your confidential virtual machine (confidential VM) in Azure. The Azure CLI is used to create and manage Azure resources via either the command line or scripts.
+Creating a custom image allows you to preconfigure your confidential VM with specific software, settings, and security measures that meet your requirements. If you want to bring an Ubuntu image that is not [confidential VM compatible](/azure/confidential-computing/confidential-vm-overview#os-support), you can follow the steps below to see what the minimum requirements are for your image.
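When you later publish the custom image through an Azure Compute Gallery, the image definition must be marked as supporting confidential VMs. The following is a sketch only: the gallery, definition, publisher, offer, and SKU names are placeholders, and the `--features` value is an assumption to confirm against the current CLI documentation.

```azurecli
# Placeholder gallery and image names; the SecurityType feature marks the definition as confidential-VM capable.
az sig image-definition create \
  --resource-group $resourceGroupName \
  --gallery-name myGallery \
  --gallery-image-definition myConfidentialImageDef \
  --publisher MyOrg \
  --offer MyUbuntuOffer \
  --sku 20_04-lts-cvm \
  --os-type Linux \
  --hyper-v-generation V2 \
  --features SecurityType=ConfidentialVmSupported
```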
+
## Prerequisites

If you don't have an Azure subscription, [create a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
az group create --name $resourceGroupName --location eastus
```

## Next Steps

> [!div class="nextstepaction"]
-> [Connect and attest the CVM through Microsoft Azure Attestation Sample App](quick-create-confidential-vm-azure-cli-amd.md)
+> [Connect and attest the CVM through Microsoft Azure Attestation Sample App](quick-create-confidential-vm-azure-cli-amd.md)
container-registry Quickstart Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/quickstart-client-libraries.md
Last updated 10/11/2022
-zone_pivot_groups: programming-languages-set-ten
+zone_pivot_groups: programming-languages-set-fivedevlangs
ms.devlang: azurecli

# Quickstart: Use the Azure Container Registry client libraries

Use this article to get started with the client library for Azure Container Registry. Follow these steps to try out example code for data-plane operations on images and artifacts.
class DeleteImagesAsync(object):
::: zone-end +
+## Get started
+
+[Source code][go_source] | [Package (pkg.go.dev)][go_package] | [REST API reference][go_docs]
+
+### Install the package
+
+Install the Azure Container Registry client library for Go with `go get`:
+
+```bash
+go get github.com/Azure/azure-sdk-for-go/sdk/containers/azcontainerregistry
+```
+
+## Authenticate the client
+
+When you're developing and debugging your application locally, you can use [azidentity.NewDefaultAzureCredential](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#NewDefaultAzureCredential) to authenticate. We recommend using a [managed identity](/azure/active-directory/managed-identities-azure-resources/overview) in a production environment.
+
+```go
+package main
+
+import (
+ "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
+ "github.com/Azure/azure-sdk-for-go/sdk/containers/azcontainerregistry"
+ "log"
+)
+
+func main() {
+ cred, err := azidentity.NewDefaultAzureCredential(nil)
+ if err != nil {
+ log.Fatalf("failed to obtain a credential: %v", err)
+ }
+
+ client, err := azcontainerregistry.NewClient("https://myregistry.azurecr.io", cred, nil)
+ if err != nil {
+ log.Fatalf("failed to create client: %v", err)
+ }
+
+ // Reference the client so this standalone sample compiles; replace with real registry calls.
+ _ = client
+}
+```
+See the [azidentity](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity) documentation for more information about other authentication approaches.
+
+## Examples
+
+Each sample assumes the container registry endpoint URL is "https://myregistry.azurecr.io".
+
+### List tags
+
+This sample assumes the registry has a repository `hello-world`.
+
+```go
+package azcontainerregistry_test
+
+import (
+ "context"
+ "fmt"
+ "github.com/Azure/azure-sdk-for-go/sdk/containers/azcontainerregistry"
+ "log"
+)
+
+func Example_listTagsWithAnonymousAccess() {
+ client, err := azcontainerregistry.NewClient("https://myregistry.azurecr.io", nil, nil)
+ if err != nil {
+ log.Fatalf("failed to create client: %v", err)
+ }
+ ctx := context.Background()
+ pager := client.NewListTagsPager("library/hello-world", nil)
+ for pager.More() {
+ page, err := pager.NextPage(ctx)
+ if err != nil {
+ log.Fatalf("failed to advance page: %v", err)
+ }
+ for _, v := range page.Tags {
+ fmt.Printf("tag: %s\n", *v.Name)
+ }
+ }
+}
+```
+
+### Set artifact properties
+
+This sample assumes the registry has a repository `hello-world` with image tagged `latest`.
+
+```go
+package azcontainerregistry_test
+
+import (
+ "context"
+ "fmt"
+ "github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
+ "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
+ "github.com/Azure/azure-sdk-for-go/sdk/containers/azcontainerregistry"
+ "log"
+)
+
+func Example_setArtifactProperties() {
+ cred, err := azidentity.NewDefaultAzureCredential(nil)
+ if err != nil {
+ log.Fatalf("failed to obtain a credential: %v", err)
+ }
+ client, err := azcontainerregistry.NewClient("https://myregistry.azurecr.io", cred, nil)
+ if err != nil {
+ log.Fatalf("failed to create client: %v", err)
+ }
+ ctx := context.Background()
+ res, err := client.UpdateTagProperties(ctx, "library/hello-world", "latest", &azcontainerregistry.ClientUpdateTagPropertiesOptions{
+ Value: &azcontainerregistry.TagWriteableProperties{
+ CanWrite: to.Ptr(false),
+ CanDelete: to.Ptr(false),
+ }})
+ if err != nil {
+ log.Fatalf("failed to finish the request: %v", err)
+ }
+ fmt.Printf("repository library/hello-world - tag latest: 'CanWrite' property: %t, 'CanDelete' property: %t\n", *res.Tag.ChangeableAttributes.CanWrite, *res.Tag.ChangeableAttributes.CanDelete)
+}
+```
+
+### Delete images
+
+```go
+package azcontainerregistry_test
+
+import (
+ "context"
+ "fmt"
+ "github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
+ "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
+ "github.com/Azure/azure-sdk-for-go/sdk/containers/azcontainerregistry"
+ "log"
+)
+
+func Example_deleteImages() {
+ cred, err := azidentity.NewDefaultAzureCredential(nil)
+ if err != nil {
+ log.Fatalf("failed to obtain a credential: %v", err)
+ }
+ client, err := azcontainerregistry.NewClient("https://myregistry.azurecr.io", cred, nil)
+ if err != nil {
+ log.Fatalf("failed to create client: %v", err)
+ }
+ ctx := context.Background()
+ repositoryPager := client.NewListRepositoriesPager(nil)
+ for repositoryPager.More() {
+ repositoryPage, err := repositoryPager.NextPage(ctx)
+ if err != nil {
+ log.Fatalf("failed to advance repository page: %v", err)
+ }
+ for _, r := range repositoryPage.Repositories.Names {
+ manifestPager := client.NewListManifestsPager(*r, &azcontainerregistry.ClientListManifestsOptions{
+ OrderBy: to.Ptr(azcontainerregistry.ArtifactManifestOrderByLastUpdatedOnDescending),
+ })
+ for manifestPager.More() {
+ manifestPage, err := manifestPager.NextPage(ctx)
+ if err != nil {
+ log.Fatalf("failed to advance manifest page: %v", err)
+ }
+ imagesToKeep := 3
+ for i, m := range manifestPage.Manifests.Attributes {
+ if i >= imagesToKeep {
+ for _, t := range m.Tags {
+ fmt.Printf("delete tag from image: %s", *t)
+ _, err := client.DeleteTag(ctx, *r, *t, nil)
+ if err != nil {
+ log.Fatalf("failed to delete tag: %v", err)
+ }
+ }
+ _, err := client.DeleteManifest(ctx, *r, *m.Digest, nil)
+ if err != nil {
+ log.Fatalf("failed to delete manifest: %v", err)
+ }
+ fmt.Printf("delete image with digest: %s", *m.Digest)
+ }
+ }
+ }
+ }
+ }
+}
+```
++
## Clean up resources

If you want to clean up and remove an Azure container registry, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
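For example, assuming a registry named `myregistry` in a resource group named `myresourcegroup` (placeholder names), either of the following Azure CLI commands removes the resources:

```azurecli
# Delete only the container registry (placeholder names).
az acr delete --name myregistry --resource-group myresourcegroup --yes

# Or delete the resource group and everything in it.
az group delete --name myresourcegroup --yes --no-wait
```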
In this quickstart, you learned about using the Azure Container Registry client
[pip_link]: https://pypi.org
[python_identity]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/identity/azure-identity
[python_source]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/containerregistry/azure-containerregistry
+[go_source]: https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/containers/azcontainerregistry
+[go_package]: https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/containers/azcontainerregistry
+[go_docs]: /rest/api/containerregistry/
container-registry Tutorial Enable Registry Cache Auth Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-enable-registry-cache-auth-cli.md
Title: Enable Cache ACR with authentication - Azure CLI
description: Learn how to enable Cache ACR with authentication using Azure CLI. Previously updated : 04/19/2022 Last updated : 06/17/2022
This article walks you through the steps of enabling Cache ACR with authenticati
## Prerequisites
-* You can use the [Azure Cloud Shell][Azure Cloud Shell] or a local installation of the Azure CLI to run the command examples in this article. If you'd like to use it locally, version 2.0.74 or later is required. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][Install Azure CLI].
+* You can use the [Azure Cloud Shell][Azure Cloud Shell] or a local installation of the Azure CLI to run the command examples in this article. If you'd like to use it locally, version 2.46.0 or later is required. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][Install Azure CLI].
* You have an existing Key Vault to store credentials. Learn more about [creating and storing credentials in a Key Vault.][create-and-store-keyvault-credentials]
* You can set and retrieve secrets from your Key Vault. Learn more about [set and retrieve a secret from Key Vault.][set-and-retrieve-a-secret]
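For example, the upstream registry credentials could be stored as two secrets with the Azure CLI; the vault and secret names below are placeholders:

```azurecli
# Placeholder vault and secret names; store the username and access token for the upstream registry.
az keyvault secret set --vault-name my-acr-cache-vault --name upstream-username --value "<registry-username>"
az keyvault secret set --vault-name my-acr-cache-vault --name upstream-password --value "<registry-access-token>"
```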
container-registry Tutorial Enable Registry Cache Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-enable-registry-cache-cli.md
Title: Enable Cache for ACR (preview) - Azure CLI
description: Learn how to enable Registry Cache in your Azure Container Registry using Azure CLI. Previously updated : 04/19/2022 Last updated : 06/17/2022
This article is part three of a six-part tutorial series. [Part one](tutorial-re
## Prerequisites
-* You can use the [Azure Cloud Shell][Azure Cloud Shell] or a local installation of the Azure CLI to run the command examples in this article. If you'd like to use it locally, version 2.0.74 or later is required. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][Install Azure CLI].
+* You can use the [Azure Cloud Shell][Azure Cloud Shell] or a local installation of the Azure CLI to run the command examples in this article. If you'd like to use it locally, version 2.46.0 or later is required. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][Install Azure CLI].
## Configure Cache for ACR (preview) - Azure CLI
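As a rough sketch of the CLI step (registry, rule, and repository names are placeholders, and because the feature is in preview you should confirm the exact parameters with `az acr cache create --help`):

```azurecli
# Create a cache rule that serves docker.io/library/ubuntu through the registry (placeholder names).
az acr cache create \
  --registry myregistry \
  --name ubuntu-cache-rule \
  --source-repo docker.io/library/ubuntu \
  --target-repo ubuntu
```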
cosmos-db Indexing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/indexing.md
However, the sequence of the paths in the compound index must exactly match the
`db.coll.find().sort({age:1,name:1})`
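As an illustration, a compound index whose key order matches that sort can be created from a shell with mongosh; the connection string below is a placeholder, and `coll` is the collection name from the example above.

```bash
# Create the matching compound index before running the sorted query (placeholder connection string).
mongosh "<cosmos-db-mongodb-connection-string>" \
  --eval 'db.coll.createIndex({ age: 1, name: 1 })'
```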
-> [!NOTE]
-> Compound indexes are only used in queries that sort results. For queries that have multiple filters that don't need to sort, create multiple single-field indexes.
### Multikey indexes

Azure Cosmos DB creates multikey indexes to index content stored in arrays. If you index a field with an array value, Azure Cosmos DB automatically indexes every element in the array.
cost-management-billing Cancel Azure Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/cancel-azure-subscription.md
tags: billing
Previously updated : 05/25/2023 Last updated : 06/19/2023
The following table describes the permission required to cancel a subscription.
|||
|Subscriptions created when you sign up for Azure through the Azure website. For example, when you sign up for an [Azure Free Account](https://azure.microsoft.com/offers/ms-azr-0044p/), [account with pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/) or as a [Visual studio subscriber](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers/). | Service administrator and subscription owner |
|[Microsoft Enterprise Agreement](https://azure.microsoft.com/pricing/enterprise-agreement/) and [Enterprise Dev/Test](https://azure.microsoft.com/offers/ms-azr-0148p/) | Service administrator and subscription owner |
-|[Azure plan](https://azure.microsoft.com/offers/ms-azr-0017g/) and [Azure plan for DevTest](https://azure.microsoft.com/offers/ms-azr-0148g/) | Owners of the subscription |
+|[Azure plan](https://azure.microsoft.com/offers/ms-azr-0017g/) and [Azure plan for DevTest](https://azure.microsoft.com/offers/ms-azr-0148g/) | Subscription owners |
An account administrator without the service administrator or subscription owner role can't cancel an Azure subscription. However, an account administrator can make themselves the service administrator and then cancel the subscription. For more information, see [Change the Service Administrator](../../role-based-access-control/classic-administrators.md#change-the-service-administrator).
If you have a support plan associated with the subscription, it's shown in the c
If you have any Azure resources associated with the subscription, they're shown in the cancellation process. Otherwise, they're not shown.
-A billing account owner uses the following steps to cancel a subscription.
A subscription owner can navigate in the Azure portal to **Subscriptions** and then start at step 3.

1. In the Azure portal, navigate to **Cost Management + Billing**.
data-factory Connector Troubleshoot Synapse Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-synapse-sql.md
This article provides suggestions to troubleshoot common problems with the Azure
| If this is a transient issue (for example, an instable network connection), add retry in the activity policy to mitigate. | For more information, see [Pipelines and activities](./concepts-pipelines-activities.md#activity-policy). | | If the error message contains the string "Client with IP address '...' is not allowed to access the server", and you're trying to connect to Azure SQL Database, the error is usually caused by an Azure SQL Database firewall issue. | In the Azure SQL Server firewall configuration, enable the **Allow Azure services and resources to access this server** option. For more information, see [Azure SQL Database and Azure Synapse IP firewall rules](/azure/azure-sql/database/firewall-configure). | |If the error message contains `Login failed for user '<token-identified principal>'`, this error is usually caused by not granting enough permission to your service principal or system-assigned managed identity or user-assigned managed identity (depends on which authentication type you choose) in your database. |Grant enough permission to your service principal or system-assigned managed identity or user-assigned managed identity in your database. <br/><br/> **For Azure SQL Database**:<br/>&nbsp;&nbsp;&nbsp;&nbsp;- If you use service principal authentication, follow [Service principal authentication](connector-azure-sql-database.md#service-principal-authentication).<br/>&nbsp;&nbsp;&nbsp;&nbsp;- If you use system-assigned managed identity authentication, follow [System-assigned managed identity authentication](connector-azure-sql-database.md#managed-identity).<br/>&nbsp;&nbsp;&nbsp;&nbsp;- If you use user-assigned managed identity authentication, follow [User-assigned managed identity authentication](connector-azure-sql-database.md#user-assigned-managed-identity-authentication). <br/>&nbsp;&nbsp;&nbsp;<br/>**For Azure Synapse Analytics**:<br/>&nbsp;&nbsp;&nbsp;&nbsp;- If you use service principal authentication, follow [Service principal authentication](connector-azure-sql-data-warehouse.md#service-principal-authentication).<br/>&nbsp;&nbsp;&nbsp;&nbsp;- If you use system-assigned managed identity authentication, follow [System-assigned managed identities for Azure resources authentication](connector-azure-sql-data-warehouse.md#managed-identity).<br/>&nbsp;&nbsp;&nbsp;&nbsp;- If you use user-assigned managed identity authentication, follow [User-assigned managed identity authentication](connector-azure-sql-data-warehouse.md#user-assigned-managed-identity-authentication).<br/>&nbsp;&nbsp;&nbsp;<br/>**For Azure SQL Managed Instance**: <br/>&nbsp;&nbsp;&nbsp;&nbsp;- If you use service principal authentication, follow [Service principal authentication](connector-azure-sql-managed-instance.md#service-principal-authentication).<br/>&nbsp;&nbsp;&nbsp;- If you use system-assigned managed identity authentication, follow [System-assigned managed identity authentication](connector-azure-sql-managed-instance.md#managed-identity).<br/>&nbsp;&nbsp;&nbsp;- If you use user-assigned managed identity authentication, follow [User-assigned managed identity authentication](connector-azure-sql-managed-instance.md#user-assigned-managed-identity-authentication).|
+ | If the error message contains `The server was not found or was not accessible` when you use Azure SQL Managed Instance, the error is usually caused by the Azure SQL Managed Instance public endpoint not being enabled.| Refer to [Configure public endpoint in Azure SQL Managed Instance](/azure/azure-sql/managed-instance/public-endpoint-configure) to enable the Azure SQL Managed Instance public endpoint. |
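As a hedged illustration of that fix (instance and resource group names are placeholders; confirm the parameter name for your CLI version), enabling the public endpoint might look like the following. The managed instance's network security group must also allow inbound traffic on port 3342.

```azurecli
# Enable the public data endpoint on the managed instance (placeholder names).
az sql mi update \
  --name my-managed-instance \
  --resource-group my-resource-group \
  --public-data-endpoint-enabled true
```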
## Error code: SqlOperationFailed
data-factory How To Use Azure Key Vault Secrets Pipeline Activities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-use-azure-key-vault-secrets-pipeline-activities.md
Title: Use Azure Key Vault secrets in pipeline activities
-description: Learn how to fetch stored credentials from Azure key vault and use them during data factory pipeline runs.
+description: Learn how to fetch stored credentials from Azure Key Vault and use them during data factory pipeline runs.
defender-for-cloud Adaptive Application Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/adaptive-application-controls.md
description: This document helps you use adaptive application control in Microso
Previously updated : 02/06/2023 Last updated : 06/14/2023

# Use adaptive application controls to reduce your machines' attack surfaces
Some of the functions available from the REST API include:
> > Remove the following properties before using the JSON in the **Put** request: recommendationStatus, configurationStatus, issues, location, and sourceSystem.
-## FAQ - Adaptive application controls
--- [Are there any options to enforce the application controls?](#are-there-any-options-to-enforce-the-application-controls)-- [Why do I see a Qualys app in my recommended applications?](#why-do-i-see-a-qualys-app-in-my-recommended-applications)-
-### Are there any options to enforce the application controls?
-
-No enforcement options are currently available. Adaptive application controls are intended to provide **security alerts** if any application runs other than the ones you've defined as safe. They have a range of benefits ([What are the benefits of adaptive application controls?](#what-are-the-benefits-of-adaptive-application-controls)) and are customizable as shown on this page.
-
-### Why do I see a Qualys app in my recommended applications?
-
-[Microsoft Defender for Servers](defender-for-servers-introduction.md) includes vulnerability scanning for your machines. You don't need a Qualys license or even a Qualys account - everything's handled seamlessly inside Defender for Cloud. For details of this scanner and instructions for how to deploy it, see [Defender for Cloud's integrated Qualys vulnerability assessment solution](deploy-vulnerability-assessment-vm.md).
-
-To ensure no alerts are generated when Defender for Cloud deploys the scanner, the adaptive application controls recommended allowlist includes the scanner for all machines.
## Next steps

On this page, you learned how to use adaptive application control in Microsoft Defender for Cloud to define allowlists of applications running on your Azure and non-Azure machines. To learn more about some other cloud workload protection features, see:

- [Understanding just-in-time (JIT) VM access](just-in-time-access-overview.md)
- [Securing your Azure Kubernetes clusters](defender-for-kubernetes-introduction.md)
+- View common questions about [Adaptive application controls](faq-defender-for-servers.yml)
defender-for-cloud Adaptive Network Hardening https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/adaptive-network-hardening.md
description: Learn how to use actual traffic patterns to harden your network sec
Previously updated : 12/13/2022 Last updated : 06/14/2023
+
# Improve your network security posture with adaptive network hardening

Adaptive network hardening is an agentless feature of Microsoft Defender for Cloud - nothing needs to be installed on your machines to benefit from this network hardening tool.
Applying [network security groups (NSG)](../virtual-network/network-security-gro
Adaptive network hardening provides recommendations to further harden the NSG rules. It uses a machine learning algorithm that factors in actual traffic, known trusted configuration, threat intelligence, and other indicators of compromise, and then provides recommendations to allow traffic only from specific IP/port tuples.
-For example, let's say the existing NSG rule is to allow traffic from 140.20.30.10/24 on port 22. Based on traffic analysis, adaptive network hardening might recommend narrowing the range to allow traffic from 140.23.30.10/29, and deny all other traffic to that port. For the full list of supported ports, see the FAQ entry [Which ports are supported?](#which-ports-are-supported).
+For example, let's say the existing NSG rule is to allow traffic from 140.20.30.10/24 on port 22. Based on traffic analysis, adaptive network hardening might recommend narrowing the range to allow traffic from 140.23.30.10/29, and deny all other traffic to that port. For the full list of supported ports, see the common questions entry [Which ports are supported?](faq-defender-for-servers.yml).
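Acting on such a recommendation typically means adding a narrower allow rule plus a deny rule for that port on the NSG. A sketch with the Azure CLI, where the resource group, NSG, and rule names are placeholders:

```azurecli
# Allow SSH only from the recommended narrower range (placeholder names).
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myNsg \
  --name Allow-SSH-Recommended-Range \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes 140.23.30.10/29 \
  --destination-port-ranges 22

# Deny all other inbound traffic to port 22.
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myNsg \
  --name Deny-SSH-Other-Sources \
  --priority 200 \
  --direction Inbound \
  --access Deny \
  --protocol Tcp \
  --source-address-prefixes '*' \
  --destination-port-ranges 22
```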
## View hardening alerts and recommended rules
Some important guidelines for modifying an adaptive network hardening rule:
Creating and modifying "deny" rules is done directly on the NSG. For more information, see [Create, change, or delete a network security group](../virtual-network/manage-network-security-group.md).

-- A **Deny all traffic** rule is the only type of "deny" rule that would be listed here, and it cannot be modified. You can, however, delete it (see [Delete a rule](#delete-rule)). To learn about this type of rule, see the FAQ entry [When should I use a "Deny all traffic" rule?](#when-should-i-use-a-deny-all-traffic-rule).
+- A **Deny all traffic** rule is the only type of "deny" rule that would be listed here, and it cannot be modified. You can, however, delete it (see [Delete a rule](#delete-rule)). To learn about this type of rule, see the common questions entry [When should I use a "Deny all traffic" rule?](faq-defender-for-servers.yml).
To modify an adaptive network hardening rule:
To delete an adaptive network hardening rule for your current session:
![Deleting a rule.](./media/adaptive-network-hardening/delete-hard-rule.png)
-## FAQ - Adaptive network hardening
--- [Which ports are supported?](#which-ports-are-supported)-- [Are there any prerequisites or VM extensions required for adaptive network hardening?](#are-there-any-prerequisites-or-vm-extensions-required-for-adaptive-network-hardening)-
-### Which ports are supported?
-
-Adaptive network hardening recommendations are only supported on the following specific ports (for both UDP and TCP):
-
-13, 17, 19, 22, 23, 53, 69, 81, 111, 119, 123, 135, 137, 138, 139, 161, 162, 389, 445, 512, 514, 593, 636, 873, 1433, 1434, 1900, 2049, 2301, 2323, 2381, 3268, 3306, 3389, 4333, 5353, 5432, 5555, 5800, 5900, 5900, 5985, 5986, 6379, 6379, 7000, 7001, 7199, 8081, 8089, 8545, 9042, 9160, 9300, 11211, 16379, 26379, 27017, 37215
-
-### Are there any prerequisites or VM extensions required for adaptive network hardening?
-
-Adaptive network hardening is an agentless feature of Microsoft Defender for Cloud - nothing needs to be installed on your machines to benefit from this network hardening tool.
-
-### When should I use a "Deny all traffic" rule?
+## Next steps
-A **Deny all traffic** rule is recommended when, as a result of running the algorithm, Defender for Cloud does not identify traffic that should be allowed, based on the existing NSG configuration. Therefore, the recommended rule is to deny all traffic to the specified port. The name of this type of rule is displayed as "*System Generated*". After enforcing this rule, its actual name in the NSG will be a string comprised of the protocol, traffic direction, "DENY", and a random number.
+- View common questions about [adaptive network hardening](faq-defender-for-servers.yml)
defender-for-cloud Apply Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/apply-security-baseline.md
Previously updated : 11/09/2021 Last updated : 06/15/2023

# Apply Azure security baselines to machines
To compare machines with the OS security baselines:
- To view the list of machines that have been assessed, open **Affected resources**.
- To view the list of findings for one machine, select a machine from the **Unhealthy resources** tab. A page will open listing only the findings for that machine.
-## FAQ - Hardening an OS according to the security baseline
--- [Apply Azure security baselines to machines](#apply-azure-security-baselines-to-machines)
- - [Availability](#availability)
- - [What are the hardening recommendations?](#what-are-the-hardening-recommendations)
- - [Compare machines in your subscriptions with the OS security baselines](#compare-machines-in-your-subscriptions-with-the-os-security-baselines)
- - [FAQ - Hardening an OS according to the security baseline](#faqhardening-an-os-according-to-the-security-baseline)
- - [How do I deploy the prerequisites for the security configuration recommendations?](#how-do-i-deploy-the-prerequisites-for-the-security-configuration-recommendations)
- - [Why is a machine shown as not applicable?](#why-is-a-machine-shown-as-not-applicable)
- - [Next steps](#next-steps)
-
-### How do I deploy the prerequisites for the security configuration recommendations?
-
-To deploy the Guest Configuration extension with its prerequisites:
--- For selected machines, follow the security recommendation **Guest Configuration extension should be installed on your machines** from the **Implement security best practices** security control.--- At scale, assign the policy initiative **Deploy prerequisites to enable Guest Configuration policies on virtual machines**.-
-### Why is a machine shown as not applicable?
-
-The list of resources in the **Not applicable** tab includes a **Reason** column. Some of the common reasons include:
-
-| Reason | Details |
-|-|--|
-| **No scan data available on the machine** | There aren't any compliance results for this machine in Azure Resource Graph. All compliance results are written to Azure Resource Graph by the Guest Configuration extension. You can check the data in Azure Resource Graph using the sample queries in [Azure Policy Guest Configuration - sample ARG queries](../governance/policy/samples/resource-graph-samples.md?tabs=azure-cli#azure-policy-guest-configuration).|
-| **Guest Configuration extension is not installed on the machine** | The machine is missing the Guest Configuration extension, which is a prerequisite for assessing the compliance with the Azure security baseline. |
-| **System managed identity is not configured on the machine** | A system-assigned, managed identity must be deployed on the machine. |
-| **The recommendation is disabled in policy** | The policy definition that assesses the OS baseline is disabled on the scope that includes the relevant machine. |
- ## Next steps In this document, you learned how to use Defender for Cloud's guest configuration recommendations to compare the hardening of your OS with the Azure security baseline.
To learn more about these configuration settings, see:
- [Windows security baseline](../governance/policy/samples/guest-configuration-baseline-windows.md) - [Linux security baseline](../governance/policy/samples/guest-configuration-baseline-linux.md) - [Microsoft cloud security benchmark](/security/benchmark/azure/overview)
+- Check out [common questions](faq-defender-for-servers.yml) about Defender for Servers.
defender-for-cloud Asset Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/asset-inventory.md
Title: Using the asset inventory to view your security posture with Microsoft Defender for Cloud description: Learn about Microsoft Defender for Cloud's asset management experience providing full visibility over all your Defender for Cloud monitored resources. Previously updated : 01/03/2023 Last updated : 06/14/2023
Examples of using Azure Resource Graph Explorer to access and explore software i
) on vmId ```
-## FAQ - Inventory
-
-### Why aren't all of my resources shown, such as subscriptions, machines, storage accounts?
-
-The inventory view lists your Defender for Cloud connected resources from a Cloud Security Posture Management (CSPM) perspective. The filters show only the resources with active recommendations.
-
-For example, if you have access to eight subscriptions but only seven currently have recommendations, filter by **Resource type = Subscriptions** shows only the seven subscriptions with active recommendations:
--
-### Why do some of my resources show blank values in the Defender for Cloud or monitoring agent columns?
-
-Not all Defender for Cloud monitored resources require agents. For example, Defender for Cloud doesn't require agents to monitor Azure Storage accounts or PaaS resources, such as disks, Logic Apps, Data Lake Analysis, and Event Hubs.
-
-When pricing or agent monitoring isn't relevant for a resource, nothing will be shown in those columns of inventory.
-- ## Next steps This article described the asset inventory page of Microsoft Defender for Cloud.
For more information on related tools, see the following pages:
- [Azure Resource Graph (ARG)](../governance/resource-graph/index.yml)
- [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/)
+- View common questions about [asset inventory](faq-defender-for-servers.yml)
defender-for-cloud Auto Deploy Azure Monitoring Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/auto-deploy-azure-monitoring-agent.md
description: Learn how to deploy the Azure Monitor Agent on your Azure, multiclo
Previously updated : 03/01/2023 Last updated : 06/18/2023
To deploy the Azure Monitor Agent with Defender for Cloud:
1. For the **Log Analytics agent/Azure Monitor Agent**, select **Edit configuration**.
- 1. For the Auto-provisioning configuration agent type, select **Azure Monitor Agent**.
+ 1. For the Autoprovisioning configuration agent type, select **Azure Monitor Agent**.
- :::image type="content" source="media/auto-deploy-azure-monitoring-agent/select-azure-monitor-agent-auto-provision.png" alt-text="Screenshot showing selecting Azure Monitor Agent for auto-provisioning." lightbox="media/auto-deploy-azure-monitoring-agent/select-azure-monitor-agent-auto-provision.png":::
+ :::image type="content" source="media/auto-deploy-azure-monitoring-agent/select-azure-monitor-agent-auto-provision.png" alt-text="Screenshot showing selecting Azure Monitor Agent for autoprovisioning." lightbox="media/auto-deploy-azure-monitoring-agent/select-azure-monitor-agent-auto-provision.png":::
By default:

- The Azure Monitor Agent is installed on all existing machines in the selected subscription, and on all new machines created in the subscription.
- The Log Analytics agent isn't uninstalled from machines that already have it installed. You can [leave the Log Analytics agent](#impact-of-running-with-both-the-log-analytics-and-azure-monitor-agents) on the machine, or you can manually [remove the Log Analytics agent](../azure-monitor/agents/azure-monitor-agent-migration.md) if you don't require it for other protections.
- The agent sends data to the default workspace for the subscription. You can also [configure a custom workspace](#configure-custom-destination-log-analytics-workspace) to send data to.
- - You can't enable [collection of additional security events](#additional-security-events-collection).
+ - You can't enable [collection of other security events](#other-security-events-collection).
## Impact of running with both the Log Analytics and Azure Monitor Agents
Learn more about [migrating to the Azure Monitor Agent](../azure-monitor/agents/
### Configure custom destination Log Analytics workspace
-When you install the Azure Monitor Agent with auto-provisioning, you can define the destination workspace of the installed extensions. By default, the destination is the "default workspace" that Defender for Cloud creates for each region in the subscription: `defaultWorkspace-<subscriptionId>-<regionShortName>`. Defender for Cloud automatically configures the data collection rules, workspace solution, and additional extensions for that workspace.
+When you install the Azure Monitor Agent with autoprovisioning, you can define the destination workspace of the installed extensions. By default, the destination is the "default workspace" that Defender for Cloud creates for each region in the subscription: `defaultWorkspace-<subscriptionId>-<regionShortName>`. Defender for Cloud automatically configures the data collection rules, workspace solution, and other extensions for that workspace.
If you configure a custom Log Analytics workspace:

-- Defender for Cloud only configures the data collection rules and additional extensions for the custom workspace. You'll have to configure the workspace solution on the custom workspace.
-- Machines with Log Analytics agent that report to a Log Analytics workspace with the security solution are billed even when the Defender for Servers plan isn't enabled. Machines with the Azure Monitor Agent are billed only when the plan is enabled on the subscription. The security solution is still required on the workspace to work with the plans features and to be eligible for the 500-MB benefit.
+- Defender for Cloud only configures the data collection rules and other extensions for the custom workspace. You have to configure the workspace solution on the custom workspace.
+- Machines with the Log Analytics agent that report to a Log Analytics workspace with the security solution are billed even when the Defender for Servers plan isn't enabled. Machines with the Azure Monitor Agent are billed only when the plan is enabled on the subscription. The security solution is still required on the workspace to work with the plan's features and to be eligible for the 500-MB benefit.
To configure a custom destination workspace for the Azure Monitor Agent:
To configure a custom destination workspace for the Azure Monitor Agent:
### Log analytics workspace solutions
-The Azure Monitor Agent requires Log analytics workspace solutions. These solutions are automatically installed when you auto-provision the Azure Monitor Agent with the default workspace.
+The Azure Monitor Agent requires Log analytics workspace solutions. These solutions are automatically installed when you autoprovision the Azure Monitor Agent with the default workspace.
The required [Log Analytics workspace solutions](/previous-versions/azure/azure-monitor/insights/solutions) for the data that you're collecting are:

- Security posture management (CSPM) - **SecurityCenterFree solution**
- Defender for Servers Plan 2 - **Security solution**
-### Additional extensions for Defender for Cloud
+### Other extensions for Defender for Cloud
-The Azure Monitor Agent requires more extensions. The ASA extension, which supports endpoint protection recommendations, fileless attack detection, and Adaptive Application controls, is automatically installed when you auto-provision the Azure Monitor Agent.
+The Azure Monitor Agent requires more extensions. The ASA extension, which supports endpoint protection recommendations, fileless attack detection, and Adaptive Application controls, is automatically installed when you autoprovision the Azure Monitor Agent.
-### Additional security events collection
+### Other security events collection
-When you auto-provision the Log Analytics agent in Defender for Cloud, you can choose to collect additional security events to the workspace. When you auto-provision the Azure Monitor agent in Defender for Cloud, the option to collect additional security events to the workspace isn't available. Defender for Cloud doesn't rely on these security events, but they can be helpful for investigations through Microsoft Sentinel.
+When you autoprovision the Log Analytics agent in Defender for Cloud, you can choose to collect other security events to the workspace. When you autoprovision the Azure Monitor agent in Defender for Cloud, the option to collect other security events to the workspace isn't available. Defender for Cloud doesn't rely on these security events, but they can be helpful for investigations through Microsoft Sentinel.
-If you want to collect security events when you auto-provision the Azure Monitor Agent, you can create a [Data Collection Rule](../azure-monitor/essentials/data-collection-rule-overview.md) to collect the required events. Learn [how do it with PowerShell or with Azure Policy](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/how-to-configure-security-events-collection-with-azure-monitor/ba-p/3770719).
+If you want to collect security events when you autoprovision the Azure Monitor Agent, you can create a [Data Collection Rule](../azure-monitor/essentials/data-collection-rule-overview.md) to collect the required events. Learn [how to do it with PowerShell or with Azure Policy](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/how-to-configure-security-events-collection-with-azure-monitor/ba-p/3770719).
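A minimal sketch of creating such a rule with the Azure CLI, assuming you've already authored a JSON rule definition that targets the security events you need (the file and resource names below are placeholders):

```azurecli
# Create a data collection rule from a JSON definition file (placeholder names).
az monitor data-collection rule create \
  --resource-group myResourceGroup \
  --name dcr-security-events \
  --location eastus \
  --rule-file security-events-dcr.json
```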
-Like for Log Analytics workspaces, Defender for Cloud users are eligible for [500-MB of free data](plan-defender-for-servers-data-workspace.md#log-analytics-pricing-faq) daily on defined data types that include security events.
+Like for Log Analytics workspaces, Defender for Cloud users are eligible for [500 MB of free data](faq-defender-for-servers.yml) daily on defined data types that include security events.
## Next steps
defender-for-cloud Concept Agentless Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-agentless-data-collection.md
Previously updated : 09/28/2022 Last updated : 06/18/2023
The scanning environment where disks are analyzed is regional, volatile, isolate
:::image type="content" source="media/concept-agentless-data-collection/agentless-scanning-process.png" alt-text="Diagram of the process for collecting operating system data through agentless scanning.":::
-## FAQ
-
-### How does scanning affect the instances?
-
-Since the scanning process is an out-of-band analysis of snapshots, it doesn't impact the actual workloads and isn't visible by the guest operating system.
-
-### How does scanning affect the account/subscription?
-
-The scanning process has minimal footprint on your accounts and subscriptions.
-
-| Cloud provider | Changes |
-|||
-| Azure | - Adds a "VM Scanner Operator" role assignment<br>- Adds a "vmScanners" resource with the relevant configurations used to manage the scanning process |
-| AWS | - Adds role assignment<br>- Adds authorized audience to OpenIDConnect provider<br>- Snapshots are created next to the scanned volumes, in the same account, during the scan (typically for a few minutes) |
-
-### What is the scan freshness?
-
-Each VM is scanned every 24 hours.
-
-### Which permissions are used by agentless scanning?
-
-The roles and permissions used by Defender for Cloud to perform agentless scanning on your Azure and AWS environments are listed here. In Azure, these permissions are automatically added to your subscriptions when you enable agentless scanning. In AWS, these permissions are [added to the CloudFormation stack in your AWS connector](enable-vulnerability-assessment-agentless.md#agentless-vulnerability-assessment-on-aws).
--- Azure permissions - The built-in role "VM scanner operator" has read-only permissions for VM disks which are required for the snapshot process. The detailed list of permissions is:-
- - `Microsoft.Compute/disks/read`
- - `Microsoft.Compute/disks/beginGetAccess/action`
- - `Microsoft.Compute/virtualMachines/instanceView/read`
- - `Microsoft.Compute/virtualMachines/read`
- - `Microsoft.Compute/virtualMachineScaleSets/instanceView/read`
- - `Microsoft.Compute/virtualMachineScaleSets/read`
- - `Microsoft.Compute/virtualMachineScaleSets/virtualMachines/read`
- - `Microsoft.Compute/virtualMachineScaleSets/virtualMachines/instanceView/read`
--- AWS permissions - The role "VmScanner" is assigned to the scanner when you enable agentless scanning. This role has the minimal permission set to create and clean up snapshots (scoped by tag) and to verify the current state of the VM. The detailed permissions are:-
- | Attribute | Value |
- |||
- | SID | **VmScannerDeleteSnapshotAccess** |
- | Actions | ec2:DeleteSnapshot |
 | Conditions | "StringEquals":{"ec2:ResourceTag/CreatedBy":<br>"Microsoft Defender for Cloud"} |
- | Resources | arn:aws:ec2:::snapshot/ |
- | Effect | Allow |
-
- | Attribute | Value |
- |||
- | SID | **VmScannerAccess** |
- | Actions | ec2:ModifySnapshotAttribute <br> ec2:DeleteTags <br> ec2:CreateTags <br> ec2:CreateSnapshots <br> ec2:CopySnapshots <br> ec2:CreateSnapshot |
- | Conditions | None |
- | Resources | arn:aws:ec2:::instance/ <br> arn:aws:ec2:::snapshot/ <br> arn:aws:ec2:::volume/ |
- | Effect | Allow |
-
- | Attribute | Value |
- |||
- | SID | **VmScannerVerificationAccess** |
- | Actions | ec2:DescribeSnapshots <br> ec2:DescribeInstanceStatus |
- | Conditions | None |
- | Resources | * |
- | Effect | Allow |
-
- | Attribute | Value |
- |||
- | SID | **VmScannerEncryptionKeyCreation** |
- | Actions | kms:CreateKey |
- | Conditions | None |
- | Resources | * |
- | Effect | Allow |
-
- | Attribute | Value |
- |||
- | SID | **VmScannerEncryptionKeyManagement** |
- | Actions | kms:TagResource <br> kms:GetKeyRotationStatus <br> kms:PutKeyPolicy <br> kms:GetKeyPolicy <br> kms:CreateAlias <br> kms:ListResourceTags |
- | Conditions | None |
- | Resources | arn:aws:kms::${AWS::AccountId}:key/ <br> arn:aws:kms:*:${AWS::AccountId}:alias/DefenderForCloudKey |
- | Effect | Allow |
-
- | Attribute | Value |
- |||
- | SID | **VmScannerEncryptionKeyUsage** |
- | Actions | kms:GenerateDataKeyWithoutPlaintext <br> kms:DescribeKey <br> kms:RetireGrant <br> kms:CreateGrant <br> kms:ReEncryptFrom |
- | Conditions | None |
- | Resources | arn:aws:kms::${AWS::AccountId}:key/ |
- | Effect | Allow |
-
-### Which data is collected from snapshots?
-
-Agentless scanning collects data similar to the data an agent collects to perform the same analysis. Raw data, PIIs or sensitive business data isn't collected, and only metadata results are sent to Defender for Cloud.
-
-### What are the costs related to agentless scanning?
-
-Agentless scanning is included in Defender Cloud Security Posture Management (CSPM) and Defender for Servers P2 plans. No other costs will incur to Defender for Cloud when enabling it.
-
-> [!NOTE]
-> AWS charges for retention of disk snapshots. Defender for Cloud scanning process actively tries to minimize the period during which a snapshot is stored in your account (typically up to a few minutes), but you may be charged by AWS a minimal overhead cost for the disk snapshots storage.
-
-### How are VM snapshots secured?
-
-Agentless scanning protects disk snapshots according to Microsoft's highest security standards. To ensure VM snapshots are private and secure during the analysis process, some of the measures taken are:
-
-- Data is encrypted at rest and in-transit.
-- Snapshots are immediately deleted when the analysis process is complete.
-- Snapshots remain within their original AWS or Azure region. EC2 snapshots aren't copied to Azure.
-- Isolation of environments per customer account/subscription.
-- Only metadata containing scan results is sent outside the isolated scanning environment.
-- All operations are audited.

## Next steps

This article explains how agentless scanning works and how it helps you collect data from your machines.
-Learn more about how to [enable vulnerability assessment with agentless scanning](enable-vulnerability-assessment-agentless.md).
+- Learn more about how to [enable vulnerability assessment with agentless scanning](enable-vulnerability-assessment-agentless.md).
+
+- Check out Defender for Cloud's [common questions](faq-data-collection-agents.yml) for more information on agentless scanning for machines.
defender-for-cloud Continuous Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/continuous-export.md
description: Learn how to configure continuous export of security alerts and rec
Previously updated : 01/19/2023 Last updated : 06/19/2023

# Continuously export Microsoft Defender for Cloud data
-Microsoft Defender for Cloud generates detailed security alerts and recommendations. To analyze the information in these alerts and recommendations, you can export them to Azure Log Analytics, Event Hubs, or to another [SIEM, SOAR, or IT Service Management solution](export-to-siem.md). You can stream the alerts and recommendations as they're generated or define a schedule to send periodic snapshots of all of the new data.
+Microsoft Defender for Cloud generates detailed security alerts and recommendations. To analyze the information in these alerts and recommendations, you can export them to Azure Log Analytics, Event Hubs, or to another [SIEM, SOAR, or IT Service Management (ITSM) solution](export-to-siem.md). You can stream the alerts and recommendations as they're generated or define a schedule to send periodic snapshots of all of the new data.
-With **continuous export**, you fully customize *what* will be exported and *where* it will go. For example, you can configure it so that:
+With **continuous export**, you can fully customize what information to export and where it goes. For example, you can configure it so that:
- All high severity alerts are sent to an Azure event hub
- All medium or higher severity findings from vulnerability assessment scans of your SQL servers are sent to a specific Log Analytics workspace
This article describes how to configure continuous export to Log Analytics works
|-|:-|
|Release state:|General availability (GA)|
|Pricing:|Free|
-|Required roles and permissions:|<ul><li>**Security admin** or **Owner** on the resource group</li><li>Write permissions for the target resource.</li><li>If you're using the [Azure Policy 'DeployIfNotExist' policies](#configure-continuous-export-at-scale-using-the-supplied-policies), you'll also need permissions for assigning policies</li><li>To export data to Event Hubs, you'll need Write permission on the Event Hubs Policy.</li><li>To export to a Log Analytics workspace:<ul><li>if it **has the SecurityCenterFree solution**, you'll need a minimum of read permissions for the workspace solution: `Microsoft.OperationsManagement/solutions/read`</li><li>if it **doesn't have the SecurityCenterFree solution**, you'll need write permissions for the workspace solution: `Microsoft.OperationsManagement/solutions/action`</li><li>Learn more about [Azure Monitor and Log Analytics workspace solutions](/previous-versions/azure/azure-monitor/insights/solutions)</li></ul></li></ul>|
+|Required roles and permissions:|<ul><li>**Security admin** or **Owner** on the resource group</li><li>Write permissions for the target resource.</li><li>If you're using the [Azure Policy 'DeployIfNotExist' policies](#configure-continuous-export-at-scale-using-the-supplied-policies), you need the permissions that allow you to assign policies</li><li>To export data to Event Hubs, you need Write permission on the Event Hubs Policy.</li><li>To export to a Log Analytics workspace:<ul><li>if it **has the SecurityCenterFree solution**, you need a minimum of read permissions for the workspace solution: `Microsoft.OperationsManagement/solutions/read`</li><li>if it **doesn't have the SecurityCenterFree solution**, you need write permissions for the workspace solution: `Microsoft.OperationsManagement/solutions/action`</li><li>Learn more about [Azure Monitor and Log Analytics workspace solutions](/previous-versions/azure/azure-monitor/insights/solutions)</li></ul></li></ul>|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)| ## What data types can be exported?
If you're setting up a continuous export to Log Analytics or Azure Event Hubs:
1. Select the data type you'd like to export and choose from the filters on each type (for example, export only high severity alerts). 1. Select the export frequency:
- - **Streaming** ΓÇô assessments will be sent when a resourceΓÇÖs health state is updated (if no updates occur, no data will be sent).
- - **Snapshots** ΓÇô a snapshot of the current state of the selected data types will be sent once a week per subscription. To identify snapshot data, look for the field ``IsSnapshot``.
+ - **Streaming** – assessments are sent when a resource's health state is updated (if no updates occur, no data is sent).
+ - **Snapshots** – a snapshot of the current state of the selected data types is sent once a week per subscription. To identify snapshot data, look for the field ``IsSnapshot`` (a query sketch appears later in this section).
If your selection includes one of these recommendations, you can include the vulnerability assessment findings together with them: - [SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/82e20e14-edc5-4373-bfc4-f13121257c37)
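Once exported data starts arriving in the workspace, the ``IsSnapshot`` field is the simplest way to tell weekly snapshot rows apart from streamed rows. The following Python sketch queries the workspace with the `azure-monitor-query` package; it assumes exported recommendations land in a `SecurityRecommendation` table with `IsSnapshot` and `RecommendationSeverity` columns, so verify the table and column names against your own workspace before relying on it.

```python
# Hypothetical sketch: separate weekly snapshot rows from streamed rows in a
# Log Analytics workspace that receives continuous export data.
# The table and column names are assumptions; the workspace ID is a placeholder.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"  # placeholder

client = LogsQueryClient(DefaultAzureCredential())

# Count exported recommendations from the last 7 days, split by snapshot vs. streaming.
query = """
SecurityRecommendation
| summarize Count = count() by IsSnapshot, RecommendationSeverity
| order by Count desc
"""

response = client.query_workspace(WORKSPACE_ID, query, timespan=timedelta(days=7))
for table in response.tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))
```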
To deploy your continuous export configurations across your organization, use th
:::image type="content" source="./media/continuous-export/export-policy-assign.png" alt-text="Assigning the Azure Policy."::: 1. Open each tab and set the parameters as desired:
- 1. In the **Basics** tab, set the scope for the policy. To use centralized management, assign the policy to the Management Group containing the subscriptions that will use continuous export configuration.
+ 1. In the **Basics** tab, set the scope for the policy. To use centralized management, assign the policy to the Management Group containing the subscriptions that use continuous export configuration.
1. In the **Parameters** tab, set the resource group and data type details. > [!TIP] > Each parameter has a tooltip explaining the options available to you.
To view the event schemas of the exported data types, visit the [Log Analytics t
## Export data to an Azure Event Hubs or Log Analytics workspace in another tenant
-You can export data to an Azure Event Hubs or Log Analytics workspace in a different tenant, without using [Azure Lighthouse](../lighthouse/overview.md). When collecting data into a tenant, you can analyze the data from one central location.
+You can't export data to an Azure Event Hubs or Log Analytics workspace in a different tenant without using [Azure Lighthouse](../lighthouse/overview.md). By collecting data into a single tenant, you can analyze the data from one central location.
-To export data to an Azure Event Hubs or Log Analytics workspace in a different tenant:
+To export data to an Azure Event Hubs or Log Analytics workspace in a different tenant **with Azure Lighthouse**:
1. In the tenant that has the Azure Event Hubs or Log Analytics workspace, [invite a user](../active-directory/external-identities/what-is-b2b.md#easily-invite-guest-users-from-the-azure-portal) from the tenant that hosts the continuous export configuration. 1. For a Log Analytics workspace: After the user accepts the invitation to join the tenant, assign the user in the workspace tenant one of these roles: Owner, Contributor, Log Analytics Contributor, Sentinel Contributor, Monitoring Contributor
You can enable continuous export as a trusted service, so that you can send data
:::image type="content" source="media/continuous-export/export-as-trusted.png" alt-text="Screenshot that shows where the checkbox is located to select export as trusted service.":::
-You'll now need to add the relevant role assignment on the destination Event Hub.
+You need to add the relevant role assignment on the destination Event Hubs.
**To add the relevant role assignment on the destination Event Hub**:
-1. Navigate to the selected Event Hub.
+1. Navigate to the selected Event Hubs.
1. Select **Access Control** > **Add role assignment**
You'll now need to add the relevant role assignment on the destination Event Hub
1. Search for and select **Windows Azure Security Resource Provider**.
- :::image type="content" source="media/continuous-export/windows-security-resource.png" alt-text="Screenshot that shows you where to enter and search for Windows Azure Security Resource Provider." lightbox="media/continuous-export/windows-security-resource.png":::
+ :::image type="content" source="media/continuous-export/windows-security-resource.png" alt-text="Screenshot that shows you where to enter and search for Microsoft Azure Security Resource Provider." lightbox="media/continuous-export/windows-security-resource.png":::
1. Select **Review + assign**.
To view alerts and recommendations from Defender for Cloud in Azure Monitor, con
- Optionally, configure the [Action Group](../azure-monitor/alerts/action-groups.md) that you'd like to trigger. Action groups can trigger email sending, ITSM tickets, WebHooks, and more. ![Azure Monitor alert rule.](./media/continuous-export/azure-monitor-alert-rule.png)
-You'll now see new Microsoft Defender for Cloud alerts or recommendations (depending on your configured continuous export rules and the condition you defined in your Azure Monitor alert rule) in Azure Monitor alerts, with automatic triggering of an action group (if provided).
+The Microsoft Defender for Cloud alerts or recommendations appear in Azure Monitor alerts (depending on your configured continuous export rules and the condition you defined in your Azure Monitor alert rule), with automatic triggering of an action group (if provided).
## Manual one-time export of alerts and recommendations
To download a CSV report for alerts or recommendations, open the **Security aler
> [!NOTE] > These reports contain alerts and recommendations for resources from the currently selected subscriptions.
-## FAQ - Continuous export
-
-### What are the costs involved in exporting data?
-
-There's no cost for enabling a continuous export. Costs might be incurred for ingestion and retention of data in your Log Analytics workspace, depending on your configuration there.
-
-Many alerts are only provided when you've enabled Defender plans for your resources. A good way to preview the alerts you'll get in your exported data is to see the alerts shown in Defender for Cloud's pages in the Azure portal.
-
-Learn more about [Log Analytics workspace pricing](https://azure.microsoft.com/pricing/details/monitor/).
-
-Learn more about [Azure Event Hubs pricing](https://azure.microsoft.com/pricing/details/event-hubs/).
-
-For general information about Defender for Cloud pricing, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
-
-### Does the export include data about the current state of all resources?
-
-No. Continuous export is built for streaming of **events**:
--- **Alerts** received before you enabled export won't be exported.-- **Recommendations** are sent whenever a resource's compliance state changes. For example, when a resource turns from healthy to unhealthy. Therefore, as with alerts, recommendations for resources that haven't changed state since you enabled export won't be exported.-- **Secure score** per security control or subscription is sent when a security control's score changes by 0.01 or more.-- **Regulatory compliance status** is sent when the status of the resource's compliance changes.-
-### Why are recommendations sent at different intervals?
-
-Different recommendations have different compliance evaluation intervals, which can range from every few minutes to every few days. So, the amount of time that it takes for recommendations to appear in your exports varies.
-
-### Does continuous export support any business continuity or disaster recovery (BCDR) scenarios?
-
-Continuous export can be helpful in to prepare for BCDR scenarios where the target resource is experiencing an outage or other disaster. However, it's the organization's responsibility to prevent data loss by establishing backups according to the guidelines from Azure Event Hubs, Log Analytics workspace, and Logic App.
-
-Learn more in [Azure Event Hubs - Geo-disaster recovery](../event-hubs/event-hubs-geo-dr.md).
-
-### What is the minimum SAS policy permissions required when exporting data to Azure Event Hubs?
-
-**Send** is the minimum SAS policy permissions required. For step-by-step instructions, see **Step 1. Create an Event Hubs namespace and event hub with send permissions** in [this article](./export-to-splunk-or-qradar.md#step-1-create-an-event-hubs-namespace-and-event-hub-with-send-permissions).
- ## Next steps In this article, you learned how to configure continuous exports of your recommendations and alerts. You also learned how to download your alerts data as a CSV file.
For related material, see the following documentation:
- [Microsoft Sentinel documentation](../sentinel/index.yml) - [Azure Monitor documentation](../azure-monitor/index.yml) - [Export data types schemas](https://aka.ms/ASCAutomationSchemas)
+- Check out [common questions](faq-general.yml) about continuous export.
defender-for-cloud Defender For Cloud Planning And Operations Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-planning-and-operations-guide.md
This guide provides the background for how Defender for Cloud fits into your org
In the next section, you'll learn how to plan for each one of those areas and apply those recommendations based on your requirements. > [!NOTE]
-> Read [Defender for Cloud frequently asked questions (FAQ)](faq-general.yml) for a list of common questions that can also be useful during the designing and planning phase.
+> Read [Defender for Cloud common questions](faq-general.yml) for a list of common questions that can also be useful during the designing and planning phase.
## Security roles and access controls
When automatic provisioning is enabled in the security policy, the [data collect
If at some point you want to disable Data Collection, you can turn it off in the security policy. However, because the Log Analytics agent may be used by other Azure management and monitoring services, the agent won't be uninstalled automatically when you turn off data collection in Defender for Cloud. You can manually uninstall the agent if needed. > [!NOTE]
-> To find a list of supported VMs, read the [Defender for Cloud frequently asked questions (FAQ)](faq-vms.yml).
+> To find a list of supported VMs, read the [Defender for Cloud common questions](faq-vms.yml).
### Workspace
In this document, you learned how to plan for Defender for Cloud adoption. Learn
- [Managing and responding to security alerts in Defender for Cloud](managing-and-responding-alerts.md) - [Monitoring partner solutions with Defender for Cloud](./partner-integration.md) - Learn how to monitor the health status of your partner solutions.-- [Defender for Cloud FAQ](faq-general.yml) - Find frequently asked questions about using the service.
+- [Defender for Cloud common questions](faq-general.yml) - Find frequently asked questions about using the service.
- [Azure Security blog](/archive/blogs/azuresecurity/) - Read blog posts about Azure security and compliance.
defender-for-cloud Defender For Container Registries Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-container-registries-introduction.md
Title: Microsoft Defender for container registries - the benefits and features description: Learn about the benefits and features of Microsoft Defender for container registries. Previously updated : 04/07/2022 Last updated : 06/18/2023
defender-for-cloud Defender For Containers Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-enable.md
zone_pivot_groups: k8s-host Previously updated : 10/30/2022 Last updated : 06/14/2023 # Enable Microsoft Defender for Containers
A full list of supported alerts is available in the [reference table of all Defe
[!INCLUDE [Remove the profile](./includes/defender-for-containers-remove-profile.md)] ::: zone-end - ## Learn More You can check out the following blogs:
Now that you enabled Defender for Containers, you can:
- [Scan your ACR images for vulnerabilities](defender-for-containers-vulnerability-assessment-azure.md) - [Scan your Amazon AWS ECR images for vulnerabilities](defender-for-containers-vulnerability-assessment-elastic.md)
+- Check out [common questions](faq-defender-for-containers.yml) about Defender for Containers.
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
Previously updated : 06/12/2023 Last updated : 06/14/2023 # Overview of Microsoft Defender for Containers
Defender for Containers also includes host-level threat detection with over 60 K
Defender for Cloud monitors the attack surface of multicloud Kubernetes deployments based on the [MITRE ATT&CK® matrix for Containers](https://www.microsoft.com/security/blog/2021/04/29/center-for-threat-informed-defense-teams-up-with-microsoft-partners-to-build-the-attck-for-containers-matrix/), a framework developed by the [Center for Threat-Informed Defense](https://mitre-engenuity.org/ctid/) in close partnership with Microsoft.
-## FAQ - Defender for Containers
--- [What are the options to enable the new plan at scale?](#what-are-the-options-to-enable-the-new-plan-at-scale)-- [Does Microsoft Defender for Containers support AKS clusters with virtual machines scale sets?](#does-microsoft-defender-for-containers-support-aks-clusters-with-virtual-machines-scale-sets)-- [Does Microsoft Defender for Containers support AKS without scale set (default)?](#does-microsoft-defender-for-containers-support-aks-without-scale-set-default)-- [Do I need to install the Log Analytics VM extension on my AKS nodes for security protection?](#do-i-need-to-install-the-log-analytics-vm-extension-on-my-aks-nodes-for-security-protection)-
-### What are the options to enable the new plan at scale?
-
-You can use the Azure Policy `Configure Microsoft Defender for Containers to be enabled`, to enable Defender for Containers at scale. You can also see all of the options that are available to [enable Microsoft Defender for Containers](defender-for-containers-enable.md).
-
-### Does Microsoft Defender for Containers support AKS clusters with virtual machines scale sets?
-
-Yes.
-
-### Does Microsoft Defender for Containers support AKS without scale set (default)?
-
-No. Only Azure Kubernetes Service (AKS) clusters that use Virtual Machine Scale Sets for the nodes is supported.
-
-### Do I need to install the Log Analytics VM extension on my AKS nodes for security protection?
-
-No, AKS is a managed service, and manipulation of the IaaS resources isn't supported. The Log Analytics VM extension isn't needed and may result in extra charges.
- ## Learn More Learn more about Defender for Containers in the following blogs:
The release state of Defender for Containers is broken down by two dimensions: e
In this overview, you learned about the core elements of container security in Microsoft Defender for Cloud. To enable the plan, see:
-> [!div class="nextstepaction"]
-> [Enable Defender for Containers](defender-for-containers-enable.md)
+- [Enable Defender for Containers](defender-for-containers-enable.md)
+- Check out [common questions](faq-defender-for-containers.yml) about Defender for Containers.
defender-for-cloud Defender For Containers Vulnerability Assessment Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-vulnerability-assessment-azure.md
Title: Identify vulnerabilities in Azure Container Registry with Microsoft Defen
description: Learn how to use Defender for Containers to scan images in your Azure Container Registry to find vulnerabilities. Previously updated : 05/28/2023 Last updated : 06/14/2023
To provide findings for the recommendation, Defender for Cloud collects the inve
:::image type="content" source="media/defender-for-containers-vulnerability-assessment-azure/view-running-containers-vulnerability.png" alt-text="Screenshot of recommendations showing your running containers with the vulnerabilities associated with the images used by each container." lightbox="media/defender-for-containers-vulnerability-assessment-azure/view-running-containers-vulnerability.png":::
-## FAQ
-
-### How does Defender for Containers scan an image?
-
-Defender for Containers pulls the image from the registry and runs it in an isolated sandbox with the Qualys scanner. The scanner extracts a list of known vulnerabilities.
-
-Defender for Cloud filters and classifies findings from the scanner. When an image is healthy, Defender for Cloud marks it as such. Defender for Cloud generates security recommendations only for images that have issues to be resolved. By only notifying you when there are problems, Defender for Cloud reduces the potential for unwanted informational alerts.
-
-### How can I identify pull events performed by the scanner?
-
-To identify pull events performed by the scanner, do the following steps:
-
-1. Search for pull events with the UserAgent of *AzureContainerImageScanner*.
-1. Extract the identity associated with this event.
-1. Use the extracted identity to identify pull events from the scanner.
-
-### What is the difference between Not Applicable Resources and Unverified Resources?
--- **Not applicable resources** are resources for which the recommendation can't give a definitive answer. The not applicable tab includes reasons for each resource that could not be assessed.-- **Unverified resources** are resources that have been scheduled to be assessed, but have not been assessed yet.-
-### Does Microsoft share any information with Qualys in order to perform image scans?
-
-No, the Qualys scanner is hosted by Microsoft, and no customer data is shared with Qualys.
-
-### Can I get the scan results via REST API?
-
-Yes. The results are under [Sub-Assessments REST API](/rest/api/defenderforcloud/sub-assessments/list). Also, you can use Azure Resource Graph (ARG), the Kusto-like API for all of your resources: a query can fetch a specific scan.
-
-### Why is Defender for Cloud alerting me to vulnerabilities about an image that isnΓÇÖt in my registry?
-
-Some images may reuse tags from an image that was already scanned. For example, you may reassign the tag ΓÇ£LatestΓÇ¥ every time you add an image to a digest. In such cases, the ΓÇÿoldΓÇÖ image does still exist in the registry and may still be pulled by its digest. If the image has security findings and is pulled, it will expose security vulnerabilities.
-
-### Does Defender for Containers scan images in Microsoft Container Registry?
-
-Currently, Defender for Containers can scan images in Azure Container Registry (ACR) and AWS Elastic Container Registry (ECR) only.
-Docker Registry, Microsoft Artifact Registry/Microsoft Container Registry, and Microsoft Azure Red Hat OpenShift (ARO) built-in container image registry are not supported.
-Images should first be imported to ACR. Learn more about [importing container images to an Azure container registry](../container-registry/container-registry-import-images.md?tabs=azure-cli).
- ## Next steps
-Learn more about the Defender for Cloud [Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads).
+- Learn more about the Defender for Cloud [Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads).
+- Check out [common questions](faq-defender-for-containers.yml) about Defender for Containers.
defender-for-cloud Defender For Containers Vulnerability Assessment Elastic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-vulnerability-assessment-elastic.md
Title: Identify vulnerabilities in Amazon AWS Elastic Container Registry with Mi
description: Learn how to use Defender for Containers to scan images in your Amazon AWS Elastic Container Registry (ECR) to find vulnerabilities. Previously updated : 09/11/2022 Last updated : 06/14/2023
To create a rule:
:::image type="content" source="./media/remediate-vulnerability-findings-vm/modify-rule.png" alt-text="Screenshot of how to modify or delete an existing rule."::: 1. To view or delete the rule, select the ellipsis menu ("..."). -->
-## FAQs
-
-### Can I get the scan results via REST API?
-
-Yes. The results are under [Sub-Assessments REST API](/rest/api/defenderforcloud/sub-assessments/list). Also, you can use Azure Resource Graph (ARG), the Kusto-like API for all of your resources: a query can fetch a specific scan.
- ## Next steps Learn more about: - Defender for Cloud [Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads) - [Multicloud protections](multicloud.yml) for your AWS account
+- Check out [common questions](faq-defender-for-containers.yml) about Defender for Containers.
defender-for-cloud Defender For Sql Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-usage.md
Previously updated : 07/28/2022 Last updated : 06/14/2023 # Enable Microsoft Defender for SQL servers on machines
To view alerts:
[Learn more about managing and responding to alerts](managing-and-responding-alerts.md).
-## FAQ - Microsoft Defender for SQL servers on machines
-
-### If I enable this Microsoft Defender plan on my subscription, are all SQL servers on the subscription protected?
-
-No. To defend a SQL Server deployment on an Azure virtual machine, or a SQL Server running on an Azure Arc-enabled machine, Defender for Cloud requires:
--- a Log Analytics agent on the machine-- the relevant Log Analytics workspace to have the Microsoft Defender for SQL solution enabled-
-The subscription *status*, shown in the SQL server page in the Azure portal, reflects the default workspace status and applies to all connected machines. Only the SQL servers on hosts with a Log Analytics agent reporting to that workspace are protected by Defender for Cloud.
-
-### Is there a performance effect from deploying Microsoft Defender for Azure SQL on machines?
-
-The focus of **Microsoft Defender for SQL on machines** is obviously security. But we also care about your business and so we've prioritized performance to ensure the minimal effect on your SQL servers.
-
-The service has a split architecture to balance data uploading and speed with performance:
--- Some of our detectors, including an [extended events trace](/azure/azure-sql/database/xevent-db-diff-from-svr) named `SQLAdvancedThreatProtectionTraffic`, run on the machine for real-time speed advantages.-- Other detectors run in the cloud to spare the machine from heavy computational loads.-
-Lab tests of our solution showed CPU usage averaging 3% for peak slices, comparing it against benchmark loads. An analysis of our current user data shows a negligible effect on CPU and memory usage.
-
-Of course, performance always varies between environments, machines, and loads. The statements and numbers above are provided as a general guideline, not a guarantee for any individual deployment.
- ## Next steps For related information, see these resources:
For related information, see these resources:
- [Security alerts for SQL Database and Azure Synapse Analytics](alerts-reference.md#alerts-sql-db-and-warehouse) - [Set up email notifications for security alerts](configure-email-notifications.md) - [Learn more about Microsoft Sentinel](../sentinel/index.yml)
+- Check out [common questions](faq-defender-for-databases.yml) about Defender for Databases.
defender-for-cloud Defender For Storage Classic Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-classic-enable.md
Title: Enable and configure Microsoft Defender for Storage (classic) - Microsoft Defender for Cloud description: Learn about how to enable and configure Microsoft Defender for Storage (classic). Previously updated : 03/16/2023 Last updated : 06/15/2023
When you create a new Databricks workspace, you have the ability to add a tag th
The Microsoft Defender for Storage account inherits the tag of the Databricks workspace, which prevents Defender for Storage from turning on automatically.
-## FAQ - Microsoft Defender for Storage (classic) pricing
-
-### Can I switch from an existing per-transaction pricing under the Defender for Storage (classic) plan to the new per-storage account pricing under the new Defender for Storage plan?
-
-Yes, you can migrate to the per-storage account pricing under the new Defender for Storage plan in the Azure portal or using any of the supported enablement methods.
-
-### Can I return to per-transaction pricing in the Defender for Storage (classic) plan after switching to per-storage account pricing?
-
-Yes, you can [enable per-transaction pricing](#set-up-microsoft-defender-for-storage-classic) under the Defender for Storage (classic) plan to migrate back from per-storage account pricing using all enablement methods except for the Azure portal.
-
-### Will you continue supporting per-transaction pricing in the Defender for Storage (classic) plan?
-
-Yes, you can [enable per-transaction pricing](#set-up-microsoft-defender-for-storage-classic) under the Defender for Storage (classic) plan from all the supported enablement methods, except for the Azure portal.
-
-### Under the Defender for Storage (classic) per-storage account pricing, can I exclude specific storage accounts from protections?
-
-No, you can only enable per-storage account pricing under the Defender for Storage (classic) plan at the subscription level. All storage accounts in the subscriptions are protected.
-
-### How long does it take for per-storage account pricing to be enabled in the Defender for Storage (classic) plan?
-
-When you enable Microsoft Defender for Storage at the subscription level for per-storage account or per-transaction pricing under the Defender for Storage (classic) plan, it takes up to 24 hours for the plan to be enabled.
-
-### Is there any difference in the feature set of per-storage account pricing compared to the legacy per-transaction pricing in the Defender for Storage (classic) plan?
-
-No. Both per-storage account and per-transaction pricing under the Defender for Storage (classic) plan include the same features. The only difference is the pricing structure.
-
-### How can I estimate the cost for each pricing under the Defender for Storage (classic) plan?
-
-To estimate the cost according to each pricing for your environment under the Defender for Storage (classic) plan, we created a [pricing estimation workbook](https://aka.ms/dfstoragecosttool) and a PowerShell script that you can run in your environment.
- ## Next steps - Check out the [alerts for Azure Storage](alerts-reference.md#alerts-azurestorage) - Learn about the [features and benefits of Defender for Storage](defender-for-storage-introduction.md)
+- Check out [common questions](faq-defender-for-storage-classic.yml) about Defender for Storage classic.
defender-for-cloud Defender For Storage Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-classic.md
Title: Microsoft Defender for Storage (classic) - Microsoft Defender for Cloud description: Learn about the benefits and features of Microsoft Defender for Storage (classic). Previously updated : 03/16/2023 Last updated : 06/15/2023
Migrating to the new plan is a simple process, read here about [how to migrate f
You can [enable Microsoft Defender for Storage (classic)](../storage/common/azure-defender-storage-configure.md) at either the subscription level (recommended) or the resource level.
-Defender for Storage (classic) continually analyzes the telemetry stream generated by the [Azure Blob Storage](https://azure.microsoft.com/services/storage/blobs/), [Azure Files](https://azure.microsoft.com/products/storage/files/), and [Azure Data Lake Storage](https://azure.microsoft.com/products/storage/data-lake-storage) services. When potentially malicious activities are detected, security alerts are generated. These alerts are displayed in Microsoft Defender for Cloud, together with the details of the suspicious activity along with the relevant investigation steps, remediation actions, and security recommendations.
+Defender for Storage (classic) continually analyzes the data stream generated by the [Azure Blob Storage](https://azure.microsoft.com/services/storage/blobs/), [Azure Files](https://azure.microsoft.com/products/storage/files/), and [Azure Data Lake Storage](https://azure.microsoft.com/products/storage/data-lake-storage) services. When potentially malicious activities are detected, security alerts are generated. These alerts are displayed in Microsoft Defender for Cloud. Details of the suspicious activity are presented there, along with the relevant investigation steps, remediation actions, and security recommendations.
-Analyzed telemetry of Azure Blob Storage includes operation types such as `Get Blob`, `Put Blob`, `Get Container ACL`, `List Blobs`, and `Get Blob Properties`. Examples of analyzed Azure Files operation types include `Get File`, `Create File`, `List Files`, `Get File Properties`, and `Put Range`.
+Analyzed data of Azure Blob Storage includes operation types such as `Get Blob`, `Put Blob`, `Get Container ACL`, `List Blobs`, and `Get Blob Properties`. Examples of analyzed Azure Files operation types include `Get File`, `Create File`, `List Files`, `Get File Properties`, and `Put Range`.
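For orientation, those operation types map to ordinary Blob service REST calls, which is exactly what day-to-day SDK usage emits. The short Python sketch below uses the `azure-storage-blob` package; the comments indicate the approximate REST operation each call produces (the mapping is illustrative, and the connection string and names are placeholders).

```python
# Sketch: azure-storage-blob calls and the Blob service REST operations they
# roughly correspond to (the operations Defender for Storage (classic) analyzes).
# The connection string, container, and blob names are placeholders.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("demo-container")

blob = container.get_blob_client("report.txt")
blob.upload_blob(b"hello", overwrite=True)   # ~ Put Blob
props = blob.get_blob_properties()           # ~ Get Blob Properties
data = blob.download_blob().readall()        # ~ Get Blob

for item in container.list_blobs():          # ~ List Blobs
    print(item.name)
```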
-Defender for Storage (classic) doesn't access the Storage account data and has no impact on its performance.
+Defender for Storage (classic) doesn't access the Storage account data and has no effect on its performance.
You can learn more by watching this video from the Defender for Cloud in the Field video series: - [Defender for Storage (classic) in the field](episode-thirteen.md)
-For more clarification about Defender for Storage (classic), see the [commonly asked questions](#common-questionsmicrosoft-defender-for-storage-classic).
+For more clarification about Defender for Storage (classic), see the [commonly asked questions](faq-defender-for-storage-classic.yml).
## Availability
For more clarification about Defender for Storage (classic), see the [commonly a
Defender for Storage (classic) provides: -- **Azure-native security** - With 1-click enablement, Defender for Storage (classic) protects data stored in Azure Blob, Azure Files, and Data Lakes. As an Azure-native service, Defender for Storage (classic) provides centralized security across all data assets that are managed by Azure and is integrated with other Azure security services such as Microsoft Sentinel.
+- **Azure-native security** - With 1-click enablement, Defender for Storage (classic) protects data stored in Azure Blob, Azure Files, and Data Lakes. As an Azure-native service, Defender for Storage (classic) provides centralized security across all data assets that Azure manages and is integrated with other Azure security services such as Microsoft Sentinel.
- **Rich detection suite** - Powered by Microsoft Threat Intelligence, the detections in Defender for Storage (classic) cover the top storage threats such as unauthenticated access, compromised credentials, social engineering attacks, data exfiltration, privilege abuse, and malicious content.
Security alerts are triggered for the following scenarios (typically from 1-2 ho
|Type of threat | Description | ||| |**Unusual access to an account** | For example, access from a TOR exit node, suspicious IP addresses, unusual applications, unusual locations, and anonymous access without authentication. |
-|**Unusual behavior in an account** | Behavior that deviates from a learned baseline, such as a change of access permissions in an account, unusual access inspection, unusual data exploration, unusual deletion of blobs/files, or unusual data extraction. |
-|**Hash reputation based Malware detection** | Detection of known malware based on full blob/file hash. This can help detect ransomware, viruses, spyware, and other malware uploaded to an account, prevent it from entering the organization, and spreading to more users and resources. See also [Limitations of hash reputation analysis](#limitations-of-hash-reputation-analysis). |
+|**Unusual behavior in an account** | Behavior that deviates from a learned baseline. For example, a change of access permissions in an account, unusual access inspection, unusual data exploration, unusual deletion of blobs/files, or unusual data extraction. |
+|**Hash reputation based Malware detection** | Detection of known malware based on full blob/file hash. This detection can help identify ransomware, viruses, spyware, and other malware uploaded to an account, prevent it from entering the organization, and stop it from spreading to more users and resources. See also [Limitations of hash reputation analysis](#limitations-of-hash-reputation-analysis). |
|**Unusual file uploads** | Unusual cloud service packages and executable files that have been uploaded to an account. | | **Public visibility** | Potential break-in attempts by scanning containers and pulling potentially sensitive data from publicly accessible containers. | | **Phishing campaigns** | When content that's hosted on Azure Storage is identified as part of a phishing attack that's impacting Microsoft 365 users. |
Security alerts are triggered for the following scenarios (typically from 1-2 ho
> [!TIP] > For a comprehensive list of all Defender for Storage (classic) alerts, see the [alerts reference page](alerts-reference.md#alerts-azurestorage). It is essential to review the prerequisites, as certain security alerts are only accessible under the new Defender for Storage plan. The information in the reference page is beneficial for workload owners seeking to understand detectable threats and enables Security Operations Center (SOC) teams to familiarize themselves with detections prior to conducting investigations. Learn more about what's in a Defender for Cloud security alert, and how to manage your alerts in [Manage and respond to security alerts in Microsoft Defender for Cloud](managing-and-responding-alerts.md).
-Alerts include details of the incident that triggered them, and recommendations on how to investigate and remediate threats. Alerts can be exported to Microsoft Sentinel or any other third-party SIEM or any other external tool. Learn more in [Stream alerts to a SIEM, SOAR, or IT Service Management solution](export-to-siem.md).
+Alerts include details of the incident that triggered them, and recommendations on how to investigate and remediate threats. Alerts can be exported to Microsoft Sentinel or any other third-party SIEM or any other external tool. Learn more in [Stream alerts to a SIEM, SOAR, or IT Service Management solution](export-to-siem.md).
## Limitations of hash reputation analysis > [!TIP] > If you're looking to have your uploaded blobs scanned for malware in near real-time, we recommend that you upgrade to the new Defender for Storage plan. Learn more about [Malware Scanning](defender-for-storage-malware-scan.md). -- **Hash reputation isn't deep file inspection** - Microsoft Defender for Storage (classic) uses hash reputation analysis supported by [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684) to determine whether an uploaded file is suspicious. The threat protection tools donΓÇÖt scan the uploaded files; rather they analyze the telemetry generated from the Blobs Storage and Files services. Defender for Storage (classic) then compares the hashes of newly uploaded files with hashes of known viruses, trojans, spyware, and ransomware.
+- **Hash reputation isn't deep file inspection** - Microsoft Defender for Storage (classic) uses hash reputation analysis supported by [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684) to determine whether an uploaded file is suspicious. The threat protection tools don't scan the uploaded files; rather they analyze the data generated from the Blob Storage and Files services. Defender for Storage (classic) then compares the hashes of newly uploaded files with hashes of known viruses, trojans, spyware, and ransomware.
-- **Hash reputation analysis isn't supported for all files protocols and operation types** - Some, but not all, of the telemetry logs contain the hash value of the related blob or file. In some cases, the telemetry doesn't contain a hash value. As a result, some operations can't be monitored for known malware uploads. Examples of such unsupported use cases include SMB file-shares and when a blob is created using [Put Block](/rest/api/storageservices/put-block) and [Put Block List](/rest/api/storageservices/put-block-list).-
-## Common questions - Microsoft Defender for Storage (classic)
--- [Are there differences in features between the new Defender for Storage plan and the legacy Defender for Storage Classic plan?](#are-there-differences-in-features-between-the-new-defender-for-storage-plan-and-the-legacy-defender-for-storage-classic-plan)-- [How do I estimate charges at the account level?](#how-do-i-estimate-charges-at-the-account-level)-- [Can I exclude a specific Azure storage account from a protected subscription?](#can-i-exclude-a-specific-azure-storage-account-from-a-protected-subscription)-- [Can I switch from the per-transaction pricing in Defender for Storage (classic) to the new Defender for Storage plan?](#can-i-switch-from-the-per-transaction-pricing-in-defender-for-storage-classic-to-the-new-defender-for-storage-plan)-- [Can I exclude specific storage accounts from protection in the new Defender for Storage plan?](#can-i-exclude-specific-storage-accounts-from-protection-in-the-new-defender-for-storage-plan)-
-### Are there differences in features between the new Defender for Storage plan and the legacy Defender for Storage Classic plan?
-
-Yes. The new Defender for Storage plan offers additional security capabilities, such as near real-time malware scanning and sensitive data threat detection. This plan also provides a more predictable pricing structure for better control over coverage and costs. Learn more about the [benefits of migrating to the new plan](defender-for-storage-classic-migrate.md).
-
-### How do I estimate charges at the account level?
-
-To get an estimate of Defender for Storage classic costs, use the [Price Estimation Workbook](https://portal.azure.com/#blade/AppInsightsExtension/UsageNotebookBlade/ComponentId/Azure%20Security%20Center/ConfigurationId/community-Workbooks%2FAzure%20Security%20Center%2FPrice%20Estimation/Type/workbook/WorkbookTemplateName/Price%20Estimation) in the Azure portal.
-
-### Can I exclude a specific Azure Storage account from a protected subscription?
-
-Yes, you can [exclude specific storage accounts](defender-for-storage-classic-enable.md#exclude-a-storage-account-from-a-protected-subscription-in-the-per-transaction-plan) from protected subscriptions in Defender for Storage (classic).
-
-### Can I switch from the per-transaction pricing in Defender for Storage (classic) to the new Defender for Storage plan?
-
-Yes, you can move to the new Defender for Storage plan with per-storage account pricing through the Azure portal or other supported methods. This change isn't automatic, you'll need to actively make the switch. Learn about how to [migrate to the new Defender for Storage](defender-for-storage-classic-migrate.md).
-
-### Can I exclude specific storage accounts from protection in the new Defender for Storage plan?
-
-Yes, the new Defender for Storage plan with per-storage account pricing allows you to exclude and configure specific storage accounts within protected subscriptions. However, you'll need to set up the exclusion again after you migrate to the new plan. Learn about how to [migrate to the new Defender for Storage](defender-for-storage-classic-migrate.md).
+- **Hash reputation analysis isn't supported for all file protocols and operation types** - Some, but not all, of the data logs contain the hash value of the related blob or file. In some cases, the data doesn't contain a hash value. As a result, some operations can't be monitored for known malware uploads. Examples of such unsupported use cases include SMB file-shares and when a blob is created using [Put Block](/rest/api/storageservices/put-block) and [Put Block List](/rest/api/storageservices/put-block-list) (a conceptual sketch of full-file hashing follows this list).
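To make the hash reputation idea concrete, the snippet below computes the kind of full-file digest that a reputation lookup keys on. It's a conceptual illustration only (using SHA-256 and a placeholder file name), not a description of how the service itself computes or matches hashes.

```python
# Conceptual illustration only: compute a full-file hash of the kind a
# hash-reputation lookup would compare against known-malware hashes.
import hashlib
from pathlib import Path

def file_sha256(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        # Read in 1 MiB chunks so large files don't need to fit in memory.
        for chunk in iter(lambda: handle.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(file_sha256(Path("upload-candidate.bin")))  # placeholder file name
```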
## Next steps In this article, you learned about Microsoft Defender for Storage (classic).
-> [!div class="nextstepaction"]
-> [Enable Defender for Storage (classic)](defender-for-storage-classic-enable.md)
+
+- [Enable Defender for Storage (classic)](defender-for-storage-classic-enable.md)
+- Check out [common questions](faq-defender-for-storage-classic.yml) about Defender for Storage classic.
defender-for-cloud Defender For Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-introduction.md
Title: Microsoft Defender for Storage - the benefits and features description: Learn about the benefits and features of Microsoft Defender for Storage. Previously updated : 03/23/2023 Last updated : 06/15/2023
With a simple agentless setup at scale, you can [enable Defender for Storage](..
|Required roles and permissions:|For Malware Scanning and sensitive data threat detection at subscription and storage account levels, you need Owner roles (subscription owner/storage account owner) or specific roles with corresponding data actions. To enable Activity Monitoring, you need 'Security Admin' permissions. Read more about the required permissions.| |Clouds:|:::image type="icon" source="../defender-for-cloud/media/icons/yes-icon.png"::: Commercial clouds\*<br>:::image type="icon" source="../defender-for-cloud/media/icons/yes-icon.png"::: Azure Government (Only for activity monitoring)<br>:::image type="icon" source="../defender-for-cloud/media/icons/no-icon.png"::: Azure China 21Vianet<br>:::image type="icon" source="../defender-for-cloud/media/icons/no-icon.png"::: Connected AWS accounts|
-\* Azure DNS Zone is not supported for Malware Scanning and sensitive data threat detection.
+\* Azure DNS Zone isn't supported for Malware Scanning and sensitive data threat detection.
## What are the benefits of Microsoft Defender for Storage?
Defender for Storage provides the following:
- **Detection of entities without identities**: Defender for Storage detects suspicious activities generated by entities without identities that access your data using misconfigured and overly permissive Shared Access Signatures (SAS tokens) that may have leaked or compromised so that you can improve the security hygiene and reduce the risk of unauthorized access. This capability is an expansion of the Activity Monitoring security alerts suite. -- **Coverage of the top cloud storage threats**: Powered by Microsoft Threat Intelligence, behavioral models, and machine learning models to detect unusual and suspicious activities. The Defender for Storage security alerts cover the top cloud storage threats, such as sensitive data exfiltration, data corruption, and malicious file uploads.
+- **Coverage of the top cloud storage threats**: Powered by Microsoft Threat Intelligence, behavioral models, and machine learning models to detect unusual and suspicious activities. The Defender for Storage security alerts cover the top cloud storage threats, such as sensitive data exfiltration, data corruption, and malicious file uploads.
- **Comprehensive security without enabling logs**: When Microsoft Defender for Storage is enabled, it continuously analyzes both the data plane and control plane telemetry stream generated by Azure Blob Storage, Azure Files, and Azure Data Lake Storage services without the requirement of enabling diagnostic logs. -- **Frictionless enablement at scale**: Microsoft Defender for Storage is an agentless solution, easy to deploy, and enables security protection at scale using a native solution to Azure with just a single click.
+- **Frictionless enablement at scale**: Microsoft Defender for Storage is an agentless solution, easy to deploy, and enables security protection at scale using a native Azure solution.
## How does the service work? ### Activity monitoring
-Defender for Storage continuously analyzes data and control plane logs from protected storage accounts when enabled. There's no need to turn on resource logs for security benefits. Using Microsoft Threat Intelligence, it identifies suspicious signatures such as malicious IP addresses, Tor exit nodes, and potentially dangerous apps. It also builds data models and uses statistical and machine-learning methods to spot baseline activity anomalies, which may indicate malicious behavior. You'll receive security alerts for suspicious activities, but Defender for Storage ensures you won't get too many similar alerts. Activity monitoring won't affect performance, ingestion capacity, or access to your data.
+Defender for Storage continuously analyzes data and control plane logs from protected storage accounts when enabled. There's no need to turn on resource logs for security benefits. It uses Microsoft Threat Intelligence to identify suspicious signatures such as malicious IP addresses, Tor exit nodes, and potentially dangerous apps. It also builds data models and uses statistical and machine-learning methods to spot baseline activity anomalies, which may indicate malicious behavior. You receive security alerts for suspicious activities, but Defender for Storage ensures you won't get too many similar alerts. Activity monitoring won't affect performance, ingestion capacity, or access to your data.
:::image type="content" source="media/defender-for-storage-introduction/activity-monitoring.png" alt-text="Diagram showing how activity monitoring identifies threats to your data."::: ### Malware Scanning (powered by Microsoft Defender Antivirus)
-Malware Scanning in Defender for Storage helps protect storage accounts from malicious content by performing a full malware scan on uploaded content in near real time, leveraging Microsoft Defender Antivirus capabilities. It is designed to help fulfill security and compliance requirements to handle untrusted content. Every file type is scanned, and scan results are returned for every file. The Malware Scanning capability is an agentless SaaS solution that allows simple setup at scale, with zero maintenance, and supports automating response at scale.
+Malware Scanning in Defender for Storage helps protect storage accounts from malicious content by performing a full malware scan on uploaded content in near real time, applying Microsoft Defender Antivirus capabilities. It's designed to help fulfill security and compliance requirements to handle untrusted content. Every file type is scanned, and scan results are returned for every file. The Malware Scanning capability is an agentless SaaS solution that allows simple setup at scale, with zero maintenance, and supports automating response at scale.
This is a configurable feature in the new Defender for Storage plan that is priced per GB scanned. Learn more about [Malware Scanning](defender-for-storage-malware-scan.md).
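Scan results for uploaded blobs can also be consumed programmatically. The sketch below assumes the verdict is surfaced as a blob index tag once scanning completes; the tag key shown is an assumption, so check the Malware Scanning documentation for the exact key and for the other result channels (such as Event Grid events) before building on it.

```python
# Hypothetical sketch: upload a blob and read back its index tags, where
# Malware Scanning is assumed to write a scan-result tag once scanning completes.
# The tag key name and the connection string are assumptions/placeholders.
import time

from azure.storage.blob import BlobClient

blob = BlobClient.from_connection_string(
    "<connection-string>", container_name="uploads", blob_name="invoice.pdf"
)
with open("invoice.pdf", "rb") as data:
    blob.upload_blob(data, overwrite=True)

time.sleep(60)  # allow the near real-time scan to finish (duration varies)
tags = blob.get_blob_tags() or {}
print(tags.get("Malware Scanning scan result"))  # assumed tag key
```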
The 'sensitive data threat detection' capability enables security teams to e
'Sensitive data threat detection' is powered by the "Sensitive Data Discovery" engine, an agentless engine that uses a smart sampling method to find resources with sensitive data. The service is integrated with Microsoft Purview's sensitive information types (SITs) and classification labels, allowing seamless inheritance of your organization's sensitivity settings.
-This is a configurable feature in the new Defender for Storage plan. You can choose to enable or disable it with no additional cost.
+This is a configurable feature in the new Defender for Storage plan. You can choose to enable or disable it at no extra cost.
For more details, visit [Sensitive data threat detection](defender-for-storage-data-sensitivity.md). ## Pricing and cost controls ### Per storage account pricing
-The new Microsoft Defender for Storage plan has predictable pricing based on the number of storage accounts you protect. With the option to enable at the subscription or resource level and exclude specific storage accounts from protected subscriptions, you have increased flexibility to manage your security coverage. The pricing plan simplifies the cost calculation process, allowing you to scale easily as your needs change. Additional charges may apply to storage accounts with high-volume transactions.
+The new Microsoft Defender for Storage plan has predictable pricing based on the number of storage accounts you protect. With the option to enable at the subscription or resource level and exclude specific storage accounts from protected subscriptions, you have increased flexibility to manage your security coverage. The pricing plan simplifies the cost calculation process, allowing you to scale easily as your needs change. Other charges may apply to storage accounts with high-volume transactions.
### Malware Scanning - Billing per GB, monthly capping, and configuration Malware Scanning is charged on a per-gigabyte basis for scanned data. To ensure cost predictability, a monthly cap can be established for each storage account's scanned data volume, per-month basis. This cap can be set subscription-wide, affecting all storage accounts within the subscription, or applied to individual storage accounts. Under protected subscriptions, you can configure specific storage accounts with different limits.
-By default, the limit is set to 5,000GB per month per storage account. Once this threshold is exceeded, scanning will cease for the remaining blobs, with a 20GB confidence interval. For configuration details, refer to [configure Defender for Storage](../storage/common/azure-defender-storage-configure.md).
+By default, the limit is set to 5,000 GB per month per storage account. Once this threshold is exceeded, scanning will cease for the remaining blobs, with a 20-GB confidence interval. For configuration details, refer to [configure Defender for Storage](../storage/common/azure-defender-storage-configure.md).
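If you manage the plan through the ARM API rather than the portal, the cap is part of the `Microsoft.Security/pricings` payload for the subscription. The sketch below shows what such a request might look like; the sub-plan name, extension names, property names, and api-version are assumptions recalled from memory and must be verified against the current pricing API reference.

```python
# Hypothetical sketch: enable the new Defender for Storage plan at subscription
# level and set the Malware Scanning monthly cap via Microsoft.Security/pricings.
# Sub-plan name, extension names, property names, and api-version are assumptions;
# the subscription ID and token are placeholders.
import requests

SUBSCRIPTION = "<subscription-id>"
url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION}"
    "/providers/Microsoft.Security/pricings/StorageAccounts"
    "?api-version=2023-01-01"  # assumed api-version
)

payload = {
    "properties": {
        "pricingTier": "Standard",
        "subPlan": "DefenderForStorageV2",  # assumed sub-plan name for the new plan
        "extensions": [
            {
                "name": "OnUploadMalwareScanning",  # assumed extension name
                "isEnabled": "True",
                "additionalExtensionProperties": {
                    "CapGBPerMonthPerStorageAccount": "5000"
                },
            },
            {"name": "SensitiveDataDiscovery", "isEnabled": "True"},
        ],
    }
}

resp = requests.put(url, json=payload, headers={"Authorization": "Bearer <token>"})
resp.raise_for_status()
print(resp.json()["properties"])
```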
### Enablement at scale with granular controls
Defender for Storage offers two capabilities to detect malicious content uploade
### Malware Scanning (paid add-on feature available only on the new plan)
-**Malware Scanning** leverages Microsoft Defender Antivirus (MDAV) to scan blobs uploaded to Blob storage, providing a comprehensive analysis that includes deep file scans and hash reputation analysis. This feature provides an enhanced level of detection against potential threats.
+**Malware Scanning** uses Microsoft Defender Antivirus (MDAV) to scan blobs uploaded to Blob storage, providing a comprehensive analysis that includes deep file scans and hash reputation analysis. This feature provides an enhanced level of detection against potential threats.
### Hash reputation analysis (available in all plans)
-**Hash reputation analysis** detects potential malware in Blob storage and Azure Files by comparing the hash values of newly uploaded blobs/files against those of known malware by [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684). Not all file protocols and operation types are supported with this capability, leading to some operations not being monitored for potential malware uploads. Unsupported use cases include SMB file shares and when a blob is created using [Put Block](/rest/api/storageservices/put-block) and [Put Block List](/rest/api/storageservices/put-block-list).
+**Hash reputation analysis** detects potential malware in Blob storage and Azure Files by comparing the hash values of newly uploaded blobs/files against those of known malware by [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684). Not all file protocols and operation types are supported with this capability, leading to some operations not being monitored for potential malware uploads. Unsupported use cases include SMB file shares and when a blob is created using [Put Block](/rest/api/storageservices/put-block) and [Put Block List](/rest/api/storageservices/put-block-list).
In summary, Malware Scanning, which is only available on the new plan for Blob storage, offers a more comprehensive approach to malware detection by analyzing the full content of files and incorporating hash reputation analysis in its scanning methodology.
-## Common questions
-
-### Is it possible to enable Defender for Storage on a resource level?
-
-Yes, it's possible to enable Defender for Storage at the resource level and set up Malware Scanning and Sensitivity Scanning accordingly. Keep in mind that enabling it at the subscription level is the recommended approach, as it will automatically protect all new storage accounts.
-
-### Can I exclude certain storage accounts from protection?
-
-Yes, you can exclude storage accounts from protection.
-
-### How long does it take for subscription-level enablement to take effect?
-
-Enabling Defender for Storage at the subscription level may take up to 24 hours to be fully enabled across all storage accounts.
-
-### Is there a difference in features between the new and Defender for Storage (classic)?
-
-Yes, there is a difference in the capabilities of the two plans. New and future security capabilities will only be available in the new Defender for Storage plan. If you want to access these new capabilities, you'll need to enable the new plan.
-
-### Will the Defender for Storage (classic) continue to be supported?
-
-The Defender for Storage (classic) will still continue to be supported for three years after the release of the new Defender for Storage to general availability (GA).
-
-### Can I switch back to the Defender for Storage (classic)?
-
-Yes, you can use the REST API to return to the Defender for Storage (classic) plan.
-
-If you want to switch back to the Defender for Storage (classic) plan, you need to do two things. First, disable the new Defender for Storage plan that is enabled now. Second, check if there are any policies that can re-enable the new plan and turn them off too. The two Azure built-in policies enabling the new plan are **Configure Microsoft Defender for Storage to be enabled** and **Configure basic Microsoft Defender for Storage to be enabled (Activity Monitoring only).**
-
-### How can I calculate the cost of each plan?
-
-To estimate the cost of Defender for Storage, we've provided a pricing estimation workbook and a PowerShell script that you can run in your environment.
- ## Next steps In this article, you learned about Microsoft Defender for Storage.
-> [!div class="nextstepaction"]
-> [Enable Defender for Storage](enable-enhanced-security.md)
+- [Enable Defender for Storage](enable-enhanced-security.md)
+- Check out [common questions](faq-defender-for-storage.yml) about Defender for Storage.
defender-for-cloud Deploy Vulnerability Assessment Byol Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-byol-vm.md
Example (this example doesn't include valid license details):
-publicKey 'MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCOiOLXjOywMfLZIBGPZLwSocf1Q64GASLK9OHFEmanBl1nkJhZDrZ4YD5lM98fThYbAx1Rde2iYV1ze/wDlX4cIvFAyXuN7HbdkeIlBl6vWXEBZpUU17bOdJOUGolzEzNBhtxi/elEZLghq9Chmah82me/okGMIhJJsCiTtglVQIDAQAB' ```
-## FAQ - BYOL vulnerability scanner
-
-- [If I deploy a Qualys agent, what communications settings are required?](#if-i-deploy-a-qualys-agent-what-communications-settings-are-required)
-- [Why do I have to specify a resource group when configuring a BYOL solution?](#why-do-i-have-to-specify-a-resource-group-when-configuring-a-byol-solution)
-
-### If I deploy a Qualys agent, what communications settings are required?
-
-The Qualys Cloud Agent is designed to communicate with Qualys's SOC at regular intervals for updates, and to perform the various operations required for product functionality. To allow the agent to communicate seamlessly with the SOC, configure your network security to allow inbound and outbound traffic to the Qualys SOC CIDR and URLs.
-
-There are multiple Qualys platforms across various geographic locations. The SOC CIDR and URLs differ depending on the host platform of your Qualys subscription. To identify your Qualys host platform, use this page <https://www.qualys.com/platform-identification/>.
-
-### Why do I have to specify a resource group when configuring a BYOL solution?
-
-When you set up your solution, you must choose a resource group to attach it to. The solution isn't an Azure resource, so it won't be included in the list of the resource group's resources. Nevertheless, it's attached to that resource group. If you later delete the resource group, the BYOL solution is unavailable.
- ## Next steps
-> [!div class="nextstepaction"]
-> [Remediate the findings from your vulnerability assessment solution](remediate-vulnerability-findings-vm.md)
+- [Remediate the findings from your vulnerability assessment solution](remediate-vulnerability-findings-vm.md)
+- Check out these [common questions](faq-vulnerability-assessments.yml) about vulnerability assessment.
Defender for Cloud also offers vulnerability analysis for your:
defender-for-cloud Deploy Vulnerability Assessment Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-vm.md
Previously updated : 07/12/2022 Last updated : 06/12/2023 + # Defender for Cloud's integrated Qualys vulnerability scanner for Azure and hybrid machines A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Defender for Cloud regularly checks your connected machines to ensure they're running vulnerability assessment tools.
The vulnerability scanner extension works as follows:
Your machines will appear in one or more of the following groups:
 - **Healthy resources** – Defender for Cloud has detected a vulnerability assessment solution running on these machines.
-   - **Unhealthy resources** – A vulnerability scanner extension can be deployed to these machines.
-   - **Not applicable resources** – these machines [aren't supported for the vulnerability scanner extension](#why-does-my-machine-show-as-not-applicable-in-the-recommendation).
+   - **Unhealthy resources** – A vulnerability scanner extension can be deployed to these machines.
+   - **Not applicable resources** – [these machines aren't supported for the vulnerability scanner extension](faq-vulnerability-assessments.yml).
1. From the list of unhealthy machines, select the ones to receive a vulnerability assessment solution and select **Remediate**.
The following commands trigger an on-demand scan:
- **Windows machines**: ```REG ADD HKLM\SOFTWARE\Qualys\QualysAgent\ScanOnDemand\Vulnerability /v "ScanOnDemand" /t REG_DWORD /d "1" /f``` - **Linux machines**: ```sudo /usr/local/qualys/cloud-agent/bin/cloudagentctl.sh action=demand type=vm```
-## FAQ
-
-- [Are there any additional charges for the Qualys license?](#are-there-any-additional-charges-for-the-qualys-license)
-- [What prerequisites and permissions are required to install the Qualys extension?](#what-prerequisites-and-permissions-are-required-to-install-the-qualys-extension)
-- [Can I remove the Defender for Cloud Qualys extension?](#can-i-remove-the-defender-for-cloud-qualys-extension)
-- [How can I check that the Qualys extension is properly installed?](#how-can-i-check-that-the-qualys-extension-is-properly-installed)
-- [How does the extension get updated?](#how-does-the-extension-get-updated)
-- [Why does my machine show as "not applicable" in the recommendation?](#why-does-my-machine-show-as-not-applicable-in-the-recommendation)
-- [Can the built-in vulnerability scanner find vulnerabilities on the VMs network?](#can-the-built-in-vulnerability-scanner-find-vulnerabilities-on-the-vms-network)
-- [Does the scanner integrate with my existing Qualys console?](#does-the-scanner-integrate-with-my-existing-qualys-console)
-- [How quickly will the scanner identify newly disclosed critical vulnerabilities?](#how-quickly-will-the-scanner-identify-newly-disclosed-critical-vulnerabilities)
-
-### Are there any additional charges for the Qualys license?
-
-No. The built-in scanner is free to all Microsoft Defender for Servers users. The recommendation deploys the scanner with its licensing and configuration information. No additional licenses are required.
-
-### What prerequisites and permissions are required to install the Qualys extension?
-
-You'll need write permissions for any machine on which you want to deploy the extension.
-
-The Microsoft Defender for Cloud vulnerability assessment extension (powered by Qualys), like other extensions, runs on top of the Azure Virtual Machine agent. So it runs as Local Host on Windows, and Root on Linux.
-
-During setup, Defender for Cloud checks to ensure that the machine can communicate over HTTPS (default port 443) with the following two Qualys data centers:
-
-- `https://qagpublic.qg3.apps.qualys.com` - Qualys' US data center
-- `https://qagpublic.qg2.apps.qualys.eu` - Qualys' European data center
-
-The extension doesn't currently accept any proxy configuration details. However, you can configure the Qualys agent's proxy settings locally in the Virtual Machine. Please follow the guidance in the Qualys documentation:
-
-- [Windows proxy configuration](https://qualysguard.qg2.apps.qualys.com/portal-help/en/ca/agents/win_proxy.htm)
-- [Linux proxy configuration](https://qualysguard.qg2.apps.qualys.com/portal-help/en/ca/agents/linux_proxy.htm)
-
-### Can I remove the Defender for Cloud Qualys extension?
-
-If you want to remove the extension from a machine, you can do it manually or with any of your programmatic tools.
-
-You'll need the following details:
-
-- On Linux, the extension is called "LinuxAgent.AzureSecurityCenter" and the publisher name is "Qualys".
-- On Windows, the extension is called "WindowsAgent.AzureSecurityCenter" and the provider name is "Qualys".
-
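For example, with the Azure CLI the removal looks roughly like this (resource group and VM names are placeholders):

```bash
# Sketch: remove the integrated Qualys extension from an Azure VM.
az vm extension delete \
  --resource-group <resource-group> \
  --vm-name <vm-name> \
  --name WindowsAgent.AzureSecurityCenter   # use LinuxAgent.AzureSecurityCenter on Linux
```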
-### How can I check that the Qualys extension is properly installed?
-
-You can use the `curl` command to check the connectivity to the relevant Qualys URL. A valid response would be: `{"code":404,"message":"HTTP 404 Not Found"}`
-
-In addition, make sure that the DNS resolution for these URLs is successful and that everything is [valid with the certificate authority](https://success.qualys.com/support/s/article/000001856) that is used.
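As a concrete sketch (the US data center URL is used here; substitute the endpoint for your region):

```bash
# Sketch: check HTTPS connectivity to the Qualys endpoint; an HTTP 404 JSON body
# such as {"code":404,"message":"HTTP 404 Not Found"} means the endpoint is reachable.
curl -v https://qagpublic.qg3.apps.qualys.com

# Also confirm that DNS resolution for the endpoint succeeds.
nslookup qagpublic.qg3.apps.qualys.com
```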
-
-### How does the extension get updated?
-
-Like the Microsoft Defender for Cloud agent itself and all other Azure extensions, minor updates of the Qualys scanner might automatically happen in the background. All agents and extensions are tested extensively before being automatically deployed.
-
-### Why does my machine show as "not applicable" in the recommendation?
-
-If you have machines in the **not applicable** resources group, Defender for Cloud can't deploy the vulnerability scanner extension on those machines because:
-
-- The vulnerability scanner included with Microsoft Defender for Cloud is only available for machines protected by [Microsoft Defender for Servers](defender-for-servers-introduction.md).
-
-- It's a PaaS resource, such as an image in an AKS cluster or part of a virtual machine scale set.
-
-- It's not running one of the supported operating systems:
-
- | **Vendor** | **OS** | **Supported versions** |
- ||--|--|
- | Microsoft | Windows | All |
- | Amazon | Amazon Linux | 2015.09-2018.03 |
- | Amazon | Amazon Linux 2 | 2017.03-2.0.2021 |
- | Red Hat | Enterprise Linux | 5.4+, 6, 7-7.9, 8-8.6, 9 beta |
- | Red Hat | CentOS | 5.4-5.11, 6-6.7, 7-7.8, 8-8.5 |
- | Red Hat | Fedora | 22-33 |
- | SUSE | Linux Enterprise Server (SLES) | 11, 12, 15, 15 SP1, 15 SP2, 15 SP3 |
- | SUSE | openSUSE | 12, 13, 15.0-15.3 |
- | SUSE | Leap | 42.1 |
- | Oracle | Enterprise Linux | 5.11, 6, 7-7.9, 8-8.5 |
- | Debian | Debian | 7.x-11.x |
- | Ubuntu | Ubuntu | 12.04 LTS, 14.04 LTS, 15.x, 16.04 LTS, 18.04 LTS, 19.10, 20.04 LTS |
-
-### Can the built-in vulnerability scanner find vulnerabilities on the VMs network?
-
-No. The scanner runs on your machine to look for vulnerabilities of the machine itself, not for your network.
-
-### Does the scanner integrate with my existing Qualys console?
-
-The Defender for Cloud extension is a separate tool from your existing Qualys scanner. Licensing restrictions mean that it can only be used within Microsoft Defender for Cloud.
-
-### How quickly will the scanner identify newly disclosed critical vulnerabilities?
-
-Within 48 hrs of the disclosure of a critical vulnerability, Qualys incorporates the information into their processing and can identify affected machines.
-
## Next steps

> [!div class="nextstepaction"]
defender-for-cloud Devops Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/devops-faq.md
- Title: Defender for DevOps FAQ
-description: If you're having issues with Defender for DevOps perhaps, you can solve it with these frequently asked questions.
- Previously updated : 04/18/2023--
-# Defender for DevOps frequently asked questions (FAQ)
-
-If you're having issues with Defender for DevOps, these frequently asked questions may help you resolve them.
-
-## FAQ
-
-- [Scan specific folders for secrets in ADO repos with CredScan](#scan-specific-folders-for-secrets-in-ado-repos-with-credscan)
-- [I'm getting an error while trying to connect](#im-getting-an-error-while-trying-to-connect)
-- [Why can't I find my repository](#why-cant-i-find-my-repository)
-- [Secret scan didn't run on my code](#secret-scan-didnt-run-on-my-code)
-- [I don't see generated SARIF file in the path I chose to drop it](#i-dont-see-generated-sarif-file-in-the-path-i-chose-to-drop-it)
-- [I don't see the results for my ADO projects in Microsoft Defender for Cloud](#i-dont-see-the-results-for-my-ado-projects-in-microsoft-defender-for-cloud)
-- [Why is my Azure DevOps repository not refreshing to healthy?](#why-is-my-azure-devops-repository-not-refreshing-to-healthy)
-- [I don't see Recommendations for findings](#i-dont-see-recommendations-for-findings)
-- [What information does Defender for DevOps store about me and my enterprise, and where is the data stored and processed?](#what-information-does-defender-for-devops-store-about-me-and-my-enterprise-and-where-is-the-data-stored-and-processed)
-- [Why are Delete source code and Write Code permissions required for Azure DevOps?](#why-are-delete-source-and-write-code-permissions-required-for-azure-devops)
-- [Is Exemptions capability available and tracked for app sec vulnerability management](#is-exemptions-capability-available-and-tracked-for-app-sec-vulnerability-management)
-- [Is continuous, automatic scanning available?](#is-continuous-automatic-scanning-available)
-- [Is it possible to block the developers committing code with exposed secrets](#is-it-possible-to-block-the-developers-committing-code-with-exposed-secrets)
-- [I'm not able to configure Pull Request Annotations](#im-not-able-to-configure-pull-request-annotations)
-- [What programming languages are supported by Defender for DevOps?](#what-programming-languages-are-supported-by-defender-for-devops)
-- [I'm getting an error that informs me that there's no CLI tool](#im-getting-an-error-that-informs-me-that-theres-no-cli-tool)
-- [Can I migrate the connector to a different region?](#can-i-migrate-the-connector-to-a-different-region)
-
-### Scan specific folders for secrets in ADO repos with CredScan
-If you want to scan specific folders in Azure DevOps repos with CredScan, you can use:
-env:
- credscan_targetdirectory: 'NameOfFolderToScanForSecrets/'
-
-A full ADO YAML file for a pipeline that does CredScan scanning for secrets on a specific folder could look like this:
-```yml
-trigger:
- branches:
- include:
- - main
- - master
-
-pool:
- vmImage: "windows-latest"
-
-steps:
- - task: MicrosoftSecurityDevOps@1
- displayName: "Microsoft Security DevOps"
- inputs:
- categories: 'secrets'
- break: false
- env:
- credscan_targetdirectory: 'NameOfFolderToScanForSecrets/'
-```
-
-### I'm getting an error while trying to connect
-
-When you select the *Authorize* button, the account that you're logged in with is used. That account can have the same email but may have a different tenant. Make sure you have the right account/tenant combination selected in the popup consent screen and Visual Studio.
-
-You can [check which account is signed in](https://app.vssps.visualstudio.com/profile/view).
-
-### Why can't I find my repository
-
-The Azure DevOps service only supports `TfsGit`.
-
-Ensure that you've [onboarded your repositories](./quickstart-onboard-devops.md?branch=main) to Microsoft Defender for Cloud. If you still can't see your repository, ensure that you're signed in with the correct Azure DevOps organization user account. Your Azure subscription and Azure DevOps Organization need to be in the same tenant. If the user for the connector is wrong, you need to delete the previously created connector, sign in with the correct user account and re-create the connector.
-
-### Secret scan didn't run on my code
-
-To ensure your code is scanned for secrets, make sure you've [onboarded your repositories](./quickstart-onboard-devops.md?branch=main) to Defender for Cloud.
-
-In addition to onboarding resources, you must have the [Microsoft Security DevOps (MSDO) Azure DevOps extension](./azure-devops-extension.md?branch=main) configured for your pipelines. The extension runs secret scan along with other scanners.
-
-If no secrets are identified through scans, the total exposed secret for the resource shows `Healthy` in Defender for Cloud.
-
-If secret scan isn't enabled (meaning MSDO isn't configured for your pipeline) or a scan isn't performed for at least 14 days, the resource shows as `N/A` in Defender for Cloud.
-
-### I don't see generated SARIF file in the path I chose to drop it
-
-If you don't see the SARIF file in the expected path, you may have chosen a different drop path than `CodeAnalysisLogs/msdo.sarif`. Currently, you should drop your SARIF files to `CodeAnalysisLogs/msdo.sarif`.
-
-### I don't see the results for my ADO projects in Microsoft Defender for Cloud
-
-When you use the classic pipeline configuration, make sure you don't change the artifact name. Changing it can prevent the results for your project from appearing.
-
-Currently, OSS vulnerability findings are only available for GitHub repositories. Azure DevOps repositories will have the total exposed secrets, IaC misconfigurations, and code security findings available. It will show `N/A` for OSS vulnerabilities. You can learn more about how to [Review your findings](defender-for-devops-introduction.md).
-
-### Why is my Azure DevOps repository not refreshing to healthy?
-
-For a previously unhealthy scan result to be healthy again, updated healthy scan results need to be from the same build definition as the one that generated the findings in the first place. A common scenario where this issue occurs is when testing with different pipelines. For results to refresh appropriately, scan results need to be for the same pipeline(s) and branch(es).
-
-If no scan is performed for 14 days, the scan results revert to `N/A`.
-
-### I don't see Recommendations for findings
-
-Ensure that you've onboarded the project with the connector and that your repository (that build is for), is onboarded to Microsoft Defender for Cloud. You can learn how to [onboard your DevOps repository](./quickstart-onboard-devops.md?branch=main) to Defender for Cloud.
-
-You must have more than a [stakeholder license](https://azure.microsoft.com/pricing/details/devops/azure-devops-services/) to the repos to onboard them, and you need to be at least the security reader on the subscription where the connector is created. You can confirm if you've onboarded the repositories by seeing them in the inventory list in Microsoft Defender for Cloud.
-
-### What information does Defender for DevOps store about me and my enterprise, and where is the data stored and processed?
-
-Defender for DevOps connects to your source code management system, for example, Azure DevOps, GitHub, to provide a central console for your DevOps resources and security posture. Defender for DevOps processes and stores the following information:
-
-- Metadata on your connected source code management systems and associated repositories. This data includes user, organizational, and authentication information.
-
-- Scan results for recommendations and assessments results and details.
-
-Data is stored within the region your connector is created in and flows into [Microsoft Defender for Cloud](defender-for-cloud-introduction.md). Consider any data residency requirements when you choose the region in which to create your DevOps connector.
-
-Defender for DevOps currently doesn't process or store your code, build, and audit logs.
-
-Learn more about [Microsoft Privacy Statement](https://go.microsoft.com/fwLink/?LinkID=521839&amp;clcid=0x9).
-
-### Why are Delete Source and Write Code permissions required for Azure DevOps?
-
-Azure DevOps doesn't have the necessary granularity for its permissions. These permissions are required for some Defender for DevOps features, such as pull request annotations, to work.
-
-### Is Exemptions capability available and tracked for app sec vulnerability management?
-
-Exemptions aren't available for Defender for DevOps within Microsoft Defender for Cloud.
-
-### Is continuous, automatic scanning available?
-
-Currently scanning occurs at build time.
-
-### Is it possible to block the developers committing code with exposed secrets?
-
-The ability to block developers from committing code with exposed secrets isn't currently available.
-
-### I'm not able to configure Pull Request Annotations
-
-Make sure you have write (owner/contributor) access to the subscription. If you don't have this type of access today, you can get it through [activating an Azure Active Directory role in PIM](/azure/active-directory/privileged-identity-management/pim-how-to-activate-role).
-
-### What programming languages are supported by Defender for DevOps?
-
-The following languages are supported by Defender for DevOps:
-
-- Python
-- JavaScript
-- TypeScript
-
-### I'm getting an error that informs me that there's no CLI tool
-
-When you run the pipeline in Azure DevOps, you receive the following error:
-`no such file or directory, scandir 'D:\a\_msdo\versions\microsoft.security.devops.cli'`.
--
-This error can be seen in the extensions job as well.
--
-This error occurs if the pipeline's YAML file is missing the `dotnet6` dependency. .NET 6 is required for the Microsoft Security DevOps extension to run. Add a task that installs it to your YAML file to eliminate the error.
-
-You can learn more about [Microsoft Security DevOps](https://marketplace.visualstudio.com/items?itemName=ms-securitydevops.microsoft-security-devops-azdevops).
-
-### Can I migrate the connector to a different region?
-
-For example, can I migrate the connector from the Central US region to the West Europe region?
-
-We don't support automatic migration for the Defender for DevOps connectors from one region to another at this time.
-
-If you want to move a connector's location, for example a GitHub or Azure DevOps connector, to a different region than the one where the connector was created, the recommendation is to delete the existing connector and then create another connector in the new region.
-
-## Next steps
--- [Overview of Defender for DevOps](defender-for-devops-introduction.md)
defender-for-cloud Endpoint Protection Recommendations Technical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/endpoint-protection-recommendations-technical.md
description: How the endpoint protection solutions are discovered and identified
Previously updated : 03/08/2022 Last updated : 06/15/2023 # Endpoint protection assessment and recommendations in Microsoft Defender for Cloud
Microsoft Antimalware extension logs are available at:
### Support
-For more help, contact the Azure experts on the [MSDN Azure and Stack Overflow forums](https://azure.microsoft.com/support/forums/). Or file an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/) and select Get support. For information about using Azure Support, read the [Microsoft Azure support FAQ](https://azure.microsoft.com/support/faq/).
+For more help, contact the Azure experts on the [MSDN Azure and Stack Overflow forums](https://azure.microsoft.com/support/forums/). Or file an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/) and select Get support. For information about using Azure Support, read the [Microsoft Azure support common questions](https://azure.microsoft.com/support/faq/).
defender-for-cloud Episode Thirty Three https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-thirty-three.md
Last updated 06/13/2023
<br> <iframe src="https://aka.ms/docs/player?id=abceb157-b850-42f0-8b83-92cbef16c893" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe> -- [01:48](/shows/mdc-in-the-field/agentless-container-posture-management#time=01m48s) - Overview of the Defender CSPM
+- [01:48](/shows/mdc-in-the-field/agentless-container-posture-management#time=01m48s) - Overview of Defender CSPM
- [03:06](/shows/mdc-in-the-field/agentless-container-posture-management#time=03m06s) - What container capabilities are included in Defender CSPM
- [05:00](/shows/mdc-in-the-field/agentless-container-posture-management#time=05m00s) - How to find Container's insights using Attack Path
- [06:14](/shows/mdc-in-the-field/agentless-container-posture-management#time=06m14s) - How agentless container posture management works
Last updated 06/13/2023
## Recommended resources

-- Learn more about [Defender for APIs](concept-agentless-containers.md)
+- Learn more about [Agentless Container Posture](concept-agentless-containers.md)
- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
- Learn more about [Microsoft Security](https://msft.it/6002T9HQY)
defender-for-cloud Episode Thirty Two https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-thirty-two.md
Last updated 06/08/2023
## Next steps > [!div class="nextstepaction"]
-> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
+> [Agentless Container Posture Management in Defender for Cloud](episode-thirty-three.md)
defender-for-cloud Exempt Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/exempt-resource.md
Learn more in the following pages:
- [How to create queries with Azure Resource Graph Explorer](../governance/resource-graph/first-query-portal.md) - [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/)
-## FAQ - Exemption rules
-
-- [What happens when one recommendation is in multiple policy initiatives?](#what-happens-when-one-recommendation-is-in-multiple-policy-initiatives)
-- [Are there any recommendations that don't support exemption?](#are-there-any-recommendations-that-dont-support-exemption)
-
-### What happens when one recommendation is in multiple policy initiatives?
-
-Sometimes, a security recommendation appears in more than one policy initiative. If you've got multiple instances of the same recommendation assigned to the same subscription, and you create an exemption for the recommendation, it will affect all of the initiatives that you have permission to edit.
-
-For example, the recommendation **** is part of the default policy initiative assigned to all Azure subscriptions by Microsoft Defender for Cloud. It's also in XXXXX.
-
-If you try to create an exemption for this recommendation, you'll see one of the two following messages:
--- If you **have** the necessary permissions to edit both initiatives, you'll see:-
- *This recommendation is included in several policy initiatives: [initiative names separated by comma]. Exemptions will be created on all of them.*
--- If you **don't have** sufficient permissions on both initiatives, you'll see this message instead:-
- *You have limited permissions to apply the exemption on all the policy initiatives, the exemptions will be created only on the initiatives with sufficient permissions.*
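Behind the scenes, these exemptions are Azure Policy exemptions on the relevant initiative assignments. Purely as an illustrative sketch (the assignment name `SecurityCenterBuiltIn`, the definition reference ID, and the scope are placeholders or assumptions; the portal flow described in this article is the supported path):

```bash
# Sketch only: create a policy exemption for one recommendation on one resource scope.
az policy exemption create \
  --name "exempt-example-recommendation" \
  --policy-assignment "/subscriptions/<sub-id>/providers/Microsoft.Authorization/policyAssignments/SecurityCenterBuiltIn" \
  --policy-definition-reference-ids "<definition-reference-id>" \
  --exemption-category "Waiver" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>"
```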
-
-### Are there any recommendations that don't support exemption?
-
-These generally available recommendations don't support exemption:
-
-- All advanced threat protection types should be enabled in SQL managed instance advanced data security settings
-- All advanced threat protection types should be enabled in SQL server advanced data security settings
-- Container CPU and memory limits should be enforced
-- Container images should be deployed from trusted registries only
-- Container with privilege escalation should be avoided
-- Containers sharing sensitive host namespaces should be avoided
-- Containers should listen on allowed ports only
-- Default IP Filter Policy should be Deny
-- Immutable (read-only) root filesystem should be enforced for containers
-- IoT Devices - Open Ports On Device
-- IoT Devices - Permissive firewall policy in one of the chains was found
-- IoT Devices - Permissive firewall rule in the input chain was found
-- IoT Devices - Permissive firewall rule in the output chain was found
-- IP Filter rule large IP range
-- Least privileged Linux capabilities should be enforced for containers
-- Machines should be configured securely
-- Overriding or disabling of containers AppArmor profile should be restricted
-- Privileged containers should be avoided
-- Running containers as root user should be avoided
-- Services should listen on allowed ports only
-- SQL servers should have an Azure Active Directory administrator provisioned
-- Usage of host networking and ports should be restricted
-- Usage of pod HostPath volume mounts should be restricted to a known list to restrict node access from compromised containers

## Next steps
defender-for-cloud Incidents Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/incidents-reference.md
Learn how to [manage security incidents](incidents.md#managing-security-incident
| **Security incident detected suspicious Kubernetes cluster activity (Preview)** | This incident indicates that suspicious activity has been detected on your Kubernetes cluster following suspicious user activity. Multiple alerts from different Defender for Cloud plans have been triggered on the same cluster, which increases the fidelity of malicious activity in your environment. The suspicious activity on your Kubernetes cluster might indicate that a threat actor has gained unauthorized access to your environment and is attempting to compromise it. | High | | **Security incident detected suspicious storage activity (Preview)** | Scenario 1: This incident indicates that suspicious storage activity has been detected following suspicious user or service principal activity. Multiple alerts from different Defender for Cloud plans have been triggered on the same resource, which increases the fidelity of malicious activity in your environment. Suspicious account activity might indicate that a threat actor gained unauthorized access to your environment, and the succeeding suspicious storage activity may suggest they are attempting to access potentially sensitive data. <br><br> Scenario 2: This incident indicates that suspicious storage activity has been detected following suspicious user or service principal activity. Multiple alerts from different Defender for Cloud plans have been triggered from the same IP address, which increases the fidelity of malicious activity in your environment. Suspicious account activity might indicate that a threat actor gained unauthorized access to your environment, and the succeeding suspicious storage activity may suggest they are attempting to access potentially sensitive data. | High | | **Security incident detected suspicious Azure toolkit activity (Preview)** | This incident indicates that suspicious activity has been detected following the potential usage of an Azure toolkit. Multiple alerts from different Defender for Cloud plans have been triggered on the same user or service principal, which increases the fidelity of malicious activity in your environment. The usage of an Azure toolkit can indicate that an attacker has gained unauthorized access to your environment and is attempting to compromise it. | High |
+| **Security incident detected suspicious app service activity (Preview)** | Scenario 1: This incident indicates that suspicious activity has been detected in your app service environment. Multiple alerts from different Defender for Cloud plans have been triggered on the same resource, which increases the fidelity of malicious activity in your environment. Suspicious app service activity might indicate that a threat actor is targeting your application and may be attempting to compromise it. <br><br> Scenario 2: This incident indicates that suspicious activity has been detected in your app service environment. Multiple alerts from different Defender for Cloud plans have been triggered from the same IP address, which increases the fidelity of malicious activity in your environment. Suspicious app service activity might indicate that a threat actor is targeting your application and may be attempting to compromise it. | High |
| **Security incident detected compromised machine** | This incident indicates suspicious activity on one or more of your virtual machines. Multiple alerts from different Defender for Cloud plans have been triggered in chronological order on the same resource, following the MITRE ATT&CK framework. This might indicate a threat actor has gained unauthorized access to your environment and successfully compromised this machine.| Medium/High | | **Security incident detected compromised machine with botnet communication** | This incident indicates suspicious botnet activity on your virtual machine. Multiple alerts from different Defender for Cloud plans have been triggered in chronological order on the same resource, following the MITRE ATT&CK framework. This might indicate a threat actor has gained unauthorized access to your environment and is attempting to compromise it. | Medium/High | | **Security incident detected compromised machines with botnet communication** | This incident indicates suspicious botnet activity on your virtual machines. Multiple alerts from different Defender for Cloud plans have been triggered in chronological order on the same resource, following the MITRE ATT&CK framework. This might indicate a threat actor has gained unauthorized access to your environment and is attempting to compromise it. | Medium/High |
defender-for-cloud Integration Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md
description: Learn about deploying Microsoft Defender for Endpoint from Microsof
Previously updated : 04/24/2023 Last updated : 06/14/2023 # Protect your endpoints with Defender for Cloud's integrated EDR solution: Microsoft Defender for Endpoint
To remove the Defender for Endpoint solution from your machines:
1. Follow the steps in [Offboard devices from the Microsoft Defender for Endpoint service](/microsoft-365/security/defender-endpoint/offboard-machines) from the Defender for Endpoint documentation.
-## FAQ - Microsoft Defender for Cloud integration with Microsoft Defender for Endpoint
-
-- [What's this "MDE.Windows" / "MDE.Linux" extension running on my machine?](#whats-this-mdewindows--mdelinux-extension-running-on-my-machine)
-- [What are the licensing requirements for Microsoft Defender for Endpoint?](#what-are-the-licensing-requirements-for-microsoft-defender-for-endpoint)
-- [Do I need to buy a separate anti-malware solution to protect my machines?](#do-i-need-to-buy-a-separate-anti-malware-solution-to-protect-my-machines)
-- [If I already have a license for Microsoft Defender for Endpoint, can I get a discount for Microsoft Defender for Servers?](#if-i-already-have-a-license-for-microsoft-defender-for-endpoint-can-i-get-a-discount-for-microsoft-defender-for-servers)
-- [How do I switch from a third-party EDR tool?](#how-do-i-switch-from-a-third-party-edr-tool)
-
-### What's this "MDE.Windows" / "MDE.Linux" extension running on my machine?
-
-In the past, Microsoft Defender for Endpoint was provisioned by the Log Analytics agent. When [we expanded support to include Windows Server 2019](release-notes-archive.md#microsoft-defender-for-endpoint-integration-with-azure-defender-now-supports-windows-server-2019-and-windows-10-on-windows-virtual-desktop-released-for-general-availability-ga) and Linux, we also added an extension to perform the automatic onboarding.
-
-Defender for Cloud automatically deploys the extension to machines running:
-
-- Windows Server 2019 and Windows Server 2022
-- Windows Server 2012 R2 and 2016 if [MDE Unified Solution integration](#enable-the-integration) is enabled
-- Windows 10 on Azure Virtual Desktop.
-- Other versions of Windows Server if Defender for Cloud doesn't recognize the OS version (for example, when a custom VM image is used). In this case, Microsoft Defender for Endpoint is still provisioned by the Log Analytics agent.
-- Linux.
-
-> [!IMPORTANT]
-> If you delete the MDE.Windows/MDE.Linux extension, it will not remove Microsoft Defender for Endpoint. To offboard the machine, see [Offboard Windows servers.](/microsoft-365/security/defender-endpoint/configure-server-endpoints#offboard-windows-servers).
-
-### I enabled the solution but the `MDE.Windows`/`MDE.Linux` extension isn't showing on my machine
-
-If you enabled the integration, but still don't see the extension running on your machines:
-
-1. You need to wait at least 12 hours to be sure there's an issue to investigate.
-1. If after 12 hours you still don't see the extension running on your machines, check that you've met [Prerequisites](#prerequisites) for the integration.
-1. Ensure you've enabled the [Microsoft Defender for Servers](defender-for-servers-introduction.md) plan for the subscriptions related to the machines you're investigating.
-1. If you've moved your Azure subscription between Azure tenants, some manual preparatory steps are required before Defender for Cloud will deploy Defender for Endpoint. For full details, [contact Microsoft support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
-
-### What are the licensing requirements for Microsoft Defender for Endpoint?
-
-Licenses for Defender for Endpoint for servers are included with **Microsoft Defender for Servers**.
-
-### Do I need to buy a separate anti-malware solution to protect my machines?
-
-No. With MDE integration in Defender for Servers, you'll also get malware protection on your machines.
-
-- On Windows Server 2012 R2 with MDE unified solution integration enabled, Defender for Servers will deploy [Microsoft Defender Antivirus](/microsoft-365/security/defender-endpoint/microsoft-defender-antivirus-windows) in *active mode*.
-- On newer Windows Server operating systems, Microsoft Defender Antivirus is part of the operating system and will be enabled in *active mode*.
-- On Linux, Defender for Servers will deploy MDE including the anti-malware component, and set the component in *passive mode*.
-
-### If I already have a license for Microsoft Defender for Endpoint, can I get a discount for Microsoft Defender for Servers?
-
-If you already have a license for **Microsoft Defender for Endpoint for Servers** , you won't pay for that part of your [Microsoft Defender for Servers Plan 2](plan-defender-for-servers-select-plan.md#plan-features) license. Learn more about [the Microsoft 365 license](/microsoft-365/security/defender-endpoint/minimum-requirements#licensing-requirements).
-
-To request your discount, [contact Defender for Cloud's support team](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview). You'll need to provide the relevant workspace ID, region, and number of Microsoft Defender for Endpoint for servers licenses applied for machines in the given workspace.
-
-The discount will be effective starting from the approval date, and won't take place retroactively.
-
-### How do I switch from a third-party EDR tool?
-
-Full instructions for switching from a non-Microsoft endpoint solution are available in the Microsoft Defender for Endpoint documentation: [Migration overview](/windows/security/threat-protection/microsoft-defender-atp/switch-to-microsoft-defender-migration).
-
-### Which Microsoft Defender for Endpoint plan is supported in Defender for Servers?
-
-Defender for Servers Plan 1 and Plan 2 provides the capabilities of [Microsoft Defender for Endpoint Plan 2](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint).
- ## Next steps - [Platforms and features supported by Microsoft Defender for Cloud](security-center-os-coverage.md) - [Learn how recommendations help you protect your Azure resources](review-security-recommendations.md)
+- View common question about the [Defender for Cloud integration with Microsoft Defender for Endpoint](faq-defender-for-servers.yml)
defender-for-cloud Just In Time Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/just-in-time-access-overview.md
Title: Understanding just-in-time virtual machine access in Microsoft Defender for Cloud description: This document explains how just-in-time VM access in Microsoft Defender for Cloud helps you control access to your Azure virtual machines Previously updated : 05/15/2022 Last updated : 06/12/2023 # Understanding just-in-time (JIT) VM access
When Defender for Cloud finds a machine that can benefit from JIT, it adds that
![Just-in-time (JIT) virtual machine (VM) access recommendation.](./media/just-in-time-explained/unhealthy-resources.png)
-## FAQ - Just-in-time virtual machine access
-
-### What permissions are needed to configure and use JIT?
-
-JIT Requires [Microsoft Defender for Servers Plan 2](plan-defender-for-servers-select-plan.md#plan-features) to be enabled on the subscription.
-
-**Reader** and **SecurityReader** roles can both view the JIT status and parameters.
-
-If you want to create custom roles that can work with JIT, you'll need the details from the table below.
-
-If you are setting up JIT on your Amazon Web Service (AWS) VM, you will need to [connect your AWS account](quickstart-onboard-aws.md) to Microsoft Defender for Cloud.
-
-> [!TIP]
-> To create a least-privileged role for users that need to request JIT access to a VM, and perform no other JIT operations, use the [Set-JitLeastPrivilegedRole script](https://github.com/Azure/Azure-Security-Center/tree/main/Powershell%20scripts/JIT%20Scripts/JIT%20Custom%20Role) from the Defender for Cloud GitHub community pages.
-
-| To enable a user to: | Permissions to set|
-| | |
-|Configure or edit a JIT policy for a VM | *Assign these actions to the role:* <ul><li>On the scope of a subscription or resource group that is associated with the VM:<br/> `Microsoft.Security/locations/jitNetworkAccessPolicies/write` </li><li> On the scope of a subscription or resource group of VM: <br/>`Microsoft.Compute/virtualMachines/write`</li></ul> |
-|Request JIT access to a VM | *Assign these actions to the user:* <ul><li> `Microsoft.Security/locations/jitNetworkAccessPolicies/initiate/action` </li><li> `Microsoft.Security/locations/jitNetworkAccessPolicies/*/read` </li><li> `Microsoft.Compute/virtualMachines/read` </li><li> `Microsoft.Network/networkInterfaces/*/read` </li> <li> `Microsoft.Network/publicIPAddresses/read` </li></ul> |
-|Read JIT policies| *Assign these actions to the user:* <ul><li>`Microsoft.Security/locations/jitNetworkAccessPolicies/read`</li><li>`Microsoft.Security/locations/jitNetworkAccessPolicies/initiate/action`</li><li>`Microsoft.Security/policies/read`</li><li>`Microsoft.Security/pricings/read`</li><li>`Microsoft.Compute/virtualMachines/read`</li><li>`Microsoft.Network/*/read`</li>|
-
-> [!Note]
-> Only the `Microsoft.Security` permissions are relevant for AWS.
-
## Next steps

This page explained _why_ just-in-time (JIT) virtual machine (VM) access should be used. To learn about _how_ to enable JIT and request access to your JIT-enabled VMs, see the following:
defender-for-cloud Just In Time Access Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/just-in-time-access-usage.md
description: Learn how just-in-time VM access (JIT) in Microsoft Defender for Cl
Previously updated : 12/11/2022 Last updated : 06/14/2023 # Secure your management ports with just-in-time access You can use Microsoft Defender for Cloud's just-in-time (JIT) access to protect your Azure virtual machines (VMs) from unauthorized network access. Many times firewalls contain allow rules that leave your VMs vulnerable to attack. JIT lets you allow access to your VMs only when the access is needed, on the ports needed, and for the period of time needed.
-Learn more about [how JIT works](just-in-time-access-overview.md) and the [permissions required to configure and use JIT](just-in-time-access-overview.md#what-permissions-are-needed-to-configure-and-use-jit).
+Learn more about [how JIT works](just-in-time-access-overview.md) and the [permissions required to configure and use JIT](#prerequisites).
In this article, you'll learn how to include JIT in your security program, including how to:
In this article, you'll learn you how to include JIT in your security program, i
|--|:-|
| Release state: | General availability (GA) |
| Supported VMs: | :::image type="icon" source="./medi)<br> :::image type="icon" source="./media/icons/yes-icon.png"::: AWS EC2 instances (Preview) |
-| Required roles and permissions: | **Reader**, **SecurityReader**, or a [custom role](just-in-time-access-overview.md#what-permissions-are-needed-to-configure-and-use-jit) can view the JIT status and parameters.<br>To create a least-privileged role for users that only need to request JIT access to a VM, use the [Set-JitLeastPrivilegedRole script](https://github.com/Azure/Microsoft-Defender-for-Cloud/tree/main/Powershell%20scripts/JIT%20Scripts/JIT%20Custom%20Role). |
+| Required roles and permissions: | **Reader**, **SecurityReader**, or a [custom role](#prerequisites) can view the JIT status and parameters.<br>To create a least-privileged role for users that only need to request JIT access to a VM, use the [Set-JitLeastPrivilegedRole script](https://github.com/Azure/Microsoft-Defender-for-Cloud/tree/main/Powershell%20scripts/JIT%20Scripts/JIT%20Custom%20Role). |
| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts (preview) |
+## Prerequisites
+
+- JIT requires [Microsoft Defender for Servers Plan 2](plan-defender-for-servers-select-plan.md#plan-features) to be enabled on the subscription.
+
+- **Reader** and **SecurityReader** roles can both view the JIT status and parameters.
+
+- If you want to create custom roles that can work with JIT, you'll need the details from the following table:
+
+ | To enable a user to: | Permissions to set|
+ | | |
+ |Configure or edit a JIT policy for a VM | *Assign these actions to the role:* <ul><li>On the scope of a subscription or resource group that is associated with the VM:<br/> `Microsoft.Security/locations/jitNetworkAccessPolicies/write` </li><li> On the scope of a subscription or resource group of VM: <br/>`Microsoft.Compute/virtualMachines/write`</li></ul> |
+ |Request JIT access to a VM | *Assign these actions to the user:* <ul><li> `Microsoft.Security/locations/jitNetworkAccessPolicies/initiate/action` </li><li> `Microsoft.Security/locations/jitNetworkAccessPolicies/*/read` </li><li> `Microsoft.Compute/virtualMachines/read` </li><li> `Microsoft.Network/networkInterfaces/*/read` </li> <li> `Microsoft.Network/publicIPAddresses/read` </li></ul> |
+ |Read JIT policies| *Assign these actions to the user:* <ul><li>`Microsoft.Security/locations/jitNetworkAccessPolicies/read`</li><li>`Microsoft.Security/locations/jitNetworkAccessPolicies/initiate/action`</li><li>`Microsoft.Security/policies/read`</li><li>`Microsoft.Security/pricings/read`</li><li>`Microsoft.Compute/virtualMachines/read`</li><li>`Microsoft.Network/*/read`</li>|
+
+ > [!NOTE]
+ > Only the `Microsoft.Security` permissions are relevant for AWS.
+
+- To set up JIT on your Amazon Web Service (AWS) VM, you will need to [connect your AWS account](quickstart-onboard-aws.md) to Microsoft Defender for Cloud.
+
+ > [!TIP]
+ > To create a least-privileged role for users that need to request JIT access to a VM, and perform no other JIT operations, use the [Set-JitLeastPrivilegedRole script](https://github.com/Azure/Azure-Security-Center/tree/main/Powershell%20scripts/JIT%20Scripts/JIT%20Custom%20Role) from the Defender for Cloud GitHub community pages.
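As an illustrative alternative to the script, a custom role with only the JIT request actions from the table above could be created with the Azure CLI roughly like this (the role name and scope are placeholders; this is a sketch, not the published script):

```bash
# Sketch: least-privileged custom role that can only request JIT access to VMs.
az role definition create --role-definition '{
  "Name": "JIT Access Requester (example)",
  "Description": "Can request just-in-time VM access only.",
  "Actions": [
    "Microsoft.Security/locations/jitNetworkAccessPolicies/initiate/action",
    "Microsoft.Security/locations/jitNetworkAccessPolicies/*/read",
    "Microsoft.Compute/virtualMachines/read",
    "Microsoft.Network/networkInterfaces/*/read",
    "Microsoft.Network/publicIPAddresses/read"
  ],
  "AssignableScopes": ["/subscriptions/<subscription-id>"]
}'
```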
+
## Work with JIT VM access using Microsoft Defender for Cloud

You can use Defender for Cloud or you can programmatically enable JIT VM access with your own custom options, or you can enable JIT with default, hard-coded parameters from Azure Virtual machines.
defender-for-cloud Multi Factor Authentication Enforcement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/multi-factor-authentication-enforcement.md
Title: Microsoft Defender for Cloud's security recommendations for MFA description: Learn how to enforce multi-factor authentication for your Azure subscriptions using Microsoft Defender for Cloud Previously updated : 06/11/2023 Last updated : 06/15/2023 # Manage multi-factor authentication (MFA) enforcement on your subscriptions
To see which accounts don't have MFA enabled, use the following Azure Resource G
> [!TIP] > Alternatively, you can use the Defender for Cloud REST API method [Assessments - Get](/rest/api/defenderforcloud/assessments/get). -
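The query itself isn't reproduced in this excerpt. Purely as an illustrative sketch of querying assessment data with the Azure Resource Graph CLI extension (the filter on display name and the field names are assumptions, not the article's query):

```bash
# Illustrative sketch only: list unhealthy Defender for Cloud assessments that mention MFA.
az graph query -q "
securityresources
| where type == 'microsoft.security/assessments'
| where properties.displayName contains 'MFA'
| where properties.status.code == 'Unhealthy'
| project subscriptionId, assessmentName = name, displayName = tostring(properties.displayName)
"
```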
-## FAQ - MFA in Defender for Cloud
-
-- [We're already using CA policy to enforce MFA. Why do we still get the Defender for Cloud recommendations?](#were-already-using-ca-policy-to-enforce-mfa-why-do-we-still-get-the-defender-for-cloud-recommendations)
-- [We're using a third-party MFA tool to enforce MFA. Why do we still get the Defender for Cloud recommendations?](#were-using-a-third-party-mfa-tool-to-enforce-mfa-why-do-we-still-get-the-defender-for-cloud-recommendations)
-- [Why does Defender for Cloud show user accounts without permissions on the subscription as "requiring MFA"?](#why-does-defender-for-cloud-show-user-accounts-without-permissions-on-the-subscription-as-requiring-mfa)
-- [We're enforcing MFA with PIM. Why are PIM accounts shown as noncompliant?](#were-enforcing-mfa-with-pim-why-are-pim-accounts-shown-as-noncompliant)
-- [Can I exempt or dismiss some of the accounts?](#can-i-exempt-or-dismiss-some-of-the-accounts)
-- [Are there any limitations to Defender for Cloud's identity and access protections?](#are-there-any-limitations-to-defender-for-clouds-identity-and-access-protections)
-
-### We're already using CA policy to enforce MFA. Why do we still get the Defender for Cloud recommendations?
-To investigate why the recommendations are still being generated, verify the following configuration options in your MFA CA policy:
-
-- You've included the accounts in the **Users** section of your MFA CA policy (or one of the groups in the **Groups** section)
-- The Azure Management app ID (797f4846-ba00-4fd7-ba43-dac1f8f63013), or all apps, are included in the **Apps** section of your MFA CA policy
-- The Azure Management app ID isn't excluded in the **Apps** section of your MFA CA policy
-
-### We're using a third-party MFA tool to enforce MFA. Why do we still get the Defender for Cloud recommendations?
-Defender for Cloud's MFA recommendations don't support third-party MFA tools (for example, DUO).
-
-If the recommendations are irrelevant for your organization, consider marking them as "mitigated" as described in [Exempting resources and recommendations from your secure score](exempt-resource.md). You can also [disable a recommendation](tutorial-security-policy.md#disable-a-security-recommendation).
-
-### Why does Defender for Cloud show user accounts without permissions on the subscription as "requiring MFA"?
-Defender for Cloud's MFA recommendations refer to [Azure RBAC](../role-based-access-control/role-definitions-list.md) roles and the [Azure classic subscription administrators](../role-based-access-control/classic-administrators.md) role. Verify that none of the accounts have such roles.
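One hedged way to check is to list the subscription's role assignments, including classic administrators, with the Azure CLI (a sketch; sign in to the subscription in question first):

```bash
# Sketch: list role assignments on the current subscription, including classic
# subscription administrators, to see which accounts the recommendation evaluates.
az role assignment list --all --include-classic-administrators --output table
```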
-
-### We're enforcing MFA with PIM. Why are PIM accounts shown as noncompliant?
-Defender for Cloud's MFA recommendations currently don't support PIM accounts. You can add these accounts to a CA Policy in the Users/Group section.
-
-### Can I exempt or dismiss some of the accounts?
-
-The capability to exempt some accounts that don't use MFA is available for the new recommendations in preview:
-
-- Accounts with owner permissions on Azure resources should be MFA enabled
-- Accounts with write permissions on Azure resources should be MFA enabled
-- Accounts with read permissions on Azure resources should be MFA enabled
-
-To exempt account(s), follow these steps:
-
-1. Select an MFA recommendation associated with an unhealthy account.
-1. In the Accounts tab, select an account to exempt.
-1. Select the three dots button, then select **Exempt account**.
-1. Select a scope and exemption reason.
-
-If you would like to see which accounts are exempt, navigate to **Exempted accounts** for each recommendation.
-
-> [!TIP]
-> When you exempt an account, it won't be shown as unhealthy and won't cause a subscription to appear unhealthy.
-
-### Are there any limitations to Defender for Cloud's identity and access protections?
-There are some limitations to Defender for Cloud's identity and access protections:
-
-- Identity recommendations aren't available for subscriptions with more than 6,000 accounts. In these cases, these types of subscriptions will be listed under Not applicable tab.
-- Identity recommendations aren't available for Cloud Solution Provider (CSP) partner's admin agents.
-- Identity recommendations don't identify accounts that are managed with a privileged identity management (PIM) system. If you're using a PIM tool, you might see inaccurate results in the **Manage access and permissions** control.
-- Identity recommendations don't support Azure AD conditional access policies with included Directory Roles instead of users and groups.
-

## Next steps

To learn more about recommendations that apply to other Azure resource types, see the following article:

- [Protecting your network in Microsoft Defender for Cloud](protect-network-resources.md)
+- Check out [common questions](faq-general.yml) about MFA.
defender-for-cloud Plan Defender For Servers Data Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-defender-for-servers-data-workspace.md
description: Review data residency and workspace design for Microsoft Defender f
Previously updated : 05/30/2023 Last updated : 06/15/2023 # Plan data residency and workspaces for Defender for Servers
You can store your server information in the default workspace or you can use a
- You must have at least read permissions for the workspace. - If the *Security & Audit* solution is installed in a workspace, Defender for Cloud uses the existing solution.
-## Log Analytics pricing FAQ
-
-- [Will I be charged for machines without the Log Analytics agent installed?](#will-i-be-charged-for-machines-without-the-log-analytics-agent-installed)
-- [If a Log Analytics agent reports to multiple workspaces, will I be charged twice?](#if-a-log-analytics-agent-reports-to-multiple-workspaces-will-i-be-charged-twice)
-- [If a Log Analytics agent reports to multiple workspaces, is the 500-MB free data ingestion available on all of them?](#if-a-log-analytics-agent-reports-to-multiple-workspaces-is-the-500-mb-free-data-ingestion-available-on-all-of-them)
-- [Is the 500-MB free data ingestion calculated for an entire workspace or strictly per machine?](#is-the-500-mb-free-data-ingestion-calculated-for-an-entire-workspace-or-strictly-per-machine)
-- [What data types are included in the 500-MB data daily allowance?](#what-data-types-are-included-in-the-500-mb-data-daily-allowance)
-- [How can I monitor my daily usage](#how-can-i-monitor-my-daily-usage)
-- [How can I manage my costs?](#how-can-i-manage-my-costs)
-
-### If I enable Defender for Cloud's Servers plan on the subscription level, do I need to enable it on the workspace level?
-
-When you enable the Servers plan on the subscription level, Defender for Cloud enables the Servers plan on your default workspaces automatically. Connect to the default workspace by selecting **Connect Azure VMs to the default workspace(s) created by Defender for Cloud** option and selecting **Apply**.
--
-However, if you're using a custom workspace in place of the default workspace, you need to enable the Servers plan on all of your custom workspaces that don't have it enabled.
-
-If you're using a custom workspace and enable the plan on the subscription level only, the `Microsoft Defender for servers should be enabled on workspaces` recommendation appears on the Recommendations page. This recommendation gives you the option to enable the servers plan on the workspace level with the Fix button. You're charged for all VMs in the subscription even if the Servers plan isn't enabled for the workspace. The VMs won't benefit from features that depend on the Log Analytics workspace, such as Microsoft Defender for Endpoint, VA solution (MDVM/Qualys), and Just-in-Time VM access.
-
-Enabling the Servers plan on both the subscription and its connected workspaces won't incur a double charge. The system will identify each unique VM.
-
-If you enable the Servers plan on cross-subscription workspaces, connected VMs from all subscriptions will be billed, including subscriptions that don't have the Servers plan enabled.
-
-### Will I be charged for machines without the Log Analytics agent installed?
-
-Yes. When you enable [Microsoft Defender for Servers](defender-for-servers-introduction.md) on an Azure subscription or a connected AWS account, you'll be charged for all machines that are connected to your Azure subscription or AWS account. The term machines includes Azure virtual machines, Azure Virtual Machine Scale Sets instances, and Azure Arc-enabled servers. Machines that don't have the Log Analytics agent installed are covered by protections that don't depend on the Log Analytics agent.
-
-### If a Log Analytics agent reports to multiple workspaces, will I be charged twice?
-
-If a machine reports to multiple workspaces and all of them have Defender for Servers enabled, the machine will be billed for each attached workspace.
-
-### If a Log Analytics agent reports to multiple workspaces, is the 500-MB free data ingestion available on all of them?
-
-Yes. If you configure your Log Analytics agent to send data to two or more different Log Analytics workspaces (multi-homing), you'll get 500-MB free data ingestion for each workspace. It's calculated per node, per reported workspace, per day, and available for every workspace that has a 'Security' or 'AntiMalware' solution installed. You'll be charged for any data ingested over the 500-MB limit.
-
-### Is the 500-MB free data ingestion calculated for an entire workspace or strictly per machine?
-
-You receive a daily allowance of 500 MB of free data ingestion for each virtual machine (VM) connected to the workspace. This allocation specifically applies to the [security data types](#what-data-types-are-included-in-the-500-mb-data-daily-allowance) collected directly by Defender for Cloud.
-
-The data allowance is a daily rate calculated across all connected machines. Your total daily free limit is equal to the **[number of machines] x 500 MB**. So even if on a given day some machines send 100 MB and others send 800 MB, if the total data from all machines doesn't exceed your daily free limit, you won't be charged extra.
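To make the allowance arithmetic concrete, here's a minimal shell sketch with made-up numbers (ten machines and hypothetical per-machine ingestion values); it only illustrates that billing compares the combined daily total against machines × 500 MB.

```bash
# Hypothetical example: 10 connected machines, each contributing 500 MB to the daily free limit
machines=10
free_limit_mb=$((machines * 500))
echo "Daily free ingestion limit: ${free_limit_mb} MB"    # 5000 MB

# Made-up per-machine usage for one day; some machines exceed 500 MB, others stay under
usage_mb=(100 800 450 500 300 600 700 400 550 500)
total=0
for u in "${usage_mb[@]}"; do total=$((total + u)); done
echo "Total ingested today: ${total} MB"                  # 4900 MB - under the limit, so no extra charge
```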
-
-### What data types are included in the 500-MB data daily allowance?
-
-Defender for Cloud's billing is closely tied to the billing for Log Analytics. [Microsoft Defender for Servers](defender-for-servers-introduction.md) provides a 500 MB/node/day allocation for machines against the following subset of [security data types](/azure/azure-monitor/reference/tables/tables-category#security):
-
-- [SecurityAlert](/azure/azure-monitor/reference/tables/securityalert)
-- [SecurityBaseline](/azure/azure-monitor/reference/tables/securitybaseline)
-- [SecurityBaselineSummary](/azure/azure-monitor/reference/tables/securitybaselinesummary)
-- [SecurityDetection](/azure/azure-monitor/reference/tables/securitydetection)
-- [SecurityEvent](/azure/azure-monitor/reference/tables/securityevent)
-- [WindowsFirewall](/azure/azure-monitor/reference/tables/windowsfirewall)
-- [SysmonEvent](/azure/azure-monitor/reference/tables/sysmonevent)
-- [ProtectionStatus](/azure/azure-monitor/reference/tables/protectionstatus)
-- [Update](/azure/azure-monitor/reference/tables/update) and [UpdateSummary](/azure/azure-monitor/reference/tables/updatesummary) when the Update Management solution isn't running in the workspace or solution targeting is enabled.
-
-If the workspace is in the legacy Per Node pricing tier, the Defender for Cloud and Log Analytics allocations are combined and applied jointly to all billable ingested data.
-
-### How can I monitor my daily usage?
-
-You can view your data usage in two ways: in the Azure portal, or by running a script.
-
-**To view your usage in the Azure portal**:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Navigate to **Log Analytics workspaces**.
-
-1. Select your workspace.
-
-1. Select **Usage and estimated costs**.
-
- :::image type="content" source="media/plan-defender-for-servers-data-workspace/data-usage.png" alt-text="Screenshot of your data usage of your log analytics workspace. " lightbox="media/plan-defender-for-servers-data-workspace/data-usage.png":::
-
-You can also view estimated costs under different pricing tiers by selecting :::image type="icon" source="media/plan-defender-for-servers-data-workspace/drop-down-icon.png" border="false"::: for each pricing tier.
--
-**To view your usage by using a script**:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Navigate to **Log Analytics workspaces** > **Logs**.
-
-1. Select your time range. Learn about [time ranges](../azure-monitor/logs/log-analytics-tutorial.md).
-
-1. Copy and paste the following query into the **Type your query here** section.
-
- ```kusto
- let Unit= 'GB';
- Usage
- | where IsBillable == 'TRUE'
- | where DataType in ('SecurityAlert', 'SecurityBaseline', 'SecurityBaselineSummary', 'SecurityDetection', 'SecurityEvent', 'WindowsFirewall', 'MaliciousIPCommunication', 'SysmonEvent', 'ProtectionStatus', 'Update', 'UpdateSummary')
- | project TimeGenerated, DataType, Solution, Quantity, QuantityUnit
- | summarize DataConsumedPerDataType = sum(Quantity)/1024 by DataType, DataUnit = Unit
- | sort by DataConsumedPerDataType desc
- ```
-
-1. Select **Run**.
-
- :::image type="content" source="media/plan-defender-for-servers-data-workspace/select-run.png" alt-text="Screenshot showing where to enter your query and where the select run button is located." lightbox="media/plan-defender-for-servers-data-workspace/select-run.png":::
-
-You can learn how to [Analyze usage in Log Analytics workspace](../azure-monitor/logs/analyze-usage.md).
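If you prefer to run the same kind of usage query outside the portal, the sketch below uses the Azure CLI; the workspace GUID is a placeholder, and the `az monitor log-analytics query` command may require the `log-analytics` CLI extension.

```bash
# Sketch: query billable security data types for the last day from the Azure CLI.
# <workspace-guid> is a placeholder for the workspace (customer) ID.
# If the command isn't available, install the extension: az extension add --name log-analytics
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query "Usage
    | where IsBillable == 'TRUE'
    | where DataType in ('SecurityAlert', 'SecurityEvent', 'WindowsFirewall', 'ProtectionStatus', 'Update', 'UpdateSummary')
    | summarize DataConsumedGB = sum(Quantity)/1024 by DataType
    | sort by DataConsumedGB desc" \
  --timespan "P1D" \
  --output table
```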
-
-Based on your usage, you won't be billed until you've used your daily allowance. If you're receiving a bill, it's only for the data used after the 500-MB limit is reached, or for other services that don't fall under the coverage of Defender for Cloud.
-
-### How can I manage my costs?
-
-You may want to manage your costs and limit the amount of data collected for a solution by limiting it to a particular set of agents. Use [solution targeting](/previous-versions/azure/azure-monitor/insights/solution-targeting) to apply a scope to the solution and target a subset of computers in the workspace. If you're using solution targeting, Defender for Cloud lists the workspace as not having a solution.
-> [!IMPORTANT]
-> Solution targeting has been deprecated because the Log Analytics agent is being replaced with the Azure Monitor agent and solutions in Azure Monitor are being replaced with insights. You can continue to use solution targeting if you already have it configured, but it is not available in new regions.
-> The feature will not be supported after August 31, 2024.
-> Regions that support solution targeting until the deprecation date are:
->
-> | Region code | Region name |
-> | :-- | :-- |
-> | CCAN | canadacentral |
-> | CHN | switzerlandnorth |
-> | CID | centralindia |
-> | CQ | brazilsouth |
-> | CUS | centralus |
-> | DEWC | germanywestcentral |
-> | DXB | UAENorth |
-> | EA | eastasia |
-> | EAU | australiaeast |
-> | EJP | japaneast |
-> | EUS | eastus |
-> | EUS2 | eastus2 |
-> | NCUS | northcentralus |
-> | NEU | NorthEurope |
-> | NOE | norwayeast |
-> | PAR | FranceCentral |
-> | SCUS | southcentralus |
-> | SE | KoreaCentral |
-> | SEA | southeastasia |
-> | SEAU | australiasoutheast |
-> | SUK | uksouth |
-> | WCUS | westcentralus |
-> | WEU | westeurope |
-> | WUS | westus |
-> | WUS2 | westus2 |
->
-> | Air-gapped clouds | Region code | Region name |
-> | :- | :- | :- |
-> | UsNat | EXE | usnateast |
-> | UsNat | EXW | usnatwest |
-> | UsGov | FF | usgovvirginia |
-> | China | MC | ChinaEast2 |
-> | UsGov | PHX | usgovarizona |
-> | UsSec | RXE | usseceast |
-> | UsSec | RXW | ussecwest |
- ## Next steps
-After you work through these planning steps, review [Defender for Server access roles](plan-defender-for-servers-roles.md).
+- After you work through these planning steps, review [Defender for Server access roles](plan-defender-for-servers-roles.md).
+- Check out the [common questions](faq-defender-for-servers.yml) about workspaces in Defender for Servers.
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
Title: Connect your AWS account to Microsoft Defender for Cloud description: Defend your AWS resources with Microsoft Defender for Cloud Previously updated : 04/23/2023 Last updated : 06/15/2023 zone_pivot_groups: connect-aws-accounts
AWS Systems Manager is required for automating tasks across your AWS resources.
Defender for Cloud discovers the EC2 instances in the connected AWS account and uses SSM to onboard them to Azure Arc. > [!TIP]
- > For the list of supported operating systems, see [What operating systems for my EC2 instances are supported?](#what-operating-systems-for-my-ec2-instances-are-supported) in the FAQ.
+ > For the list of supported operating systems, see [What operating systems for my EC2 instances are supported?](faq-general.yml) in the common questions.
1. Select the **Resource Group** and **Azure Region** that the discovered AWS EC2s will be onboarded to in the selected subscription. 1. Enter the **Service Principal ID** and **Service Principal Client Secret** for Azure Arc as described here [Create a Service Principal for onboarding at scale](../azure-arc/servers/onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale)
To view all the active recommendations for your resources by resource type, use
:::image type="content" source="./media/quickstart-onboard-aws/aws-resource-types-in-inventory.png" alt-text="screenshot of the asset inventory page's resource type filter showing the AWS options.":::
-## FAQ - AWS in Defender for Cloud
-
-### What operating systems for my EC2 instances are supported?
-
-For a list of the AMIs with the SSM Agent preinstalled, see [this page in the AWS docs](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-technical-details.html#ami-preinstalled-agent).
-
-For other operating systems, the SSM Agent should be installed manually using the following instructions:
-- [Install SSM Agent for a hybrid environment (Windows)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-win.html)
-- [Install SSM Agent for a hybrid environment (Linux)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-linux.html)
-
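To check whether the SSM Agent is already present and running on a Linux instance before onboarding, a quick status check like the following can help (a sketch; service names vary slightly between distributions, and the snap-based name applies to Ubuntu images that ship the agent as a snap).

```bash
# Check whether the SSM Agent service is installed and running (systemd-based distributions)
sudo systemctl status amazon-ssm-agent

# On Ubuntu images where the agent ships as a snap, the service name differs
sudo systemctl status snap.amazon-ssm-agent.amazon-ssm-agent.service
```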
-### For the CSPM plan, what IAM permissions are needed to discover AWS resources?
-
-The following IAM permissions are needed to discover AWS resources:
-
-| DataCollector | AWS Permissions |
-|--|--|
-| API Gateway | `apigateway:GET` |
-| Application Auto Scaling | `application-autoscaling:Describe*` |
-| Auto scaling | `autoscaling-plans:Describe*` <br> `autoscaling:Describe*` |
-| Certificate manager | `acm-pca:Describe*` <br> `acm-pca:List*` <br> `acm:Describe*` <br> `acm:List*` |
-| CloudFormation | `cloudformation:Describe*` <br> `cloudformation:List*` |
-| CloudFront | `cloudfront:DescribeFunction` <br> `cloudfront:GetDistribution` <br> `cloudfront:GetDistributionConfig` <br> `cloudfront:List*` |
-| CloudTrail | `cloudtrail:Describe*` <br> `cloudtrail:GetEventSelectors` <br> `cloudtrail:List*` <br> `cloudtrail:LookupEvents` |
-| CloudWatch | `cloudwatch:Describe*` <br> `cloudwatch:List*` |
-| CloudWatch logs | `logs:DescribeLogGroups` <br> `logs:DescribeMetricFilters` |
-| CodeBuild | `codebuild:DescribeCodeCoverages` <br> `codebuild:DescribeTestCases` <br> `codebuild:List*` |
-| Config Service | `config:Describe*` <br> `config:List*` |
-| DMS – database migration service | `dms:Describe*` <br> `dms:List*` |
-| DAX | `dax:Describe*` |
-| DynamoDB | `dynamodb:Describe*` <br> `dynamodb:List*` |
-| Ec2 | `ec2:Describe*` <br> `ec2:GetEbsEncryptionByDefault` |
-| ECR | `ecr:Describe*` <br> `ecr:List*` |
-| ECS | `ecs:Describe*` <br> `ecs:List*` |
-| EFS | `elasticfilesystem:Describe*` |
-| EKS | `eks:Describe*` <br> `eks:List*` |
-| Elastic Beanstalk | `elasticbeanstalk:Describe*` <br> `elasticbeanstalk:List*` |
-| ELB – elastic load balancing (v1/2) | `elasticloadbalancing:Describe*` |
-| Elastic search | `es:Describe*` <br> `es:List*` |
-| EMR – elastic map reduce | `elasticmapreduce:Describe*` <br> `elasticmapreduce:GetBlockPublicAccessConfiguration` <br> `elasticmapreduce:List*` <br> `elasticmapreduce:View*` |
-| GuardDuty | `guardduty:DescribeOrganizationConfiguration` <br> `guardduty:DescribePublishingDestination` <br> `guardduty:List*` |
-| IAM | `iam:Generate*` <br> `iam:Get*` <br> `iam:List*` <br> `iam:Simulate*` |
-| KMS | `kms:Describe*` <br> `kms:List*` |
-| Lambda | `lambda:GetPolicy` <br> `lambda:List*` |
-| Network firewall | `network-firewall:DescribeFirewall` <br> `network-firewall:DescribeFirewallPolicy` <br> `network-firewall:DescribeLoggingConfiguration` <br> `network-firewall:DescribeResourcePolicy` <br> `network-firewall:DescribeRuleGroup` <br> `network-firewall:DescribeRuleGroupMetadata` <br> `network-firewall:ListFirewallPolicies` <br> `network-firewall:ListFirewalls` <br> `network-firewall:ListRuleGroups` <br> `network-firewall:ListTagsForResource` |
-| RDS | `rds:Describe*` <br> `rds:List*` |
-| RedShift | `redshift:Describe*` |
-| S3 and S3Control | `s3:DescribeJob` <br> `s3:GetEncryptionConfiguration` <br> `s3:GetBucketPublicAccessBlock` <br> `s3:GetBucketTagging` <br> `s3:GetBucketLogging` <br> `s3:GetBucketAcl` <br> `s3:GetBucketLocation` <br> `s3:GetBucketPolicy` <br> `s3:GetReplicationConfiguration` <br> `s3:GetAccountPublicAccessBlock` <br> `s3:GetObjectAcl` <br> `s3:GetObjectTagging` <br> `s3:List*` |
-| SageMaker | `sagemaker:Describe*` <br> `sagemaker:GetSearchSuggestions` <br> `sagemaker:List*` <br> `sagemaker:Search` |
-| Secret manager | `secrets
-| Simple notification service – SNS | `sns:Check*` <br> `sns:List*` |
-| SSM | `ssm:Describe*` <br> `ssm:List*` |
-| SQS | `sqs:List*` <br> `sqs:Receive*` |
-| STS | `sts:GetCallerIdentity` |
-| WAF | `waf-regional:Get*` <br> `waf-regional:List*` <br> `waf:List*` <br> `wafv2:CheckCapacity` <br> `wafv2:Describe*` <br> `wafv2:List*` |
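If you manage permissions manually instead of through the onboarding CloudFormation template, a custom IAM policy can carry a subset of these read-only actions. The sketch below is only illustrative: the policy name and file path are placeholders, and the official template provisions the complete permission set for you.

```bash
# Sketch: create a custom IAM policy containing a few of the read-only permissions listed above.
# defender-cspm-discovery.json and the policy name are placeholders.
cat > defender-cspm-discovery.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:Describe*",
        "ec2:GetEbsEncryptionByDefault",
        "s3:GetBucketPolicy",
        "s3:List*",
        "sts:GetCallerIdentity"
      ],
      "Resource": "*"
    }
  ]
}
EOF

aws iam create-policy \
  --policy-name DefenderForCloudCspmDiscovery \
  --policy-document file://defender-cspm-discovery.json
```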
- ## Learn more You can check out the following blogs:
Connecting your AWS account is part of the multicloud experience available in Mi
- [Security recommendations for AWS resources - a reference guide](recommendations-reference-aws.md). - [Connect your GCP projects to Microsoft Defender for Cloud](quickstart-onboard-gcp.md) - [Troubleshoot your multicloud connectors](troubleshooting-guide.md#troubleshooting-the-native-multicloud-connector)
+- Check out [common questions](faq-general.yml) about onboarding your AWS account.
defender-for-cloud Quickstart Onboard Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-devops.md
By connecting your Azure DevOps repositories to Defender for Cloud, you'll exten
- **Defender for Cloud's Workload Protection features** - Extends Defender for Cloud's threat detection capabilities and advanced defenses to your Azure DevOps resources.
-API calls performed by Defender for Cloud count against the [Azure DevOps Global consumption limit](/azure/devops/integrate/concepts/rate-limits). For more information, see the [FAQ section](#faq).
+API calls performed by Defender for Cloud count against the [Azure DevOps Global consumption limit](/azure/devops/integrate/concepts/rate-limits). For more information, see the [common questions](faq-defender-for-devops.yml) for Defender for DevOps.
## Prerequisites
The Inventory page populates with your selected repositories, and the Recommenda
- Learn how to [create your first pipeline](/azure/devops/pipelines/create-first-pipeline).
-## FAQ
-
-### Do API calls made by Defender for Cloud count against my consumption limit?
-
-Yes, API calls made by Defender for Cloud count against the [Azure DevOps Global consumption limit](/azure/devops/integrate/concepts/rate-limits). Defender for Cloud makes calls on behalf of the user who onboards the connector.
-
-### Why is my organization list empty in the UI?
-
-If your organization list is empty in the UI after you onboarded an Azure DevOps connector, you need to ensure that the organization in Azure DevOps is connected to the Azure tenant that has the user who authenticated the connector.
-
-For information on how to correct this issue, check out the [DevOps troubleshooting guide](troubleshooting-guide.md#troubleshoot-azure-devops-organization-connector-issues).
-
-### I have a large Azure DevOps organization with many repositories. Can I still onboard?
-
-Yes, there is no limit to how many Azure DevOps repositories you can onboard to Defender for DevOps.
-
-However, there are two main implications when onboarding large organizations – speed and throttling. The speed of discovery for your DevOps repositories is determined by the number of projects for each connector (approximately 100 projects per hour). Throttling can happen because Azure DevOps API calls have a [global rate limit](/azure/devops/integrate/concepts/rate-limits) and we limit the calls for project discovery to use a small portion of overall quota limits.
-
-Consider using an alternative Azure DevOps identity (for example, an Organization Administrator account used as a service account) to prevent individual accounts from being throttled when onboarding large organizations. The following are some scenarios of when to use an alternate identity to onboard a Defender for DevOps connector:
-- Large number of Azure DevOps Organizations and Projects (~500 Projects or more).
-- Large number of concurrent builds which peak during work hours.
-- Authorized user is a [Power Platform](/power-platform/) user making additional Azure DevOps API calls, using up the global rate limit quotas.
-
-Once you've onboarded the Azure DevOps repositories using this account and [configured and run the Microsoft Security DevOps Azure DevOps extension](/azure/defender-for-cloud/azure-devops-extension) in your CI/CD pipeline, the scanning results appear almost immediately in Microsoft Defender for Cloud.
- ## Next steps
-Learn more about [Defender for DevOps](defender-for-devops-introduction.md).
+- Learn more about [Defender for DevOps](defender-for-devops-introduction.md).
+
+- Learn how to [configure pull request annotations](enable-pull-request-annotations.md) in Defender for Cloud.
-Learn how to [configure pull request annotations](enable-pull-request-annotations.md) in Defender for Cloud.
+- Check out [common questions](faq-defender-for-devops.yml) about Defender for DevOps.
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
Title: Connect your GCP project to Microsoft Defender for Cloud description: Monitoring your GCP resources from Microsoft Defender for Cloud Previously updated : 04/23/2023 Last updated : 06/15/2023 zone_pivot_groups: connect-gcp-accounts
To view all the active recommendations for your resources by resource type, use
:::image type="content" source="./media/quickstart-onboard-gcp/gcp-resource-types-in-inventory.png" alt-text="Asset inventory page's resource type filter showing the GCP options" lightbox="media/quickstart-onboard-gcp/gcp-resource-types-in-inventory.png":::
-## FAQ - Connecting GCP projects to Microsoft Defender for Cloud
-
-### Is there an API for connecting my GCP resources to Defender for Cloud?
-
-Yes. To create, edit, or delete Defender for Cloud cloud connectors with a REST API, see the details of the [Connectors API](/rest/api/defenderforcloud/security-connectors).
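For example, you could list the connectors in a subscription with a call along these lines (a sketch: the subscription ID and bearer token are placeholders, and the api-version shown is an assumption that may differ from the latest published version).

```bash
# Sketch: list Defender for Cloud security connectors in a subscription via the REST API.
# <subscription-id> and <access-token> are placeholders; the api-version is an assumption.
curl --request GET \
  "https://management.azure.com/subscriptions/<subscription-id>/providers/Microsoft.Security/securityConnectors?api-version=2023-03-01-preview" \
  --header "Authorization: Bearer <access-token>"
```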
-
-### What GCP regions are supported by Defender for Cloud?
-
-Defender for Cloud supports and scans all available regions on GCP public cloud.
- ## Next steps Connecting your GCP project is part of the multicloud experience available in Microsoft Defender for Cloud. For related information, see the following pages:
Connecting your GCP project is part of the multicloud experience available in Mi
- [Connect your AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md) - [Google Cloud resource hierarchy](https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy) - Learn about the Google Cloud resource hierarchy in Google's online docs - [Troubleshoot your multicloud connectors](troubleshooting-guide.md#troubleshooting-the-native-multicloud-connector)
+- Check out [common questions](faq-general.yml) about connecting your GCP project.
defender-for-cloud Regulatory Compliance Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/regulatory-compliance-dashboard.md
Title: 'Tutorial: Regulatory compliance checks - Microsoft Defender for Cloud' description: 'Tutorial: Learn how to Improve your regulatory compliance using Microsoft Defender for Cloud.' Previously updated : 05/09/2023 Last updated : 06/18/2023 # Tutorial: Improve your regulatory compliance
For example, you might want Defender for Cloud to email a specific user when a c
:::image type="content" source="media/release-notes/regulatory-compliance-triggers-workflow-automation.png" alt-text="Using changes to regulatory compliance assessments to trigger a workflow automation." lightbox="media/release-notes/regulatory-compliance-triggers-workflow-automation.png":::
-## FAQ - Regulatory compliance dashboard
-
-- [How do I know which benchmark or standard to use?](#how-do-i-know-which-benchmark-or-standard-to-use)
-- [What standards are supported in the compliance dashboard?](#what-standards-are-supported-in-the-compliance-dashboard)
-- [Why do some controls appear grayed out?](#why-do-some-controls-appear-grayed-out)
-- [How can I remove a built-in standard, like PCI-DSS, ISO 27001, or SOC2 TSP from the dashboard?](#how-can-i-remove-a-built-in-standard-like-pci-dss-iso-27001-or-soc2-tsp-from-the-dashboard)
-- [I made the suggested changes based on the recommendation, but it isn't being reflected in the dashboard?](#i-made-the-suggested-changes-based-on-the-recommendation-but-it-isnt-being-reflected-in-the-dashboard)
-- [What permissions do I need to access the compliance dashboard?](#what-permissions-do-i-need-to-access-the-compliance-dashboard)
-- [The regulatory compliance dashboard isn't loading for me](#the-regulatory-compliance-dashboard-isnt-loading-for-me)
-- [How can I view a report of passing and failing controls per standard in my dashboard?](#how-can-i-view-a-report-of-passing-and-failing-controls-per-standard-in-my-dashboard)
-- [How can I download a report with compliance data in a format other than PDF?](#how-can-i-download-a-report-with-compliance-data-in-a-format-other-than-pdf)
-- [How can I create exceptions for some of the policies in the regulatory compliance dashboard?](#how-can-i-create-exceptions-for-some-of-the-policies-in-the-regulatory-compliance-dashboard)
-- [What Microsoft Defender plans or licenses do I need to use the regulatory compliance dashboard?](#what-microsoft-defender-plans-or-licenses-do-i-need-to-use-the-regulatory-compliance-dashboard)
-
-### How do I know which benchmark or standard to use?
-[Microsoft cloud security benchmark](/security/benchmark/azure/introduction) (MCSB) is the canonical set of security recommendations and best practices defined by Microsoft, aligned with common compliance control frameworks including [CIS Control Framework](https://www.cisecurity.org/benchmark/azure/), [NIST SP 800-53](https://csrc.nist.gov/publications/detail/sp/800-53/rev-5/final) and PCI-DSS. MCSB is a comprehensive cloud agnostic set of security principles designed to recommend the most up-to-date technical guidelines for Azure along with other clouds such as AWS and GCP. We recommend MCSB to customers who want to maximize their security posture and align their compliance status with industry standards.
-
-The [CIS Benchmark](https://www.cisecurity.org/benchmark/azure/) is authored by an independent entity – Center for Internet Security (CIS) – and contains recommendations on a subset of core Azure services. We work with CIS to try to ensure that their recommendations are up to date with the latest enhancements in Azure, but they're sometimes delayed and can become outdated. Nonetheless, some customers like to use this objective, third-party assessment from CIS as their initial and primary security baseline.
-
-Since we've released the Microsoft cloud security benchmark, many customers have chosen to migrate to it as a replacement for CIS benchmarks.
-
-### What standards are supported in the compliance dashboard?
-By default, the regulatory compliance dashboard shows you the Microsoft cloud security benchmark. The Microsoft cloud security benchmark is Microsoft-authored guidance for security and compliance best practices based on common compliance frameworks. Learn more in the [Microsoft cloud security benchmark introduction](../security/benchmarks/introduction.md).
-
-To track your compliance with any other standard, you'll need to explicitly add them to your dashboard.
-
-You can add other standards such as Azure CIS 1.3.0, NIST SP 800-53, NIST SP 800-171, SWIFT CSP CSCF-v2020, UK Official and UK NHS, HIPAA, Canada Federal PBMM, ISO 27001, SOC2-TSP, and PCI-DSS 3.2.1.
-
-**AWS**: When users onboard, every AWS account has the AWS Foundational Security Best Practices assigned. This is the AWS-specific guideline for security and compliance best practices based on common compliance frameworks.
-
-Users that have one Defender bundle enabled can enable other standards.
-
-Available AWS regulatory standards:
-- CIS 1.2.0
-- PCI DSS 3.2.1
-- AWS Foundational Security Best Practices
-
-To add regulatory compliance standards on AWS accounts:
-
-1. Navigate to **Environment settings**.
-
-1. Select the relevant account.
-
-1. Select **Standards**.
-
-1. Select **Add** and choose **Standard**.
-
-1. Choose a standard from the drop-down menu.
-
-1. Select **Save**.
-
- :::image type="content" source="media/update-regulatory-compliance-packages/add-aws-regulatory-compliance.png" alt-text="Screenshot of adding regulatory compliance standard to AWS account." lightbox="media/update-regulatory-compliance-packages/add-aws-regulatory-compliance.png":::
-
-More standards will be added to the dashboard and included in the information on [Customize the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md).
-
-### Why do some controls appear grayed out?
-
-For each compliance standard in the dashboard, there's a list of the standard's controls. For the applicable controls, you can view the details of passing and failing assessments.
-
-Some controls are grayed out. These controls don't have any Defender for Cloud assessments associated with them. Some may be procedure or process-related, and so can't be verified by Defender for Cloud. Some don't have any automated policies or assessments implemented yet, but will in the future. And some controls may be the platform's responsibility as explained in [Shared responsibility in the cloud](../security/fundamentals/shared-responsibility.md).
-
-### How can I remove a built-in standard, like PCI-DSS, ISO 27001, or SOC2 TSP from the dashboard?
-
-To customize the regulatory compliance dashboard, and focus only on the standards that are applicable to you, you can remove any of the displayed regulatory standards that aren't relevant to your organization. To remove a standard, follow the instructions in [Remove a standard from your dashboard](update-regulatory-compliance-packages.md#remove-a-standard-from-your-dashboard).
-
-### I made the suggested changes based on the recommendation, but it isn't being reflected in the dashboard?
-
-After you take action to resolve recommendations, wait 12 hours to see the changes to your compliance data. Assessments are run approximately every 12 hours, so you'll see the effect on your compliance data only after the assessments run.
-
-### What permissions do I need to access the compliance dashboard?
-
-To access all compliance data in your tenant, you need to have at least a **Reader** level of permissions on the applicable scope of your tenant, or all relevant subscriptions.
-
-The minimum set of roles for accessing the dashboard and managing standards is **Resource Policy Contributor** and **Security Admin**.
-
-### The regulatory compliance dashboard isn't loading for me
-
-To use the regulatory compliance dashboard, Defender for Cloud must be enabled at the subscription level. If the dashboard isn't loading correctly, try the following steps:
-
-1. Clear your browser's cache.
-1. Try a different browser.
-1. Try opening the dashboard from a different network location.
-
-### How can I view a report of passing and failing controls per standard in my dashboard?
-
-On the main dashboard, you can see a report of passing and failing controls for (1) the 'top 4' lowest compliance standards in the dashboard. To see all the passing/failing controls status, select (2) **Show all _x_** (where x is the number of standards you're tracking). A context plane displays the compliance status for every one of your tracked standards.
---
-### How can I download a report with compliance data in a format other than PDF?
-
-When you select **Download report**, select the standard and the format (PDF or CSV). The resulting report will reflect the current set of subscriptions you've selected in the portal's filter.
-- The PDF report shows a summary status for the standard you selected
-- The CSV report provides detailed results per resource, as it relates to policies associated with each control
-
-Currently, there's no support for downloading a report for a custom policy; only for the supplied regulatory standards.
-
-### How can I create exceptions for some of the policies in the regulatory compliance dashboard?
-
-For policies that are built into Defender for Cloud and included in the secure score, you can create exemptions for one or more resources directly in the portal as explained in [Exempting resources and recommendations from your secure score](exempt-resource.md).
-
-For other policies, you can create an exemption directly in the policy itself, by following the instructions in [Azure Policy exemption structure](../governance/policy/concepts/exemption-structure.md).
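As a rough illustration, an exemption can also be created from the Azure CLI; the sketch below uses placeholder names and a placeholder policy assignment ID, so adjust them to your environment.

```bash
# Sketch: exempt a resource group from a policy assignment with the Azure CLI.
# The exemption name, resource group, and assignment ID are placeholders.
az policy exemption create \
  --name "waive-compliance-check" \
  --resource-group "MyResourceGroup" \
  --policy-assignment "/subscriptions/<subscription-id>/providers/Microsoft.Authorization/policyAssignments/myAssignmentName" \
  --exemption-category "Waiver" \
  --description "Control not applicable to this workload"
```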
-
-### What Microsoft Defender plans or licenses do I need to use the regulatory compliance dashboard?
-
-If you've got *any* of the Microsoft Defender plans (except for Defender for Servers Plan 1) enabled on *any* of your Azure resources, you can access Defender for Cloud's regulatory compliance dashboard and all of its data.
-
-> [!NOTE]
-> For Defender for Servers you'll get regulatory compliance only for plan 2.
- ## Next steps In this tutorial, you learned about using Defender for Cloud's regulatory compliance dashboard to:
To learn more, see these related pages:
- [Customize the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md) - Learn how to select which standards appear in your regulatory compliance dashboard. - [Managing security recommendations in Defender for Cloud](review-security-recommendations.md) - Learn how to use recommendations in Defender for Cloud to help protect your Azure resources.
+- Check out [common questions](faq-regulatory-compliance.yml) about regulatory compliance.
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
Learn more in:
- [Customize the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md) - [Tutorial: Improve your regulatory compliance](regulatory-compliance-dashboard.md)-- [FAQ - Regulatory compliance dashboard](regulatory-compliance-dashboard.md#faqregulatory-compliance-dashboard)
+- [FAQ - Regulatory compliance dashboard](faq-regulatory-compliance.yml)
### Four new recommendations related to guest configuration (in preview)
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: This page is updated frequently with the latest updates in Defender for Cloud. Previously updated : 06/15/2023 Last updated : 06/18/2023 # What's new in Microsoft Defender for Cloud?
This extended support increases coverage and visibility over your cloud estate w
- For new customers enabling agentless scanning in AWS - encrypted disks coverage is built in and supported by default. - For existing customers that already have an AWS connector with agentless scanning enabled, you need to reapply the CloudFormation stack to your onboarded AWS accounts to update and add the new permissions that are required to process encrypted disks. The updated CloudFormation template includes new assignments that allow Defender for Cloud to process encrypted disks.
-You can learn more about the [permissions used to scan AWS instances](concept-agentless-data-collection.md#which-permissions-are-used-by-agentless-scanning).
+You can learn more about the [permissions used to scan AWS instances](faq-permissions.yml).
**To re-apply your CloudFormation stack**:
defender-for-cloud Secure Score Security Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/secure-score-security-controls.md
Title: Secure score description: Description of Microsoft Defender for Cloud's secure score and its security controls Previously updated : 04/20/2023 Last updated : 06/19/2023 # Secure score
Microsoft Defender for Cloud has two main goals:
The central feature in Defender for Cloud that enables you to achieve those goals is the **secure score**.
+All Defender for Cloud customers automatically gain access to the secure score when they enable Defender for Cloud. Microsoft Cloud Security Benchmark (MCSB), formerly known as the Azure Security Benchmark, is automatically applied to your environments and generates all the built-in recommendations that are part of this default initiative.
+ Defender for Cloud continually assesses your cross-cloud resources for security issues. It then aggregates all the findings into a single score so that you can tell, at a glance, your current security situation: the higher the score, the lower the identified risk level. - In the Azure portal pages, the secure score is shown as a percentage value and the underlying values are also clearly presented:
Even though Defender for Cloud's default security initiative, the Azure Security
[!INCLUDE [security-center-controls-and-recommendations](../../includes/asc/security-control-recommendations.md)]
-## FAQ - Secure score
-
-### If I address only three out of four recommendations in a security control, will my secure score change?
-
-No. It won't change until you remediate all of the recommendations for a single resource. To get the maximum score for a control, you must remediate all recommendations for all resources.
-
-### If a recommendation isn't applicable to me, and I disable it in the policy, will my security control be fulfilled and my secure score updated?
-
-Yes. We recommend disabling recommendations when they're inapplicable in your environment. For instructions on how to disable a specific recommendation, see [Disable security recommendations](./tutorial-security-policy.md#disable-a-security-recommendation).
-
-### If a security control offers me zero points towards my secure score, should I ignore it?
-
-In some cases, you'll see a control max score greater than zero, but the impact is zero. When the incremental score for fixing resources is negligible, it's rounded to zero. Don't ignore these recommendations because they still bring security improvements. The only exception is the "Additional Best Practice" control. Remediating these recommendations won't increase your score, but it will enhance your overall security.
- ## Next steps This article described the secure score and the included security controls.
For related material, see the following articles:
- [Learn about the different elements of a recommendation](review-security-recommendations.md) - [Learn how to remediate recommendations](implement-security-recommendations.md) - [View the GitHub-based tools for working programmatically with secure score](https://github.com/Azure/Azure-Security-Center/tree/master/Secure%20Score)--
+- Check out [common questions](faq-cspm.yml) about secure score.
defender-for-cloud Sql Azure Vulnerability Assessment Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/sql-azure-vulnerability-assessment-manage.md
Title: Manage vulnerability findings in your Azure SQL databases using Microsoft
description: Learn how to remediate software vulnerabilities and disable findings with the express configuration on Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics. Previously updated : 05/31/2023 Last updated : 06/14/2023
Express configuration isn't supported in PowerShell cmdlets but you can use Powe
Invoke express configuration using [Azure CLI](express-configuration-azure-commands.md).
-### FAQ
-- [What happens to the old scan results and baselines after I switch to express configuration?](#what-happens-to-the-old-scan-results-and-baselines-after-i-switch-to-express-configuration)
-- [Can I set up recurring scans with express configuration?](#can-i-set-up-recurring-scans-with-express-configuration)
-- [Is there a way with express configuration to get the weekly email report that is provided in the classic configuration?](#is-there-a-way-with-express-configuration-to-get-the-weekly-email-report-that-is-provided-in-the-classic-configuration)
-- [Why can't I set database policies anymore?](#why-cant-i-set-database-policies-anymore)
-- [Can I revert back to the classic configuration?](#can-i-revert-back-to-the-classic-configuration)
-- [Will we see express configuration for other types of SQL?](#will-we-see-express-configuration-for-other-types-of-sql)
-- [Can I choose which experience will be the default?](#can-i-choose-which-experience-will-be-the-default)
-- [Does express configuration change scan behavior?](#does-express-configuration-change-scan-behavior)
-- [Does express configuration have any effect on pricing?](#does-express-configuration-have-any-effect-on-pricing)
-- [What does the 1-MB cap per rule mean?](#what-does-the-1-mb-cap-per-rule-mean)
-
-#### What happens to the old scan results and baselines after I switch to express configuration?
-
-Old results and baseline settings remain available on your storage account, but won't be updated or used by the system. You don't need to maintain these files for SQL vulnerability assessment to work after you switch to express configuration, but you can keep your old baseline definitions for future reference.
-
-When express configuration is enabled, you don't have direct access to the result and baseline data because it's stored on internal Microsoft storage.
-
-#### Can I set up recurring scans with express configuration?
-
-Express configuration automatically sets up recurring scans for all databases under your server. This is the default and isn't configurable at the server or database level.
-
-#### Is there a way with express configuration to get the weekly email report that is provided in the classic configuration?
-
-You can use workflow automation and Logic Apps email scheduling, following the Microsoft Defender for Cloud processes:
-- Time based triggers
-- Scan based triggers
-- Support for disabled rules
-
-#### Why can't I set database policies anymore?
-
-SQL vulnerability assessment reports all vulnerabilities and misconfigurations in your environment, so it helps to have all databases included. Defender for SQL is billed per server, not per database.
-
-#### Can I revert back to the classic configuration?
-
-Yes. You can revert back to the classic configuration using the existing REST APIs and PowerShell cmdlets. When you revert back to the classic configuration, you see a notification in the Azure portal to change to the express configuration.
-
-#### Will we see express configuration for other types of SQL?
-
-Stay tuned for updates!
-
-#### Can I choose which experience will be the default?
-
-No. Express configuration will be the default for every new supported Azure SQL database.
-
-#### Does express configuration change scan behavior?
-
-No, express configuration provides the same scanning behavior and performance.
-
-#### Does express configuration have any effect on pricing?
-
-Express configuration doesn't require a storage account, so you don't need to pay extra storage fees unless you choose to keep old scan and baseline data.
-
-#### What does the 1-MB cap per rule mean?
-
-An individual rule can't produce results larger than 1 MB. When that limit is reached, the results for the rule are stopped. You can't set a baseline for the rule, the rule isn't included in the overall recommendation health, and the results are shown as "Not applicable".
- ### Troubleshooting #### Revert back to the classic configuration
To handle Boolean types as true/false, set the baseline result with binary input
- Learn more about [Microsoft Defender for Azure SQL](defender-for-sql-introduction.md). - Learn more about [data discovery and classification](/azure/azure-sql/database/data-discovery-and-classification-overview). - Learn more about [storing vulnerability assessment scan results in a storage account accessible behind firewalls and VNets](/azure/azure-sql/database/sql-database-vulnerability-assessment-storage).
+- Check out [common questions](faq-defender-for-databases.yml) about Azure SQL databases.
defender-for-cloud Sql Azure Vulnerability Assessment Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/sql-azure-vulnerability-assessment-overview.md
Title: Scan your Azure SQL databases for vulnerabilities using Microsoft Defende
description: Learn how to configure SQL vulnerability assessment and interpret the assessment reports on Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics. Previously updated : 06/04/2023 Last updated : 06/15/2023
Configuration modes benefits and limitations comparison:
## Next steps - Enable [SQL vulnerability assessments](sql-azure-vulnerability-assessment-enable.md)-- Express configuration [FAQ](sql-azure-vulnerability-assessment-manage.md?tabs=express#faq) and [Troubleshooting](sql-azure-vulnerability-assessment-manage.md?tabs=express#troubleshooting).
+- Express configuration [common questions](faq-defender-for-databases.yml) and [Troubleshooting](sql-azure-vulnerability-assessment-manage.md?tabs=express#troubleshooting).
- Learn more about [Microsoft Defender for Azure SQL](defender-for-sql-introduction.md). - Learn more about [data discovery and classification](/azure/azure-sql/database/data-discovery-and-classification-overview). - Learn more about [storing vulnerability assessment scan results in a storage account accessible behind firewalls and VNets](/azure/azure-sql/database/sql-database-vulnerability-assessment-storage).
defender-for-cloud Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/troubleshooting-guide.md
Previously updated : 10/24/2022 Last updated : 06/18/2023 # Microsoft Defender for Cloud Troubleshooting Guide
In this page, you learned about troubleshooting steps for Defender for Cloud. To
- Learn how to [manage and respond to security alerts](managing-and-responding-alerts.md) in Microsoft Defender for Cloud - [Alert validation](alert-validation.md) in Microsoft Defender for Cloud-- Review [frequently asked questions](faq-general.yml) about using Microsoft Defender for Cloud
+- Review [common questions](faq-general.yml) about using Microsoft Defender for Cloud
defender-for-cloud Update Regulatory Compliance Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/update-regulatory-compliance-packages.md
Title: The regulatory compliance dashboard in Microsoft Defender for Cloud description: Learn how to add and remove regulatory standards from the regulatory compliance dashboard in Defender for Cloud Previously updated : 03/20/2023 Last updated : 06/18/2023
Microsoft Defender for Cloud continually compares the configuration of your resources with requirements in industry standards, regulations, and benchmarks. The **regulatory compliance dashboard** provides insights into your compliance posture based on how you're meeting specific compliance requirements. > [!TIP]
-> Learn more about Defender for Cloud's regulatory compliance dashboard in the [frequently asked questions](regulatory-compliance-dashboard.md#faqregulatory-compliance-dashboard).
+> Learn more about Defender for Cloud's regulatory compliance dashboard in the [common questions](faq-regulatory-compliance.yml).
## How are regulatory compliance standards represented in Defender for Cloud?
defender-for-cloud Workflow Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/workflow-automation.md
Previously updated : 05/16/2023 Last updated : 06/18/2023 # Automate responses to Microsoft Defender for Cloud triggers
This article describes the workflow automation feature of Microsoft Defender for
From this page you can create new automation rules, enable, disable, or delete existing ones.
+ > [!NOTE]
+ > A scope refers to the subscription where the workflow automation is deployed.
+ 1. To define a new workflow, select **Add workflow automation**. The options pane for your new automation opens. :::image type="content" source="./media/workflow-automation/add-workflow.png" alt-text="Add workflow automations pane." lightbox="media/workflow-automation/add-workflow.png":::
To implement these policies:
To view the raw event schemas of the security alerts or recommendations events passed to the logic app, visit the [Workflow automation data types schemas](https://aka.ms/ASCAutomationSchemas). This can be useful in cases where you aren't using Defender for Cloud's built-in Logic Apps connectors mentioned above, but instead are using the generic HTTP connector - you could use the event JSON schema to manually parse it as you see fit.
-## FAQ - Workflow automation
-
-### Does workflow automation support any business continuity or disaster recovery (BCDR) scenarios?
-
-When preparing your environment for BCDR scenarios, where the target resource is experiencing an outage or other disaster, it's the organization's responsibility to prevent data loss by establishing backups according to the guidelines from Azure Event Hubs, Log Analytics workspace, and Logic Apps.
-
-For every active automation, we recommend you create an identical (disabled) automation and store it in a different location. When there's an outage, you can enable these backup automations and maintain normal operations.
-
-Learn more about [Business continuity and disaster recovery for Azure Logic Apps](../logic-apps/business-continuity-disaster-recovery-guidance.md).
- ## Next steps In this article, you learned about creating logic apps, automating their execution in Defender for Cloud, and running them manually. For more information, see the following documentation:
In this article, you learned about creating logic apps, automating their executi
- [Security recommendations in Microsoft Defender for Cloud](review-security-recommendations.md) - [Security alerts in Microsoft Defender for Cloud](alerts-overview.md) - [Workflow automation data types schemas](https://aka.ms/ASCAutomationSchemas)
+- Check out [common questions](faq-general.yml) about Defender for Cloud.
defender-for-cloud Working With Log Analytics Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/working-with-log-analytics-agent.md
Here's a complete breakdown of the Security and App Locker event IDs for each se
> [!NOTE] >
-> - If you are using Group Policy Object (GPO), it is recommended that you enable audit policies Process Creation Event 4688 and the *CommandLine* field inside event 4688. For more information about Process Creation Event 4688, see Defender for Cloud's [FAQ](./faq-data-collection-agents.yml#what-happens-when-data-collection-is-enabled-). For more information about these audit policies, see [Audit Policy Recommendations](/windows-server/identity/ad-ds/plan/security-best-practices/audit-policy-recommendations).
+> - If you are using Group Policy Object (GPO), it is recommended that you enable audit policies Process Creation Event 4688 and the *CommandLine* field inside event 4688. For more information about Process Creation Event 4688, see Defender for Cloud's [common questions](./faq-data-collection-agents.yml#what-happens-when-data-collection-is-enabled-). For more information about these audit policies, see [Audit Policy Recommendations](/windows-server/identity/ad-ds/plan/security-best-practices/audit-policy-recommendations).
> - To enable data collection for [Adaptive application controls](adaptive-application-controls.md), Defender for Cloud configures a local AppLocker policy in Audit mode to allow all applications. This will cause AppLocker to generate events which are then collected and leveraged by Defender for Cloud. It is important to note that this policy will not be configured on any machines on which there is already a configured AppLocker policy. > - To collect Windows Filtering Platform [Event ID 5156](https://www.ultimatewindowssecurity.com/securitylog/encyclopedia/event.aspx?eventID=5156), you need to enable [Audit Filtering Platform Connection](/windows/security/threat-protection/auditing/audit-filtering-platform-connection) (Auditpol /set /subcategory:"Filtering Platform Connection" /Success:Enable) >
defender-for-iot Concept Supported Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-supported-protocols.md
OT network sensors can detect the following protocols when identifying assets an
|**GE** | Bentley Nevada (System 1 / BN3500)<br>ClassicSDI (MarkVle) <br> EGD<br> GSM (GE MarkVI and MarkVIe)<br> InterSite<br> SDI (MarkVle) <br> SRTP (GE)<br> GE_CMP | |**Generic Applications** | Active Directory<br> RDP<br> Teamviewer<br> VNC<br> | |**Honeywell** | ENAP<br> Experion DCS CDA<br> Experion DCS FDA<br> Honeywell EUCN <br> Honeywell Discovery |
-|**IEC** | Codesys V3<br>IEC 60870-5-7 (IEC 62351-3 + IEC 62351-5)<br> IEC 60870-5-101 (encapsulated serial)<br> IEC 60870-5-103 (encapsulated serial)<br> IEC 60870-5-104<br> IEC 60870-5-104 ASDU_APCI<br> IEC 60870 ICCP TASE.2<br> IEC 61850 GOOSE<br> IEC 61850 MMS<br> IEC 61850 SMV (SAMPLED-VALUES)<br> LonTalk (LonWorks) |
+|**IEC** | Codesys V3<br>IEC 60870-5-7 (IEC 62351-3 + IEC 62351-5)<br> IEC 60870-5-104<br> IEC 60870-5-104 ASDU_APCI<br> IEC 60870 ICCP TASE.2<br> IEC 61850 GOOSE<br> IEC 61850 MMS<br> IEC 61850 SMV (SAMPLED-VALUES)<br> LonTalk (LonWorks) |
|**IEEE** | LLC<br> STP<br> VLAN | |**IETF** | ARP<br> DHCP<br> DCE RPC<br> DNS<br> FTP (FTP_ADAT<br> FTP_DATA)<br> GSSAPI (RFC2743)<br> HTTP<br> ICMP<br> IPv4<br> IPv6<br> LLDP<br> MDNS<br> NBNS<br> NTLM (NTLMSSP Auth Protocol)<br> RPC<br> SMB / Browse / NBDGM<br> SMB / CIFS<br> SNMP<br> SPNEGO (RFC4178)<br> SSH<br> Syslog<br> TCP<br> Telnet<br> TFTP<br> TPKT<br> UDP | |**ISO** | CLNP (ISO 8473)<br> COTP (ISO 8073)<br> ISO Industrial Protocol<br> MQTT (IEC 20922) |
defender-for-iot Manage Users Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/manage-users-portal.md
description: Learn how to manage user permissions in the Azure portal for Micros
Last updated 09/04/2022
- - zerotrust-services
+ - zerotrust-extra
# Manage users on the Azure portal
For more information, see:
- [Azure user roles for OT and Enterprise IoT monitoring with Defender for IoT](roles-azure.md) - [Create and manage on-premises users for OT monitoring](how-to-create-and-manage-users.md)-- [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md)
+- [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md)
defender-for-iot Onboard Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/onboard-sensors.md
description: Learn how to onboard sensors to Defender for IoT in the Azure porta
Last updated 05/28/2023
- - zerotrust-services
+ - zerotrust-extra
# Onboard OT sensors to Defender for IoT
defender-for-iot Sites And Zones On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-deploy/sites-and-zones-on-premises.md
description: Learn how to create OT networking sites and zones on an on-premises
Last updated 01/08/2023
- - zerotrust-services
+ - zerotrust-extra
# Create OT sites and zones on an on-premises management console
defender-for-iot Roles Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/roles-azure.md
Last updated 09/19/2022
- - zerotrust-services
+ - zerotrust-extra
# Azure user roles and permissions for Defender for IoT
For more information, see:
- [On-premises user roles for OT monitoring with Defender for IoT](roles-on-premises.md) - [Create and manage users on an OT network sensor](manage-users-sensor.md) - [Create and manage users on an on-premises management console](manage-users-on-premises-management-console.md)--
+-
dev-box How To Configure Stop Schedule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-stop-schedule.md
Title: Set a dev box auto-stop schedule
-description: Learn how to configure an auto-stop schedule to automatically shutdown dev boxes in a pool at a specified time.
+description: Learn how to configure an auto-stop schedule to automatically shut down dev boxes in a pool at a specified time.
# Auto-stop your Dev Boxes on schedule
-To save on costs, you can enable an Auto-stop schedule on a dev box pool. Microsoft Dev Box Preview will attempt to shut down all dev boxes in that pool at the time specified in the schedule. You can configure one stop time in one timezone for each pool.
+To save on costs, you can enable an Auto-stop schedule on a dev box pool. Microsoft Dev Box Preview attempts to shut down all dev boxes in that pool at the time specified in the schedule. You can configure one stop time in one timezone for each pool.
## Permissions To manage a dev box schedule, you need the following permissions:
You can create an auto-stop schedule while creating a new dev box pool, or by mo
|Name|Value| |-|-| |**Enable Auto-stop**|Select **Yes** to enable an Auto-stop schedule after the pool has been created.|
- |**Stop time**| Select a time to shutdown all the dev boxes in the pool. All Dev Boxes in this pool will be shut down at this time, everyday.|
+ |**Stop time**| Select a time to shut down all the dev boxes in the pool. All Dev Boxes in this pool shut down at this time every day.|
|**Time zone**| Select the time zone that the stop time is in.| :::image type="content" source="./media/how-to-manage-stop-schedule/dev-box-save-pool.png" alt-text="Screenshot of the edit dev box pool page showing the Auto-stop options.":::
You can create an auto-stop schedule while creating a new dev box pool, or by mo
|**Network connection**|Select an existing network connection. The network connection determines the region of the dev boxes created within this pool.| |**Dev Box Creator Privileges**|Select Local Administrator or Standard User.| |**Enable Auto-stop**|Yes is the default. Select No to disable an Auto-stop schedule. You can configure an Auto-stop schedule after the pool has been created.|
- |**Stop time**| Select a time to shutdown all the dev boxes in the pool. All Dev Boxes in this pool will be shut down at this time, everyday.|
+ |**Stop time**| Select a time to shut down all the dev boxes in the pool. All Dev Boxes in this pool shut down at this time every day.|
|**Time zone**| Select the time zone that the stop time is in.| |**Licensing**| Select this check box to confirm that your organization has Azure Hybrid Benefit licenses that you want to apply to the dev boxes in this pool. |
You can create an auto-stop schedule while creating a new dev box pool, or by mo
1. Select **Add**. 1. Verify that the new dev box pool appears in the list. You may need to refresh the screen.
-
+ ### Delete an auto-stop schedule
dev-box How To Skip Delay Stop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-skip-delay-stop.md
+
+ Title: Skip or delay an auto-stop scheduled shutdown
+
+description: Learn how to delay the scheduled shutdown of your dev box, or skip the shutdown entirely.
++++ Last updated : 06/16/2023+++
+# Skip or delay an auto-stop scheduled shutdown
+
+A platform engineer or project admin can schedule a time for dev boxes in a pool to stop automatically, to ensure efficient resource use and provide cost management.
+
+You can delay the shutdown or skip it altogether. This flexibility allows you to manage your work and resources more effectively, ensuring that your projects remain uninterrupted when necessary.
++
+## Skip scheduled shutdown from the dev box
+
+If your dev box is in a pool with a stop schedule, you receive a notification about 30 minutes before the scheduled shutdown, giving you time to save your work or make necessary adjustments.
+
+### Delay the shutdown
+
+1. In the pop-up notification, select a time to delay the shutdown for.
+
+ :::image type="content" source="media/how-to-skip-delay-stop/dev-box-toast-time.png" alt-text="Screenshot showing the shutdown notification.":::
+
+1. Select **Delay**.
+
+ :::image type="content" source="media/how-to-skip-delay-stop/dev-box-toast-delay.png" alt-text="Screenshot showing the shutdown notification with Delay highlighted.":::
+
+### Skip the shutdown
+
+To skip the shutdown, select **Skip** in the notification. The dev box doesn't shut down until the next scheduled shutdown time.
+
+ :::image type="content" source="media/how-to-skip-delay-stop/dev-box-toast-skip.png" alt-text="Screenshot showing the shutdown notification with Skip highlighted.":::
+
+## Skip scheduled shutdown from the developer portal
+
+In the developer portal, you can see the scheduled shutdown time on the dev box tile, and delay or skip the shutdown from the more options menu.
+
+The shutdown time is shown on the dev box tile:
++
+### Delay the shutdown
+1. Locate your dev box.
+1. On the more options menu, select **Delay scheduled shutdown**.
+
+ :::image type="content" source="media/how-to-skip-delay-stop/dev-portal-menu.png" alt-text="Screenshot showing the dev box tile, more options menu, with Delay scheduled shutdown highlighted.":::
+
+1. You can delay the shutdown by up to 8 hours from the scheduled time. From **Delay shutdown until**, select the time you want to delay the shutdown until, and then select **Delay**.
+
+ :::image type="content" source="media/how-to-skip-delay-stop/delay-options.png" alt-text="Screenshot showing the options available for delaying the scheduled shutdown.":::
+
+### Skip the shutdown
+1. Locate your dev box.
+1. On the more options menu, select **Delay scheduled shutdown**.
+1. On the **Delay shutdown until** list, select the last available option, which specifies the time 8 hours after the scheduled shutdown time, and then select **Delay**. In this example, the last option is **6:30 pm tomorrow (skip)**.
+
+ :::image type="content" source="media/how-to-skip-delay-stop/skip-shutdown.png" alt-text="Screenshot showing the final shutdown option is to skip shutdown until the next scheduled time.":::
+
+## Next steps
+
+- [Manage a dev box using the developer portal](./how-to-create-dev-boxes-developer-portal.md)
+- [Auto-stop your Dev Boxes on schedule](how-to-configure-stop-schedule.md)
energy-data-services How To Upload Large Files Using File Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-upload-large-files-using-file-service.md
+
+ Title: How to upload large files using file service API in Microsoft Azure Data Manager for Energy Preview
+description: This article describes how to upload large files using the File service API in Microsoft Azure Data Manager for Energy Preview
++++ Last updated : 06/13/2023+++
+# How to upload files in Azure Data Manager for Energy Preview using File service
+In this article, you learn how to upload large files (~5 GB) using the File service API in Microsoft Azure Data Manager for Energy Preview. The upload process involves fetching a signed URL from the [File API](https://community.opengroup.org/osdu/platform/system/file/-/tree/master/) and then using the signed URL to store the file in Azure Blob Storage.
+
+## Generate a signed URL
+Run the following curl command in Azure Cloud Shell (Bash) to get a signed URL from the File service for a given data partition of your Azure Data Manager for Energy Preview resource.
+
+```bash
+ curl --location 'https://<URI>/api/file/v2/files/uploadURL' \
+ --header 'data-partition-id: <data-partition-id>' \
+ --header 'Authorization: Bearer <access_token>' \
+ --header 'Content-Type: text/plain'
+```
+
+### Sample request
+Consider an Azure Data Manager for Energy Preview resource named "medstest" with a data partition named "dp1".
+
+```bash
+ curl --location --request POST 'https://medstest.energy.azure.com/api/file/v2/files/uploadURL' \
+ --header 'data-partition-id: medstest-dp1' \
+ --header 'Authorization: Bearer eyxxxxxxx.........................' \
+ --header 'Content-Type: text/plain'
+```
+
+### Sample response
+
+```JSON
+{
+ "FileID": "2c5e7ac738a64eaeb7c0bc8bd47f90b6",
+ "Location": {
+ "SignedURL": "https://dummy.bloburl.com",
+ "FileSource": "/osdu-user/1686647303778-2023-06-13-09-08-23-778/2c5e7ac738a64eaeb7c0bc8bd47f90b6"
+ }
+}
+```
+
+The SignedURL key in the response object can then be used to upload files into Azure Blob Storage.
+
+## Upload files with size less than 5 GB
+To upload files smaller than 5 GB, you can directly call the [Put Blob API](https://azure.github.io/Storage/docs/application-and-user-data/basics/azure-blob-storage-upload-apis/#put-blob) to upload them into Azure Blob Storage.
+
+### Sample Curl Request
+```bash
+ curl --location --request PUT '<SIGNED_URL>' \
+ --header 'x-ms-blob-type: BlockBlob' \
+ --header 'Content-Type: <file_type>' \ # for instance application/zip or application/csv or application/json depending on file type
+ --data '@/<path_to_file>'
+```
+If the upload is successful, the response returns a `201 Created` status code.
+
+## Upload files with size greater or equal to 5 GB
+To upload files 5 GB or larger, use the [azcopy](https://github.com/Azure/azure-storage-azcopy) utility, because a single [Put Blob](https://azure.github.io/Storage/docs/application-and-user-data/basics/azure-blob-storage-upload-apis/#put-blob) call can't exceed 5 GB.
+
+### Steps
+1. Download `azcopy` using this [link](https://github.com/Azure/azure-storage-azcopy#download-azcopy)
+
+2. Run this command to upload your file
+
+```bash
+ azcopy copy "<path_to_file>" "<signed_url>"
+```
+
+3. Sample response
+
+```
+ INFO: Could not read destination length. If the destination is write-only, use --check-length=false on the command line.
+ 100.0 %, 1 Done, 0 Failed, 0 Pending, 0 Skipped, 1 Total
+
+ Job 624c59e8-9d5c-894a-582f-ef9d3fb3091d summary
+ Elapsed Time (Minutes): 0.1002
+ Number of File Transfers: 1
+ Number of Folder Property Transfers: 0
+ Number of Symlink Transfers: 0
+ Total Number of Transfers: 1
+ Number of File Transfers Completed: 1
+ Number of Folder Transfers Completed: 0
+ Number of File Transfers Failed: 0
+ Number of Folder Transfers Failed: 0
+ Number of File Transfers Skipped: 0
+ Number of Folder Transfers Skipped: 0
+ TotalBytesTransferred: 1367301
+ Final Job Status: Completed
+```
+
+## Next steps
+Begin your journey by ingesting data into your Azure Data Manager for Energy Preview resource.
+> [!div class="nextstepaction"]
+> [Tutorial on CSV parser ingestion](tutorial-csv-ingestion.md)
+> [!div class="nextstepaction"]
+> [Tutorial on manifest ingestion](tutorial-manifest-ingestion.md)
event-grid Mqtt Publish And Subscribe Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-publish-and-subscribe-cli.md
In this article, you use the Azure CLI to do the following tasks:
- You need an X.509 client certificate to generate the thumbprint and authenticate the client connection. ## Generate sample client certificate and thumbprint
-If you don't already have a certificate, you can create a sample certificate using the [step CLI](https://smallstep.com/docs/step-cli/installation/). Consider installing manually for Windows.
+If you don't already have a certificate, you can create a sample certificate using the [step CLI](https://smallstep.com/docs/step-cli/installation/). On Windows, consider installing it manually. After you install Step, open a command prompt in your user profile folder (select Win+R and enter %USERPROFILE%).
-Once you installed Step, in Windows PowerShell, run the command to create root and intermediate certificates.
+To create root and intermediate certificates, run the following command:
```powershell .\step ca init --deployment-type standalone --name MqttAppSamplesCA --dns localhost --address 127.0.0.1:443 --provisioner MqttAppSamplesCAProvisioner
Once you installed Step, in Windows PowerShell, run the command to create root a
Use the generated CA files to create a certificate for the client. ```powershell
-.\step certificate create client1-authnID client1-authnID.pem client1-authnID.key --ca .step/certs/intermediate_ca.crt --ca-key .step/secrets/intermediate_ca_key --no-password --insecure --not-after 2400h
+.step certificate create client1-authnID client1-authnID.pem client1-authnID.key --ca .step/certs/intermediate_ca.crt --ca-key .step/secrets/intermediate_ca_key --no-password --insecure --not-after 2400h
``` To view the thumbprint, run the Step command.
event-hubs Configure Event Hub Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/configure-event-hub-properties.md
+
+ Title: Configure properties for an Azure event hub
+description: Learn how to configure status, partition count, cleanup policy, and retention time for an event hub
++ Last updated : 06/19/2023++
+# Configure properties for an event hub
+This article shows you how to configure properties such as status, partition count, and retention time for an event hub.
+
+## Configure status
+You can update the status of an event hub to one of these values on the **Properties** page after the event hub is created.
+
+- Select **Active** (default) if you want to send events to and receive events from an event hub.
+- Select **Disabled** if you want to disable both sending and receiving events from an event hub.
+- Select **SendDisabled** if you want to disable sending events to an event hub.
+
+ :::image type="content" source="./media/configure-event-hub-properties/properties-page.png" alt-text="Screenshot showing the Properties page for an event hub.":::
+++
+## Configure partition count
+The **Properties** page shows the number of partitions in an event hub for all tiers. You can update the partition count only for event hubs in a premium or dedicated tier. For other tiers, you can specify the partition count only when you create the event hub. To learn about partitions in Event Hubs, see [Scalability](event-hubs-scalability.md#partitions).
+
+## Configure cleanup policy
+You see the cleanup policy for an event hub on the **Properties** page. You can't update it. By default, an event hub is created with the **delete** cleanup policy, where events are purged upon the expiration of the retention time. While creating an event hub, you can set the cleanup policy to **Compact**. For more information, see [Log compaction](log-compaction.md) and [Configure log compaction](use-log-compaction.md).
++
+## Configure retention time
+
+If the cleanup policy is set to **Delete**, the **retention time** is the maximum time that Event Hubs retains an event before discarding the event. The **Properties** page allows you to specify retention time in hours.
+
+If the cleanup policy is set to **Compact** at the time of creating an event hub, the **infinite retention time** is automatically enabled. You can set the **Tombstone retention time in hours** though. Client applications can mark existing events of an event hub to be deleted during a compaction job by sending a new event with an existing key and a `null` event payload. These markers are known as **Tombstones**. The **Tombstone retention time in hours** is the time to retain tombstone markers in a compacted event hub.
+
+## Azure CLI
+Use the [`az eventhubs eventhub update`](/cli/azure/eventhubs/eventhub#az-eventhubs-eventhub-update) command to configure partition count and retention settings for an event hub, as shown in the sketch after the following list.
+
+- Use the `--status` parameter to set the status of an existing event hub to `Active`, `Disabled`, `SendDisabled`, or `ReceiveDisabled`.
+- Use `--partition-count` parameter to specify the number of partitions. You can specify the partition count for an existing event hub only if it's in the premium or dedicated tier namespace.
+- Use the `--retention-time` parameter to specify the number of hours to retain events for an event hub, if the `cleanupPolicy` is `Delete`.
+- Use the `--tombstone-retention-time-in-hours` parameter to specify the number of hours to retain the tombstone markers, if the `cleanupPolicy` is `Compact`.
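+
+The following sketch combines these parameters into two example calls. The resource names (`myResourceGroup`, `mynamespace`, `myeventhub`) and the values are placeholder assumptions, and the flag names follow the list above; verify them against your installed CLI version with `az eventhubs eventhub update --help`.
+
+```bash
+# Sketch: set status and partition count (partition count updates require a
+# premium or dedicated tier namespace).
+az eventhubs eventhub update \
+    --resource-group myResourceGroup \
+    --namespace-name mynamespace \
+    --name myeventhub \
+    --status Active \
+    --partition-count 4
+
+# Sketch: set retention for an event hub whose cleanup policy is Delete.
+# The flag name here follows this article; confirm it with --help first.
+az eventhubs eventhub update \
+    --resource-group myResourceGroup \
+    --namespace-name mynamespace \
+    --name myeventhub \
+    --retention-time 24
+```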
++
+## Azure PowerShell
+Use the [`Set-AzEventHub`](/powershell/module/az.eventhub/set-azeventhub) cmdlet with the `-Status`, `-RetentionTimeInHour`, or `-TombstoneRetentionTimeInHour` parameters. Currently, the PowerShell command doesn't support updating the partition count for an event hub.
+
+## Azure Resource Manager template
+
+If you're using an Azure Resource Manager template, use the `partitionCount` and `retentionTimeinHours` as shown in the following example. `MYNAMESPACE` is the name of the Event Hubs namespace and `MYEVENTHUB` is the name of the event hub in this example.
+
+```json
+{
+ "type": "Microsoft.EventHub/namespaces/eventhubs",
+ "apiVersion": "2022-10-01-preview",
+ "name": "MYNAMESPACE/MYEVENTHUB ",
+ "properties": {
+ "partitionIds": [],
+ "partitionCount": 1,
+ "captureDescription": null,
+ "retentionDescription": {
+ "cleanupPolicy": "Delete",
+ "retentionTimeInHours": 1
+ }
+ }
+}
+```
+
+## Next steps
+See the following articles:
+
+- [Scalability](event-hubs-scalability.md#partitions)
+- [Log compaction](log-compaction.md) and [Configure log compaction](use-log-compaction.md)
event-hubs Event Hubs Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-features.md
architectural philosophy here's that historic data needs richer indexing and
more direct access than the real-time eventing interface that Event Hubs or Kafka provide. Event stream engines aren't well suited to play the role of data lakes or long-term archives for event sourcing.
-
++ > [!NOTE] > Event Hubs is a real-time event stream engine and is not designed to be used instead of a database and/or as a
firewall-manager Private Link Inspection Secure Virtual Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/private-link-inspection-secure-virtual-hub.md
Title: Secure traffic destined to private endpoints in Azure Virtual WAN description: Learn how to use network rules and application rules to secure traffic destined to private endpoints in Azure Virtual WAN -+ Previously updated : 04/02/2021- Last updated : 06/19/2023+ # Secure traffic destined to private endpoints in Azure Virtual WAN
Azure Firewall filters traffic using any of the following methods:
* [FQDN in application rules](../firewall/features.md#application-fqdn-filtering-rules) for HTTP, HTTPS, and MSSQL. * Source and destination IP addresses, port, and protocol using [network rules](../firewall/features.md#network-traffic-filtering-rules)
-Application rules are preferred over network rules to inspect traffic destined to private endpoints because Azure Firewall always SNATs traffic with application rules. SNAT is recommended when inspecting traffic destined to a private endpoint due to the limitation described here: [What is a private endpoint?][private-endpoint-overview]. If you're planning on using network rules instead, it is recommended to configure Azure Firewall to always perform SNAT: [Azure Firewall SNAT private IP address ranges][firewall-snat-private-ranges].
+Application rules are preferred over network rules to inspect traffic destined to private endpoints because Azure Firewall always SNATs traffic with application rules. SNAT is recommended when inspecting traffic destined to a private endpoint due to the limitation described here: [What is a private endpoint?][private-endpoint-overview]. If you're planning on using network rules instead, it's recommended to configure Azure Firewall to always perform SNAT: [Azure Firewall SNAT private IP address ranges][firewall-snat-private-ranges].
-A secured virtual hub is managed by Microsoft and it cannot be linked to a [Private DNS Zone](../dns/private-dns-privatednszone.md). This is required to resolve a [private link resource](../private-link/private-endpoint-overview.md#private-link-resource) FQDN to its corresponding private endpoint IP address.
+ Microsoft manages secured virtual hubs, which can't be linked to a [Private DNS Zone](../dns/private-dns-privatednszone.md). However, a private DNS zone is required to resolve a [private link resource](../private-link/private-endpoint-overview.md#private-link-resource) FQDN to its corresponding private endpoint IP address.
SQL FQDN filtering is supported in [proxy-mode](/azure/azure-sql/database/connectivity-architecture#connection-policy) only (port 1433). *Proxy* mode can result in more latency compared to *redirect*. If you want to continue using redirect mode, which is the default for clients connecting within Azure, you can filter access using FQDN in firewall network rules.
The following steps enable Azure Firewall to filter traffic using either network
2. Configure [custom DNS servers](../virtual-network/manage-virtual-network.md#change-dns-servers) for the virtual networks connected to the secured virtual hub: - **FQDN-based network rules** - configure [custom DNS settings](../firewall/dns-settings.md#configure-custom-dns-serversazure-portal) to point to the DNS forwarder virtual machine IP address and enable DNS proxy in the firewall policy associated with the Azure Firewall. Enabling DNS proxy is required if you want to do FQDN filtering in network rules (see the CLI sketch after these steps).
- - **IP address-based network rules** - the custom DNS settings described in the previous point are **optional**. You can simply configure the custom DNS servers to point to the private IP of the DNS forwarder virtual machine.
+ - **IP address-based network rules** - the custom DNS settings described in the previous point are **optional**. You can configure the custom DNS servers to point to the private IP of the DNS forwarder virtual machine.
3. Depending on the configuration chosen in step **2.**, configure on-premises DNS servers to forward DNS queries for the private endpoints **public DNS zones** to either the private IP address of the Azure Firewall, or of the DNS forwarder virtual machine.
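+
+As a minimal sketch of the FQDN-based option in step 2, you might enable DNS proxy and custom DNS servers on the firewall policy as shown below. The resource names and the forwarder IP `10.0.1.4` are placeholder assumptions, and flag availability can vary by CLI version, so check `az network firewall policy update --help` first.
+
+```bash
+# Sketch only: point the firewall policy at the DNS forwarder VM and enable
+# DNS proxy so FQDN filtering works in network rules.
+az network firewall policy update \
+    --resource-group myResourceGroup \
+    --name myFirewallPolicy \
+    --enable-dns-proxy true \
+    --dns-servers 10.0.1.4
+```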
The following steps enable Azure Firewall to filter traffic using either network
2. Configure an [application rule](../firewall/tutorial-firewall-deploy-portal.md#configure-an-application-rule) as required in the firewall policy associated with the Azure Firewall. Choose *Destination Type* FQDN and the private link resource public FQDN as *Destination*.
-Lastly, and regardless of the type of rules configured in the Azure Firewall, make sure [Network Policies][network-policies-overview] (at least for UDR support) are enabled in the subnet(s) where the private endpoints are deployed. This will ensure traffic destined to private endpoints will not bypass the Azure Firewall.
+Lastly, and regardless of the type of rules configured in the Azure Firewall, make sure [Network Policies][network-policies-overview] (at least for UDR support) are enabled in the subnet(s) where the private endpoints are deployed. This ensures traffic destined to private endpoints doesn't bypass the Azure Firewall.
> [!IMPORTANT] > By default, RFC 1918 prefixes are automatically included in the *Private Traffic Prefixes* of the Azure Firewall. For most private endpoints, this will be enough to make sure traffic from on-premises clients, or in different virtual networks connected to the same secured hub, will be inspected by the firewall. In case traffic destined to private endpoints is not being logged in the firewall, try adding the /32 prefix for each private endpoint to the list of *Private Traffic Prefixes*.
-If needed, you can edit the CIDR prefixes that will be inspected via Azure Firewall in a secured hub as follows:
+If needed, you can edit the CIDR prefixes that are inspected via Azure Firewall in a secured hub as follows:
-1. Navigate to *Secured virtual hubs* in the firewall policy associated with the Azure Firewall deployed in the secured virtual hub and select the secured virtual hub where traffic filtering destined to private endpoints will be configured.
+1. Navigate to *Secured virtual hubs* in the firewall policy associated with the Azure Firewall deployed in the secured virtual hub and select the secured virtual hub where traffic filtering destined to private endpoints is configured.
2. Navigate to **Security configuration**, select **Send via Azure Firewall** under **Private traffic**.
-3. Select **Private traffic prefixes** to edit the CIDR prefixes that will be inspected via Azure Firewall in secured virtual hub and add one /32 prefix for each private endpoint.
+3. Select **Private traffic prefixes** to edit the CIDR prefixes that are inspected via Azure Firewall in the secured virtual hub, and add one /32 prefix for each private endpoint.
:::image type="content" source="./media/private-link-inspection-secure-virtual-hub/firewall-manager-security-configuration.png" alt-text="Firewall Manager Security Configuration" border="true":::
-To inspect traffic from clients in the same virtual network as private endpoints, it is not required to specifically override the /32 routes from private endpoints. As long as **Network Policies** are enabled in the private endpoints subnet(s), a UDR with a wider address range will take precedence. For instance, configure this UDR with **Next hop type** set to **Virtual Appliance**, **Next hop address** set to the private IP of the Azure Firewall, and **Address prefix** destination set to the subnet dedicated to all private endpoint deployed in the virtual network. **Propagate gateway routes** must be set to **Yes**.
+To inspect traffic from clients in the same virtual network as private endpoints, it isn't required to specifically override the /32 routes from private endpoints. As long as **Network Policies** are enabled in the private endpoints subnet(s), a UDR with a wider address range takes precedence. For instance, configure this UDR with **Next hop type** set to **Virtual Appliance**, **Next hop address** set to the private IP of the Azure Firewall, and **Address prefix** destination set to the subnet dedicated to all private endpoint deployed in the virtual network. **Propagate gateway routes** must be set to **Yes**.
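+
+A minimal sketch of such a UDR with the Azure CLI follows. All names and addresses (`10.1.1.0/24` for the private endpoint subnet, `10.0.0.4` for the firewall private IP) are placeholder assumptions.
+
+```bash
+# Route table that keeps gateway route propagation enabled ("Propagate gateway routes" = Yes).
+az network route-table create \
+    --resource-group myResourceGroup \
+    --name pe-inspection-udr \
+    --disable-bgp-route-propagation false
+
+# Send traffic destined to the private endpoint subnet to the Azure Firewall private IP.
+az network route-table route create \
+    --resource-group myResourceGroup \
+    --route-table-name pe-inspection-udr \
+    --name to-private-endpoints \
+    --address-prefix 10.1.1.0/24 \
+    --next-hop-type VirtualAppliance \
+    --next-hop-ip-address 10.0.0.4
+
+# Associate the route table with the subnet where the client workloads live.
+az network vnet subnet update \
+    --resource-group myResourceGroup \
+    --vnet-name myVirtualNetwork \
+    --name client-subnet \
+    --route-table pe-inspection-udr
+```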
The following diagram illustrates the DNS and data traffic flows for the different clients to connect to a private endpoint deployed in Azure virtual WAN:
The main problems that you might have when you attempt to filter traffic destine
- Clients are unable to connect to private endpoints. -- Azure Firewall is bypassed. This symptom can be validated by the absence of network or application rules log entries in Azure Firewall.
+- Azure Firewall is bypassed. You can validate this symptom by checking for the absence of network or application rule log entries in Azure Firewall.
-In most cases, these problems are caused by one of the following issues:
+In most cases, one of the following issues causes these problems:
- Incorrect DNS name resolution
In most cases, these problems are caused by one of the following issues:
:::image type="content" source="./media/private-link-inspection-secure-virtual-hub/firewall-policy-private-traffic-configuration.png" alt-text="Private Traffic Secured by Azure Firewall" border="true":::
-2. Verify **Security configuration** in the firewall policy associated with the Azure Firewall deployed in the secured virtual hub. In case traffic destined to private endpoints is not being logged in the firewall, try adding the /32 prefix for each private endpoint to the list of **Private Traffic Prefixes**.
+2. Verify **Security configuration** in the firewall policy associated with the Azure Firewall deployed in the secured virtual hub. In case traffic destined to private endpoints isn't being logged in the firewall, try adding the /32 prefix for each private endpoint to the list of **Private Traffic Prefixes**.
:::image type="content" source="./media/private-link-inspection-secure-virtual-hub/firewall-manager-security-configuration.png" alt-text="Firewall Manager Security Configuration - Private Traffic Prefixes" border="true":::
In most cases, these problems are caused by one of the following issues:
``` 5. Inspect the routing tables of your on-premises routing devices. Make sure you're learning the address spaces of the virtual networks where the private endpoints are deployed.
- Azure virtual WAN doesn't advertise the prefixes configured under **Private traffic prefixes** in firewall policy **Security configuration** to on-premises. It's expected that the /32 entries won't show in the routing tables of your on-premises routing devices.
+ Azure virtual WAN doesn't advertise the prefixes configured under **Private traffic prefixes** in firewall policy **Security configuration** to on-premises. It's expected that the /32 entries don't show in the routing tables of your on-premises routing devices.
6. Inspect **AzureFirewallApplicationRule** and **AzureFirewallNetworkRule** Azure Firewall logs. Make sure traffic destined to the private endpoints is being logged. **AzureFirewallNetworkRule** log entries don't include FQDN information. Filter by IP address and port when inspecting network rules.
- When filtering traffic destined to [Azure Files](../storage/files/storage-files-introduction.md) private endpoints, **AzureFirewallNetworkRule** log entries will only be generated when a client first mounts or connects to the file share. Azure Firewall won't generate logs for [CRUD](https://en.wikipedia.org/wiki/Create,_read,_update_and_delete) operations for files in the file share. This is because CRUD operations are carried over the persistent TCP channel opened when the client first connects or mounts to the file share.
+ When filtering traffic destined to [Azure Files](../storage/files/storage-files-introduction.md) private endpoints, **AzureFirewallNetworkRule** log entries are only generated when a client first mounts or connects to the file share. Azure Firewall doesn't generate logs for [CRUD](https://en.wikipedia.org/wiki/Create,_read,_update_and_delete) operations for files in the file share. This is because CRUD operations are carried over the persistent TCP channel opened when the client first connects or mounts to the file share.
Application rule log query example:
frontdoor Front Door Routing Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-routing-limits.md
The composite route metric for each Front Door profile can't exceed 5000.
> [!TIP] > Most Front Door profiles don't approach the composite route limit. However, if you have a large Front Door profile, consider whether you could exceed the limit and plan accordingly.
-The number of origin groups, origins, and endpoints don't affect your composite routing limit. However, there are other limits that apply to these resources. For more information, see [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-front-door-standard-and-premium-tier-service-limits).
+The number of origin groups, origins, and endpoints doesn't affect your composite routing limit. However, there are other limits that apply to these resources. For more information, see [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-front-door-standard-and-premium-service-limits).
## Calculate your profile's composite route metric
frontdoor Front Door Rules Engine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-rules-engine.md
Rule sets can be configured using Azure Resource Manager templates. For an examp
## Limitations
-For information about quota limits, refer to [Front Door limits, quotas and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-front-door-standard-and-premium-tier-service-limits).
+For information about quota limits, refer to [Front Door limits, quotas and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-front-door-standard-and-premium-service-limits).
## Next steps
frontdoor Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/manager.md
A security policy is an association of one or more domains with a Web Applicatio
> [!TIP] > * If you see one of your domains is unhealthy, you can select the domain to take you to the domains page. From there you can take appropriate actions to troubleshoot the unhealthy domain.
-> * If you're running a large Azure Front Door profile, review [**Azure Front Door service limits**](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-front-door-standard-and-premium-tier-service-limits) and [**Azure Front Door routing limits**](front-door-routing-limits.md) to better manage your Azure Front Door.
+> * If you're running a large Azure Front Door profile, review [**Azure Front Door service limits**](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-front-door-standard-and-premium-service-limits) and [**Azure Front Door routing limits**](front-door-routing-limits.md) to better manage your Azure Front Door.
> ## Next steps
frontdoor Origin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/origin.md
For example, a request made for `www.contoso.com` has the host header `www.conto
Most app backends (Azure Web Apps, Blob storage, and Cloud Services) require the host header to match the domain of the backend. However, the frontend host that routes to your origin uses a different hostname such as `www.contoso.net`.
-If your origin requires the host header to match the origin hostname, make sure that the origin host header includes the hostname of the origin.
+If your origin requires the host header to match the origin hostname, make sure that the origin host header includes the hostname of the origin.
+
+> [!NOTE]
+> If you're using an App Service as an origin, make sure that the App Service also has the custom domain name configured. For more information, see [map an existing custom DNS name to Azure App Service](../app-service/app-service-web-tutorial-custom-domain.md#map-an-existing-custom-dns-name-to-azure-app-service).
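+
+For example, a minimal sketch with the Azure CLI follows; the app name, resource group, and domain are placeholder assumptions, and the domain must already be validated in DNS for the App Service.
+
+```bash
+# Sketch: add the custom domain used in the origin host header to the App Service.
+az webapp config hostname add \
+    --webapp-name myapp \
+    --resource-group myResourceGroup \
+    --hostname www.contoso.com
+```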
#### Configure the origin host header for the origin
frontdoor Troubleshoot Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/troubleshoot-issues.md
This behavior is separate from the web application firewall (WAF) functionality
- Verify that your requests are in compliance with the requirements set out in the necessary RFCs. - Take note of any HTML message body that's returned in response to your request. A message body often explains exactly *how* your request is noncompliant.
+## My origin is configured as an IP address
+
+### Symptom
+
+The origin is configured as an IP address. The origin is healthy but is rejecting requests from Azure Front Door.
+
+### Cause
+
+Azure Front Door uses the origin host name as the SNI header during the SSL handshake. Because the origin is configured as an IP address, the failure can be caused by one of the following reasons:
+
+* Certificate name check is enabled in the Front Door origin configuration. It's recommended to leave this setting enabled. Certificate name check requires the origin host name to match the certificate name or one of the entries in the subject alternative names extension.
+* If certificate name check is disabled, then the cause is likely due to the origin certificate logic rejecting any requests that don't have a valid host header in the request that matches the certificate.
+
+### Troubleshooting steps
+
+Change the origin from an IP address to an FQDN that matches the certificate installed on the origin (the certificate's subject name or one of its subject alternative names).
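+
+As a quick check before and after the change, you can inspect the certificate the origin presents for a given SNI value. The following sketch uses OpenSSL (version 1.1.1 or later for the `-ext` option); the IP address and FQDN are placeholders.
+
+```bash
+# Print the subject and subject alternative names of the certificate returned by
+# the origin when the SNI header is set to the origin FQDN.
+openssl s_client -connect <origin-ip-address>:443 -servername <origin-fqdn> </dev/null 2>/dev/null \
+  | openssl x509 -noout -subject -ext subjectAltName
+```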
+ ## Next steps * Learn how to [create a Front Door](quickstart-create-front-door.md).
governance Create Blueprint Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/create-blueprint-portal.md
In this tutorial, you learn to use Azure Blueprints to do some of the common tas
## Prerequisites
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free)
+- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free)
before you begin.
+- To create blueprints, your account needs the following permissions (a sketch of a custom role that grants them follows this list):
+ - Microsoft.Blueprint/blueprints/write - Create a blueprint definition
+ - Microsoft.Blueprint/blueprints/artifacts/write - Create artifacts on a blueprint definition
+ - Microsoft.Blueprint/blueprints/versions/write - Publish a blueprint
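+
+As a sketch of one way to grant exactly these permissions, you could define a custom role with the Azure CLI. The role name and subscription ID are placeholder assumptions, not a built-in role.
+
+```bash
+# Sketch only: custom role containing the blueprint permissions listed above.
+az role definition create --role-definition '{
+  "Name": "Blueprint Author (example)",
+  "Description": "Create, edit, and publish blueprint definitions.",
+  "Actions": [
+    "Microsoft.Blueprint/blueprints/write",
+    "Microsoft.Blueprint/blueprints/artifacts/write",
+    "Microsoft.Blueprint/blueprints/versions/write"
+  ],
+  "AssignableScopes": [ "/subscriptions/<subscription-id>" ]
+}'
+```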
## Create a blueprint
governance Pciv3_2_1_2018_Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/PCIv3_2_1_2018_audit.md
For more information about this compliance standard, see
_Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and [Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md).
-The following mappings are to the **PCI v3.2.1 2018 PCI DSS 3.2.1** controls. Use the
-navigation on the right to jump directly to a specific **compliance domain**. Many of the controls
+The following mappings are to the **PCI v3.2.1 2018 PCI DSS 3.2.1** controls. Many of the controls
are implemented with an [Azure Policy](../overview.md) initiative definition. To review the complete initiative definition, open **Policy** in the Azure portal and select the **Definitions** page. Then, find and select the **PCI v3.2.1:2018** Regulatory Compliance built-in
Additional articles about Azure Policy:
- See the [initiative definition structure](../concepts/initiative-definition-structure.md). - Review other examples at [Azure Policy samples](./index.md). - Review [Understanding policy effects](../concepts/effects.md).-- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
+- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
governance Gov Dod Impact Level 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-dod-impact-level-4.md
For more information about this compliance standard, see
_Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and [Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md).
-The following mappings are to the **DoD Impact Level 4** controls. Use the
-navigation on the right to jump directly to a specific **compliance domain**. Many of the controls
+The following mappings are to the **DoD Impact Level 4** controls. Many of the controls
are implemented with an [Azure Policy](../overview.md) initiative definition. To review the complete initiative definition, open **Policy** in the Azure portal and select the **Definitions** page. Then, find and select the **DoD Impact Level 4** Regulatory Compliance built-in
initiative definition.
|[OS and data disks should be encrypted with a customer-managed key](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F702dd420-7fcc-42c5-afe8-4026edd20fe0) |Use customer-managed keys to manage the encryption at rest of the contents of your managed disks. By default, the data is encrypted at rest with platform-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/disks-cmk](../../../virtual-machines/disk-encryption.md). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/OSAndDataDiskCMKRequired_Deny.json) | |[Saved-queries in Azure Monitor should be saved in customer storage account for logs encryption](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffa298e57-9444-42ba-bf04-86e8470e32c7) |Link storage account to Log Analytics workspace to protect saved-queries with storage account encryption. Customer-managed keys are commonly required to meet regulatory compliance and for more control over the access to your saved-queries in Azure Monitor. For more details on the above, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys?tabs=portal#customer-managed-key-for-saved-queries](/azure/azure-monitor/platform/customer-managed-keys?tabs=portal#customer-managed-key-for-saved-queries). |disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsWorkspaces_CMKBYOSQueryEnabled_Deny.json) | |[Service Bus Premium namespaces should use a customer-managed key for encryption](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F295fc8b1-dc9f-4f53-9c61-3f313ceab40a) |Azure Service Bus supports the option of encrypting data at rest with either Microsoft-managed keys (default) or customer-managed keys. Choosing to encrypt data using customer-managed keys enables you to assign, rotate, disable, and revoke access to the keys that Service Bus will use to encrypt data in your namespace. Note that Service Bus only supports encryption with customer-managed keys for premium namespaces. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Bus/ServiceBus_CustomerManagedKeyEnabled_Audit.json) |
-|[SQL managed instances should use customer-managed keys to encrypt data at rest](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac01ad65-10e5-46df-bdd9-6b0cad13e1d2) |Implementing Transparent Data Encryption (TDE) with your own key provides you with increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_EnsureServerTDEisEncryptedWithYourOwnKey_Deny.json) |
+|[SQL managed instances should use customer-managed keys to encrypt data at rest](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac01ad65-10e5-46df-bdd9-6b0cad13e1d2) |Implementing Transparent Data Encryption (TDE) with your own key provides you with increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_EnsureServerTDEisEncrypted_Deny.json) |
|[SQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a370ff3-6cab-4e85-8995-295fd854c5b8) |Implementing Transparent Data Encryption (TDE) with your own key provides increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. |Audit, Deny, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_EnsureServerTDEisEncryptedWithYourOwnKey_Deny.json) | |[Storage account encryption scopes should use customer-managed keys to encrypt data at rest](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb5ec538c-daa0-4006-8596-35468b9148e8) |Use customer-managed keys to manage the encryption at rest of your storage account encryption scopes. Customer-managed keys enable the data to be encrypted with an Azure key-vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about storage account encryption scopes at [https://aka.ms/encryption-scopes-overview](../../../storage/blobs/encryption-scope-overview.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Storage/Storage_EncryptionScopesShouldUseCMK_Audit.json) | |[Storage accounts should use customer-managed key for encryption](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6fac406b-40ca-413b-bf8e-0bf964659c25) |Secure your blob and file storage account with greater flexibility using customer-managed keys. When you specify a customer-managed key, that key is used to protect and control access to the key that encrypts your data. Using customer-managed keys provides additional capabilities to control rotation of the key encryption key or cryptographically erase data. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountCustomerManagedKeyEnabled_Audit.json) |
governance Gov Dod Impact Level 5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-dod-impact-level-5.md
For more information about this compliance standard, see
_Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and [Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md).
-The following mappings are to the **DoD Impact Level 5** controls. Use the
-navigation on the right to jump directly to a specific **compliance domain**. Many of the controls
+The following mappings are to the **DoD Impact Level 5** controls. Many of the controls
are implemented with an [Azure Policy](../overview.md) initiative definition. To review the complete initiative definition, open **Policy** in the Azure portal and select the **Definitions** page. Then, find and select the **DoD Impact Level 5** Regulatory Compliance built-in
initiative definition.
|[OS and data disks should be encrypted with a customer-managed key](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F702dd420-7fcc-42c5-afe8-4026edd20fe0) |Use customer-managed keys to manage the encryption at rest of the contents of your managed disks. By default, the data is encrypted at rest with platform-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/disks-cmk](../../../virtual-machines/disk-encryption.md). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/OSAndDataDiskCMKRequired_Deny.json) | |[Saved-queries in Azure Monitor should be saved in customer storage account for logs encryption](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffa298e57-9444-42ba-bf04-86e8470e32c7) |Link storage account to Log Analytics workspace to protect saved-queries with storage account encryption. Customer-managed keys are commonly required to meet regulatory compliance and for more control over the access to your saved-queries in Azure Monitor. For more details on the above, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys?tabs=portal#customer-managed-key-for-saved-queries](/azure/azure-monitor/platform/customer-managed-keys?tabs=portal#customer-managed-key-for-saved-queries). |disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsWorkspaces_CMKBYOSQueryEnabled_Deny.json) | |[Service Bus Premium namespaces should use a customer-managed key for encryption](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F295fc8b1-dc9f-4f53-9c61-3f313ceab40a) |Azure Service Bus supports the option of encrypting data at rest with either Microsoft-managed keys (default) or customer-managed keys. Choosing to encrypt data using customer-managed keys enables you to assign, rotate, disable, and revoke access to the keys that Service Bus will use to encrypt data in your namespace. Note that Service Bus only supports encryption with customer-managed keys for premium namespaces. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Bus/ServiceBus_CustomerManagedKeyEnabled_Audit.json) |
-|[SQL managed instances should use customer-managed keys to encrypt data at rest](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac01ad65-10e5-46df-bdd9-6b0cad13e1d2) |Implementing Transparent Data Encryption (TDE) with your own key provides you with increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_EnsureServerTDEisEncryptedWithYourOwnKey_Deny.json) |
+|[SQL managed instances should use customer-managed keys to encrypt data at rest](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac01ad65-10e5-46df-bdd9-6b0cad13e1d2) |Implementing Transparent Data Encryption (TDE) with your own key provides you with increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_EnsureServerTDEisEncrypted_Deny.json) |
|[SQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a370ff3-6cab-4e85-8995-295fd854c5b8) |Implementing Transparent Data Encryption (TDE) with your own key provides increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. |Audit, Deny, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_EnsureServerTDEisEncryptedWithYourOwnKey_Deny.json) | |[Storage account encryption scopes should use customer-managed keys to encrypt data at rest](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb5ec538c-daa0-4006-8596-35468b9148e8) |Use customer-managed keys to manage the encryption at rest of your storage account encryption scopes. Customer-managed keys enable the data to be encrypted with an Azure key-vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about storage account encryption scopes at [https://aka.ms/encryption-scopes-overview](../../../storage/blobs/encryption-scope-overview.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Storage/Storage_EncryptionScopesShouldUseCMK_Audit.json) | |[Storage accounts should use customer-managed key for encryption](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6fac406b-40ca-413b-bf8e-0bf964659c25) |Secure your blob and file storage account with greater flexibility using customer-managed keys. When you specify a customer-managed key, that key is used to protect and control access to the key that encrypts your data. Using customer-managed keys provides additional capabilities to control rotation of the key encryption key or cryptographically erase data. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountCustomerManagedKeyEnabled_Audit.json) |
governance Gov Nist Sp 800 171 R2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-nist-sp-800-171-r2.md
For more information about this compliance standard, see
_Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and [Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md).
-The following mappings are to the **NIST SP 800-171 R2** controls. Use the
-navigation on the right to jump directly to a specific **compliance domain**. Many of the controls
+The following mappings are to the **NIST SP 800-171 R2** controls. Many of the controls
are implemented with an [Azure Policy](../overview.md) initiative definition. To review the complete initiative definition, open **Policy** in the Azure portal and select the **Definitions** page. Then, find and select the **NIST SP 800-171 R2** Regulatory Compliance built-in
Additional articles about Azure Policy:
- See the [initiative definition structure](../concepts/initiative-definition-structure.md). - Review other examples at [Azure Policy samples](./index.md). - Review [Understanding policy effects](../concepts/effects.md).-- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
+- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
governance Gov Nist Sp 800 53 R4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-nist-sp-800-53-r4.md
For more information about this compliance standard, see
_Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and [Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md).
-The following mappings are to the **NIST SP 800-53 Rev. 4** controls. Use the
-navigation on the right to jump directly to a specific **compliance domain**. Many of the controls
+The following mappings are to the **NIST SP 800-53 Rev. 4** controls. Many of the controls
are implemented with an [Azure Policy](../overview.md) initiative definition. To review the complete initiative definition, open **Policy** in the Azure portal and select the **Definitions** page. Then, find and select the **NIST SP 800-53 Rev. 4** Regulatory Compliance built-in
initiative definition.
|[OS and data disks should be encrypted with a customer-managed key](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F702dd420-7fcc-42c5-afe8-4026edd20fe0) |Use customer-managed keys to manage the encryption at rest of the contents of your managed disks. By default, the data is encrypted at rest with platform-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/disks-cmk](../../../virtual-machines/disk-encryption.md). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/OSAndDataDiskCMKRequired_Deny.json) | |[Saved-queries in Azure Monitor should be saved in customer storage account for logs encryption](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffa298e57-9444-42ba-bf04-86e8470e32c7) |Link storage account to Log Analytics workspace to protect saved-queries with storage account encryption. Customer-managed keys are commonly required to meet regulatory compliance and for more control over the access to your saved-queries in Azure Monitor. For more details on the above, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys?tabs=portal](../../../azure-monitor/logs/customer-managed-keys.md). |disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsWorkspaces_CMKBYOSQueryEnabled_Deny.json) | |[Service Bus Premium namespaces should use a customer-managed key for encryption](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F295fc8b1-dc9f-4f53-9c61-3f313ceab40a) |Azure Service Bus supports the option of encrypting data at rest with either Microsoft-managed keys (default) or customer-managed keys. Choosing to encrypt data using customer-managed keys enables you to assign, rotate, disable, and revoke access to the keys that Service Bus will use to encrypt data in your namespace. Note that Service Bus only supports encryption with customer-managed keys for premium namespaces. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Bus/ServiceBus_CustomerManagedKeyEnabled_Audit.json) |
-|[SQL managed instances should use customer-managed keys to encrypt data at rest](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F048248b0-55cd-46da-b1ff-39efd52db260) |Implementing Transparent Data Encryption (TDE) with your own key provides you with increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_EnsureServerTDEisEncryptedWithYourOwnKey_Audit.json) |
+|[\[Deprecated\]: SQL managed instances should use customer-managed keys to encrypt data at rest](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F048248b0-55cd-46da-b1ff-39efd52db260) |Implementing Transparent Data Encryption (TDE) with your own key provides you with increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. |AuditIfNotExists, Disabled |[1.0.2-deprecated](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_EnsureServerTDEisEncrypted_Audit.json) |
|[SQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0d134df8-db83-46fb-ad72-fe0c9428c8dd) |Implementing Transparent Data Encryption (TDE) with your own key provides increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_EnsureServerTDEisEncryptedWithYourOwnKey_Audit.json) | |[Storage account encryption scopes should use customer-managed keys to encrypt data at rest](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb5ec538c-daa0-4006-8596-35468b9148e8) |Use customer-managed keys to manage the encryption at rest of your storage account encryption scopes. Customer-managed keys enable the data to be encrypted with an Azure key-vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about storage account encryption scopes at [https://aka.ms/encryption-scopes-overview](../../../storage/blobs/encryption-scope-overview.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Storage/Storage_EncryptionScopesShouldUseCMK_Audit.json) | |[Storage accounts should use customer-managed key for encryption](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6fac406b-40ca-413b-bf8e-0bf964659c25) |Secure your blob and file storage account with greater flexibility using customer-managed keys. When you specify a customer-managed key, that key is used to protect and control access to the key that encrypts your data. Using customer-managed keys provides additional capabilities to control rotation of the key encryption key or cryptographically erase data. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountCustomerManagedKeyEnabled_Audit.json) |
governance Nist Sp 800 171 R2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nist-sp-800-171-r2.md
For more information about this compliance standard, see
_Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and [Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md).
-The following mappings are to the **NIST SP 800-171 R2** controls. Use the
-navigation on the right to jump directly to a specific **compliance domain**. Many of the controls
+The following mappings are to the **NIST SP 800-171 R2** controls. Many of the controls
are implemented with an [Azure Policy](../overview.md) initiative definition. To review the complete initiative definition, open **Policy** in the Azure portal and select the **Definitions** page. Then, find and select the **NIST SP 800-171 R2** Regulatory Compliance built-in
Additional articles about Azure Policy:
- See the [initiative definition structure](../concepts/initiative-definition-structure.md). - Review other examples at [Azure Policy samples](./index.md). - Review [Understanding policy effects](../concepts/effects.md).-- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
+- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
governance Nist Sp 800 53 R4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nist-sp-800-53-r4.md
For more information about this compliance standard, see
_Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and [Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md).
-The following mappings are to the **NIST SP 800-53 Rev. 4** controls. Use the
-navigation on the right to jump directly to a specific **compliance domain**. Many of the controls
+The following mappings are to the **NIST SP 800-53 Rev. 4** controls. Many of the controls
are implemented with an [Azure Policy](../overview.md) initiative definition. To review the complete initiative definition, open **Policy** in the Azure portal and select the **Definitions** page. Then, find and select the **NIST SP 800-53 Rev. 4** Regulatory Compliance built-in
initiative definition.
|[PostgreSQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F18adea5e-f416-4d0f-8aa8-d24321e3e274) |Use customer-managed keys to manage the encryption at rest of your PostgreSQL servers. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableByok_Audit.json) | |[Saved-queries in Azure Monitor should be saved in customer storage account for logs encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffa298e57-9444-42ba-bf04-86e8470e32c7) |Link storage account to Log Analytics workspace to protect saved-queries with storage account encryption. Customer-managed keys are commonly required to meet regulatory compliance and for more control over the access to your saved-queries in Azure Monitor. For more details on the above, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys?tabs=portal](../../../azure-monitor/logs/customer-managed-keys.md). |disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsWorkspaces_CMKBYOSQueryEnabled_Deny.json) | |[Service Bus Premium namespaces should use a customer-managed key for encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F295fc8b1-dc9f-4f53-9c61-3f313ceab40a) |Azure Service Bus supports the option of encrypting data at rest with either Microsoft-managed keys (default) or customer-managed keys. Choosing to encrypt data using customer-managed keys enables you to assign, rotate, disable, and revoke access to the keys that Service Bus will use to encrypt data in your namespace. Note that Service Bus only supports encryption with customer-managed keys for premium namespaces. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Bus/ServiceBus_CustomerManagedKeyEnabled_Audit.json) |
-|[SQL managed instances should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F048248b0-55cd-46da-b1ff-39efd52db260) |Implementing Transparent Data Encryption (TDE) with your own key provides you with increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_EnsureServerTDEisEncryptedWithYourOwnKey_Audit.json) |
+|[\[Deprecated\]: SQL managed instances should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F048248b0-55cd-46da-b1ff-39efd52db260) |Implementing Transparent Data Encryption (TDE) with your own key provides you with increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. |AuditIfNotExists, Disabled |[1.0.2-deprecated](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_EnsureServerTDEisEncrypted_Audit.json) |
|[SQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0d134df8-db83-46fb-ad72-fe0c9428c8dd) |Implementing Transparent Data Encryption (TDE) with your own key provides increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_EnsureServerTDEisEncryptedWithYourOwnKey_Audit.json) | |[Storage account encryption scopes should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb5ec538c-daa0-4006-8596-35468b9148e8) |Use customer-managed keys to manage the encryption at rest of your storage account encryption scopes. Customer-managed keys enable the data to be encrypted with an Azure key-vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about storage account encryption scopes at [https://aka.ms/encryption-scopes-overview](../../../storage/blobs/encryption-scope-overview.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_EncryptionScopesShouldUseCMK_Audit.json) | |[Storage accounts should use customer-managed key for encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6fac406b-40ca-413b-bf8e-0bf964659c25) |Secure your blob and file storage account with greater flexibility using customer-managed keys. When you specify a customer-managed key, that key is used to protect and control access to the key that encrypts your data. Using customer-managed keys provides additional capabilities to control rotation of the key encryption key or cryptographically erase data. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountCustomerManagedKeyEnabled_Audit.json) |
governance Pci Dss 3 2 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/pci-dss-3-2-1.md
For more information about this compliance standard, see
_Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and [Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md).
-The following mappings are to the **PCI DSS 3.2.1** controls. Use the
-navigation on the right to jump directly to a specific **compliance domain**. Many of the controls
+The following mappings are to the **PCI DSS 3.2.1** controls. Many of the controls
are implemented with an [Azure Policy](../overview.md) initiative definition. To review the complete initiative definition, open **Policy** in the Azure portal and select the **Definitions** page. Then, find and select the **PCI v3.2.1:2018** Regulatory Compliance built-in
governance Swift Cscf V2021 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/swift-cscf-v2021.md
For more information about this compliance standard, see
_Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and [Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md).
-The following mappings are to the **[Preview]: SWIFT CSCF v2021** controls. Use the
-navigation on the right to jump directly to a specific **compliance domain**. Many of the controls
+The following mappings are to the **[Preview]: SWIFT CSCF v2021** controls. Many of the controls
are implemented with an [Azure Policy](../overview.md) initiative definition. To review the complete initiative definition, open **Policy** in the Azure portal and select the **Definitions** page. Then, find and select the **[Preview]: SWIFT CSCF v2021** Regulatory Compliance built-in
healthcare-apis Concepts Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/concepts-machine-learning.md
Previously updated : 06/15/2023 Last updated : 06/19/2023
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-In this article, we explore using the MedTech service and the Azure Machine Learning Service.
+In this article, learn about using the MedTech service and the Azure Machine Learning Service.
## The MedTech service and Azure Machine Learning Service reference architecture
healthcare-apis Concepts Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/concepts-power-bi.md
Previously updated : 06/15/2023 Last updated : 06/19/2023
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-In this article, we explore using the MedTech service and Microsoft Power Business Intelligence (Power BI).
+In this article, learn about using the MedTech service and Microsoft Power Business Intelligence (Power BI).
## The MedTech service and Power BI reference architecture
healthcare-apis Concepts Teams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/concepts-teams.md
Previously updated : 06/15/2023 Last updated : 06/19/2023
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-In this article, we explore using the MedTech service and Microsoft Teams for notifications.
+In this article, learn about using the MedTech service and Microsoft Teams for notifications.
## The MedTech service and Teams notifications reference architecture
healthcare-apis Device Messages Through Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/device-messages-through-iot-hub.md
Previously updated : 05/16/2023 Last updated : 06/19/2023
For enhanced workflows and ease of use, you can use the MedTech service to recei
> [!TIP] > To learn how the MedTech service transforms and persists device data into the FHIR service as FHIR Observations, see [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md).
-In this tutorial, you learn how to:
+In this tutorial, learn how to:
> [!div class="checklist"] > * Open an ARM template in the Azure portal.
healthcare-apis How To Configure Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-configure-metrics.md
Previously updated : 04/28/2023 Last updated : 06/19/2023
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-In this article, learn how to configure the MedTech service metrics in the Azure portal. You'll also learn how to pin the MedTech service metrics tile to an Azure portal dashboard for later viewing.
+In this article, learn how to configure the MedTech service metrics in the Azure portal. Also learn how to pin the MedTech service metrics tile to an Azure portal dashboard for later viewing.
The MedTech service metrics can be used to help determine the health and performance of your MedTech service and can be useful for troubleshooting and for spotting patterns or trends.
healthcare-apis How To Use Monitoring And Health Checks Tabs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-monitoring-and-health-checks-tabs.md
Previously updated : 04/28/2023 Last updated : 06/19/2023
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-In this article, you'll learn how to use the MedTech service monitoring and health check tabs in the Azure portal. The monitoring and health check tabs provide access to crucial MedTech service metrics and health checks. These metrics and health checks can be used in assessing the health and performance of your MedTech service and can be useful seeing patterns and/or trends or assisting with troubleshooting your MedTech service.
+In this article, learn how to use the MedTech service monitoring and health check tabs in the Azure portal. The monitoring and health check tabs provide access to crucial MedTech service metrics and health checks. These metrics and health checks can be used to assess the health and performance of your MedTech service and can be useful for spotting patterns or trends and for troubleshooting your MedTech service.
## Use the MedTech service monitoring tab
healthcare-apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview.md
Previously updated : 04/28/2023 Last updated : 06/19/2023
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-This article provides an introductory overview of the MedTech service. The MedTech service is a Platform as a Service (PaaS) within the Azure Health Data Services. The MedTech service enables you to ingest device data, transform it into a unified FHIR format, and store it in an enterprise-scale, secure, and compliant cloud environment. 
+This article provides an overview of the MedTech service. The MedTech service is a Platform as a Service (PaaS) within the Azure Health Data Services. The MedTech service enables you to ingest device data, transform it into a unified FHIR format, and store it in an enterprise-scale, secure, and compliant cloud environment. 
The MedTech service was built to help customers that were dealing with the challenge of gaining relevant insights from device data coming in from multiple and diverse sources. No matter the device or structure, the MedTech service normalizes that device data into a common format, allowing the end user to then easily capture trends, run analytics, and build Artificial Intelligence (AI) models. In the enterprise healthcare setting, the MedTech service is used in the context of remote patient monitoring, virtual health, and clinical trials.
iot-hub Migrate Tls Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/migrate-tls-certificate.md
You can migrate your application from the Baltimore CyberTrust Root to the DigiC
4. In your IoT Central application you can find the Root Certification settings under **Settings** > **Application** > **Baltimore Cybertrust Migration**. 1. Select **DigiCert Global G2 Root** to migrate to the new certificate root. 2. Click **Save** to initiate the migration.
- 3. If needed, you can migrate back to the Baltimore root by selecting **Baltimore CyberTrust Root** and saving the changes. This option is available until 15 May 2023 and will then be disabled as Microsoft will start initiating the migration.
+ 3. If needed, you can migrate back to the Baltimore root by selecting **Baltimore CyberTrust Root** and saving the changes. This option is available until 15 August 2023 and will then be disabled.
### How long will it take my devices to reconnect?
key-vault Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/overview.md
Last updated 02/28/2023
- - zerotrust-services
+ - zerotrust-extra
#Customer intent: As an IT Pro, Decision maker or developer I am trying to learn what Key Vault is and if it offers anything that could be used in my organization.
key-vault Rbac Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/rbac-guide.md
> Key Vault resource provider supports two resource types: **vaults** and **managed HSMs**. Access control described in this article only applies to **vaults**. To learn more about access control for managed HSM, see [Managed HSM access control](../managed-hsm/access-control.md). > [!NOTE]
-> Azure App Service certificate configuration through Azure Portal does not support Key Vault RBAC permission model. You can use Azure PowerShell, Azure CLI, ARM template deployments with **Key Vault Secrets User** and **Key Vault Reader** role assignemnts for 'Microsoft Azure App Service' global indentity.
+> Azure App Service certificate configuration through the Azure portal does not support the Key Vault RBAC permission model. You can use Azure PowerShell, the Azure CLI, or ARM template deployments with **Key Vault Secrets User** and **Key Vault Reader** role assignments for the 'Microsoft Azure App Service' global identity.
Azure role-based access control (Azure RBAC) is an authorization system built on [Azure Resource Manager](../../azure-resource-manager/management/overview.md) that provides fine-grained access management of Azure resources.
key-vault Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/overview.md
Last updated 02/28/2023
- - zerotrust-services
+ - zerotrust-extra
#Customer intent: As an IT Pro, Decision maker or developer I am trying to learn what Managed HSM is and if it offers anything that could be used in my organization.
The term "Managed HSM instance" is synonymous with "Managed HSM pool". To avoid
- [Managed HSM Status](https://azure.status.microsoft) - [Managed HSM Service Level Agreement](https://azure.microsoft.com/support/legal/sla/key-vault-managed-hsm/v1_0/) - [Managed HSM region availability](https://azure.microsoft.com/global-infrastructure/services/?products=key-vault)-- [What is Zero Trust?](/security/zero-trust/zero-trust-overview)
+- [What is Zero Trust?](/security/zero-trust/zero-trust-overview)
load-balancer Load Balancer Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-insights.md
The dashboard tabs currently available are:
* Connection Monitors * Metric Definitions
+> [!NOTE]
+> Displays on the Flow Distribution tab aren't supported for load balancer backend pools configured by IP addresses. These are virtual machine-level metrics and can instead be viewed from the virtual machine or Virtual Machine Scale Set resources associated with the attached IP addresses.
+ ### Overview tab The Overview tab contains a searchable grid with the overall Data Path Availability and Health Probe Status for each of the Frontend IPs attached to your Load Balancer. These metrics indicate whether the Frontend IP is responsive and the compute instances in your Backend Pool are individually responsive to inbound connections.
load-balancer Load Balancer Outbound Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-outbound-connections.md
By definition, every IP address has 65,535 ports. Each port can either be used f
Each port used in a load balancing or inbound NAT rule consumes a range of eight ports from the 64,000 available SNAT ports. This usage reduces the number of ports eligible for SNAT if the same frontend IP is used for outbound connectivity. If the ports consumed by load-balancing or inbound NAT rules fall in the same block of eight ports consumed by another rule, the rules don't require extra ports.
+> [!NOTE]
+> If you need to connect to any [supported Azure PaaS services](../private-link/availability.md) like Azure Storage, Azure SQL, or Azure Cosmos DB, you can use Azure Private Link to avoid SNAT entirely. Azure Private Link sends traffic from your virtual network to Azure services over the Azure backbone network instead of over the internet.
+>
+> Private Link is the recommended option over service endpoints for private access to Azure hosted services. For more information on the difference between Private Link and service endpoints, see [Compare Private Endpoints and Service Endpoints](../virtual-network/vnet-integration-for-azure-services.md#compare-private-endpoints-and-service-endpoints).
+ ### How does default SNAT work? When a VM creates an outbound flow, Azure translates the source IP address to an ephemeral IP address. This translation is done via SNAT.
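The number of SNAT ports pre-allocated per backend instance depends on the size of the backend pool. As a rough illustration, here's a minimal Python sketch; the tier values are an assumption mirroring the documented Azure Load Balancer defaults and aren't taken from this excerpt:

```python
def default_snat_ports_per_instance(pool_size: int) -> int:
    """Approximate the default SNAT port pre-allocation per backend instance.

    Assumption: tier values mirror the documented Azure Load Balancer defaults;
    use outbound rules if you need explicit control over allocation.
    """
    tiers = [(50, 1024), (100, 512), (200, 256), (400, 128), (800, 64), (1000, 32)]
    for max_pool_size, ports in tiers:
        if pool_size <= max_pool_size:
            return ports
    raise ValueError("Default SNAT allocation isn't defined for pools larger than 1,000 instances.")


print(default_snat_ports_per_instance(75))  # 512 ports pre-allocated per instance
```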
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md
The hosts in this section are used to install Visual Studio Code packages to est
| `marketplace.visualstudio.com`<br>`vscode.blob.core.windows.net`<br>`*.gallerycdn.vsassets.io` | Required to download and install VS Code extensions. These hosts enable the remote connection to compute instances using the Azure Machine Learning extension for VS Code. For more information, see [Connect to an Azure Machine Learning compute instance in Visual Studio Code](./how-to-set-up-vs-code-remote.md) | | `raw.githubusercontent.com/microsoft/vscode-tools-for-ai/master/azureml_remote_websocket_server/*` | Used to retrieve websocket server bits that are installed on the compute instance. The websocket server is used to transmit requests from Visual Studio Code client (desktop application) to Visual Studio Code server running on the compute instance. |
-## Scenario: Third party firewall
+## Scenario: Third party firewall or Azure Firewall without service tags
The guidance in this section is generic, as each firewall has its own terminology and specific configurations. If you have questions, check the documentation for the firewall you're using.
+> [!TIP]
+> If you're using __Azure Firewall__, and want to use the FQDNs listed in this section instead of using service tags, use the following guidance:
+> * FQDNs that use HTTP/S ports (80 and 443) should be configured as __application rules__.
+> * FQDNs that use other ports should be configured as __network rules__.
+>
+> For more information, see [Differences in application rules vs. network rules](/azure/firewall/fqdn-filtering-network-rules#differences-in-application-rules-vs-network-rules).
 If not configured correctly, the firewall can cause problems when using your workspace. There are various host names that are used by the Azure Machine Learning workspace. The following sections list the hosts that are required for Azure Machine Learning. ### Dependencies API
machine-learning How To Mlflow Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-mlflow-batch.md
Azure Machine Learning supports no-code deployment for batch inference in [manag
### How work is distributed on workers
-Work is distributed at the file level, for both structured and unstructured data. As a consequence, only [file datasets](v1/how-to-create-register-datasets.md#filedataset) or [URI folders](reference-yaml-data.md) are supported for this feature. Each worker processes batches of `Mini batch size` files at a time. Further parallelism can be achieved if `Max concurrency per instance` is increased.
+Work is distributed at the file level, for both structured and unstructured data. As a consequence, only [file datasets (v1 API)](v1/how-to-create-register-datasets.md#filedataset) or [URI folders](reference-yaml-data.md) are supported for this feature. Each worker processes batches of `Mini batch size` files at a time. Further parallelism can be achieved if `Max concurrency per instance` is increased.
> [!WARNING] > Nested folder structures are not explored during inference. If you are partitioning your data using folders, make sure to flatten the structure beforehand.
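To make the file-level distribution described above concrete, here's a small back-of-the-envelope sketch; all numbers are hypothetical deployment settings, not values from the article:

```python
import math

total_files = 1_000               # files in the input URI folder (hypothetical)
mini_batch_size = 10              # "Mini batch size" deployment setting (hypothetical)
instance_count = 4                # compute instances backing the deployment (hypothetical)
max_concurrency_per_instance = 2  # "Max concurrency per instance" setting (hypothetical)

# Each worker receives mini-batches of `mini_batch_size` files; increasing
# `max_concurrency_per_instance` adds parallel workers on each instance.
mini_batches = math.ceil(total_files / mini_batch_size)
parallel_workers = instance_count * max_concurrency_per_instance

print(f"{mini_batches} mini-batches processed by up to {parallel_workers} parallel workers")
```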
machine-learning How To Private Endpoint Integration Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-private-endpoint-integration-synapse.md
Last updated 11/16/2022
In this article, learn how to securely integrate with Azure Machine Learning from Azure Synapse. This integration enables you to use Azure Machine Learning from notebooks in your Azure Synapse workspace. Communication between the two workspaces is secured using an Azure Virtual Network.
-> [!TIP]
-> You can also perform integration in the opposite direction, using Azure Synapse spark pool from Azure Machine Learning. For more information, see [Link Azure Synapse and Azure Machine Learning](v1/how-to-link-synapse-ml-workspaces.md).
- ## Prerequisites * An Azure subscription.
machine-learning How To Troubleshoot Protobuf Descriptor Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-protobuf-descriptor-error.md
Last updated 11/04/2022
+monikerRange: 'azureml-api-1 || azureml-api-2'
# Troubleshoot `descriptors cannot not be created directly` error
Last updated 11/04/2022
When using Azure Machine Learning, you may receive the following error: ```
-TypeError: Descriptors cannot not be created directly. If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.ΓÇ¥ It is followed by the proposition to install the appropriate version of protobuf library.
+TypeError: Descriptors cannot not be created directly. If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0." It is followed by the proposition to install the appropriate version of protobuf library.
If you cannot immediately regenerate your protos, some other possible workarounds are: 1. Downgrade the protobuf package to 3.20.x or lower.
pip install azureml-sdk[automl,explain,notebooks]>=1.42.0
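If downgrading isn't an option, the protobuf error text also suggests forcing the pure-Python implementation. A minimal sketch of that workaround (slower parsing; it must run before any protobuf-dependent import):

```python
import os

# Force the pure-Python protobuf implementation instead of the C++/upb one.
# This avoids the descriptor error at the cost of slower parsing; set it
# before importing any package that loads protobuf-generated code.
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"

# ...then import azureml or other protobuf-dependent packages as usual.
```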
For more information on updating an Azure Machine Learning environment (for training or deployment), see the following articles: * [Manage environments in studio](how-to-manage-environments-in-studio.md#rebuild-an-environment)
-* [Create & use software environments (SDK v1)](how-to-use-environments.md#update-an-existing-environment)
-* [Create & manage environments (CLI v2)](how-to-manage-environments-v2.md#update)
+* [Create & use software environments](./v1/how-to-use-environments.md)
+* [Manage environments in studio](how-to-manage-environments-in-studio.md#rebuild-an-environment)
+* [Create & manage environments](how-to-manage-environments-v2.md#update)
To verify the version of your installed SDK, use the following command:
For more information on the breaking changes in protobuf 4.0.0, see [https://dev
For more information on updating an Azure Machine Learning environment (for training or deployment), see the following articles:
+* [Manage environments in studio](how-to-manage-environments-in-studio.md#rebuild-an-environment)
+* [Create & use software environments](./v1/how-to-use-environments.md)
* [Manage environments in studio](how-to-manage-environments-in-studio.md#rebuild-an-environment)
-* [Create & use software environments (SDK v1)](how-to-use-environments.md#update-an-existing-environment)
-* [Create & manage environments (CLI v2)](how-to-manage-environments-v2.md#update)
+* [Create & manage environments](how-to-manage-environments-v2.md#update)
machine-learning How To Understand Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-understand-automated-ml.md
The mAP, precision and recall values are logged at an epoch-level for image obje
While model evaluation metrics and charts are good for measuring the general quality of a model, inspecting which dataset features a model used to make its predictions is essential when practicing responsible AI. That's why automated ML provides a model explanations dashboard to measure and report the relative contributions of dataset features. See how to [view the explanations dashboard in the Azure Machine Learning studio](how-to-use-automated-ml-for-ml-models.md#model-explanations-preview).
-For a code first experience, see how to set up [model explanations for automated ML experiments with the Azure Machine Learning Python SDK](./v1/how-to-machine-learning-interpretability-automl.md).
+For a code first experience, see how to set up [model explanations for automated ML experiments with the Azure Machine Learning Python SDK (v1)](./v1/how-to-machine-learning-interpretability-automl.md).
> [!NOTE] > Interpretability, best model explanation, is not available for automated ML forecasting experiments that recommend the following algorithms as the best model or ensemble:
machine-learning How To Use Automated Ml For Ml Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automated-ml-for-ml-models.md
Otherwise, you'll see a list of your recent automated ML experiments, including
Select **Next.**
- 1. The **Confirm details** form is a summary of the information previously populated in the **Basic info** and **Settings and preview** forms. You also have the option to create a data profile for your dataset using a profiling enabled compute. Learn more about [data profiling](v1/how-to-connect-data-ui.md#profile).
+ 1. The **Confirm details** form is a summary of the information previously populated in the **Basic info** and **Settings and preview** forms. You also have the option to create a data profile for your dataset using a profiling enabled compute. Learn more about [data profiling (v1)](v1/how-to-connect-data-ui.md#profile).
Select **Next**. 1. Select your newly created dataset once it appears. You are also able to view a preview of the dataset and sample statistics.
Otherwise, you'll see a list of your recent automated ML experiments, including
Select **Create**. Creation of a new compute can take a few minutes. >[!NOTE]
- > Your compute name will indicate if the compute you select/create is *profiling enabled*. (See the section [data profiling](v1/how-to-connect-data-ui.md#profile) for more details).
+ > Your compute name will indicate if the compute you select/create is *profiling enabled*. (See the section [data profiling (v1)](v1/how-to-connect-data-ui.md#profile) for more details).
Select **Next**.
Otherwise, you'll see a list of your recent automated ML experiments, including
Additional configurations|Description | Primary metric| Main metric used for scoring your model. [Learn more about model metrics](how-to-configure-auto-train.md#primary-metric).
- Explain best model | Select to enable or disable, in order to show explanations for the recommended best model. <br> This functionality is not currently available for [certain forecasting algorithms](./v1/how-to-machine-learning-interpretability-automl.md#interpretability-during-training-for-the-best-model).
+ Explain best model | Select to enable or disable, in order to show explanations for the recommended best model. <br> This functionality is not currently available for [certain forecasting algorithms (SDK v1)](./v1/how-to-machine-learning-interpretability-automl.md#interpretability-during-training-for-the-best-model).
Blocked algorithm| Select algorithms you want to exclude from the training job. <br><br> Allowing algorithms is only available for [SDK experiments](how-to-configure-auto-train.md#supported-algorithms). <br> See the [supported algorithms for each task type](/python/api/azureml-automl-core/azureml.automl.core.shared.constants.supportedmodels). Exit criterion| When any of these criteria are met, the training job is stopped. <br> *Training job time (hours)*: How long to allow the training job to run. <br> *Metric score threshold*: Minimum metric score for all pipelines. This ensures that if you have a defined target metric you want to reach, you do not spend more time on the training job than necessary. Concurrency| *Max concurrent iterations*: Maximum number of pipelines (iterations) to test in the training job. The job will not run more than the specified number of iterations. Learn more about how automated ML performs [multiple child jobs on clusters](how-to-configure-auto-train.md#multiple-child-runs-on-clusters).
Otherwise, you'll see a list of your recent automated ML experiments, including
> Providing a test dataset to evaluate generated models is a preview feature. This capability is an [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview feature, and may change at any time. * Test data is considered a separate from training and validation, so as to not bias the results of the test job of the recommended model. [Learn more about bias during model validation](concept-automated-ml.md#training-validation-and-test-data).
- * You can either provide your own test dataset or opt to use a percentage of your training dataset. Test data must be in the form of an [Azure Machine Learning TabularDataset](./v1/how-to-create-register-datasets.md#tabulardataset).
+ * You can either provide your own test dataset or opt to use a percentage of your training dataset. Test data must be in the form of an [Azure Machine Learning TabularDataset (v1)](./v1/how-to-create-register-datasets.md#tabulardataset).
* The schema of the test dataset should match the training dataset. The target column is optional, but if no target column is indicated no test metrics are calculated. * The test dataset should not be the same as the training dataset or the validation dataset. * Forecasting jobs do not support train/test split.
After your experiment completes, you can test the model(s) that automated ML gen
To better understand your model, you can see which data features (raw or engineered) influenced the model's predictions with the model explanations dashboard.
-The model explanations dashboard provides an overall analysis of the trained model along with its predictions and explanations. It also lets you drill into an individual data point and its individual feature importance. [Learn more about the explanation dashboard visualizations](./v1/how-to-machine-learning-interpretability-aml.md#visualizations).
+The model explanations dashboard provides an overall analysis of the trained model along with its predictions and explanations. It also lets you drill into an individual data point and its individual feature importance. [Learn more about the explanation dashboard visualizations (v1)](./v1/how-to-machine-learning-interpretability-aml.md#visualizations).
To get explanations for a particular model,
The **Edit and submit** button opens the **Create a new Automated ML job** wizar
Once you have the best model at hand, it is time to deploy it as a web service to predict on new data. >[!TIP]
-> If you are looking to deploy a model that was generated via the `automl` package with the Python SDK, you must [register your model](./v1/how-to-deploy-and-where.md) to the workspace.
+> If you are looking to deploy a model that was generated via the `automl` package with the Python SDK, you must [register your model (v1)](./v1/how-to-deploy-and-where.md) to the workspace.
> > Once your model is registered, find it in the studio by selecting **Models** on the left pane. Once you open your model, you can select the **Deploy** button at the top of the screen, and then follow the instructions as described in **step 2** of the **Deploy your model** section.
Automated ML helps you with deploying the model without writing code:
Compute type| Select the type of endpoint you want to deploy: [*Azure Kubernetes Service (AKS)*](../aks/intro-kubernetes.md) or [*Azure Container Instance (ACI)*](../container-instances/container-instances-overview.md). Compute name| *Applies to AKS only:* Select the name of the AKS cluster you wish to deploy to. Enable authentication | Select to allow for token-based or key-based authentication.
- Use custom deployment assets| Enable this feature if you want to upload your own scoring script and environment file. Otherwise, automated ML provides these assets for you by default. [Learn more about scoring scripts](./v1/how-to-deploy-and-where.md).
+ Use custom deployment assets| Enable this feature if you want to upload your own scoring script and environment file. Otherwise, automated ML provides these assets for you by default. [Learn more about scoring scripts (v1)](./v1/how-to-deploy-and-where.md).
>[!Important] > File names must be under 32 characters and must begin and end with alphanumerics. May include dashes, underscores, dots, and alphanumerics between. Spaces are not allowed.
Now you have an operational web service to generate predictions! You can test th
## Next steps
-* [Learn how to consume a web service](v1/how-to-consume-web-service.md).
+* [Learn how to consume a web service (SDK v1)](v1/how-to-consume-web-service.md).
* [Understand automated machine learning results](how-to-understand-automated-ml.md). * [Learn more about automated machine learning](concept-automated-ml.md) and Azure Machine Learning.
machine-learning How To Use Automl Onnx Model Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automl-onnx-model-dotnet.md
ONNX is an open-source format for AI models. ONNX supports interoperability betw
- [.NET Core SDK 3.1 or greater](https://dotnet.microsoft.com/download) - Text Editor or IDE (such as [Visual Studio](https://visualstudio.microsoft.com/vs/) or [Visual Studio Code](https://code.visualstudio.com/Download))-- ONNX model. To learn how to train an AutoML ONNX model, see the following [bank marketing classification notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb).
+- ONNX model. To learn how to train an AutoML ONNX model, see the following [bank marketing classification notebook (SDK v1)](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb).
- [Netron](https://github.com/lutzroeder/netron) (optional) ## Create a C# console application
machine-learning How To Use Event Grid Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-event-grid-batch.md
Title: "Invoking batch endpoints from Event Grid events in storage"
+ Title: "Run batch endpoints from Event Grid events in storage"
description: Learn how to use batch endpoints to be automatically triggered when new files are generated in storage.
-# Invoking batch endpoints from Event Grid events in storage
+# Run batch endpoints from Event Grid events in storage
[!INCLUDE [ml v2](../../includes/machine-learning-dev-v2.md)]
-Event Grid is a fully managed service that enables you to easily manage events across many different Azure services and applications. It simplifies building event-driven and serverless applications. In this tutorial we are going to learn how to create a Logic App that can subscribe to the Event Grid event associated with new files created in a storage account and trigger a batch endpoint to process the given file.
+Event Grid is a fully managed service that enables you to easily manage events across many different Azure services and applications. It simplifies building event-driven and serverless applications. In this tutorial, we learn how to trigger a batch endpoint's job to process files as soon as they are created in a storage account. In this architecture, we use a Logic App to subscribe to those events and trigger the endpoint.
-The workflow will work in the following way:
+The workflow looks as follows:
-1. It will be triggered when a new blob is created in a specific storage account.
-2. Since the storage account can contain multiple data assets, event filtering will be applied to only react to events happening in a specific folder inside of it. Further filtering can be done is needed.
-3. It will get an authorization token to invoke batch endpoints using the credentials from a Service Principal.
-4. It will trigger the batch endpoint (default deployment) using the newly created file as input.
+
+1. A **file created** event is triggered when a new blob is created in a specific storage account.
+2. The event is sent to Event Grid, which delivers it to all the subscribers.
+3. A Logic App subscribes to those events. Since the storage account can contain multiple data assets, event filtering is applied so that the workflow reacts only to events happening in a specific folder inside of it. Further filtering can be done if needed (for instance, based on file extensions).
+4. The Logic App is triggered and, in turn, will:
+
+ a. Get an authorization token to invoke batch endpoints using the credentials of a service principal.
+
+ b. Trigger the batch endpoint (default deployment) using the newly created file as input.
+
+5. The batch endpoint will return the name of the job that was created to process the file.
> [!IMPORTANT]
-> When using Logic App connected with event grid to invoke batch deployment, a job for each file that triggers the event of *blog created* will be generated. However, keep in mind that batch deployments distribute the work at the file level. Since this execution is specifying only one file, then, there will not be any parallelization happening in the deployment. Instead, you will be taking advantage of the capability of batch deployments of executing multiple scoring jobs under the same compute cluster. If you need to run jobs on entire folders in an automatic fashion, we recommend you to switch to [Invoking batch endpoints from Azure Data Factory](how-to-use-batch-azure-data-factory.md).
+> When using a Logic App connected with Event Grid to invoke a batch endpoint, you generate one job for **each blob file** created in the storage account. Keep in mind that because batch endpoints distribute the work at the file level and each of these jobs specifies only one file, there is no parallelization within a single job. Instead, you take advantage of the batch endpoints' capability to execute multiple jobs under the same compute cluster. If you need to run jobs on entire folders in an automatic fashion, we recommend switching to [Invoking batch endpoints from Azure Data Factory](how-to-use-batch-azure-data-factory.md).
## Prerequisites
-* This example assumes that you have a model correctly deployed as a batch endpoint. Particularly, we are using the *heart condition classifier* created in the tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
-* This example assumes that your batch deployment runs in a compute cluster called `cpu-cluster`.
-* The Logic App we are creating will communicate with Azure Machine Learning batch endpoints using REST. To know more about how to use the REST API of batch endpoints read [Create jobs and input data for batch endpoints](how-to-access-data-batch-endpoints-jobs.md?tabs=rest).
+* This example assumes that you have a model correctly deployed as a batch endpoint. If needed, this architecture can also be extended to work with [Pipeline component deployments](concept-endpoints-batch.md?#pipeline-component-deployment-preview).
+* This example assumes that your batch deployment runs in a compute cluster called `batch-cluster`.
+* The Logic App we are creating will communicate with Azure Machine Learning batch endpoints using REST. To learn more about how to use the REST API of batch endpoints, see [Create jobs and input data for batch endpoints](how-to-access-data-batch-endpoints-jobs.md?tabs=rest).
## Authenticating against batch endpoints
We recommend using a service principal for authentication and interaction wit
## Enabling data access
-We will be using cloud URIs provided by Event Grid to indicate the input data to send to the deployment job. Batch deployments use the identity of the compute to mount the data. The identity of the job is used to read the data once mounted for external storage accounts. You will need to assign a user-assigned managed identity to the compute cluster in order to ensure it does have access to mount the underlying data. Follow these steps to ensure data access:
+We will be using cloud URIs provided by Event Grid to indicate the input data to send to the deployment job. Batch endpoints use the identity of the compute to mount the data, while the identity of the job is used **to read it** once mounted. Hence, we need to assign a user-assigned managed identity to the compute cluster to ensure it has access to mount the underlying data. Follow these steps to ensure data access:
1. Create a [managed identity resource](../active-directory/managed-identities-azure-resources/overview.md):
- # [Azure Machine Learning CLI](#tab/cli)
+ # [Azure CLI](#tab/cli)
```azurecli IDENTITY=$(az identity create -n azureml-cpu-cluster-idn --query id -o tsv) ```
- # [Azure Machine Learning SDK for Python](#tab/sdk)
+ # [Python](#tab/sdk)
```python # Use the Azure CLI to create the managed identity. Then copy the value of the variable IDENTITY into a Python variable
We will be using cloud URIs provided by Event Grid to indicate the input data to
> [!NOTE] > This example assumes you have created a compute cluster named `cpu-cluster` and that it is used for the default deployment in the endpoint.
- # [Azure Machine Learning CLI](#tab/cli)
+ # [Azure CLI](#tab/cli)
```azurecli az ml compute update --name cpu-cluster --identity-type user_assigned --user-assigned-identities $IDENTITY ```
- # [Azure Machine Learning SDK for Python](#tab/sdk)
+ # [Python](#tab/sdk)
```python from azure.ai.ml import MLClient from azure.ai.ml.entities import AmlCompute, ManagedIdentityConfiguration from azure.ai.ml.constants import ManagedServiceIdentityType
- compute_name = "cpu-cluster"
+ compute_name = "batch-cluster"
compute_cluster = ml_client.compute.get(name=compute_name) compute_cluster.identity.type = ManagedServiceIdentityType.USER_ASSIGNED
We will be using cloud URIs provided by Event Grid to indicate the input data to
| **Standard** | This logic app type is the default selection and runs in single-tenant Azure Logic Apps and uses the [Standard billing model](../logic-apps/logic-apps-pricing.md#standard-pricing). | | **Consumption** | This logic app type runs in global, multi-tenant Azure Logic Apps and uses the [Consumption billing model](../logic-apps/logic-apps-pricing.md#consumption-pricing). |
+ > [!IMPORTANT]
+ > For private-link enabled workspaces, you need to use the Standard plan for Logic Apps and allow private networking in its configuration.
+ 1. Now continue with the following selections: | Property | Required | Value | Description |
We will be using cloud URIs provided by Event Grid to indicate the input data to
## Configure the workflow parameters
-This Logic App will use parameters to store specific pieces of information that you will need to run the batch deployment.
+This Logic App uses parameters to store specific pieces of information that you will need to run the batch deployment.
1. On the workflow designer, under the tool bar, select the option __Parameters__ and configure them as follows:
This Logic App will use parameters to store specific pieces of information that
| Parameter | Description | Sample value | | | -|- |
- | `tenant_id` | Tenant ID where the endpoint is deployed | `00000000-0000-0000-00000000` |
- | `client_id` | The client ID of the service principal used to invoke the endpoint | `00000000-0000-0000-00000000` |
- | `client_secret` | The client secret of the service principal used to invoke the endpoint | `ABCDEFGhijkLMNOPQRstUVwz` |
- | `endpoint_uri` | The endpoint scoring URI | `https://<endpoint_name>.<region>.inference.ml.azure.com/jobs` |
+ | `tenant_id` | Tenant ID where the endpoint is deployed. | `00000000-0000-0000-00000000` |
+ | `client_id` | The client ID of the service principal used to invoke the endpoint. | `00000000-0000-0000-00000000` |
+ | `client_secret` | The client secret of the service principal used to invoke the endpoint. | `ABCDEFGhijkLMNOPQRstUVwz` |
+ | `endpoint_uri` | The endpoint scoring URI. | `https://<endpoint_name>.<region>.inference.ml.azure.com/jobs` |
> [!IMPORTANT] > `endpoint_uri` is the URI of the endpoint you are trying to execute. The endpoint must have a default deployment configured.
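For reference, the two calls the Logic App makes can also be sketched outside the designer using these same parameters. The following minimal Python sketch uses placeholder values; the exact job payload shape, including the input name `heart_dataset`, is an assumption to adapt from [Create jobs and input data for batch endpoints](how-to-access-data-batch-endpoints-jobs.md?tabs=rest):

```python
import requests

# Placeholder values: replace with the workflow parameters described above.
tenant_id = "<tenant_id>"
client_id = "<client_id>"
client_secret = "<client_secret>"
endpoint_uri = "https://<endpoint_name>.<region>.inference.ml.azure.com/jobs"
blob_uri = "https://<account>.blob.core.windows.net/<container>/<folder>/<file>.csv"

# 1. Get an authorization token for Azure Machine Learning using the service principal.
token = requests.post(
    f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token",
    data={
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": "https://ml.azure.com/.default",
    },
).json()["access_token"]

# 2. Invoke the endpoint's default deployment with the newly created blob as input.
#    The payload shape and input name are assumptions; check the REST reference above.
response = requests.post(
    endpoint_uri,
    headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
    json={
        "properties": {
            "InputData": {
                "heart_dataset": {"JobInputType": "UriFile", "Uri": blob_uri}
            }
        }
    },
)
print(response.json())  # includes the name of the job created to process the file
```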
This Logic App will use parameters to store specific pieces of information that
## Add the trigger
-We want to trigger the Logic App each time a new file is created in a given folder (data asset) of a Storage Account. The Logic App will also use the information of the event to invoke the batch endpoint and passing the specific file to be processed.
+We want to trigger the Logic App each time a new file is created in a given folder (data asset) of a Storage Account. The Logic App uses the information of the event to invoke the batch endpoint and pass the specific file to be processed.
1. On the workflow designer, under the search box, select **Built-in**.
We want to trigger the Logic App each time a new file is created in a given fold
}', '<JOB_INPUT_URI>', triggerBody()?[0]['data']['url']) ```
+ > [!TIP]
+ > The previous payload corresponds to a **Model deployment**. If you are working with a **Pipeline component deployment**, adapt the format according to the expectations of the pipeline's inputs. Learn more about how to structure the input in REST calls at [Create jobs and input data for batch endpoints (REST)](how-to-access-data-batch-endpoints-jobs.md?tabs=rest).
+
The action will look as follows: :::image type="content" source="./media/how-to-use-event-grid-batch/invoke.png" alt-text="Screenshot of the invoke activity of the Logic App."::: > [!NOTE]
- > Notice that this last action will trigger the batch deployment job, but it will not wait for its completion. AzureLogic Apps is not designed for long-running applications. If you need to wait for the job to complete, we recommend you to switch to [Invoking batch endpoints from Azure Data Factory](how-to-use-batch-azure-data-factory.md).
+ > Notice that this last action will trigger the batch job, but it will not wait for its completion. Azure Logic Apps is not designed for long-running applications. If you need to wait for the job to complete, we recommend switching to [Run batch endpoints from Azure Data Factory](how-to-use-batch-azure-data-factory.md).
1. Click on __Save__.
We want to trigger the Logic App each time a new file is created in a given fold
## Next steps
-* [Invoking batch endpoints from Azure Data Factory](how-to-use-batch-azure-data-factory.md)
+* [Run batch endpoints from Azure Data Factory](how-to-use-batch-azure-data-factory.md)
machine-learning How To Use Mlflow Azure Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-azure-synapse.md
Last updated 07/06/2022 -+ # Track Azure Synapse Analytics ML experiments with MLflow and Azure Machine Learning
mlflow.spark.log_model(model,
registered_model_name = "model_name") ```
-* **If a registered model with the name doesnΓÇÖt exist**, the method registers a new model, creates version 1, and returns a ModelVersion MLflow object.
+* **If a registered model with the name doesn't exist**, the method registers a new model, creates version 1, and returns a ModelVersion MLflow object.
* **If a registered model with the name already exists**, the method creates a new model version and returns the version object.
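To see which of the two outcomes occurred, you can list the registered versions with the MLflow client. A small sketch, assuming the tracking URI already points to the Azure Machine Learning workspace and `model_name` is the registered name:

```python
from mlflow.tracking import MlflowClient

client = MlflowClient()

# List every version registered under the name; a first-time registration
# shows only version 1, and subsequent log_model calls add new versions.
for mv in client.search_model_versions("name='model_name'"):
    print(mv.name, mv.version, mv.current_stage)
```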
machine-learning Monitor Azure Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/monitor-azure-machine-learning.md
When you have critical applications and business processes relying on Azure reso
> * [Start, monitor, and cancel training runs](how-to-track-monitor-analyze-runs.md) > * [Log metrics for training runs](how-to-log-view-metrics.md) > * [Track experiments with MLflow](how-to-use-mlflow.md)
-> * [Visualize runs with TensorBoard](v1/how-to-monitor-tensorboard.md)
> > If you want to monitor information generated by models deployed to online endpoints, see [Monitor online endpoints](how-to-monitor-online-endpoints.md).
machine-learning Reference Automl Images Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-images-schema.md
In instance segmentation, output consists of multiple boxes with their scaled to
> These settings are currently in public preview. They are provided without a service-level agreement. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). > [!WARNING]
-> **Explainability** is supported only for **multi-class classification** and **multi-label classification**. While generating explanations on online endpoint, if you encounter timeout issues, use [batch scoring notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass-batch-scoring) to generate explanations.
+> **Explainability** is supported only for **multi-class classification** and **multi-label classification**. While generating explanations on online endpoint, if you encounter timeout issues, use [batch scoring notebook (SDK v1)](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass-batch-scoring) to generate explanations.
In this section, we document the input data format required to make predictions and generate explanations for the predicted class/classes using a deployed model. There's no separate deployment needed for explainability. The same endpoint for online scoring can be utilized to generate explanations. We just need to pass some extra explainability related parameters in input schema and get either visualizations of explanations and/or attribution score matrices (pixel level explanations).
If `model_explainability`, `visualizations`, `attributions` are set to `True` in
> [!WARNING]
-> While generating explanations on online endpoint, make sure to select only few classes based on confidence score in order to avoid timeout issues on the endpoint or use the endpoint with GPU instance type. To generate explanations for large number of classes in multi-label classification, refer to [batch scoring notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass-batch-scoring).
+> While generating explanations on online endpoint, make sure to select only few classes based on confidence score in order to avoid timeout issues on the endpoint or use the endpoint with GPU instance type. To generate explanations for large number of classes in multi-label classification, refer to [batch scoring notebook (SDK v1)](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass-batch-scoring).
```json [
machine-learning Reference Checkpoint Performance For Large Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-checkpoint-performance-for-large-models.md
With Nebula you can:
* An Azure Machine Learning compute target. See [Manage training & deploy computes](./how-to-create-attach-compute-studio.md) to learn more about compute target creation * A training script that uses **PyTorch**. * ACPT-curated (Azure Container for PyTorch) environment. See [Curated environments](resource-curated-environments.md#azure-container-for-pytorch-acpt) to obtain the ACPT image. Learn how to [use the curated environment](./how-to-use-environments.md)
-* An Azure Machine Learning script run configuration file. If you don't have one, you can follow [this resource](./v1/how-to-set-up-training-targets.md)
## How to Use Nebula
machine-learning Reference Machine Learning Cloud Parity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-machine-learning-cloud-parity.md
The information in the rest of this document provides information on what featur
| View, edit, or delete dataset drift monitors from the UI | Public Preview | YES | YES | | **Machine learning lifecycle** | | | | | [Model profiling (SDK/CLI v1)](v1/how-to-deploy-profile-model.md) | GA | YES | PARTIAL |
-| [The Azure Machine Learning CLI 1.0](v1/reference-azure-machine-learning-cli.md) | GA | YES | YES |
+| [The Azure Machine Learning CLI v1](v1/reference-azure-machine-learning-cli.md) | GA | YES | YES |
| [FPGA-based Hardware Accelerated Models (SDK/CLI v1)](./v1/how-to-deploy-fpga-web-service.md) | GA | NO | NO | | [Visual Studio Code integration](how-to-setup-vs-code.md) | Public Preview | NO | NO | | [Event Grid integration](how-to-use-event-grid.md) | Public Preview | NO | NO |
The information in the rest of this document provides information on what featur
| [Azure Stack Edge with FPGA (SDK/CLI v1)](./v1/how-to-deploy-fpga-web-service.md#deploy-to-a-local-edge-server) | Public Preview | NO | NO | | **Other** | | | | | [Open Datasets](../open-datasets/samples.md) | Public Preview | YES | YES |
-| [Custom Cognitive Search](./v1/how-to-deploy-model-cognitive-search.md) | Public Preview | YES | YES |
+| [Custom Cognitive Search (SDK v1)](./v1/how-to-deploy-model-cognitive-search.md) | Public Preview | YES | YES |
### Azure Government scenarios
machine-learning Resource Curated Environments