Updates from: 05/15/2023 01:06:36
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Active Directory Optional Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-optional-claims.md
The set of optional claims available by default for applications to use are list
| `acct` | User's account status in tenant | JWT, SAML | | If the user is a member of the tenant, the value is `0`. If they're a guest, the value is `1`. |
| `auth_time` | Time when the user last authenticated. See OpenID Connect spec. | JWT | | |
| `ctry` | User's country/region | JWT | | Azure AD returns the `ctry` optional claim if it's present and the value of the field is a standard two-letter country/region code, such as FR, JP, SZ, and so on. |
-| `email` | The reported email address for this user | JWT, SAML | MSA, Azure AD | This value is included by default if the user is a guest in the tenant. For managed users (the users inside the tenant), it must be requested through this optional claim or, on v2.0 only, with the OpenID scope. This value isn't guaranteed to be correct, and is mutable over time - never use it for authorization or to save data for a user. For more information, see [Validate the user has permission to access this data](access-tokens.md). If you require an addressable email address in your app, request this data from the user directly, using this claim as a suggestion or prefill in your UX. |
+| `email` | The reported email address for this user | JWT, SAML | MSA, Azure AD | This value is included by default if the user is a guest in the tenant. For managed users (the users inside the tenant), it must be requested through this optional claim or, on v2.0 only, with the OpenID scope. This value isn't guaranteed to be correct, and is mutable over time - never use it for authorization or to save data for a user. For more information, see [Validate the user has permission to access this data](access-tokens.md). If you are using the email claim for authorization, we recommend [performing a migration to move to a more secure claim](./migrate-off-email-claim-authorization.md). If you require an addressable email address in your app, request this data from the user directly, using this claim as a suggestion or prefill in your UX. |
| `fwd` | IP address. | JWT | | Adds the original IPv4 address of the requesting client (when inside a VNET). |
| `groups` | Optional formatting for group claims | JWT, SAML | | For details, see [Group claims](#configuring-groups-optional-claims). For more information about group claims, see [How to configure group claims](../hybrid/how-to-connect-fed-group-claims.md). Used with the GroupMembershipClaims setting in the [application manifest](reference-app-manifest.md), which must be set as well. |
| `idtyp` | Token type | JWT access tokens | Special: only in app-only access tokens | Value is `app` when the token is an app-only token. This claim is the most accurate way for an API to determine if a token is an app token or an app+user token. |
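During development, it can help to confirm which of these optional claims actually arrive in an issued token. A minimal sketch for inspecting a token's payload (for debugging only — this does not validate the token):

```python
import base64
import json

def decode_jwt_claims(token: str) -> dict:
    """Decode the payload segment of a JWT for inspection.

    This does NOT verify the signature, issuer, or audience --
    always validate the token before trusting any claim in it.
    """
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))
```

Printing the returned dictionary shows whether claims such as `ctry` or `acct` were emitted for the signed-in user.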
Some optional claims can be configured to change the way the claim is returned.
| | `include_externally_authenticated_upn_without_hash` | Same as listed previously, except that the hash marks (`#`) are replaced with underscores (`_`), for example `foo_hometenant.com_EXT_@resourcetenant.com`. |
| `aud` | | In v1 access tokens, this claim is used to change the format of the `aud` claim. This claim has no effect in v2 tokens or either version's ID tokens, where the `aud` claim is always the client ID. Use this configuration to ensure that your API can more easily perform audience validation. Like all optional claims that affect the access token, the resource in the request must set this optional claim, since resources own the access token. |
| | `use_guid` | Always emits the client ID of the resource (API) in GUID format as the `aud` claim, instead of it being runtime dependent. For example, if a resource sets this flag, and its client ID is `bb0a297b-6a42-4a55-ac40-09a501456577`, any app that requests an access token for that resource will receive an access token with `aud` : `bb0a297b-6a42-4a55-ac40-09a501456577`. <br/><br/> Without this claim set, an API could get tokens with an `aud` claim of `api://MyApi.com`, `api://MyApi.com/`, `api://myapi.com/AdditionalRegisteredField`, or any other value set as an app ID URI for that API, as well as the client ID of the resource. |
+| `email` | | Can be used for both SAML and JWT responses, and for v1.0 and v2.0 tokens. |
+| | `replace_unverified_email_with_upn` (Preview) | In scenarios where email ownership is not verified, the `email` claim returns the user's home tenant UPN instead, unless otherwise stated. For managed users, email is verified if the home tenant owns the email's domain as a custom domain name. For guest users, email is verified if either the home or resource tenants own the email's domain. If the user authenticates using Email OTP, MSA, or Google federation, the `email` claim remains the same. If the user authenticates using Facebook or SAML/WS-Fed IdP federation, the `email` claim isn't returned. The `email` claim isn't guaranteed to be mailbox addressable, regardless of whether it is verified. |
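As a sketch, a manifest fragment requesting the `email` optional claim with this preview property might look like the following (shown for the ID token only; the `accessToken` and `saml2Token` sections of `optionalClaims` take the same shape):

```json
"optionalClaims": {
    "idToken": [
        {
            "name": "email",
            "essential": false,
            "additionalProperties": [ "replace_unverified_email_with_upn" ]
        }
    ]
}
```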
#### Additional properties example
active-directory Migrate Off Email Claim Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/migrate-off-email-claim-authorization.md
+
+ Title: Migrate away from using email claims for authorization
+description: Learn how to migrate your application away from using insecure claims, such as email, for authorization purposes.
+Last updated: 05/11/2023
+# Migrate away from using email claims for authorization
+
+This article provides guidance to developers whose applications currently use a pattern in which the email claim is used for authorization, which can lead to full account takeover by another user. Continue reading to learn whether your application is affected, and the steps for remediation.
+
+## How do I know if my application is impacted?
+
+Microsoft recommends reviewing application source code and determining whether the following patterns are present:
+
+- A mutable claim, such as `email`, is used to uniquely identify a user
+- A mutable claim, such as `email`, is used to authorize a user's access to resources
+
+These patterns are considered insecure, as users without a provisioned mailbox can have any email address set for their Mail (Primary SMTP) attribute. This attribute is **not guaranteed to come from a verified email address**. When an unverified email claim is used for authorization, any user without a provisioned mailbox has the potential to gain unauthorized access by changing their Mail attribute to impersonate another user.
+
+This risk of unauthorized access has only been found in multi-tenant apps, as a user from one tenant could escalate their privileges to access resources from another tenant through modification of their Mail attribute.
+
+## Migrate applications to more secure configurations
+
+You should never use mutable claims (such as `email`, `preferred_username`, etc.) as identifiers to perform authorization checks or index users in a database. These values are re-usable and could expose your application to privilege escalation attacks.
+
+If your application is currently using a mutable value for indexing users, you should migrate to a globally unique identifier, such as the object ID (referred to as `oid` in the token claims). Doing so ensures that each user is indexed on a value that can't be re-used (or abused to impersonate another user).
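The migration above can be sketched with a hypothetical lookup-key helper (the function names are illustrative, not part of any Microsoft library):

```python
# Anti-pattern: keying or authorizing on a mutable claim such as `email`.
# A user without a provisioned mailbox can set an arbitrary Mail attribute
# and take over whatever record this key points at.
def user_key_insecure(claims: dict) -> str:
    return claims["email"]

# Safer: key on the immutable object ID (`oid`), scoped by tenant ID (`tid`)
# so the same key can't collide across tenants in a multi-tenant app.
def user_key(claims: dict) -> str:
    oid = claims.get("oid")
    tid = claims.get("tid")
    if not oid or not tid:
        raise ValueError("token is missing oid or tid claim")
    return f"{tid}:{oid}"
```

Indexing users on `user_key` rather than on the email claim ensures that changing a Mail attribute can't redirect a lookup to another user's record.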
++
+If your application uses email (or any other mutable claim) for authorization purposes, you should read through the [Secure applications and APIs by validating claims](claims-validation.md) and implement the appropriate checks.
+
+## Short-term risk mitigation
+
+To mitigate the risk of unauthorized access before updating application code, you can use the `replace_unverified_email_with_upn` property for the optional `email` claim, which replaces (or removes) email claims, depending on account type, according to the following table:
+
+| **User type** | **Replaced with** |
+||-|
+| On-premises | Home tenant UPN |
+| Cloud Managed | Home tenant UPN |
+| Microsoft Account (MSA) | Email address the user signed up with |
+| Email OTP | Email address the user signed up with |
+| Social IDP: Google | Email address the user signed up with |
+| Social IDP: Facebook | Email claim isn't issued |
+| Direct Fed | Email claim isn't issued |
+
+Enabling `replace_unverified_email_with_upn` eliminates the most significant risk of cross-tenant privilege escalation by ensuring authorization doesn't occur against an arbitrarily set email value. While enabling this property prevents unauthorized access, it can also break access for users with unverified emails. Internal data suggests the overall percentage of users with unverified emails is low, and this tradeoff is appropriate to secure applications in the short term.
+
+The `replace_unverified_email_with_upn` option is also documented under the documentation for [additional properties of optional claims](active-directory-optional-claims.md#additional-properties-of-optional-claims).
+
+Enabling `replace_unverified_email_with_upn` should be viewed mainly as a short-term risk mitigation strategy while migrating applications away from email claims, and not as a permanent solution for resolving account escalation risk related to email usage.
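As a sketch, the mitigation described above can be applied by updating the application's optional claims through Microsoft Graph ({applicationObjectId} is a placeholder; the same change can be made by editing the app registration's manifest):

```http
PATCH https://graph.microsoft.com/v1.0/applications/{applicationObjectId}
Content-Type: application/json

{
    "optionalClaims": {
        "idToken": [
            {
                "name": "email",
                "additionalProperties": [ "replace_unverified_email_with_upn" ]
            }
        ]
    }
}
```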
+
+## Next steps
+
+- To learn more about using claims-based authorization securely, see [Secure applications and APIs by validating claims](claims-validation.md)
+- For more information about optional claims, see [Provide optional claims to your application](./active-directory-optional-claims.md)
active-directory Cross Tenant Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-overview.md
You can configure organization-specific settings by adding an organization and m
[!INCLUDE [automatic-redemption-include](../includes/automatic-redemption-include.md)]
-To configure this setting using Microsoft Graph, see the [Update crossTenantAccessPolicyConfigurationPartner](/graph/api/crosstenantaccesspolicyconfigurationpartner-update?view=graph-rest-beta&preserve-view=true) API. For information about building your own onboarding experience, see [B2B collaboration invitation manager](external-identities-overview.md#azure-ad-microsoft-graph-api-for-b2b-collaboration).
+To configure this setting using Microsoft Graph, see the [Update crossTenantAccessPolicyConfigurationPartner](/graph/api/crosstenantaccesspolicyconfigurationpartner-update) API. For information about building your own onboarding experience, see [B2B collaboration invitation manager](external-identities-overview.md#azure-ad-microsoft-graph-api-for-b2b-collaboration).
For more information, see [Configure cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-configure.md), [Configure cross-tenant access settings for B2B collaboration](cross-tenant-access-settings-b2b-collaboration.md), and [Configure cross-tenant access settings for B2B direct connect](cross-tenant-access-settings-b2b-direct-connect.md).
For more information, see [Configure cross-tenant synchronization](../multi-tena
[!INCLUDE [cross-tenant-synchronization-include](../includes/cross-tenant-synchronization-include.md)]
-To configure this setting using Microsoft Graph, see the [Update crossTenantIdentitySyncPolicyPartner](/graph/api/crosstenantidentitysyncpolicypartner-update?view=graph-rest-beta&preserve-view=true) API. For more information, see [Configure cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-configure.md).
+To configure this setting using Microsoft Graph, see the [Update crossTenantIdentitySyncPolicyPartner](/graph/api/crosstenantidentitysyncpolicypartner-update) API. For more information, see [Configure cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-configure.md).
## Microsoft cloud settings
active-directory Sap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/sap.md
+
+ Title: Manage access to your SAP applications
+description: Manage access to your SAP applications. Bring identities from SAP SuccessFactors into Azure AD and provision access to SAP ECC, SAP S/4 Hana, and other SAP applications.
+
+documentationcenter: ''
++
+editor: markwahl-msft
++
+ na
+Last updated: 5/12/2023
+# Manage access to your SAP applications
++
+SAP likely runs critical functions such as HR and ERP for your business. At the same time, your business relies on Microsoft for various Azure services, Microsoft 365, and Entra Identity Governance for managing access to applications. This document describes how you can use Entra Identity Governance to manage identities across your SAP applications.
++
+![Diagram of SAP integrations.](./media/sap/sap-integrations.png)
+
+## Bring identities from HR into Azure AD
+
+#### SuccessFactors
+Customers using SAP SuccessFactors can easily bring identities into [Azure AD](../../active-directory/saas-apps/sap-successfactors-inbound-provisioning-cloud-only-tutorial.md) or [Active Directory](../../active-directory/saas-apps/sap-successfactors-inbound-provisioning-tutorial.md) using native connectors. The connectors support the following scenarios:
+* **Hiring new employees** - When a new employee is added to SuccessFactors, a user account is automatically created in Azure Active Directory and optionally Microsoft 365 and [other SaaS applications supported by Azure AD](../../active-directory/app-provisioning/user-provisioning.md), with write-back of the email address to SuccessFactors.
+* **Employee attribute and profile updates** - When an employee record is updated in SuccessFactors (such as their name, title, or manager), their user account is automatically updated in Azure Active Directory and optionally Microsoft 365 and [other SaaS applications supported by Azure AD](../../active-directory/app-provisioning/user-provisioning.md).
+* **Employee terminations** - When an employee is terminated in SuccessFactors, their user account is automatically disabled in Azure Active Directory and optionally Microsoft 365 and other SaaS applications supported by Azure AD.
+* **Employee rehires** - When an employee is rehired in SuccessFactors, their old account can be automatically reactivated or re-provisioned (depending on your preference) to Azure Active Directory and optionally Microsoft 365 and other SaaS applications supported by Azure AD.
+
+> [!VIDEO https://www.youtube-nocookie.com/embed/66v2FR2-QrY]
+
+#### SAP HCM
+Customers still using SAP HCM can also bring identities into Azure AD. Using the SAP Integration Suite, you can synchronize identities between SAP HCM and SAP SuccessFactors. From there, you can bring identities directly into Azure AD or provision them into Active Directory Domain Services, using the native provisioning integrations mentioned above.
+
+![Diagram of SAP HR integrations.](./media/sap/sap-hr.png)
+
+## Provision identities into modern SAP applications
+Once your users are in Azure Active Directory, you can provision accounts into the various SaaS and on-premises SAP applications that they need access to. You have three ways to accomplish this:
+* **Option 1:** Use the enterprise application in Azure AD to configure both SSO and provisioning to SAP applications such as [SAP analytics cloud](../../active-directory/saas-apps/sap-analytics-cloud-provisioning-tutorial.md). With this option, you can apply a consistent set of governance processes across all your applications.
+* **Option 2:** Use the [SAP IAS](../../active-directory/saas-apps/sap-cloud-platform-identity-authentication-provisioning-tutorial.md) enterprise application in Azure AD to provision identities into SAP IAS. Once you bring all the identities into SAP IAS, you can use SAP IPS to provision the accounts from SAP IAS into the application when required.
+* **Option 3:** Use the [SAP IPS](https://help.sap.com/docs/IDENTITY_PROVISIONING/f48e822d6d484fa5ade7dda78b64d9f5/f2b2df8a273642a1bf801e99ecc4a043.html) integration to directly export identities from Azure AD into your [application](https://help.sap.com/docs/IDENTITY_PROVISIONING/f48e822d6d484fa5ade7dda78b64d9f5/ab3f641552464c79b94d10b9205fd721.html). When using SAP IPS to pull users into your applications, all provisioning configuration is managed in SAP directly. You can still use the enterprise application in Azure AD to manage single sign-on and use [Azure AD as the corporate identity provider](https://help.sap.com/docs/IDENTITY_AUTHENTICATION/6d6d63354d1242d185ab4830fc04feb1/058c7b14209f4f2d8de039da4330a1c1.html).
+
+## Provision identities into on-premises SAP systems such as SAP ECC that aren't supported by SAP IPS
+
+Customers who have yet to transition from applications such as SAP ECC to SAP S/4 Hana can still rely on the Azure AD provisioning service to provision user accounts. Within SAP ECC, you'll expose the necessary BAPIs for creating, updating, and deleting users. Within Azure AD, you have two options:
+* **Option 1:** Use the lightweight Azure AD provisioning agent and web services connector to provision users into apps such as SAP ECC.
+* **Option 2:** In scenarios where you need to do more complex group and role management, use the [Microsoft Identity Manager](https://learn.microsoft.com/microsoft-identity-manager/reference/microsoft-identity-manager-2016-ma-ws) to manage access to your legacy SAP applications.
+
+## SSO, workflows, and separation of duties
+In addition to the native provisioning integrations that allow you to manage access to your SAP applications, Azure AD supports a rich set of integrations with SAP.
+* SSO: Once you've set up provisioning for your SAP application, you'll want to enable single sign-on for those applications. Azure AD can serve as the identity provider and the authentication authority for your SAP applications. Learn more about how you can [configure Azure AD as the corporate identity provider for your SAP applications](https://help.sap.com/docs/IDENTITY_AUTHENTICATION/6d6d63354d1242d185ab4830fc04feb1/058c7b14209f4f2d8de039da4330a1c1.html).
+* Custom workflows: When a new employee is hired in your organization, you may need to trigger a workflow within your SAP server. Using [Entra Identity Governance Lifecycle Workflows](lifecycle-workflow-extensibility.md) in conjunction with the [SAP connector in Azure Logic Apps](https://learn.microsoft.com/azure/logic-apps/logic-apps-using-sap-connector), you can trigger custom actions in SAP upon hiring a new employee.
+* Separation of duties: With separation of duties checks now available in preview in Azure AD [entitlement management](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/ensure-compliance-using-separation-of-duties-checks-in-access/ba-p/2466939), customers can ensure that users don't take on excessive access rights. Admins and access managers can prevent users from requesting additional access packages if they're already assigned to other access packages or are a member of other groups that are incompatible with the requested access. Enterprises with critical regulatory requirements for SAP apps have a single consistent view of access controls and can enforce separation of duties checks across their financial and other business-critical applications, as well as their Azure AD-integrated applications. With our [Pathlock](https://pathlock.com/) integration, customers can leverage fine-grained separation of duties checks with access packages in Azure AD, which over time will help customers address Sarbanes-Oxley and other compliance requirements.
+
+## Next steps
+
+- [Bring identities from SAP SuccessFactors into Azure AD](../../active-directory/saas-apps/sap-successfactors-inbound-provisioning-cloud-only-tutorial.md)
+- [Provision accounts in SAP IAS](../../active-directory/saas-apps/sap-cloud-platform-identity-authentication-provisioning-tutorial.md)
++++
active-directory Cross Tenant Synchronization Configure Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-configure-graph.md
Previously updated : 05/05/2023 Last updated : 05/14/2023
These steps describe how to use Microsoft Graph Explorer (recommended), but you
![Icon for the target tenant.](./media/common/icon-tenant-target.png)<br/>**Target tenant**
-1. In the target tenant, use the [Create crossTenantAccessPolicyConfigurationPartner](/graph/api/crosstenantaccesspolicy-post-partners?view=graph-rest-beta&preserve-view=true) API to create a new partner configuration in a cross-tenant access policy between the target tenant and the source tenant. Use the source tenant ID in the request.
+1. In the target tenant, use the [Create crossTenantAccessPolicyConfigurationPartner](/graph/api/crosstenantaccesspolicy-post-partners?branch=main) API to create a new partner configuration in a cross-tenant access policy between the target tenant and the source tenant. Use the source tenant ID in the request.
If you get a `Request_MultipleObjectsWithSameKeyValue` error, you might already have an existing configuration. For more information, see [Symptom - Request_MultipleObjectsWithSameKeyValue error](#symptomrequest_multipleobjectswithsamekeyvalue-error). **Request** ```http
- POST https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/partners
+ POST https://graph.microsoft.com/v1.0/policies/crossTenantAccessPolicy/partners
Content-Type: application/json {
These steps describe how to use Microsoft Graph Explorer (recommended), but you
Content-Type: application/json {
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#policies/crossTenantAccessPolicy/partners/$entity",
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#policies/crossTenantAccessPolicy/partners/$entity",
"tenantId": "{sourceTenantId}", "isServiceProvider": null, "inboundTrust": null,
These steps describe how to use Microsoft Graph Explorer (recommended), but you
} ```
-1. Use the [Create identitySynchronization](/graph/api/crosstenantaccesspolicyconfigurationpartner-put-identitysynchronization?view=graph-rest-beta&preserve-view=true) API to enable user synchronization in the target tenant.
+1. Use the [Create identitySynchronization](/graph/api/crosstenantaccesspolicyconfigurationpartner-put-identitysynchronization?branch=main) API to enable user synchronization in the target tenant.
If you get a `Request_MultipleObjectsWithSameKeyValue` error, you might already have an existing policy. For more information, see [Symptom - Request_MultipleObjectsWithSameKeyValue error](#symptomrequest_multipleobjectswithsamekeyvalue-error). **Request** ```http
- PUT https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/partners/{sourceTenantId}/identitySynchronization
+ PUT https://graph.microsoft.com/v1.0/policies/crossTenantAccessPolicy/partners/{sourceTenantId}/identitySynchronization
Content-type: application/json {
These steps describe how to use Microsoft Graph Explorer (recommended), but you
![Icon for the target tenant.](./media/common/icon-tenant-target.png)<br/>**Target tenant**
-1. In the target tenant, use the [Update crossTenantAccessPolicyConfigurationPartner](/graph/api/crosstenantaccesspolicyconfigurationpartner-update?view=graph-rest-beta&preserve-view=true) API to automatically redeem invitations and suppress consent prompts for inbound access.
+1. In the target tenant, use the [Update crossTenantAccessPolicyConfigurationPartner](/graph/api/crosstenantaccesspolicyconfigurationpartner-update?branch=main) API to automatically redeem invitations and suppress consent prompts for inbound access.
**Request** ```http
- PATCH https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/partners/{sourceTenantId}
+ PATCH https://graph.microsoft.com/v1.0/policies/crossTenantAccessPolicy/partners/{sourceTenantId}
Content-Type: application/json {
These steps describe how to use Microsoft Graph Explorer (recommended), but you
![Icon for the source tenant.](./media/common/icon-tenant-source.png)<br/>**Source tenant**
-1. In the source tenant, use the [Create crossTenantAccessPolicyConfigurationPartner](/graph/api/crosstenantaccesspolicy-post-partners?view=graph-rest-beta&preserve-view=true) API to create a new partner configuration in a cross-tenant access policy between the source tenant and the target tenant. Use the target tenant ID in the request.
+1. In the source tenant, use the [Create crossTenantAccessPolicyConfigurationPartner](/graph/api/crosstenantaccesspolicy-post-partners?branch=main) API to create a new partner configuration in a cross-tenant access policy between the source tenant and the target tenant. Use the target tenant ID in the request.
If you get a `Request_MultipleObjectsWithSameKeyValue` error, you might already have an existing configuration. For more information, see [Symptom - Request_MultipleObjectsWithSameKeyValue error](#symptomrequest_multipleobjectswithsamekeyvalue-error). **Request** ```http
- POST https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/partners
+ POST https://graph.microsoft.com/v1.0/policies/crossTenantAccessPolicy/partners
Content-Type: application/json {
These steps describe how to use Microsoft Graph Explorer (recommended), but you
Content-Type: application/json {
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#policies/crossTenantAccessPolicy/partners/$entity",
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#policies/crossTenantAccessPolicy/partners/$entity",
"tenantId": "{targetTenantId}", "isServiceProvider": null, "inboundTrust": null,
These steps describe how to use Microsoft Graph Explorer (recommended), but you
} ```
-1. Use the [Update crossTenantAccessPolicyConfigurationPartner](/graph/api/crosstenantaccesspolicyconfigurationpartner-update?view=graph-rest-beta&preserve-view=true) API to automatically redeem invitations and suppress consent prompts for outbound access.
+1. Use the [Update crossTenantAccessPolicyConfigurationPartner](/graph/api/crosstenantaccesspolicyconfigurationpartner-update?branch=main) API to automatically redeem invitations and suppress consent prompts for outbound access.
**Request** ```http
- PATCH https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/partners/{targetTenantId}
+ PATCH https://graph.microsoft.com/v1.0/policies/crossTenantAccessPolicy/partners/{targetTenantId}
Content-Type: application/json {
These steps describe how to use Microsoft Graph Explorer (recommended), but you
![Icon for the source tenant.](./media/common/icon-tenant-source.png)<br/>**Source tenant**
-1. In the source tenant, use the [applicationTemplate: instantiate](/graph/api/applicationtemplate-instantiate?view=graph-rest-beta&preserve-view=true) API to add an instance of a configuration application from the Azure AD application gallery into your tenant.
+1. In the source tenant, use the [applicationTemplate: instantiate](/graph/api/applicationtemplate-instantiate?branch=main) API to add an instance of a configuration application from the Azure AD application gallery into your tenant.
**Request** ```http
- POST https://graph.microsoft.com/beta/applicationTemplates/518e5f48-1fc8-4c48-9387-9fdf28b0dfe7/instantiate
+ POST https://graph.microsoft.com/v1.0/applicationTemplates/518e5f48-1fc8-4c48-9387-9fdf28b0dfe7/instantiate
Content-type: application/json {
These steps describe how to use Microsoft Graph Explorer (recommended), but you
Content-type: application/json {
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#microsoft.graph.applicationServicePrincipal",
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#microsoft.graph.applicationServicePrincipal",
"application": { "objectId": "{objectId}", "appId": "{appId}",
These steps describe how to use Microsoft Graph Explorer (recommended), but you
Be sure to use the service principal object ID instead of the application ID.
-2. In the source tenant, use the [synchronizationJob: validateCredentials](/graph/api/synchronization-synchronizationjob-validatecredentials?view=graph-rest-beta&preserve-view=true) API to test the connection to the target tenant and validate the credentials.
+2. In the source tenant, use the [synchronizationJob: validateCredentials](/graph/api/synchronization-synchronizationjob-validatecredentials?branch=main) API to test the connection to the target tenant and validate the credentials.
**Request**
These steps describe how to use Microsoft Graph Explorer (recommended), but you
In the source tenant, to enable provisioning, create a provisioning job.
-1. Determine the [synchronization template](/graph/api/resources/synchronization-synchronizationtemplate?view=graph-rest-beta&preserve-view=true) to use, such as `Azure2Azure`.
+1. Determine the [synchronization template](/graph/api/resources/synchronization-synchronizationtemplate?branch=main) to use, such as `Azure2Azure`.
A template has pre-configured synchronization settings.
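As a sketch, the synchronization templates available on the service principal can be listed with a request like the following ({servicePrincipalId} is a placeholder):

```http
GET https://graph.microsoft.com/v1.0/servicePrincipals/{servicePrincipalId}/synchronization/templates
```

The `id` of the matching template (for example, `Azure2Azure`) is then referenced when creating the provisioning job.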
-1. In the source tenant, use the [Create synchronizationJob](/graph/api/synchronization-synchronizationjob-post?view=graph-rest-beta&preserve-view=true) API to create a provisioning job based on a template.
+1. In the source tenant, use the [Create synchronizationJob](/graph/api/synchronization-synchronizationjob-post?branch=main) API to create a provisioning job based on a template.
**Request**
In the source tenant, to enable provisioning, create a provisioning job.
![Icon for the source tenant.](./media/common/icon-tenant-source.png)<br/>**Source tenant**
-1. In the source tenant, use the [synchronization: secrets](/graph/api/synchronization-synchronization-secrets?view=graph-rest-beta&preserve-view=true) API to save your credentials.
+1. In the source tenant, use the [Add synchronization secrets](/graph/api/synchronization-synchronization-secrets?branch=main) API to save your credentials.
**Request**
For cross-tenant synchronization to work, at least one internal user must be ass
Now that you have a configuration, you can test on-demand provisioning with one of your users.
-1. In the source tenant, use the [synchronizationJob: provisionOnDemand](/graph/api/synchronization-synchronizationjob-provision-on-demand?view=graph-rest-beta&preserve-view=true) API to provision a test user on demand.
+1. In the source tenant, use the [synchronizationJob: provisionOnDemand](/graph/api/synchronization-synchronizationjob-provision-on-demand?branch=main) API to provision a test user on demand.
**Request**
Now that you have a configuration, you can test on-demand provisioning with one
![Icon for the source tenant.](./media/common/icon-tenant-source.png)<br/>**Source tenant**
-1. Now that the provisioning job is configured, in the source tenant, use the [Start synchronizationJob](/graph/api/synchronization-synchronizationjob-start?view=graph-rest-beta&preserve-view=true) API to start the provisioning job.
+1. Now that the provisioning job is configured, in the source tenant, use the [Start synchronizationJob](/graph/api/synchronization-synchronizationjob-start?branch=main) API to start the provisioning job.
**Request**
Now that you have a configuration, you can test on-demand provisioning with one
![Icon for the source tenant.](./media/common/icon-tenant-source.png)<br/>**Source tenant**
-1. Now that the provisioning job is running, in the source tenant, use the [Get synchronizationJob](/graph/api/synchronization-synchronizationjob-get?view=graph-rest-beta&preserve-view=true) API to monitor the progress of the current provisioning cycle as well as statistics to date such as the number of users and groups that have been created in the target system.
+1. Now that the provisioning job is running, in the source tenant, use the [Get synchronizationJob](/graph/api/synchronization-synchronizationjob-get?branch=main) API to monitor the progress of the current provisioning cycle as well as statistics to date such as the number of users and groups that have been created in the target system.
**Request**
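A sketch of the monitoring call (IDs are placeholders); the `status` property in the response reports the state of the current cycle and cumulative counts of synchronized objects:

```http
GET https://graph.microsoft.com/beta/servicePrincipals/{servicePrincipalId}/synchronization/jobs/{jobId}
```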
You are likely trying to create a configuration or object that already exists, p
1. If you have an existing object, instead of making a create request using `POST` or `PUT`, you might need to make an update request using `PATCH`, such as:
- - [Update crossTenantAccessPolicyConfigurationPartner](/graph/api/crosstenantaccesspolicyconfigurationpartner-update?view=graph-rest-beta&preserve-view=true)
- - [Update crossTenantIdentitySyncPolicyPartner](/graph/api/crosstenantidentitysyncpolicypartner-update?view=graph-rest-beta&preserve-view=true)
+ - [Update crossTenantAccessPolicyConfigurationPartner](/graph/api/crosstenantaccesspolicyconfigurationpartner-update?branch=main)
+ - [Update crossTenantIdentitySyncPolicyPartner](/graph/api/crosstenantidentitysyncpolicypartner-update?branch=main)
#### Symptom - Directory_ObjectNotFound error
You are likely trying to update an object that doesn't exist using `PATCH`.
1. If the object doesn't exist, instead of making an update request using `PATCH`, you might need to make a create request using `POST` or `PUT`, such as:
- - [Create identitySynchronization](/graph/api/crosstenantaccesspolicyconfigurationpartner-put-identitysynchronization?view=graph-rest-beta&preserve-view=true)
+ - [Create identitySynchronization](/graph/api/crosstenantaccesspolicyconfigurationpartner-put-identitysynchronization?branch=main)
## Next steps
-- [Azure AD synchronization API overview](/graph/api/resources/synchronization-overview?view=graph-rest-beta&preserve-view=true)
+- [Azure AD synchronization API overview](/graph/api/resources/synchronization-overview?branch=main)
- [Tutorial: Develop and plan provisioning for a SCIM endpoint in Azure Active Directory](../app-provisioning/use-scim-to-provision-users-and-groups.md)
active-directory Cross Tenant Synchronization Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-overview.md
Previously updated : 05/05/2023 Last updated : 05/14/2023
The following table shows the parts of cross-tenant synchronization and which te
[!INCLUDE [cross-tenant-synchronization-include](../includes/cross-tenant-synchronization-include.md)]
-To configure this setting using Microsoft Graph, see the [Update crossTenantIdentitySyncPolicyPartner](/graph/api/crosstenantidentitysyncpolicypartner-update?view=graph-rest-beta&preserve-view=true) API. For more information, see [Configure cross-tenant synchronization](cross-tenant-synchronization-configure.md).
+To configure this setting using Microsoft Graph, see the [Update crossTenantIdentitySyncPolicyPartner](/graph/api/crosstenantidentitysyncpolicypartner-update?branch=main) API. For more information, see [Configure cross-tenant synchronization](cross-tenant-synchronization-configure.md).
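A hedged sketch of that call in the target tenant (the partner tenant ID is a placeholder; `isSyncAllowed` controls whether inbound synchronization is permitted):

```http
PATCH https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/partners/{tenantId}/identitySynchronization
Content-Type: application/json

{
    "userSyncInbound": {
        "isSyncAllowed": true
    }
}
```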
## Automatic redemption setting

[!INCLUDE [automatic-redemption-include](../includes/automatic-redemption-include.md)]
-To configure this setting using Microsoft Graph, see the [Update crossTenantAccessPolicyConfigurationPartner](/graph/api/crosstenantaccesspolicyconfigurationpartner-update?view=graph-rest-beta&preserve-view=true) API. For more information, see [Configure cross-tenant synchronization](cross-tenant-synchronization-configure.md).
+To configure this setting using Microsoft Graph, see the [Update crossTenantAccessPolicyConfigurationPartner](/graph/api/crosstenantaccesspolicyconfigurationpartner-update?branch=main) API. For more information, see [Configure cross-tenant synchronization](cross-tenant-synchronization-configure.md).
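A hedged sketch of that call (the partner tenant ID is a placeholder; the target tenant sets `inboundAllowed`, and the source tenant sets `outboundAllowed`):

```http
PATCH https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/partners/{tenantId}
Content-Type: application/json

{
    "automaticUserConsentSettings": {
        "inboundAllowed": true
    }
}
```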
#### How do users know what tenants they belong to?
active-directory Concept Provisioning Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-provisioning-logs.md
Use the following table to better understand how to resolve errors that you find
> | AzureActiveDirectoryCannotUpdateObjectsOriginatedInExternalService | The synchronization engine could not update one or more user properties in the target tenant.<br/><br/>The operation failed in Microsoft Graph API because of Source of Authority (SOA) enforcement. Currently, the following properties show up in the list:<br/>`Mail`<br/>`showInAddressList` | In some cases (for example when `showInAddressList` property is part of the user update), the synchronization engine might automatically retry the (user) update without the offending property. Otherwise, you will need to update the property directly in the target tenant. |
> | AzureDirectoryB2BManagementPolicyCheckFailure | The cross-tenant synchronization policy allowing automatic redemption failed.<br/><br/>The synchronization engine checks to ensure that the administrator of the target tenant has created an inbound cross-tenant synchronization policy allowing automatic redemption. The synchronization engine also checks if the administrator of the source tenant has enabled an outbound policy for automatic redemption. | Ensure that the automatic redemption setting has been enabled for both the source and target tenants. For more information, see [Automatic redemption setting](../multi-tenant-organizations/cross-tenant-synchronization-overview.md#automatic-redemption-setting). |
> | AzureActiveDirectoryQuotaLimitExceeded | The number of objects in the tenant exceeds the directory limit.<br/><br/>Azure AD has limits for the number of objects that can be created in a tenant. | Check whether the quota can be increased. For information about the directory limits and steps to increase the quota, see [Azure AD service limits and restrictions](../enterprise-users/directory-service-limits-restrictions.md). |
-> |InvitationCreationFailure| The Azure AD provisioning service attempted to invite the user in the target tenant. That invitation failed.| Navigate to the user settings page in Azure AD > external users > collaboration restrictions and ensure that collaboration with that tenant is enabled.|
+> |InvitationCreationFailure| The Azure AD provisioning service attempted to invite the user in the target tenant. That invitation failed.| Further investigation likely requires contacting support.|
> |AzureActiveDirectoryInsufficientRights|When a B2B user in the target tenant has a role other than User, Helpdesk Admin, or User Account Admin, they cannot be deleted.| Remove the role(s) on the user in the target tenant in order to successfully delete the user in the target tenant.|
> |AzureActiveDirectoryForbidden|External collaboration settings have blocked invitations.|Navigate to user settings and ensure that [external collaboration settings](../external-identities/external-collaboration-settings-configure.md) are permitted.|
> |InvitationCreationFailureInvalidPropertyValue|Potential causes:<br/>* The Primary SMTP Address is an invalid value.<br/>* UserType is neither guest nor member<br/>* Group email Address is not supported | Potential solutions:<br/>* The Primary SMTP Address has an invalid value. Resolving this issue will likely require updating the mail property of the source user. For more information, see [Prepare for directory synchronization to Microsoft 365](https://aka.ms/DirectoryAttributeValidations)<br/>* Ensure that the userType property is provisioned as type guest or member. This can be fixed by checking your attribute mappings to understand how the userType attribute is mapped.<br/>* The email address of the user matches the email address of a group in the tenant. Update the email address for one of the two objects.|
aks Cluster Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-extensions.md
Title: Cluster extensions for Azure Kubernetes Service (AKS) description: Learn how to deploy and manage the lifecycle of extensions on Azure Kubernetes Service (AKS) Previously updated : 03/14/2023 Last updated : 05/12/2023
# Deploy and manage cluster extensions for Azure Kubernetes Service (AKS)
-Cluster extensions provide an Azure Resource Manager driven experience for installation and lifecycle management of services like Azure Machine Learning (ML) on an AKS cluster. This feature enables:
+Cluster extensions provide an Azure Resource Manager-driven experience for installation and lifecycle management of services like Azure Machine Learning or Kubernetes applications on an AKS cluster. This feature enables:
* Azure Resource Manager-based deployment of extensions, including at-scale deployments across AKS clusters.
-* Lifecycle management of the extension (Update, Delete) from Azure Resource Manager.
+* Lifecycle management of the extension (Update, Delete) from Azure Resource Manager
-In this article, you'll learn about:
-> [!div class="checklist"]
+## Cluster extension requirements
-> * How to create an extension instance.
-> * Available cluster extensions on AKS.
-> * How to view, list, update, and delete extension instances.
-
-For a conceptual overview of cluster extensions, see [Cluster extensions - Azure Arc-enabled Kubernetes][arc-k8s-extensions].
+Cluster extensions can be used on AKS clusters in the regions listed in [Azure Arc enabled Kubernetes region support][arc-k8s-regions].
-## Prerequisites
+For supported Kubernetes versions, refer to the corresponding documentation for each extension.
> [!IMPORTANT]
> Ensure that your AKS cluster is created with a managed identity, as cluster extensions won't work with service principal-based clusters.
>
> For new clusters created with `az aks create`, managed identity is configured by default. For existing service principal-based clusters that need to be switched over to managed identity, it can be enabled by running `az aks update` with the `--enable-managed-identity` flag. For more information, see [Use managed identity][use-managed-identity].
-* An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).
-* [Azure CLI](/cli/azure/install-azure-cli) version >= 2.16.0 installed.
- > [!NOTE] > If you have enabled [Azure AD pod-managed identity][use-azure-ad-pod-identity] on your AKS cluster or are considering implementing it, > we recommend you first review [Workload identity overview][workload-identity-overview] to understand our
For a conceptual overview of cluster extensions, see [Cluster extensions - Azure
> > The open source Azure AD pod-managed identity (preview) in Azure Kubernetes Service has been deprecated as of 10/24/2022.
-### Set up the Azure CLI extension for cluster extensions
-
-> [!NOTE]
-> The minimum supported version for the `k8s-extension` Azure CLI extension is `1.0.0`. If you are unsure what version you have installed, run `az extension show --name k8s-extension` and look for the `version` field. We recommend using the latest version.
-
-You'll also need the `k8s-extension` Azure CLI extension. Install the extension by running the following command:
-
-```azurecli-interactive
-az extension add --name k8s-extension
-```
-
-If the `k8s-extension` extension is already installed, you can update it to the latest version using the following command:
-
-```azurecli-interactive
-az extension update --name k8s-extension
-```
- ## Currently available extensions
->[!NOTE]
-> Cluster extensions provides a platform for different extensions to be installed and managed on an AKS cluster. If you are facing issues while using any of these extensions, please open a support ticket with the respective service.
| Extension | Description |
| -- | -- |
| [Dapr][dapr-overview] | Dapr is a portable, event-driven runtime that makes it easy for any developer to build resilient, stateless and stateful applications that run on cloud and edge. |
-| [Azure ML][azure-ml-overview] | Use Azure Kubernetes Service clusters to train, inference, and manage machine learning models in Azure Machine Learning. |
+| [Azure Machine Learning][azure-ml-overview] | Use Azure Kubernetes Service clusters to train, inference, and manage machine learning models in Azure Machine Learning. |
| [Flux (GitOps)][gitops-overview] | Use GitOps with Flux to manage cluster configuration and application deployment. See also [supported versions of Flux (GitOps)][gitops-support] and [Tutorial: Deploy applications using GitOps with Flux v2][gitops-tutorial].|
-## Supported regions and Kubernetes versions
-
-Cluster extensions can be used on AKS clusters in the regions listed in [Azure Arc enabled Kubernetes region support][arc-k8s-regions].
-
-For supported Kubernetes versions, refer to the corresponding documentation for each extension.
-
-## Usage of cluster extensions
-
-> [!NOTE]
-> The samples provided in this article are not complete, and are only meant to showcase functionality. For a comprehensive list of commands and their parameters, please see the [az k8s-extension CLI reference][k8s-extension-reference].
-
-### Create extensions instance
-
-Create a new extension instance with `k8s-extension create`, passing in values for the mandatory parameters. The below command creates an Azure Machine Learning extension instance on your AKS cluster:
-
-```azurecli
-az k8s-extension create --name aml-compute --extension-type Microsoft.AzureML.Kubernetes --scope cluster --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters --configuration-settings enableInference=True allowInsecureConnections=True
-```
+You can also [select and deploy Kubernetes applications available through Marketplace](deploy-marketplace.md).
> [!NOTE]
-> The Cluster Extensions service is unable to retain sensitive information for more than 48 hours. If the cluster extension agents don't have network connectivity for more than 48 hours and can't determine whether to create an extension on the cluster, then the extension transitions to `Failed` state. Once in `Failed` state, you will need to run `k8s-extension create` again to create a fresh extension instance.
-
-#### Required parameters
-
-| Parameter name | Description |
-|-||
-| `--name` | Name of the extension instance |
-| `--extension-type` | The type of extension you want to install on the cluster. For example: Microsoft.AzureML.Kubernetes |
-| `--cluster-name` | Name of the AKS cluster on which the extension instance has to be created |
-| `--resource-group` | The resource group containing the AKS cluster |
-| `--cluster-type` | The cluster type on which the extension instance has to be created. Specify `managedClusters` as it maps to AKS clusters|
-
-#### Optional parameters
-
-| Parameter name | Description |
-|--||
-| `--auto-upgrade-minor-version` | Boolean property that specifies if the extension minor version will be upgraded automatically or not. Default: `true`. If this parameter is set to true, you can't set `version` parameter, as the version will be dynamically updated. If set to `false`, extension won't be auto-upgraded even for patch versions. |
-| `--version` | Version of the extension to be installed (specific version to pin the extension instance to). Must not be supplied if auto-upgrade-minor-version is set to `true`. |
-| `--configuration-settings` | Settings that can be passed into the extension to control its functionality. Pass values as space separated `key=value` pairs after the parameter name. If this parameter is used in the command, then `--configuration-settings-file` can't be used in the same command. |
-| `--configuration-settings-file` | Path to the JSON file having key value pairs to be used for passing in configuration settings to the extension. If this parameter is used in the command, then `--configuration-settings` can't be used in the same command. |
-| `--configuration-protected-settings` | These settings are not retrievable using `GET` API calls or `az k8s-extension show` commands, and are thus used to pass in sensitive settings. Pass values as space separated `key=value` pairs after the parameter name. If this parameter is used in the command, then `--configuration-protected-settings-file` can't be used in the same command. |
-| `--configuration-protected-settings-file` | Path to the JSON file having key value pairs to be used for passing in sensitive settings to the extension. If this parameter is used in the command, then `--configuration-protected-settings` can't be used in the same command. |
-| `--scope` | Scope of installation for the extension - `cluster` or `namespace` |
-| `--release-namespace` | This parameter indicates the namespace within which the release is to be created. This parameter is only relevant if `scope` parameter is set to `cluster`. |
-| `--release-train` | Extension authors can publish versions in different release trains such as `Stable`, `Preview`, etc. If this parameter isn't set explicitly, `Stable` is used as default. This parameter can't be used when `--auto-upgrade-minor-version` parameter is set to `false`. |
-| `--target-namespace` | This parameter indicates the namespace within which the release will be created. Permission of the system account created for this extension instance will be restricted to this namespace. This parameter is only relevant if the `scope` parameter is set to `namespace`. |
-
-### Show details of an extension instance
-
-View details of a currently installed extension instance with `k8s-extension show`, passing in values for the mandatory parameters:
-
-```azurecli
-az k8s-extension show --name azureml --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters
-```
-
-### List all extensions installed on the cluster
-
-List all extensions installed on a cluster with `k8s-extension list`, passing in values for the mandatory parameters.
-
-```azurecli
-az k8s-extension list --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters
-```
-
-### Update extension instance
-
-> [!NOTE]
-> Refer to documentation of the extension type (Eg: Azure ML) to learn about the specific settings under ConfigurationSetting and ConfigurationProtectedSettings that are allowed to be updated. For ConfigurationProtectedSettings, all settings are expected to be provided during an update of a single setting. If some settings are omitted, those settings would be considered obsolete and deleted.
-
-Update an existing extension instance with `k8s-extension update`, passing in values for the mandatory parameters. The below command updates the auto-upgrade setting for an Azure Machine Learning extension instance:
-
-```azurecli
-az k8s-extension update --name azureml --extension-type Microsoft.AzureML.Kubernetes --scope cluster --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters
-```
-
-**Required parameters**
-
-| Parameter name | Description |
-|-||
-| `--name` | Name of the extension instance |
-| `--extension-type` | The type of extension you want to install on the cluster. For example: Microsoft.AzureML.Kubernetes |
-| `--cluster-name` | Name of the AKS cluster on which the extension instance has to be created |
-| `--resource-group` | The resource group containing the AKS cluster |
-| `--cluster-type` | The cluster type on which the extension instance has to be created. Specify `managedClusters` as it maps to AKS clusters|
-
-**Optional parameters**
-
-| Parameter name | Description |
-|--||
-| `--auto-upgrade-minor-version` | Boolean property that specifies if the extension minor version will be upgraded automatically or not. Default: `true`. If this parameter is set to true, you cannot set `version` parameter, as the version will be dynamically updated. If set to `false`, extension won't be auto-upgraded even for patch versions. |
-| `--version` | Version of the extension to be installed (specific version to pin the extension instance to). Must not be supplied if auto-upgrade-minor-version is set to `true`. |
-| `--configuration-settings` | Settings that can be passed into the extension to control its functionality. Only the settings that require an update need to be provided. The provided settings would be replaced with the provided values. Pass values as space separated `key=value` pairs after the parameter name. If this parameter is used in the command, then `--configuration-settings-file` can't be used in the same command. |
-| `--configuration-settings-file` | Path to the JSON file having key value pairs to be used for passing in configuration settings to the extension. If this parameter is used in the command, then `--configuration-settings` can't be used in the same command. |
-| `--configuration-protected-settings` | These settings are not retrievable using `GET` API calls or `az k8s-extension show` commands, and are thus used to pass in sensitive settings. When you update a setting, all settings are expected to be specified. If some settings are omitted, those settings would be considered obsolete and deleted. Pass values as space separated `key=value` pairs after the parameter name. If this parameter is used in the command, then `--configuration-protected-settings-file` can't be used in the same command. |
-| `--configuration-protected-settings-file` | Path to the JSON file having key value pairs to be used for passing in sensitive settings to the extension. If this parameter is used in the command, then `--configuration-protected-settings` can't be used in the same command. |
-| `--scope` | Scope of installation for the extension - `cluster` or `namespace` |
-| `--release-train` | Extension authors can publish versions in different release trains such as `Stable`, `Preview`, etc. If this parameter isn't set explicitly, `Stable` is used as default. This parameter can't be used when `autoUpgradeMinorVersion` parameter is set to `false`. |
-
-### Delete extension instance
-
->[!NOTE]
-> The Azure resource representing this extension gets deleted immediately. The Helm release on the cluster associated with this extension is only deleted when the agents running on the Kubernetes cluster have network connectivity and can reach out to Azure services again to fetch the desired state.
+> Cluster extensions provide a platform for different extensions to be installed and managed on an AKS cluster. If you're facing issues while using any of these extensions, open a support ticket with the respective service.
-Delete an extension instance on a cluster with `k8s-extension delete`, passing in values for the mandatory parameters.
+## Next steps
-```azurecli
-az k8s-extension delete --name azureml --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters
-```
+* Learn how to [deploy cluster extensions by using Azure CLI](deploy-extensions-az-cli.md).
+* Read about [cluster extensions for Azure Arc-enabled Kubernetes clusters][arc-k8s-extensions].
<!-- LINKS -->
<!-- INTERNAL -->
[arc-k8s-extensions]: ../azure-arc/kubernetes/conceptual-extensions.md
-[az-feature-register]: /cli/azure/feature#az-feature-register
-[az-feature-list]: /cli/azure/feature#az-feature-list
-[az-provider-register]: /cli/azure/provider#az-provider-register
[azure-ml-overview]: ../machine-learning/how-to-attach-kubernetes-anywhere.md
[dapr-overview]: ./dapr.md
[gitops-overview]: ../azure-arc/kubernetes/conceptual-gitops-flux2.md
aks Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr.md
For self-managed runtime, the Dapr extension supports:
- [The latest version of Dapr and two previous versions (N-2)][dapr-supported-version]
- Upgrading minor version incrementally (for example, 1.5 -> 1.6 -> 1.7)
-Self-managed runtime requires manual upgrade to remain in the support window. To upgrade Dapr via the extension, follow the [Update extension instance instructions][update-extension].
+Self-managed runtime requires manual upgrade to remain in the support window. To upgrade Dapr via the extension, follow the [Update extension instance](deploy-extensions-az-cli.md#update-extension-instance) instructions.
**Auto-upgrade**

Enabling auto-upgrade keeps your Dapr extension updated to the latest minor version. You may experience breaking changes between updates.
az k8s-extension delete --resource-group myResourceGroup --cluster-name myAKSClu
[sample-application]: ./quickstart-dapr.md
[k8s-version-support-policy]: ./supported-kubernetes-versions.md?tabs=azure-cli#kubernetes-version-support-policy
[arc-k8s-cluster]: ../azure-arc/kubernetes/quickstart-connect-cluster.md
-[update-extension]: ./cluster-extensions.md#update-extension-instance
[install-cli]: /cli/azure/install-azure-cli
[dapr-migration]: ./dapr-migration.md
[dapr-settings]: ./dapr-settings.md
aks Deploy Extensions Az Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deploy-extensions-az-cli.md
+
+ Title: Deploy and manage cluster extensions by using the Azure CLI
+description: Learn how to use Azure CLI to deploy and manage extensions for Azure Kubernetes Service clusters.
Last updated : 05/12/2023
+# Deploy and manage cluster extensions by using Azure CLI
+
+You can create extension instances in an AKS cluster, setting required and optional parameters including options related to updates and configurations. You can also view, list, update, and delete extension instances.
+
+Before you begin, read about [cluster extensions](cluster-extensions.md).
+
+> [!NOTE]
+> The examples provided in this article are not complete, and are only meant to showcase functionality. For a comprehensive list of commands and their parameters, see the [az k8s-extension CLI reference](/cli/azure/k8s-extension).
+
+## Prerequisites
+
+* An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).
+* The `Microsoft.ContainerService` and `Microsoft.KubernetesConfiguration` resource providers must be registered on your subscription. To register these providers, run the following command:
+
+ ```azurecli-interactive
+ az provider register --namespace Microsoft.ContainerService --wait
+ az provider register --namespace Microsoft.KubernetesConfiguration --wait
+ ```
+
+* An AKS cluster. This cluster must have been created with a managed identity, as cluster extensions won't work with service principal-based clusters. For new clusters created with `az aks create`, managed identity is configured by default. For existing service principal-based clusters, switch to managed identity by running `az aks update` with the `--enable-managed-identity` flag. For more information, see [Use managed identity][use-managed-identity].
+* [Azure CLI](/cli/azure/install-azure-cli) version >= 2.16.0 installed. We recommend using the latest version.
+* The latest version of the `k8s-extension` Azure CLI extension. Install the extension by running the following command:
+
+ ```azurecli
+ az extension add --name k8s-extension
+ ```
+
+ If the extension is already installed, make sure you're running the latest version by using the following command:
+
+ ```azurecli
+ az extension update --name k8s-extension
+ ```
+
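The managed identity switch called out in the prerequisites can be sketched as follows (the resource group and cluster names are placeholders):

```azurecli
az aks update --resource-group <resourceGroupName> --name <clusterName> --enable-managed-identity
```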
+## Create extension instance
+
+Create a new extension instance with `k8s-extension create`, passing in values for the mandatory parameters. This example command creates an Azure Machine Learning extension instance on your AKS cluster:
+
+```azurecli
+az k8s-extension create --name aml-compute --extension-type Microsoft.AzureML.Kubernetes --scope cluster --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters --configuration-settings enableInference=True allowInsecureConnections=True
+```
+
+This example command creates a sample Kubernetes application (published on Marketplace) on your AKS cluster:
+
+```azurecli
+az k8s-extension create --name voteapp --extension-type Contoso.AzureVoteKubernetesAppTest --scope cluster --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters --plan-name testPlanID --plan-product testOfferID --plan-publisher testPublisherID --configuration-settings title=VoteAnimal value1=Cats value2=Dogs
+```
+
+> [!NOTE]
+> The Cluster Extensions service is unable to retain sensitive information for more than 48 hours. If the cluster extension agents don't have network connectivity for more than 48 hours and can't determine whether to create an extension on the cluster, then the extension transitions to `Failed` state. Once in `Failed` state, you'll need to run `k8s-extension create` again to create a fresh extension instance.
+
+### Required parameters
+
+| Parameter name | Description |
+|-||
+| `--name` | Name of the extension instance |
+| `--extension-type` | The type of extension you want to install on the cluster. For example: `Microsoft.AzureML.Kubernetes` |
+| `--cluster-name` | Name of the AKS cluster on which the extension instance has to be created |
+| `--resource-group` | The resource group containing the AKS cluster |
+| `--cluster-type` | The cluster type on which the extension instance has to be created. Specify `managedClusters` as it maps to AKS clusters|
+
+### Optional parameters
+
+| Parameter name | Description |
+|--||
+| `--auto-upgrade-minor-version` | Boolean property that specifies if the extension minor version will be upgraded automatically or not. Default: `true`. If this parameter is set to true, you can't set `version` parameter, as the version will be dynamically updated. If set to `false`, extension won't be auto-upgraded even for patch versions. |
+| `--version` | Version of the extension to be installed (specific version to pin the extension instance to). Must not be supplied if auto-upgrade-minor-version is set to `true`. |
+| `--configuration-settings` | Settings that can be passed into the extension to control its functionality. Pass values as space separated `key=value` pairs after the parameter name. If this parameter is used in the command, then `--configuration-settings-file` can't be used in the same command. |
+| `--configuration-settings-file` | Path to the JSON file having key value pairs to be used for passing in configuration settings to the extension. If this parameter is used in the command, then `--configuration-settings` can't be used in the same command. |
+| `--configuration-protected-settings` | These settings are not retrievable using `GET` API calls or `az k8s-extension show` commands, and are thus used to pass in sensitive settings. Pass values as space separated `key=value` pairs after the parameter name. If this parameter is used in the command, then `--configuration-protected-settings-file` can't be used in the same command. |
+| `--configuration-protected-settings-file` | Path to the JSON file having key value pairs to be used for passing in sensitive settings to the extension. If this parameter is used in the command, then `--configuration-protected-settings` can't be used in the same command. |
+| `--scope` | Scope of installation for the extension - `cluster` or `namespace` |
+| `--release-namespace` | This parameter indicates the namespace within which the release is to be created. This parameter is only relevant if `scope` parameter is set to `cluster`. |
+| `--release-train` | Extension authors can publish versions in different release trains such as `Stable`, `Preview`, etc. If this parameter isn't set explicitly, `Stable` is used as default. This parameter can't be used when `--auto-upgrade-minor-version` parameter is set to `false`. |
+| `--target-namespace` | This parameter indicates the namespace within which the release will be created. Permission of the system account created for this extension instance will be restricted to this namespace. This parameter is only relevant if the `scope` parameter is set to `namespace`. |
+|`--plan-name` | **Plan ID** of the extension, found on the Marketplace page in the Azure portal under **Usage Information + Support**. |
+|`--plan-product` | **Product ID** of the extension, found on the Marketplace page in the Azure portal under **Usage Information + Support**. An example of this is the name of the ISV offering used. |
+|`--plan-publisher` | **Publisher ID** of the extension, found on the Marketplace page in the Azure portal under **Usage Information + Support**. |
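As a sketch of what `--configuration-settings-file` expects, the file is plain JSON of key/value pairs. The keys below are hypothetical — the valid settings depend on the specific extension type:

```bash
# Create a hypothetical settings file; real keys depend on the extension type.
cat > extension-settings.json <<'EOF'
{
  "cluster.logLevel": "info",
  "cluster.replicaCount": "2"
}
EOF

# Validate that the file is well-formed JSON before passing it to the CLI.
python3 -m json.tool extension-settings.json
```

The file is then supplied to the create command with `--configuration-settings-file extension-settings.json`; remember that it can't be combined with `--configuration-settings` in the same command.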
+
+## Show details of an extension instance
+
+To view details of a currently installed extension instance, use `k8s-extension show`, passing in values for the mandatory parameters.
+
+```azurecli
+az k8s-extension show --name azureml --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters
+```
+
+## List all extensions installed on the cluster
+
+To list all extensions installed on a cluster, use `k8s-extension list`, passing in values for the mandatory parameters.
+
+```azurecli
+az k8s-extension list --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters
+```
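The list output is a JSON array of extension objects. As an illustrative sketch (the sample output below is abbreviated and hypothetical, not captured from a real cluster), you can filter it locally:

```bash
# Hypothetical, abbreviated output shape of `az k8s-extension list`,
# saved to a file for local filtering.
cat > extensions.json <<'EOF'
[
  { "name": "azureml", "extensionType": "Microsoft.AzureML.Kubernetes", "provisioningState": "Succeeded" },
  { "name": "flux", "extensionType": "microsoft.flux", "provisioningState": "Succeeded" }
]
EOF

# Print each extension's name and provisioning state.
python3 - <<'EOF'
import json

with open("extensions.json") as f:
    for ext in json.load(f):
        print(ext["name"], ext["provisioningState"])
EOF
```

In practice, the same filtering can be done server-side with the Azure CLI's JMESPath support, for example `--query "[].name" --output tsv`.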
+
+## Update extension instance
+
+> [!NOTE]
+> Refer to documentation for the specific extension type to understand the specific settings in `--configuration-settings` and `--configuration-protected-settings` that can be updated. For `--configuration-protected-settings`, all settings are expected to be provided, even if only one setting is being updated. If any of these settings are omitted, those settings will be considered obsolete and deleted.
+
+To update an existing extension instance, use `k8s-extension update`, passing in values for the mandatory parameters. The following command updates the auto-upgrade setting for an Azure Machine Learning extension instance:
+
+```azurecli
+az k8s-extension update --name azureml --extension-type Microsoft.AzureML.Kubernetes --auto-upgrade-minor-version true --scope cluster --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters
+```
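Because every protected setting must be resupplied on update, an update that changes sensitive values typically starts from a complete settings file. The keys below are hypothetical placeholders, not settings of any real extension:

```bash
# All protected settings must appear here, even if only one of them changed;
# omitted keys are treated as obsolete and deleted.
cat > protected-settings.json <<'EOF'
{
  "registry.password": "example-secret",
  "api.token": "example-token"
}
EOF

# Validate the file before passing it to the CLI.
python3 -m json.tool protected-settings.json
```

The file is then passed to `az k8s-extension update` with `--configuration-protected-settings-file protected-settings.json`, which can't be combined with `--configuration-protected-settings` in the same command.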
+
+### Required parameters for update
+
+| Parameter name | Description |
+|--|--|
+| `--name` | Name of the extension instance |
+| `--extension-type` | The type of the extension instance being updated. For example: Microsoft.AzureML.Kubernetes |
+| `--cluster-name` | Name of the AKS cluster on which the extension instance is installed |
+| `--resource-group` | The resource group containing the AKS cluster |
+| `--cluster-type` | The cluster type on which the extension instance is installed. Specify `managedClusters` as it maps to AKS clusters|
+
+### Optional parameters for update
+
+| Parameter name | Description |
+|--|--|
+| `--auto-upgrade-minor-version` | Boolean property that specifies whether the extension minor version is upgraded automatically. Default: `true`. If this parameter is set to `true`, you can't set the `version` parameter, as the version is dynamically updated. If set to `false`, the extension won't be auto-upgraded, even for patch versions. |
+| `--version` | Version of the extension to be installed (specific version to pin the extension instance to). Must not be supplied if `--auto-upgrade-minor-version` is set to `true`. |
+| `--configuration-settings` | Settings that can be passed into the extension to control its functionality. Only the settings that require an update need to be provided. The existing values of the provided settings are replaced with the new values. Pass values as space separated `key=value` pairs after the parameter name. If this parameter is used in the command, then `--configuration-settings-file` can't be used in the same command. |
+| `--configuration-settings-file` | Path to the JSON file having key value pairs to be used for passing in configuration settings to the extension. If this parameter is used in the command, then `--configuration-settings` can't be used in the same command. |
+| `--configuration-protected-settings` | These settings are not retrievable using `GET` API calls or `az k8s-extension show` commands, and are thus used to pass in sensitive settings. When you update a setting, all settings are expected to be specified. If some settings are omitted, those settings would be considered obsolete and deleted. Pass values as space separated `key=value` pairs after the parameter name. If this parameter is used in the command, then `--configuration-protected-settings-file` can't be used in the same command. |
+| `--configuration-protected-settings-file` | Path to the JSON file having key value pairs to be used for passing in sensitive settings to the extension. If this parameter is used in the command, then `--configuration-protected-settings` can't be used in the same command. |
+| `--scope` | Scope of installation for the extension - `cluster` or `namespace` |
+| `--release-train` | Extension authors can publish versions in different release trains such as `Stable`, `Preview`, etc. If this parameter isn't set explicitly, `Stable` is used as default. This parameter can't be used when the `--auto-upgrade-minor-version` parameter is set to `false`. |
+|`--plan-name` | **Plan ID** of the extension, found on the Marketplace page in the Azure portal under **Usage Information + Support**. |
+|`--plan-product` | **Product ID** of the extension, found on the Marketplace page in the Azure portal under **Usage Information + Support**. An example of this is the name of the ISV offering used. |
+|`--plan-publisher` | **Publisher ID** of the extension, found on the Marketplace page in the Azure portal under **Usage Information + Support**. |
+
+## Delete extension instance
+
+To delete an extension instance on a cluster, use `k8s-extension delete`, passing in values for the mandatory parameters.
+
+```azurecli
+az k8s-extension delete --name azureml --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters
+```
+
+>[!NOTE]
+> The Azure resource representing this extension gets deleted immediately. The Helm release on the cluster associated with this extension is only deleted when the agents running on the Kubernetes cluster have network connectivity and can reach out to Azure services again to fetch the desired state.
+
+## Next steps
+
+* View the list of [currently available cluster extensions](cluster-extensions.md#currently-available-extensions).
+* Learn about [Kubernetes applications available through Marketplace](deploy-marketplace.md).
+
+<!-- LINKS -->
+[arc-k8s-extensions]: ../azure-arc/kubernetes/conceptual-extensions.md
+[az-feature-register]: /cli/azure/feature#az-feature-register
+[az-feature-list]: /cli/azure/feature#az-feature-list
+[az-provider-register]: /cli/azure/provider#az-provider-register
+[azure-ml-overview]: ../machine-learning/how-to-attach-kubernetes-anywhere.md
+[dapr-overview]: ./dapr.md
+[gitops-overview]: ../azure-arc/kubernetes/conceptual-gitops-flux2.md
+[gitops-support]: ../azure-arc/kubernetes/extensions-release.md#flux-gitops
+[gitops-tutorial]: ../azure-arc/kubernetes/tutorial-use-gitops-flux2.md
+[k8s-extension-reference]: /cli/azure/k8s-extension
+[use-managed-identity]: ./use-managed-identity.md
+[workload-identity-overview]: workload-identity-overview.md
+[use-azure-ad-pod-identity]: use-azure-ad-pod-identity.md
+
aks Deploy Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deploy-marketplace.md
Last updated 05/01/2023
-# Deploy a Kubernetes application from Azure Marketplace
+# Deploy and manage a Kubernetes application from Azure Marketplace
[Azure Marketplace][azure-marketplace] is an online store that contains thousands of IT software applications and services built by industry-leading technology companies. In Azure Marketplace, you can find, try, buy, and deploy the software and services that you need to build new solutions and manage your cloud infrastructure. The catalog includes solutions for different industries and technical areas, free trials, and consulting services from Microsoft partners.
az provider register --namespace Microsoft.ContainerService --wait
az provider register --namespace Microsoft.KubernetesConfiguration --wait ```
-## Select and deploy a Kubernetes offer
+## Select and deploy a Kubernetes application
### From the AKS portal screen
az provider register --namespace Microsoft.KubernetesConfiguration --wait
:::image type="content" source="./media/deploy-marketplace/plan-pricing.png" alt-text="Screenshot of the offer purchasing page in the Azure portal, showing plan and pricing information.":::
-1. Follow each page in the wizard, all the way through Review + Create. Fill in information for your resource group, your cluster, and any configuration options that the application requires. You can decide to deploy on a new AKS cluster or use an existing cluster.
+1. Follow each page in the wizard, all the way through Review + Create. Fill in information for your resource group, your cluster, and any configuration options that the application requires.
:::image type="content" source="./media/deploy-marketplace/review-create.png" alt-text="Screenshot of the Azure portal wizard for deploying a new offer, with the selector for creating a cluster or using an existing one.":::
You'll see your recently installed extensions listed:
Select an extension name to navigate to a properties view where you're able to disable auto upgrades, check the provisioning state, delete the extension instance, or modify configuration settings as needed.
+To manage settings of your installed extension, you can edit the configuration settings:
+
+![Screenshot of Cluster-extension-config-settings.](media/deploy-marketplace/cluster-extension-config-settings.png)
If you experience issues, see the [troubleshooting checklist for failed deployme
[marketplace-troubleshoot]: /troubleshoot/azure/azure-kubernetes/troubleshoot-failed-kubernetes-deployment-offer ++
api-management Api Version Retirement Sep 2023 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/api-version-retirement-sep-2023.md
Title: Azure API Management - API version retirements (September 2023) | Microsoft Docs
-description: Azure API Management is retiring all API versions prior to 2021-08-01. If you use one of these API versions, you must update your tools, scripts, or programs to use the latest versions.
+description: The Azure API Management service is retiring all API versions prior to 2021-08-01. If you use one of these API versions, you must update your tools, scripts, or programs to use the latest versions.
documentationcenter: ''-+ Last updated 07/25/2022-+ # API version retirements (September 2023) Azure API Management uses Azure Resource Manager (ARM) to configure your API Management instances. The API version is embedded in your use of templates that describe your infrastructure, tools that are used to configure the service, and programs that you write to manage your Azure API Management services.
-On 30 September 2023, all API versions prior to **2021-08-01** will be retired and API calls using those API versions will fail. This means you'll no longer be able to create or manage your API Management services using your existing templates, tools, scripts, and programs until they've been updated. Data operations (such as accessing the APIs or Products configured on Azure API Management) will be unaffected by this update, including after 30 September 2023.
+On 30 September 2023, all API versions for the Azure API Management service prior to **2021-08-01** will be retired and API calls using those API versions will fail. This means you'll no longer be able to create or manage your API Management services using your existing templates, tools, scripts, and programs until they've been updated. Data operations (such as accessing the APIs or Products configured on Azure API Management) will be unaffected by this update, including after 30 September 2023.
From now through 30 September 2023, you can continue to use the templates, tools, and programs without impact. You can transition to API version 2021-08-01 or later at any point prior to 30 September 2023. ## Is my service affected by this?
-While your service isn't* affected by this change, any tool, script, or program that uses the Azure Resource Manager (such as the Azure CLI, Azure PowerShell, Azure API Management DevOps Resource Kit, or Terraform) is affected by this change. You'll be unable to run those tools successfully unless you update the tools.
+While your service isn't affected by this change, any tool, script, or program that uses the Azure Resource Manager (such as the Azure CLI, Azure PowerShell, Azure API Management DevOps Resource Kit, or Terraform) to interact with the API Management service is affected by this change. You'll be unable to run those tools successfully unless you update the tools.
## What is the deadline for the change?
api-management Set Header Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-header-policy.md
The `set-header` policy assigns a value to an existing HTTP response and/or requ
### Usage notes
- Multiple values of a header are concatenated to a CSV string, for example:
+Multiple values of a header are concatenated to a CSV string, for example:
`headerName: value1,value2,value3`
User-Agent: value2
User-Agent: value3 ```
+The following limitations apply:
+
+- Removal of the `Server` header isn't supported.
+ ## Examples ### Add header, override existing
This example shows how to apply policy at the API level to supply context inform
- [API Management transformation policies](api-management-transformation-policies.md)
azure-app-configuration Howto Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-geo-replication.md
To learn more about the concept of geo-replication, see [Geo-replication in Azur
To create a replica of your configuration store in the portal, follow the steps below.
+> [!NOTE]
+> Creating a replica for an App Configuration store with private endpoints configured with Static IP is not supported. If you prefer a private endpoint with Static IP configuration, replicas must be created before any private endpoint is added to a store.
+ <!-- ### [Portal](#tab/azure-portal) --> 1. In your App Configuration store, under **Settings**, select **Geo-replication**.
azure-cache-for-redis Cache Azure Active Directory For Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-azure-active-directory-for-authentication.md
To use the ACL integration, your client application must assume the identity of
Because most Azure Cache for Redis clients assume that a password/access key is used for authentication, you likely need to update your client workflow to support authentication using Azure AD. In this section, you learn how to configure your client applications to connect to Azure Cache for Redis using an Azure AD token.
-<!-- Conceptual Art goes here. -->
### Azure AD Client Workflow
Because most Azure Cache for Redis clients assume that a password/access key is
1. Ensure that your client executes a Redis [AUTH command](https://redis.io/commands/auth/) automatically before your Azure AD token expires using:
- 1. `UserName` = Object ID of your managed identity or service principal
+ - `UserName` = Object ID of your managed identity or service principal
- 1. `Password` = Azure AD token refreshed periodically
+ - `Password` = Azure AD token refreshed periodically
<!-- (ADD code snippet) --> ### Client library support
-The library `Microsoft.Azure.StackExchangeRedis` is an extension of `StackExchange.Redis` that enables you to use Azure Active Directory to authenticate connections from a Redis client application to an Azure Cache for Redis. The extension manages the authentication token, including proactively refreshing tokens before they expire to maintain persistent Redis connections over multiple days.
+The library [`Microsoft.Azure.StackExchangeRedis`](https://www.nuget.org/packages/Microsoft.Azure.StackExchangeRedis) is an extension of `StackExchange.Redis` that enables you to use Azure Active Directory to authenticate connections from a Redis client application to an Azure Cache for Redis. The extension manages the authentication token, including proactively refreshing tokens before they expire to maintain persistent Redis connections over multiple days.
-This [code sample](https://www.nuget.org/packages/Microsoft.Azure.StackExchangeRedis) demonstrates how to use the `Microsoft.Azure.StackExchangeRedis` NuGet package to connect to your Azure Cache for Redis instance using Azure Active Directory.
+This [code sample](https://github.com/Azure/Microsoft.Azure.StackExchangeRedis) demonstrates how to use the `Microsoft.Azure.StackExchangeRedis` NuGet package to connect to your Azure Cache for Redis instance using Azure Active Directory.
<!-- The following table includes links to code samples, which demonstrate how to connect to your Azure Cache for Redis instance using an Azure AD token. A wide variety of client libraries are included in multiple languages.
azure-cache-for-redis Cache Configure Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-configure-role-based-access-control.md
# Configure role-based access control with Data Access Policy
-Managing access to your Azure Cache for Redis instance is critical to ensure that the right users have access to the right set of data and commands. In Redis version 1, the [Access Control List](https://redis.io/docs/management/security/acl/) (ACL) was introduced. ACL limits which user can execute certain commands, and the keys that a user can be access. For example, you can prohibit specific users from deleting keys in the cache using [DEL](https://redis.io/commands/del/) command.
+Managing access to your Azure Cache for Redis instance is critical to ensure that the right users have access to the right set of data and commands. In Redis version 6, the [Access Control List](https://redis.io/docs/management/security/acl/) (ACL) was introduced. ACL limits which users can execute certain commands and which keys a user can access. For example, you can prohibit specific users from deleting keys in the cache using the [DEL](https://redis.io/commands/del/) command.
Azure Cache for Redis now integrates this ACL functionality with Azure Active Directory (Azure AD) to allow you to configure your Data Access Policies for your application's service principal and managed identity.
Azure Cache for Redis offers three built-in access policies: _Owner_, _Contribut
## Prerequisites and limitations -- Redis ACL and Data Access Policies aren't supported on Azure Cache for Redis instances that run Redis version 1.
+- Redis ACL and Data Access Policies aren't supported on Azure Cache for Redis instances that run Redis version 4.
- Redis ACL and Data Access Policies aren't supported on Azure Cache for Redis instances that depend on [Cloud Services](cache-faq.yml#caches-with-a-dependency-on-cloud-services--classic). - Azure AD authentication and authorization are supported for SSL connections only. - Some Redis commands are [blocked](cache-configure.md#redis-commands-not-supported-in-azure-cache-for-redis). ## Permissions for your data access policy
-As documented on [Redis Access Control List](https://redis.io/docs/management/security/acl/), ACL in Redis version 1.1 allows configuring access permissions for two areas:
+As documented on [Redis Access Control List](https://redis.io/docs/management/security/acl/), ACL in Redis version 6.0 allows configuring access permissions for three areas:
### Command categories
azure-cache-for-redis Cache Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-whats-new.md
Last updated 05/11/2023
### Azure Active Directory-based authentication and authorization (preview)
-Azure Active Directory (Azure AD) based authentication and authorization is now available for public preview with Azure Cache for Redis. With this Azure AD integration, users can connect to their cache instance without an access key and use role-based access control to connect to their cache instance.
+Azure Active Directory (Azure AD) based [authentication and authorization](cache-azure-active-directory-for-authentication.md) is now available for public preview with Azure Cache for Redis. With this Azure AD integration, users can connect to their cache instance without an access key and use [role-based access control](cache-configure-role-based-access-control.md) to connect to their cache instance.
> [!IMPORTANT] > The updates to Azure Cache for Redis that enable both Azure Active Directory for authentication and role-based access control are available only in East US region.
azure-monitor Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings.md
The **AllMetrics** setting routes a resource's platform metrics to other destina
With logs, you can select the log categories you want to route individually or choose a category group. > [!NOTE]
-> Category groups don't apply to metrics. Not all resources have category groups available.
+> Category groups don't apply to all metric resource providers. If a provider doesn't have them available in the diagnostic settings in the Azure portal, then they also won't be available via Azure Resource Manager templates.
You can use *category groups* to dynamically collect resource logs based on predefined groupings instead of selecting individual log categories. Microsoft defines the groupings to help monitor specific use cases across all Azure services.
azure-monitor Monitor Virtual Machine Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-alerts.md
The following section lists common alert rules for virtual machines in Azure Mon
> The details for log alerts provided here are using data collected by using [VM Insights](vminsights-overview.md), which provides a set of common performance counters for the client operating system. This name is independent of the operating system type. ### Machine unavailable
-One of the most common monitoring requirements for a virtual machine is to create an alert if it stops running. The best method is to create a metric alert rule in Azure Monitor by using the VM availability metric. It's currently in public preview. For a walk-through on this metric, see [Create availability alert rule for Azure virtual machine](tutorial-monitor-vm-alert-availability.md).
+One of the most common monitoring requirements for a virtual machine is to create an alert if it stops running. The best method is to create a metric alert rule in Azure Monitor by using the VM availability metric, which is currently in public preview. For a walk-through on this metric, see [Create availability alert rule for Azure virtual machine](tutorial-monitor-vm-alert-availability.md).
As described in [Scaling alert rules](#scaling-alert-rules), create an availability alert rule by using a subscription or resource group as the target resource. The rule applies to multiple virtual machines, including new machines that you create after the alert rule.
azure-monitor Vminsights Dependency Agent Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-dependency-agent-maintenance.md
Last updated 04/16/2020
The Dependency Agent collects data about processes running on the virtual machine and external process dependencies. Dependency Agent updates include bug fixes or support of new features or functionality. This article describes Dependency Agent requirements and how to upgrade Dependency Agent manually or through automation.
+>[!NOTE]
+> The Dependency Agent sends heartbeat data to the [InsightsMetrics](/azure/azure-monitor/reference/tables/insightsmetrics) table, for which you incur data ingestion charges. This behavior is different from Azure Monitor Agent, which sends agent health data to the [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat) table, which is free from data collection charges.
+ ## Dependency Agent requirements - The Dependency Agent requires the Azure Monitor Agent to be installed on the same machine.
azure-resource-manager Deploy Service Catalog Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/deploy-service-catalog-quickstart.md
Title: Deploy a service catalog managed application description: Describes how to deploy a service catalog's managed application for an Azure Managed Application using Azure PowerShell, Azure CLI, or Azure portal. Previously updated : 03/21/2023 Last updated : 05/12/2023
$mrgname = $mrgprefix + $mrgtimestamp
$mrgname ```
-The `$mrgprefix` and `$mrgtimestamp` variables are concatenated to create a managed resource group name like _mrg-sampleManagedApplication-20230310100148_ that's stored in the `$mrgname` variable. The name's format `mrg-{definitionName}-{dateTime}` is the same format as the portal's default value. You use the `$mrgname` variable's value when you deploy the managed application.
+The `$mrgprefix` and `$mrgtimestamp` variables are concatenated and stored in the `$mrgname` variable. The variable's value is in the format _mrg-sampleManagedApplication-20230512103059_. You use the `$mrgname` variable's value when you deploy the managed application.
You need to provide several parameters to the deployment command for the managed application. You can use a JSON formatted string or create a JSON file. In this example, we use a JSON formatted string. The PowerShell escape character for the quote marks is the backtick (`` ` ``) character. The backtick is also used for line continuation so that commands can use multiple lines.
The parameters to create the managed resources:
- `appServicePlanName`: Create a plan name. Maximum of 40 alphanumeric characters and hyphens. For example, _demoAppServicePlan_. App Service plan names must be unique within a resource group in your subscription. - `appServiceNamePrefix`: Create a prefix for the plan name. Maximum of 47 alphanumeric characters or hyphens. For example, _demoApp_. During deployment, the prefix is concatenated with a unique string to create a name that's globally unique across Azure. - `storageAccountNamePrefix`: Use only lowercase letters and numbers and a maximum of 11 characters. For example, _demostg1234_. During deployment, the prefix is concatenated with a unique string to create a name globally unique across Azure. Although you're creating a prefix, the control checks for existing names in Azure and might post a validation message that the name already exists. If so, choose a different prefix.-- `storageAccountType`: The default is Standard_LRS. The other options are Premium_LRS, Standard_LRS, and Standard_GRS.
+- `storageAccountType`: The options are Premium_LRS, Standard_LRS, and Standard_GRS.
# [Azure CLI](#tab/azure-cli)
subid=$(az account list --query [].id --output tsv)
mrgpath="/subscriptions/$subid/resourceGroups/$mrgname" ```
-The `mrgprefix` and `mrgtimestamp` variables are concatenated to create a managed resource group name like _mrg-sampleManagedApplication-20230310100148_ that's stored in the `mrgname` variable. The name's format:`mrg-{definitionName}-{dateTime}` is the same format as the portal's default value. The `mrgname` and `subid` variable's are concatenated to create the `mrgpath` variable value that creates the managed resource group during the deployment.
+The `mrgprefix` and `mrgtimestamp` variables are concatenated and stored in the `mrgname` variable. The variable's value is in the format _mrg-sampleManagedApplication-20230512103059_. The `mrgname` and `subid` variables are concatenated to create the `mrgpath` variable value that creates the managed resource group during the deployment.
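A minimal Bash sketch of the concatenation described above — the prefix value and date format are assumptions based on the example name shown:

```bash
# Build the managed resource group name in the portal's default format:
# mrg-{definitionName}-{dateTime}
mrgprefix="mrg-sampleManagedApplication-"
mrgtimestamp=$(date +%Y%m%d%H%M%S)
mrgname="${mrgprefix}${mrgtimestamp}"
echo "$mrgname"
```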
You need to provide several parameters to the deployment command for the managed application. You can use a JSON formatted string or create a JSON file. In this example, we use a JSON formatted string. In Bash, the escape character for the quote marks is the backslash (`\`) character. The backslash is also used for line continuation so that commands can use multiple lines.
The parameters to create the managed resources:
- `appServicePlanName`: Create a plan name. Maximum of 40 alphanumeric characters and hyphens. For example, _demoAppServicePlan_. App Service plan names must be unique within a resource group in your subscription. - `appServiceNamePrefix`: Create a prefix for the plan name. Maximum of 47 alphanumeric characters or hyphens. For example, _demoApp_. During deployment, the prefix is concatenated with a unique string to create a name that's globally unique across Azure. - `storageAccountNamePrefix`: Use only lowercase letters and numbers and a maximum of 11 characters. For example, _demostg1234_. During deployment, the prefix is concatenated with a unique string to create a name globally unique across Azure. Although you're creating a prefix, the control checks for existing names in Azure and might post a validation message that the name already exists. If so, choose a different prefix.-- `storageAccountType`: The default is Standard_LRS. The other options are Premium_LRS, Standard_LRS, and Standard_GRS.
+- `storageAccountType`: The options are Premium_LRS, Standard_LRS, and Standard_GRS.
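As an illustrative sketch of the JSON-formatted parameter string in Bash, with quote marks escaped by backslashes and backslash line continuations — the parameter values are the examples from the list above:

```bash
# JSON-formatted parameter string; quote marks are escaped with backslashes,
# and the backslash is also used for line continuation.
params="{\"appServicePlanName\":{\"value\":\"demoAppServicePlan\"}, \
\"appServiceNamePrefix\":{\"value\":\"demoApp\"}, \
\"storageAccountNamePrefix\":{\"value\":\"demostg1234\"}, \
\"storageAccountType\":{\"value\":\"Standard_LRS\"}}"

# Verify the string parses as JSON before using it in a deployment command.
echo "$params" | python3 -m json.tool > /dev/null && echo "valid JSON"
```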
# [Portal](#tab/azure-portal)
azure-resource-manager Publish Bicep Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-bicep-definition.md
To create and publish a managed application definition to your service catalog,
If your managed application definition is more than 120 MB or if you want to use your own storage account for your organization's compliance reasons, go to [Quickstart: Bring your own storage to create and publish an Azure Managed Application definition](publish-service-catalog-bring-your-own-storage.md).
-You can also use Bicep deploy an existing managed application definition. For more information, go to [Quickstart: Use Bicep to deploy an Azure Managed Application definition](deploy-bicep-definition.md).
+You can also use Bicep to deploy a managed application definition from your service catalog. For more information, go to [Quickstart: Use Bicep to deploy an Azure Managed Application definition](deploy-bicep-definition.md).
## Prerequisites
azure-resource-manager Publish Service Catalog App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-service-catalog-app.md
Title: Create and publish Azure Managed Application in service catalog
description: Describes how to create and publish an Azure Managed Application in your service catalog using Azure PowerShell, Azure CLI, or Azure portal. Previously updated : 03/21/2023 Last updated : 05/12/2023 # Quickstart: Create and publish an Azure Managed Application definition
To publish a managed application to your service catalog, do the following tasks
If your managed application definition is more than 120 MB or if you want to use your own storage account for your organization's compliance reasons, go to [Quickstart: Bring your own storage to create and publish an Azure Managed Application definition](publish-service-catalog-bring-your-own-storage.md).
-> [!NOTE]
-> You can use Bicep to develop a managed application definition but it must be converted to ARM template JSON before you can publish the definition in Azure. To convert Bicep to JSON, use the Bicep [build](../bicep/bicep-cli.md#build) command. After the file is converted to JSON it's recommended to verify the code for accuracy.
->
-> Bicep files can be used to deploy an existing managed application definition.
+You can use Bicep to develop a managed application definition but it must be converted to ARM template JSON before you can publish the definition in Azure. For more information, go to [Quickstart: Use Bicep to create and publish an Azure Managed Application definition](publish-bicep-definition.md#convert-bicep-to-json).
+
+You can also use Bicep to deploy a managed application definition from your service catalog. For more information, go to [Quickstart: Use Bicep to deploy an Azure Managed Application definition](deploy-bicep-definition.md).
## Prerequisites
Add the following JSON and save the file. It defines the resources to deploy an
As a publisher, you define the portal experience to create the managed application. The _createUiDefinition.json_ file generates the portal's user interface. You define how users provide input for each parameter using [control elements](create-uidefinition-elements.md) like drop-downs and text boxes.
-Open Visual Studio Code, create a file with the case-sensitive name _createUiDefinition.json_ and save it. The user interface allows the user to input the App Service name prefix, App Service plan's name, storage account prefix, and storage account type. During deployment, the variables in _mainTemplate.json_ use the `uniqueString` function to append a 13-character string to the name prefixes so the names are globally unique across Azure.
+In this example, the user interface prompts you to input the App Service name prefix, App Service plan's name, storage account prefix, and storage account type. During deployment, the variables in _mainTemplate.json_ use the `uniqueString` function to append a 13-character string to the name prefixes so the names are globally unique across Azure.
+
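The naming pattern described above can be sketched as follows. The variable and parameter names here are illustrative only, not the exact contents of _mainTemplate.json_:

```json
{
  "variables": {
    "appServicePlanName": "[format('{0}{1}', parameters('appServicePlanNamePrefix'), uniqueString(resourceGroup().id))]",
    "storageAccountName": "[format('{0}{1}', parameters('storageAccountNamePrefix'), uniqueString(resourceGroup().id))]"
  }
}
```

Because `uniqueString` returns a deterministic 13-character hash of its seed, the same resource group always produces the same suffix, while different resource groups produce different, globally unique names.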
+Open Visual Studio Code, create a file with the case-sensitive name _createUiDefinition.json_ and save it.
-Add the following JSON to the file and save it.
+Add the following JSON code to the file and save it.
```json {
When the deployment is complete, you have a managed application definition in yo
You have access to the managed application definition, but you want to make sure other users in your organization can access it. Grant them at least the Reader role on the definition. They may have inherited this level of access from the subscription or resource group. To check who has access to the definition and add users or groups, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+## Clean up resources
+
+If you're going to deploy the definition, continue to the **Next steps** section, which links to the deployment article.
+
+If you're finished with the managed application definition, you can delete the resource groups you created named _packageStorageGroup_ and _appDefinitionGroup_.
+
+# [PowerShell](#tab/azure-powershell)
+
+The command prompts you to confirm that you want to remove the resource group.
+
+```azurepowershell
+Remove-AzResourceGroup -Name packageStorageGroup
+
+Remove-AzResourceGroup -Name appDefinitionGroup
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+The command prompts for confirmation, and then returns you to the command prompt while resources are being deleted.
+
+```azurecli
+az group delete --resource-group packageStorageGroup --no-wait
+
+az group delete --resource-group appDefinitionGroup --no-wait
+```
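If you run these commands in a script and want to skip the confirmation prompt, `az group delete` also accepts a `--yes` flag (an optional variation, not part of this quickstart):

```azurecli
az group delete --resource-group packageStorageGroup --yes --no-wait

az group delete --resource-group appDefinitionGroup --yes --no-wait
```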
+
+# [Portal](#tab/azure-portal)
+
+1. From Azure portal **Home**, in the search field, enter _resource groups_.
+1. Select **Resource groups**.
+1. Select **packageStorageGroup** and **Delete resource group**.
+1. To confirm the deletion, enter the resource group name and select **Delete**.
+
+Use the same steps to delete _appDefinitionGroup_.
+++ ## Next steps You've published the managed application definition. The next step is to learn how to deploy an instance of that definition.
azure-resource-manager Publish Service Catalog Bring Your Own Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-service-catalog-bring-your-own-storage.md
Title: Bring your own storage to create and publish an Azure Managed Application
description: Describes how to bring your own storage to create and publish an Azure Managed Application definition in your service catalog. Previously updated : 03/21/2023 Last updated : 05/12/2023 # Quickstart: Bring your own storage to create and publish an Azure Managed Application definition
To publish a managed application definition to your service catalog, do the foll
If your managed application definition is less than 120 MB and you don't want to use your own storage account, go to [Quickstart: Create and publish an Azure Managed Application definition](publish-service-catalog-app.md).
-> [!NOTE]
-> You can use Bicep to develop a managed application definition but it must be converted to ARM template JSON before you can publish the definition in Azure. To convert Bicep to JSON, use the Bicep [build](../bicep/bicep-cli.md#build) command. After the file is converted to JSON it's recommended to verify the code for accuracy.
->
-> Bicep files can be used to deploy an existing managed application definition.
+You can use Bicep to develop a managed application definition but it must be converted to ARM template JSON before you can publish the definition in Azure. For more information, go to [Quickstart: Use Bicep to create and publish an Azure Managed Application definition](publish-bicep-definition.md#convert-bicep-to-json).
+
+You can also use Bicep to deploy a managed application definition from your service catalog. For more information, go to [Quickstart: Use Bicep to deploy an Azure Managed Application definition](deploy-bicep-definition.md).
## Prerequisites
Add the following JSON and save the file. It defines the managed application's r
As a publisher, you define the portal experience to create the managed application. The _createUiDefinition.json_ file generates the portal's user interface. You define how users provide input for each parameter using [control elements](create-uidefinition-elements.md) like drop-downs and text boxes.
-Open Visual Studio Code, create a file with the case-sensitive name _createUiDefinition.json_ and save it. The user interface allows the user to input the App Service name prefix, App Service plan's name, storage account prefix, and storage account type. During deployment, the variables in _mainTemplate.json_ use the `uniqueString` function to append a 13-character string to the name prefixes so the names are globally unique across Azure.
+In this example, the user interface prompts you to input the App Service name prefix, App Service plan's name, storage account prefix, and storage account type. During deployment, the variables in _mainTemplate.json_ use the `uniqueString` function to append a 13-character string to the name prefixes so the names are globally unique across Azure.
+
+Open Visual Studio Code, create a file with the case-sensitive name _createUiDefinition.json_ and save it.
-Add the following JSON to the file and save it.
+Add the following JSON code to the file and save it.
```json {
When you run the Azure CLI command, a credentials warning message might be displ
You have access to the managed application definition, but you want to make sure other users in your organization can access it. Grant them at least the Reader role on the definition. They may have inherited this level of access from the subscription or resource group. To check who has access to the definition and add users or groups, go to [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+## Clean up resources
+
+If you're going to deploy the definition, continue to the **Next steps** section, which links to the deployment article.
+
+If you're finished with the managed application definition, you can delete the resource groups you created named _packageStorageGroup_, _byosDefinitionStorageGroup_, and _byosAppDefinitionGroup_.
+
+# [PowerShell](#tab/azure-powershell)
+
+The command prompts you to confirm that you want to remove the resource group.
+
+```azurepowershell
+Remove-AzResourceGroup -Name packageStorageGroup
+
+Remove-AzResourceGroup -Name byosAppDefinitionGroup
+
+Remove-AzResourceGroup -Name byosDefinitionStorageGroup
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+The command prompts for confirmation, and then returns you to the command prompt while resources are being deleted.
+
+```azurecli
+az group delete --resource-group packageStorageGroup --no-wait
+
+az group delete --resource-group byosAppDefinitionGroup --no-wait
+
+az group delete --resource-group byosDefinitionStorageGroup --no-wait
+```
+++ ## Next steps You've published the managed application definition. Now, learn how to deploy an instance of that definition.
azure-resource-manager Resources Without Resource Group Limit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resources-without-resource-group-limit.md
For some resource types, you need to contact support to have the 800 instance li
Some resources have a limit on the number instances per region. This limit is different than the 800 instances per resource group. To check your instances per region, use the Azure portal. Select your subscription and **Usage + quotas** in the left pane. For more information, see [Check resource usage against limits](../../networking/check-usage-against-limits.md).
-## Microsoft.AlertsManagement
-
-* actionRules
-* smartDetectorAlertRules
- ## Microsoft.Automation * automationAccounts
Some resources have a limit on the number instances per region. This limit is di
* machines * machines/extensions
-## microsoft.insights
-
-* metricalerts
-* scheduledqueryrules
- ## Microsoft.Logic * integrationAccounts
cognitive-services Azure Machine Learning Labeling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom/azure-machine-learning-labeling.md
Before you connect to Azure Machine Learning, you need an Azure Machine Learning
1. In the window that appears, follow the prompts. Select the Azure Machine Learning workspace youΓÇÖve created previously under the same Azure subscription. Enter a name for the new Azure Machine Learning project that will be created to enable labeling in Azure Machine Learning. >[!TIP]
- > Make sure your workspace is linked to the same Azure Blob Storage account and Language resource before continuing. You can create a new workspace and link to your storage account through the [Azure portal](https://ms.portal.azure.com/#create/Microsoft.MachineLearningServices). Ensure that the storage account
+ > Make sure your workspace is linked to the same Azure Blob Storage account and Language resource before continuing. You can create a new workspace and link to your storage account through the [Azure portal](https://ms.portal.azure.com/#create/Microsoft.MachineLearningServices). Ensure that the storage account is properly linked to the workspace.
1. (Optional) Turn on the vendor labeling toggle to use labeling vendor companies. Before choosing the vendor labeling companies, contact the vendor labeling companies on the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/consulting-services?search=AzureMLVend) to finalize a contract with them. For more information about working with vendor companies, see [How to outsource data labeling](/azure/machine-learning/how-to-outsource-data-labeling).
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Title: Reference table for all security alerts in Microsoft Defender for Cloud description: This article lists the security alerts visible in Microsoft Defender for Cloud Previously updated : 04/20/2023 Last updated : 05/08/2023 # Security alerts - a reference guide
Microsoft Defender for Containers provides security alerts on the cluster level
| **Unusual user-application pair accessed a key vault**<br>(KV_UserAppAnomaly) | A key vault has been accessed by a user-service principal pair that doesn't normally access it. This anomalous access pattern may be legitimate activity, but it could be an indication that a threat actor has gained access to the key vault in an attempt to access the secrets contained within it. We recommend further investigations. | Credential Access | Medium | | **User accessed high volume of key vaults**<br>(KV_AccountVolumeAnomaly) | A user or service principal has accessed an anomalously high volume of key vaults. This anomalous access pattern may be legitimate activity, but it could be an indication that a threat actor has gained access to multiple key vaults in an attempt to access the secrets contained within them. We recommend further investigations. | Credential Access | Medium | | **Denied access from a suspicious IP to a key vault**<br>(KV_SuspiciousIPAccessDenied) | An unsuccessful key vault access has been attempted by an IP that has been identified by Microsoft Threat Intelligence as a suspicious IP address. Though this attempt was unsuccessful, it indicates that your infrastructure might have been compromised. We recommend further investigations. | Credential Access | Low |
+| **Unusual access to the key vault from a suspicious IP (Non-Microsoft or External)**<br>(KV_UnusualAccessSuspiciousIP) | A user or service principal has attempted anomalous access to key vaults from a non-Microsoft IP in the last 24 hours. This anomalous access pattern may be legitimate activity. It could be an indication of a possible attempt to gain access to the key vault and the secrets contained within it. We recommend further investigations. | Credential Access | Medium |
## <a name="alerts-azureddos"></a>Alerts for Azure DDoS Protection
Defender for Cloud's supported kill chain intents are based on [version 9 of the
| **Command and Control** | V7, V9 | The command and control tactic represents how adversaries communicate with systems under their control within a target network. | | **Exfiltration** | V7, V9 | Exfiltration refers to techniques and attributes that result or aid in the adversary removing files and information from a target network. This category also covers locations on a system or network where the adversary may look for information to exfiltrate. | | **Impact** | V7, V9 | Impact events primarily try to directly reduce the availability or integrity of a system, service, or network; including manipulation of data to impact a business or operational process. This would often refer to techniques such as ransomware, defacement, data manipulation, and others.
-
+ > [!NOTE] > For alerts that are in preview: [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]
defender-for-cloud Episode Thirty https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-thirty.md
+
+ Title: New Custom Recommendations for AWS and GCP | Defender for Cloud in the field
+
+description: Learn about new custom recommendations for AWS and GCP in Defender for Cloud
+ Last updated : 05/14/2023++
+# New Custom Recommendations for AWS and GCP in Defender for Cloud
+
+**Episode description**: In this episode of Defender for Cloud in the Field, Yael Genut joins Yuri Diogenes to talk about the new custom recommendations for AWS and GCP. Yael explains the importance of creating custom recommendations in a multicloud environment and how to use Kusto Query Language to create these customizations. Yael also demonstrates the step-by-step process to create custom recommendations using this new capability and how these custom recommendations appear in the Defender for Cloud dashboard.
+<br>
+<br>
+<iframe src="https://aka.ms/docs/player?id=41612fbe-4c9c-4cd2-9a99-3fbd94d31bec" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+
+- [01:44](/shows/mdc-in-the-field/new-custom-recommendations#time=01m44s) - Understanding custom recommendations
+- [03:15](/shows/mdc-in-the-field/new-custom-recommendations#time=03m15s) - Creating a custom recommendation based on a template
+- [08:20](/shows/mdc-in-the-field/new-custom-recommendations#time=08m20s) - Creating a custom recommendation from scratch
+- [12:27](/shows/mdc-in-the-field/new-custom-recommendations#time=12m27s) - Custom recommendation update interval
+- [14:30](/shows/mdc-in-the-field/new-custom-recommendations#time=14m30s) - Filtering custom recommendations in the Defender for Cloud dashboard
+- [16:40](/shows/mdc-in-the-field/new-custom-recommendations#time=16m40s) - Prerequisites to use the custom recommendations feature
+## Recommended resources
+ - Learn how to [create custom recommendations and security standards](create-custom-recommendations.md)
+ - Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
+ - Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+ - Learn more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+- Follow us on social media:
+
+ - [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ - [Twitter](https://twitter.com/msftsecurity)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
defender-for-cloud Episode Twenty Nine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-nine.md
Last updated 04/27/2023
## Next steps > [!div class="nextstepaction"]
-> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
+> [New Custom Recommendations for AWS and GCP in Defender for Cloud](episode-thirty.md)
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 05/09/2023 Last updated : 05/14/2023 # What's new in Microsoft Defender for Cloud?
To learn about *planned* changes that are coming soon to Defender for Cloud, see
Updates in May include:
+- [New alert in Defender for Key Vault](#new-alert-in-defender-for-key-vault)
- [Agentless scanning now supports encrypted disks in AWS](#agentless-scanning-now-supports-encrypted-disks-in-aws) - [Revised JIT (Just-In-Time) rule naming conventions in Defender for Cloud](#revised-jit-just-in-time-rule-naming-conventions-in-defender-for-cloud) - [Onboard selected AWS regions](#onboard-selected-aws-regions)
Updates in May include:
- [Two Defender for DevOps recommendations now include Azure DevOps scan findings](#two-defender-for-devops-recommendations-now-include-azure-devops-scan-findings) - [New default setting for Defender for Servers vulnerability assessment solution](#new-default-setting-for-defender-for-servers-vulnerability-assessment-solution)
+### New alert in Defender for Key Vault
+
+Defender for Key Vault has the following new alert:
+
+| Alert (alert type) | Description | MITRE tactics | Severity |
+|||:-:||
+| **Unusual access to the key vault from a suspicious IP (Non-Microsoft or External)**<br>(KV_UnusualAccessSuspiciousIP) | A user or service principal has attempted anomalous access to key vaults from a non-Microsoft IP in the last 24 hours. This anomalous access pattern may be legitimate activity. It could be an indication of a possible attempt to gain access to the key vault and the secrets contained within it. We recommend further investigations. | Credential Access | Medium |
+
+For all of the available alerts, see [Alerts for Azure Key Vault](alerts-reference.md#alerts-azurekv).
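If you stream Defender for Cloud security alerts to a Log Analytics workspace through continuous export (an assumed configuration, not part of this release note), you can look for instances of this alert type with a query such as:

```kusto
SecurityAlert
| where AlertType == "KV_UnusualAccessSuspiciousIP"
| project TimeGenerated, DisplayName, AlertSeverity, CompromisedEntity
```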
+ ### Agentless scanning now supports encrypted disks in AWS Agentless scanning for VMs now supports processing of instances with encrypted disks in AWS, using both CMK and PMK.
We recommend updating your custom scripts, workflows, and governance rules to co
### Deprecation of legacy standards in compliance dashboard
-Legacy PCI DSS v3.2.1 and legacy SOC TSP have been fully deprecated in the Defender for Cloud compliance dashboard, and replaced by [SOC 2 Type 2](https://learn.microsoft.com/azure/compliance/offerings/offering-soc-2) initiative and [PCI DSS v4](https://learn.microsoft.com/azure/compliance/offerings/offering-pci-dss) initiative-based compliance standards.
-We have fully deprecated support of [PCI DSS](https://learn.microsoft.com/azure/compliance/offerings/offering-pci-dss) standard/initiative in Azure China 21Vianet.
+Legacy PCI DSS v3.2.1 and legacy SOC TSP have been fully deprecated in the Defender for Cloud compliance dashboard, and replaced by [SOC 2 Type 2](/azure/compliance/offerings/offering-soc-2) initiative and [PCI DSS v4](/azure/compliance/offerings/offering-pci-dss) initiative-based compliance standards.
+We have fully deprecated support of [PCI DSS](/azure/compliance/offerings/offering-pci-dss) standard/initiative in Azure China 21Vianet.
Learn how to [customize the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md).
If a subscription has a VA solution enabled on any of its VMs, no changes will
Learn how to [Find vulnerabilities and collect software inventory with agentless scanning (Preview)](enable-vulnerability-assessment-agentless.md). ## April 2023- Updates in April include: - [Agentless Container Posture in Defender CSPM (Preview)](#agentless-container-posture-in-defender-cspm-preview)
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important changes coming to Microsoft Defender for Cloud description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 05/07/2023 Last updated : 05/11/2023 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you can find them in the [What's
| Planned change | Estimated date for change | |--|--| | [Release of containers Vulnerability Assessment powered by Microsoft Defender Vulnerability Management (MDVM) in Defender CSPM](#release-of-containers-vulnerability-assessment-powered-by-microsoft-defender-vulnerability-management-mdvm-in-defender-cspm) | May 2023 |
-|[Renaming container recommendations powered by Qualys](#renaming-container-recommendations-powered-by-qualys) | May 2023 |
+| [Renaming container recommendations powered by Qualys](#renaming-container-recommendations-powered-by-qualys) | May 2023 |
+| [Additional scopes added to existing Azure DevOps Connectors](#additional-scopes-added-to-existing-azure-devops-connectors) | May 2023 |
| [DevOps Resource Deduplication for Defender for DevOps](#devops-resource-deduplication-for-defender-for-devops) | June 2023 | | [Replacing agent-based discovery with agentless discovery for containers capabilities in Defender CSPM](#replacing-agent-based-discovery-with-agentless-discovery-for-containers-capabilities-in-defender-cspm) | June 2023
Learn more about [Microsoft Defender Vulnerability Management (MDVM)](/microsoft
| Container registry images should have vulnerability findings resolved (powered by Qualys) | Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. | dbd0cb49-b563-45e7-9724-889e799fa648 | | Running container images should have vulnerability findings resolved (powered by Qualys) | Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. | 41503391-efa5-47ee-9282-4eff6131462c |
+### Additional scopes added to existing Azure DevOps Connectors
+
+**Estimated date for change: May 2023**
+
+Defender for DevOps will be adding additional scopes to the already existing Azure DevOps (ADO) application.
+
+The scopes that will be added include:
+
+- Advanced Security management: `vso.advsec_manage`. Needed to enable, disable, and manage GitHub Advanced Security for ADO.
+
+- Container Mapping: `vso.extension_manage`, `vso.gallery_manager`. Needed to share the decorator extension with the ADO organization.
+
+This change will only affect new Defender for DevOps customers that are trying to onboard ADO resources to Microsoft Defender for Cloud.
+
+Customers may experience ADO authentication errors when they try to create a new ADO connector. GitHub connectors and existing ADO connector flows will continue to work. This change in scope will result in downtime for the ADO connector creation experience in May 2023. After May, all new ADO connectors will be created with the new scopes.
### DevOps Resource Deduplication for Defender for DevOps
defender-for-iot Concept Supported Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-supported-protocols.md
OT network sensors can detect the following protocols when identifying assets an
|**DNP. org** | DNP3 | |**Emerson** | DeltaV<br> DeltaV - Discovery<br> Emerson OpenBSI/BSAP<br> Ovation DCS ADMD<br>Ovation DCS DPUSTAT<br> Ovation DCS SSRPC | |**Emerson Fischer** | ROC |
-|**Eurocontrol** | ASTERIX |
|**GE** | Bentley Nevada (System 1 / BN3500)<br>ClassicSDI (MarkVle) <br> EGD<br> GSM (GE MarkVI and MarkVIe)<br> InterSite<br> SDI (MarkVle) <br> SRTP (GE)<br> GE_CMP | |**Generic Applications** | Active Directory<br> RDP<br> Teamviewer<br> VNC<br> | |**Honeywell** | ENAP<br> Experion DCS CDA<br> Experion DCS FDA<br> Honeywell EUCN <br> Honeywell Discovery |
defender-for-iot Configure Sensor Settings Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/configure-sensor-settings-portal.md
Continue by updating the relevant setting directly on the OT network sensor. For
Use the following sections to learn more about the individual OT sensor settings available from the Azure portal:
+### Active Directory
+
+To configure Active Directory settings from the Azure portal, define values for the following options:
+
+|Name |Description |
+|||
+|**Domain Controller FQDN** | The fully qualified domain name (FQDN), exactly as it appears on your LDAP server. For example, enter `host1.subdomain.contoso.com`. <br><br> If you encounter an issue with the integration using the FQDN, check your DNS configuration. You can also enter the explicit IP of the LDAP server instead of the FQDN when setting up the integration. |
+|**Domain Controller Port** | The port where your LDAP is configured. For example, use port 636 for LDAPS (SSL) connections. |
+|**Primary Domain** | The domain name, such as `subdomain.contoso.com`, and then select the connection type for your LDAP configuration. <br><br>Supported connection types include: **LDAPS/NTLMv3** (recommended), **LDAP/NTLMv3**, or **LDAP/SASL-MD5** |
+|**Active Directory Groups** | Select **+ Add** to add an Active Directory group to each permission level listed, as needed. <br><br> When you enter a group name, make sure that you enter the group name exactly as it's defined in your Active Directory configuration on the LDAP server. You'll use these group names when adding new sensor users with Active Directory.<br><br> Supported permission levels include **Read-only**, **Security Analyst**, **Admin**, and **Trusted Domains**. |
+
+> [!IMPORTANT]
+> When entering LDAP parameters:
+>
+> - Define values exactly as they appear in Active Directory, except for the case.
+> - Use lowercase characters only, even if the configuration in Active Directory uses uppercase.
+> - LDAP and LDAPS can't be configured for the same domain. However, you can configure each in different domains and then use them at the same time.
+
+To add another Active Directory server, select **+ Add Server** and define those server values.
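Before saving the settings, you might want to confirm that the LDAPS port is reachable from your network. For example, using the example FQDN and port from the table above (an optional check with the standard `openssl` tool, not a Defender for IoT command):

```bash
# Attempt a TLS handshake with the LDAP server on the LDAPS port (636).
openssl s_client -connect host1.subdomain.contoso.com:636 -brief
```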
+ ### Bandwidth cap For a bandwidth cap, define the maximum bandwidth you want the sensor to use for outgoing communication from the sensor to the cloud, either in Kbps or Mbps.
For a bandwidth cap, define the maximum bandwidth you want the sensor to use for
**Minimum required for a stable connection to Azure**: 350 Kbps. At this minimum setting, connections to the sensor console may be slower than usual.
+### NTP
+
+To configure an NTP server for your sensor from the Azure portal, define the IP address or domain name of a valid IPv4 NTP server that uses port 123.
+ ### Subnet To focus the Azure device inventory on devices that are in your IoT/OT scope, you will need to manually edit the subnet list to include only the locally monitored subnets that are in your IoT/OT scope. Once the subnets have been configured, the network location of the devices is shown in the *Network location* (Public preview) column in the Azure device inventory. All of the devices associated with the listed subnets will be displayed as *local*, while devices associated with detected subnets not included in the list will be displayed as *routed*.
defender-for-iot How To Control What Traffic Is Monitored https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-control-what-traffic-is-monitored.md
While the OT network sensor automatically learns the subnets in your network, we
|**Clear all** | Clear all currently defined subnets. | |**Auto subnet learning** | Selected by default. Clear this option to define your subnets manually instead of having them automatically detected by your OT sensor as new devices are detected. | |**Resolve all Internet traffic as internal/private** | Select to consider all public IP addresses as private, local addresses. If selected, public IP addresses are treated as local addresses, and alerts aren't sent about unauthorized internet activity. <br><br>This option reduces notifications and alerts received about external addresses. |
- |**ICS subnet** | Read-only. ICS/OT subnets are marked automatically when the system recognizes OT activity or protocols. |
+ |**ICS subnet** | Read-only. ICS/OT subnets are marked automatically when the system recognizes OT activity or protocols. If an OT subnet isn't recognized automatically, you can [manually define a subnet as ICS](#manually-define-a-subnet-as-ics). |
|**Segregated** | Select to show this subnet separately when displaying the device map according to Purdue level. | 1. When you're done, select **Save** to save your updates.
+### Manually define a subnet as ICS
+
+If the sensor doesn't automatically mark an OT subnet as an ICS subnet, change the device type for any device in the relevant subnet to an ICS or IoT device type. The sensor then automatically marks the subnet as an ICS subnet.
+
+> [!NOTE]
+> To manually mark the subnet as ICS, change the device type in the device inventory on the OT sensor, not from the Azure portal.
+
+**To change the device type to manually update the subnet**:
+
+1. Sign in to your OT sensor console and go to **Device inventory**.
+
+1. In the device inventory grid, select a device from the relevant subnet, and then select **Edit** in the toolbar at the top of the page.
+
+1. In the **Type** field, select a device type from the dropdown list that is listed under **ICS** or **IoT**.
+
+The subnet will now be marked as an ICS subnet in the sensor.
+
+For more information, see [Edit device details](how-to-investigate-sensor-detections-in-a-device-inventory.md#edit-device-details).
+ ## Customize port and VLAN names Use the following procedures to enrich the device data shown in Defender for IoT by customizing port and VLAN names on your OT network sensors.
defender-for-iot How To Troubleshoot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-troubleshoot-sensor.md
For more information, see:
You can configure a standalone sensor and a management console, with the sensors related to it, to connect to NTP.
+> [!TIP]
+> When you're ready to start managing your OT sensor settings at scale, define NTP settings from the Azure portal. Once you apply settings from the Azure portal, settings on the sensor console are read-only. For more information, see [Configure OT sensor settings from the Azure portal (Public preview)](configure-sensor-settings-portal.md).
+ To connect a standalone sensor to NTP: - [See the CLI documentation](./references-work-with-defender-for-iot-cli-commands.md).
defender-for-iot Manage Users Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/manage-users-sensor.md
We recommend configuring on-premises users on your OT sensor with Active Directo
For example, use Active Directory when you have a large number of users that you want to assign Read Only access to, and you want to manage those permissions at the group level.
+> [!TIP]
+> When you're ready to start managing your OT sensor settings at scale, define Active Directory settings from the Azure portal. Once you apply settings from the Azure portal, settings on the sensor console are read-only. For more information, see [Configure OT sensor settings from the Azure portal (Public preview)](configure-sensor-settings-portal.md).
+ **To integrate with Active Directory**: 1. Sign in to your OT sensor and select **System Settings** > **Integrations** > **Active Directory**.
defender-for-iot Ot Pre Configured Appliances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-pre-configured-appliances.md
You can [order](mailto:hardware.sales@arrow.com?cc=DIoTHardwarePurchase@microsof
|**C5600** | [HPE ProLiant DL360](appliance-catalog/hpe-proliant-dl360.md) | **Max bandwidth**: Up to 3 Gbps <br>**Max devices**: 12K <br> 32 Cores/32G RAM/5.6TB | **Mounting**: 1U <br>**Ports**: 15x RJ45 or 8x SFP (OPT) | |**E1800** | [HPE ProLiant DL20 Gen10 Plus](appliance-catalog/hpe-proliant-dl20-plus-enterprise.md) (4SFF) <br><br> [Dell PowerEdge R350](appliance-catalog/dell-poweredge-r350-e1800.md) | **Max bandwidth**: Up to 1 Gbps<br>**Max devices**: 10K <br> 8 Cores/32G RAM/1.8TB | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT) | |**E500** | [Dell Edge 5200](appliance-catalog/dell-edge-5200.md) <br> (Rugged MIL-STD-810G) | **Max bandwidth**: Up to 1 Gbps<br>**Max devices**: 10K <br> 8 Cores/32G RAM/512GB | **Mounting**: Wall Mount<br>**Ports**: 3x RJ45 |
-|**L500** | [HPE ProLiant DL20 Gen10 Plus](appliance-catalog/hpe-proliant-dl20-plus-smb.md) <br> (NHP 2LFF) | **Max bandwidth**: Up to 200 Mbps<br>**Max devices**: 1,000 <br> 4 Cores/8G RAM/500GB | **Mounting**: 1U<br>**Ports**: 4x RJ45 |
+|**L500** | [HPE ProLiant DL20 Gen10 Plus](appliance-catalog/hpe-proliant-dl20-plus-smb.md) <br> (NHP 2LFF) | **Max bandwidth**: Up to 200 Mbps<br>**Max devices**: 1,000 <br> 8 Cores/8G RAM/500GB | **Mounting**: 1U<br>**Ports**: 4x RJ45 |
|**L100** | [YS-Techsystems YS-FIT2](appliance-catalog/ys-techsystems-ys-fit2.md) <br>(Rugged MIL-STD-810G) | **Max bandwidth**: Up to 10 Mbps <br>**Max devices**: 100 <br> 4 Cores/8G RAM/128GB | **Mounting**: DIN/VESA<br>**Ports**: 2x RJ45 | > [!NOTE]
For information about previously supported legacy appliances, see the [appliance
## Next steps > [!div class="step-by-step"]
-> [« Prepare an OT site deployment](best-practices/plan-prepare-deploy.md)
+> [« Prepare an OT site deployment](best-practices/plan-prepare-deploy.md)
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
This version includes bug fixes for stability improvements.
**Supported until**: 12/2023 - [Azure connectivity status shown on OT sensors](how-to-manage-individual-sensors.md#validate-connectivity-status)
+- [Configure Active Directory and NTP settings in the Azure portal](configure-sensor-settings-portal.md#active-directory)
## Versions 22.2.x
defender-for-iot Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md
Features released earlier than nine months ago are described in the [What's new
> Noted features listed below are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. >
+## May 2023
+
+|Service area |Updates |
+|||
+| **OT networks** | **Sensor versions 22.3.x and higher**: <br>- [Configure Active Directory and NTP settings in the Azure portal](#configure-active-directory-and-ntp-settings-in-the-azure-portal) |
+
+### Configure Active Directory and NTP settings in the Azure portal
+
+Now you can configure Active Directory and NTP settings for your OT sensors remotely from the **Sites and sensors** page in the Azure portal. These settings are available for OT sensor versions 22.3.x and higher.
+
+For more information, see the [Sensor setting reference](configure-sensor-settings-portal.md#sensor-setting-reference).
+ ## April 2023 |Service area |Updates |
devtest-labs Devtest Lab Guidance Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-guidance-get-started.md
description: This article describes primary Azure DevTest Labs scenarios, and ho
Previously updated : 02/03/2022 Last updated : 05/12/2023
expressroute Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/planned-maintenance.md
+
+ Title: Planned maintenance for ExpressRoute
+description: Learn how to plan for ExpressRoute maintenance events.
+Last updated: 05/10/2023
+# Planned maintenance for ExpressRoute
+
+ExpressRoute circuits and Direct Ports are configured with a primary and a secondary connection to Microsoft Enterprise Edge (MSEE) devices at Microsoft peering locations. These connections are established on physically different devices to offer reliable connectivity from on-premises to your Azure resources during planned or unplanned events.
+
+This article explains what happens during ExpressRoute circuit maintenance and describes the actions you should take to minimize service disruption during a planned or unplanned maintenance event.
+
+## Prepare for maintenance
+
+Microsoft Enterprise Edge (MSEE) routers undergo maintenance to improve platform reliability, apply security patches, replace faulty hardware, and deploy new software releases. Maintenance activities are planned and scheduled in advance to minimize the effect on your services.
+
+### Resiliency of ExpressRoute circuit
+
+The resiliency of an ExpressRoute circuit is achieved with two connections to two MSEEs at an [ExpressRoute location](expressroute-locations.md#expressroute-locations).
+
+Microsoft requires dual BGP sessions from the connectivity provider or your network edge: one to each MSEE. To be compliant with the SLA (service-level agreement) associated with the ExpressRoute circuit, dual BGP sessions between the MSEE routers and your edge routers must be established simultaneously.
++
+### Turn on maintenance alerts
+
+When a planned maintenance is scheduled, you're notified at least 14 days before the work window through Azure Service Health notifications. With Service Health, you can configure alerts for ExpressRoute circuit maintenance and view planned and scheduled maintenance events. To learn more about Service Health for ExpressRoute maintenance, see [view and configure ExpressRoute maintenance alerts](maintenance-alerts.md). It's crucial that you subscribe to Azure Service Health so you're informed of maintenance events in advance.
+
+## How maintenance events are scheduled
+
+Planned maintenance on the MSEE routers is scheduled over two different time windows. This separation ensures that connectivity over your ExpressRoute circuit isn't disrupted by the maintenance event and that at least one path is always available to reach your Azure services.
+
+During maintenance, we enable AS path prepend, which allows traffic to drain gracefully to the redundant path. The AS path prepend is done by prepending AS *12076* (eight times) to the BGP routes advertised towards on-premises and the ExpressRoute gateway connection.
+Ensure that any on-premises devices in the path accept the AS path prepend and allow traffic from on-premises to move over to the redundant ExpressRoute path.
+
+Check with your service provider to confirm they're set up to allow AS path prepend on your connections if they're managing your network.
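Because BGP prefers the route with the shortest AS path, prepending AS 12076 eight times makes the path under maintenance less preferred. The following is a minimal sketch of that path-selection effect (illustrative logic only, not vendor router configuration):

```python
# Sketch of BGP best-path selection by AS-path length, showing how
# Microsoft's prepend of AS 12076 (eight times) drains traffic to the
# redundant ExpressRoute path during maintenance. Illustrative only.

def best_path(paths):
    """Return the connection whose route has the shortest AS path."""
    return min(paths, key=lambda name: len(paths[name]))

# During maintenance on the primary connection, AS 12076 is prepended
# eight times to the existing path; the secondary path is unchanged.
maintenance = {
    "primary":   [12076] * 8 + [12076],  # prepended path: 9 AS hops
    "secondary": [12076],                # unchanged path: 1 AS hop
}

print(best_path(maintenance))
```

If a device in the path strips or ignores the prepend, both paths appear equally preferred and traffic can remain on the connection under maintenance, which is why the prepend must be accepted end to end.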
+
+## Maintenance activity between MSEE routers and Microsoft core network
+
+During the maintenance activity, the BGP session between your on-premises network and the MSEE might remain established and continue advertising routes from your on-premises network to the MSEE routers. In this case, you can't rely only on the presence of an established BGP session on your edge router to determine the integrity of the connection. Your routing policy might force traffic to a specific connection anyway, which can cause traffic discard: traffic is routed to the connection undergoing maintenance while return traffic arrives over the redundant path. To avoid traffic discard, configure your edge routers to forward traffic over a connection even when it receives BGP advertisements prepended with AS 12076, and to prefer the connection with the best BGP metric. When the BGP metrics on the primary and secondary connections are identical, traffic is load balanced.
++
+## Validation of the ExpressRoute circuit failover
+
+After you activate an ExpressRoute circuit, and before you use it in production, we recommend that you run a failover test to verify that your edge router BGP configuration is correct.
+
+You can validate ExpressRoute circuit failover in two steps:
+
+1. Shut down the BGP session between your on-premises edge router and the primary connection on the MSEE router. This step forces traffic through the secondary connection only. You can monitor the traffic statistics on the MSEE connection using the [`Get-AzExpressRouteCircuitStats`](expressroute-troubleshooting-expressroute-overview.md#confirm-the-traffic-flow) command. The **BitsInPerSecond** and **BitsOutPerSecond** traffic metrics should increment only on the path that is currently active.
+
+ :::image type="content" source="./media/planned-maintenance/primary-down.png" alt-text="Diagram of BGP peering down for primary connection of an ExpressRoute circuit.":::
+
+ When the test completes successfully, move to the second step.
+
+1. Shut down the BGP session between your on-premises edge router and the secondary MSEE connection. Repeat the verification actions in step 1 to validate that traffic is incrementing only on the primary path.
+
+ :::image type="content" source="./media/planned-maintenance/secondary-down.png" alt-text="Diagram of BGP peering down for secondary connection of an ExpressRoute circuit.":::
+
+You can run more tests by introducing AS path prepend on each path from your on-premises network towards the MSEE to verify the traffic flow failover. Similar testing can be performed with your service provider by introducing AS path prepend towards your on-premises network from the provider edge. Verify the failover procedure for both ExpressRoute private peering and ExpressRoute Microsoft peering.
+
+To check the status of BGP sessions in the failover test, you can use the guidelines described in the [Verify ExpressRoute connectivity](expressroute-troubleshooting-expressroute-overview.md) documentation.
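The counter check in the failover validation above can be sketched as follows. The counter values are hypothetical; in practice they come from the `Get-AzExpressRouteCircuitStats` output described in step 1:

```python
# Sketch of the failover check: given two samples of per-connection traffic
# counters, report which connections are actually carrying traffic.
# Counter values below are hypothetical placeholders.

def active_connections(before, after):
    """Return the connections whose traffic counters incremented between samples."""
    return [name for name in before if after[name] > before[name]]

# Samples taken before and after generating test traffic with the
# primary BGP session shut down:
before = {"primary": 1_000_000, "secondary": 5_000_000}
after  = {"primary": 1_000_000, "secondary": 9_500_000}

print(active_connections(before, after))
```

Only the path that should be active appears in the result; if both paths increment, traffic hasn't fully drained to the redundant connection.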
+
+Validating ExpressRoute circuit failover reduces the risk of outages during planned ExpressRoute circuit maintenance.
+
+If you haven't completed failover verification and the ExpressRoute circuit is already in production, it's never too late to schedule a maintenance window outside of working hours and run the failover test.
+
+> [!NOTE]
+> As a general guideline, terminating ExpressRoute BGP connections on stateful devices (such as firewalls) can cause issues with failover during planned or unplanned maintenance performed by Microsoft or your ExpressRoute service provider. Evaluate your setup to ensure that your traffic fails over properly, and when possible, terminate BGP sessions on stateless devices.
+
+## Monitor your ExpressRoute circuit
+
+You should track the status of connections through your ExpressRoute circuits. Tracking the health of network connectivity lets you react to unhealthy states and take prompt remediation. [Azure Monitor alerts](monitor-expressroute.md) proactively notify you when conditions that negatively affect your services are found in your monitoring data.
+
+Review the available metrics for [ExpressRoute monitoring](expressroute-monitoring-metrics-alerts.md) for ExpressRoute circuits and Direct Ports. At a minimum, configure alerts that trigger on **ARP availability**, **BGP availability**, and **Line Protocol**, and configure email notifications to be sent when an outage occurs.
+
+You can enhance your monitoring by using [Connection Monitor for ExpressRoute](how-to-configure-connection-monitor.md), a cloud-based network monitoring solution that monitors connectivity between on-premises networks (such as branch offices) and Azure deployments. Connection Monitor tracks not only service disruptions but also end-to-end performance degradation for your services.
+
+## Next steps
+
+* Learn about [Network Insights for ExpressRoute](expressroute-network-insights.md) to monitor and troubleshoot your ExpressRoute circuit.
iot-edge Tutorial Nested Iot Edge For Linux On Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-nested-iot-edge-for-linux-on-windows.md
The `azureiotedge-diagnostics` value is pulled from the container registry that'
If you're using a private container registry, make sure that all the images (IoTEdgeAPIProxy, edgeAgent, edgeHub, Simulated Temperature Sensor, and diagnostics) are present in the container registry.
-If a downstream device has a different processor architecture from the parent device, you need to specify the correct image for the *edgeAgent* and *edgeHub* modules in the downstream device *config.toml* file. For example, if the parent device is running on an ARM32v7 architecture and the downstream device is running on an AMD64 architecture, you need to specify the matching version and architecture image tag in the downstream device *config.toml* file.
+If a downstream device has a different processor architecture from the parent device, you need the appropriate architecture image. You can use a [connected registry](/azure/container-registry/intro-connected-registry) or you can specify the correct image for the *edgeAgent* and *edgeHub* modules in the downstream device *config.toml* file. For example, if the parent device is running on an ARM32v7 architecture and the downstream device is running on an AMD64 architecture, you need to specify the matching version and architecture image tag in the downstream device *config.toml* file.
```toml [agent.config]
iot-edge Tutorial Nested Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-nested-iot-edge.md
The `azureiotedge-diagnostics` value is pulled from the container registry that'
If you're using a private container registry, make sure that all the images (IoTEdgeAPIProxy, edgeAgent, edgeHub, Simulated Temperature Sensor, and diagnostics) are present in the container registry.
-If a downstream device has a different processor architecture from the parent device, you need to specify the correct image for the *edgeAgent* and *edgeHub* modules in the downstream device *config.toml* file. For example, if the parent device is running on an ARM32v7 architecture and the downstream device is running on an AMD64 architecture, you need to specify the matching version and architecture image tag in the downstream device *config.toml* file.
+If a downstream device has a different processor architecture from the parent device, you need the appropriate architecture image. You can use a [connected registry](/azure/container-registry/intro-connected-registry) or you can specify the correct image for the *edgeAgent* and *edgeHub* modules in the downstream device *config.toml* file. For example, if the parent device is running on an ARM32v7 architecture and the downstream device is running on an AMD64 architecture, you need to specify the matching version and architecture image tag in the downstream device *config.toml* file.
```toml [agent.config]
load-testing Resource Limits Quotas Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/resource-limits-quotas-capacity.md
The following limits apply on a per-region, per-subscription basis.
|||| | Concurrent test runs | 5-25 <sup>2</sup> | 1000 | | Test duration | 3 hours | |
+| Tests per resource | 10000 | |
+| Test runs per test | 5000 | |
+| File uploads per test | 1000 | |
+| App Components per test or test run | 100 | |
| [Test criteria](./how-to-define-test-criteria.md#load-test-fail-criteria) per test | 10 | | <sup>2</sup> If you aren't already at the maximum limit, you can request an increase to your default limit; we can't approve increase requests past the maximum limits stated above. To request an increase, contact Azure Support. Default limits vary by offer category type.
machine-learning How To Deploy Kubernetes Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-kubernetes-extension.md
Upon Azure Machine Learning extension deployment completes, you can use `kubectl
Update, list, show and delete an Azure Machine Learning extension. -- For AKS cluster without Azure Arc connected, refer to [Usage of AKS extensions](../aks/cluster-extensions.md#usage-of-cluster-extensions).
+- For AKS cluster without Azure Arc connected, refer to [Deploy and manage cluster extensions](../aks/deploy-extensions-az-cli.md).
- For Azure Arc-enabled Kubernetes, refer to [Deploy and manage Azure Arc-enabled Kubernetes cluster extensions](../azure-arc/kubernetes/extensions.md).
postgresql Common Errors And Special Scenarios Fms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/common-errors-and-special-scenarios-fms.md
Last updated 05/01/2023 +
-# Common errors and special scenarios for PostgreSQL Single Server to Flexible using the FMS migration tool.
+# Common errors and special scenarios for PostgreSQL Single Server to Flexible using the FMS migration tool
+This article explains common errors and special scenarios for migrating from PostgreSQL Single Server to Flexible Server using the FMS migration tool.
postgresql Partners Migration Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/partners-migration-postgresql.md
+
+ Title: Azure Database for PostgreSQL migration partners
+description: Lists of third-party migration partners with solutions that support Azure Database for PostgreSQL.
+Last updated: 05/11/2023
+# Azure Database for PostgreSQL migration partners
+++
+To broadly support your Azure Database for PostgreSQL solution, you can choose from a wide variety of industry-leading partners and tools. This article highlights Microsoft partners with migration solutions that support Azure Database for PostgreSQL.
+
+## Migration partners
+
+| Partner | Description | Links | Videos |
+| | | | |
+| ![Data Bene][9] |**Data Bene**<br>Databases done right! Data Bene is an open source software service company and an expert in PostgreSQL and its ecosystem. Their customer portfolio includes several Fortune 100 companies and several well-known "unicorns". Over the years, they have built a strong reputation in PostgreSQL and Citus Data solutions, and they provide support and technical assistance to ensure the smooth operation of your data infrastructure, including demanding projects in the health-care and banking industries.|[Website][databene_website]<br>[LinkedIn][databene_linkedin]<br>[Contact][databene_contact] | |
+| ![DatAvail][8] |**Datavail**<br>Datavail is one of the largest providers of database, data management, analytics, and application modernization services in North America. They offer database and application design, architecture, migration, and modernization consulting services for all leading legacy and modern data platforms, along with tech-enabled 24x7 managed services delivered by 1,000 consultants onshore, nearshore, and offshore.|[Website][datavail_website]<br>[Twitter][datavail_twitter]<br>[Contact][datavail_contact] | |
+| ![Newt Global][7] |**Newt Global**<br> Newt Global is a leading cloud migration and DevOps implementation company with over a decade of focus on app and database modernization. Newt Global leverages its proprietary platform, DMAP, to accelerate Oracle to PostgreSQL migration and can deliver migrations with 50% less time and effort. They have executed large and complex migrations of databases with 5-50 TB of data and their associated applications. They help accelerate end-to-end migration, from discovery and assessment through migration planning, migration execution, and post-migration validation. |[Website][newt_website]<br>[Marketplace][newt_marketplace]<br>[Twitter][newt_twitter]<br>[Contact][newt_contact] | |
+| ![SNP Technologies][1] |**SNP Technologies**<br>SNP Technologies is a cloud-only service provider, building secure and reliable solutions for businesses of the future. The company believes in generating real value for your business. From thought to execution, SNP Technologies shares a common purpose with clients, to turn their investment into an advantage.|[Website][snp_website]<br>[Twitter][snp_twitter]<br>[Contact][snp_contact] | |
+| ![Pragmatic Works][3] |**Pragmatic Works**<br>Pragmatic Works is a training and consulting company with deep expertise in data management and performance, Business Intelligence, Big Data, Power BI, and Azure. They focus on data optimization and improving the efficiency of SQL Server and cloud management.|[Website][pragmatic-works_website]<br>[Twitter][pragmatic-works_twitter]<br>[YouTube][pragmatic-works_youtube]<br>[Contact][pragmatic-works_contact] | |
+| ![Infosys][4] |**Infosys**<br>Infosys is a global leader in the latest digital services and consulting. With over three decades of experience managing the systems of global enterprises, Infosys expertly steers clients through their digital journey by enabling organizations with an AI-powered core. Doing so helps prioritize the execution of change. Infosys also provides businesses with agile digital at scale to deliver unprecedented levels of performance and customer delight.|[Website][infosys_website]<br>[Twitter][infosys_twitter]<br>[YouTube][infosys_youtube]<br>[Contact][infosys_contact] | |
+| ![credativ][5] |**credativ**<br>credativ is an independent consulting and services company. Since 1999, they have offered comprehensive services and technical support for the implementation and operation of Open Source software in business applications. Their comprehensive range of services includes strategic consulting, sound technical advice, qualified training, and personalized support up to 24 hours per day for all your IT needs.|[Marketplace][credativ_marketplace]<br>[Website][credativ_website]<br>[Twitter][credative_twitter]<br>[YouTube][credativ_youtube]<br>[Contact][credativ_contact] | |
+| ![Pactera][6] |**Pactera**<br>Pactera is a global company offering consulting, digital, technology, and operations services to the world's leading enterprises. From their roots in engineering to the latest in digital transformation, they give customers a competitive edge. Their proven methodologies and tools ensure your data is secure, authentic, and accurate.|[Website][pactera_website]<br>[Twitter][pactera_twitter]<br>[Contact][pactera_contact] | |
+
+## Next steps
+
+To learn more about some of Microsoft's other partners, see the [Microsoft Partner site](https://partner.microsoft.com/).
+
+<!--Image references-->
+[1]: ./media/partner-migration-postgresql/snp-logo.png
+[2]: ./media/partner-migration-postgresql/db-best-logo.png
+[3]: ./media/partner-migration-postgresql/pw-logo-text-cmyk-1000.png
+[4]: ./media/partner-migration-postgresql/infosys-logo.png
+[5]: ./media/partner-migration-postgresql/credativ-round-logo-2.png
+[6]: ./media/partner-migration-postgresql/pactera-logo-small-2.png
+[7]: ./media/partner-migration-postgresql/newt-logo.png
+[8]:./media/partner-migration-postgresql/datavail-logo.png
+[9]:./media/partner-migration-postgresql/data-bene-logo.png
+
+<!--Website links -->
+[snp_website]:https://www.snp.com//
+[pragmatic-works_website]:https://pragmaticworks.com//
+[infosys_website]:https://www.infosys.com/
+[credativ_website]:https://www.credativ.com/postgresql-competence-center/microsoft-azure
+[pactera_website]:https://en.pactera.com/
+[newt_website]:https://newtglobal.com/database-migration-acceleration-platform-dmap-from-newt-global-db-schema-migration-schema-migration-oracle-to-postgresql-migration/
+[datavail_website]:https://www.datavail.com/technologies/postgresql/?/
+[databene_website]:https://data-bene.io/
+
+<!--Get Started Links-->
+<!--Datasheet Links-->
+<!--Marketplace Links -->
+[credativ_marketplace]:https://azuremarketplace.microsoft.com/de-de/marketplace/apps?search=credativ&page=1
+[newt_marketplace]:https://azuremarketplace.microsoft.com/en-in/marketplace/apps/newtglobalconsultingllc1581492268566.dmap_db_container_offer?tab=Overview
+
+<!--Press links-->
+
+<!--YouTube links-->
+[pragmatic-works_youtube]:https://www.youtube.com/user/PragmaticWorks
+[infosys_youtube]:https://www.youtube.com/user/Infosys
+[credativ_youtube]:https://www.youtube.com/channel/UCnSnr6_TcILUQQvAwlYFc8A
+
+<!--LinkedIn links-->
+[databene_linkedin]:https://www.linkedin.com/company/data-bene/
+
+<!--Twitter links-->
+[snp_twitter]:https://twitter.com/snptechnologies
+[pragmatic-works_twitter]:https://twitter.com/PragmaticWorks
+[infosys_twitter]:https://twitter.com/infosys
+[credative_twitter]:https://twitter.com/credativ
+[pactera_twitter]:https://twitter.com/Pactera?s=17
+[newt_twitter]:https://twitter.com/newtglobal?lang=en
+[datavail_twitter]:https://twitter.com/datavail
+
+<!--Contact links-->
+[snp_contact]:mailto:sachin@snp.com
+[pragmatic-works_contact]:mailto:marketing@pragmaticworks.com
+[infosys_contact]:https://www.infosys.com/contact/
+[credativ_contact]:mailto:info@credativ.com
+[pactera_contact]:mailto:shushi.gaur@pactera.com
+[newt_contact]:mailto:dmap@newtglobalcorp.com
+[datavail_contact]:https://www.datavail.com/about/contact-us/
+[databene_contact]:https://www.data-bene.io/en#contact
private-5g-core Complete Private Mobile Network Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/complete-private-mobile-network-prerequisites.md
You must set these up in addition to the [ports required for Azure Stack Edge (A
Review and apply the firewall recommendations for the following - [Azure Stack Edge](../databox-online/azure-stack-edge-gpu-system-requirements.md#url-patterns-for-firewall-rules)-- [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/quickstart-connect-cluster.md?tabs=azure-cli%2cazure-cloud)
+- [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/network-requirements.md?tabs=azure-cloud)
- [Azure Network Function Manager](../network-function-manager/requirements.md) The following table contains the URL patterns for Azure Private 5G Core's outbound traffic.
purview Troubleshoot Policy Distribution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/troubleshoot-policy-distribution.md
- Title: Troubleshoot distribution of Microsoft Purview access policies
-description: Learn how to troubleshoot the communication of access policies that were created in Microsoft Purview and need to be enforced in data sources.
----- Previously updated : 11/21/2022--
-# Tutorial: Troubleshoot distribution of Microsoft Purview access policies (preview)
--
-In this tutorial, you learn how to programmatically fetch access policies that were created in Microsoft Purview. By doing so, you can troubleshoot the communication of policies between Microsoft Purview, where policies are created and updated, and the data sources, where these policies need to be enforced.
-
-For more information about Microsoft Purview policies, see the concept guides listed in the [Next steps](#next-steps) section.
-
-This guide uses examples from SQL Server as data sources.
-
-## Prerequisites
-
-* An Azure subscription. If you don't already have one, [create a free subscription](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-* A Microsoft Purview account. If you don't have one, see the [quickstart for creating a Microsoft Purview account](create-catalog-portal.md).
-* Register a data source, enable *Data use management*, and create a policy. To do so, use one of the Microsoft Purview policy guides. To follow along with the examples in this tutorial, you can [create a DevOps policy for Azure SQL Database](how-to-policies-devops-azure-sql-db.md).
-* Establish a bearer token and call data plane APIs. To learn how, see [how to call REST APIs for Microsoft Purview data planes](tutorial-using-rest-apis.md). To be authorized to fetch policies, you need to be a Policy Author, Data Source Admin, or Data Curator at the root-collection level in Microsoft Purview. To assign those roles, see [Manage Microsoft Purview role assignments](catalog-permissions.md#assign-permissions-to-your-users).
-
-## Overview
-
-You can fetch access policies from Microsoft Purview via either a *full pull* or a *delta pull*, as described in the following sections.
-
-The Microsoft Purview policy model is written in [JSON syntax](https://datatracker.ietf.org/doc/html/rfc8259).
-
-You can construct the policy distribution endpoint from the Microsoft Purview account name as
-`{endpoint} = https://<account-name>.purview.azure.com/pds`.
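As a quick sketch, the endpoint string can be assembled from the account name (the helper name here is illustrative, not part of any SDK):

```python
def pds_endpoint(account_name: str) -> str:
    """Build the policy distribution endpoint from a Purview account name."""
    return f"https://{account_name}.purview.azure.com/pds"

# Using the account from the examples later in this tutorial:
endpoint = pds_endpoint("relecloud-pv")  # https://relecloud-pv.purview.azure.com/pds
```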
-
-## Full pull
-
-Full pull provides a complete set of policies for a particular data resource scope.
-
-### Request
-
-To fetch policies for a data source via full pull, send a `GET` request to `/policyElements`, as follows:
-
-```
-GET {{endpoint}}/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProvider}/{resourceType}/{resourceName}/policyelements?api-version={apiVersion}&$filter={filter}
-```
-
-where the path `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProvider}/{resourceType}/{resourceName}` matches the resource ID for the data source.
-
-The last two parameters `api-version` and `$filter` are query parameters of type string.
-`$filter` is optional and can take the value `atScope` (the default, if the parameter isn't specified) or `childrenScope`. `atScope` requests all the policies that apply at the level of the path, including those that exist at a higher scope as well as those that apply specifically to a lower scope (that is, children data objects). `childrenScope` returns only the fine-grained policies that apply to the children data objects.
-
->[!Tip]
-> The resource ID can be found under the properties for the data source in the Azure portal.
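A small Python sketch (the helper name is illustrative) shows how the full-pull URL is composed from the distribution endpoint and the data source's resource ID, matching the request template above:

```python
def full_pull_url(endpoint: str, resource_id: str,
                  api_version: str = "2021-01-01-preview",
                  filter_value: str = "atScope") -> str:
    """Compose the full-pull request URL for a data source resource ID."""
    return (f"{endpoint}{resource_id}/policyElements"
            f"?api-version={api_version}&$filter={filter_value}")

url = full_pull_url(
    "https://relecloud-pv.purview.azure.com/pds",
    "/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012"
    "/resourceGroups/marketing-rg/providers/Microsoft.Sql"
    "/servers/relecloud-sql-srv1",
)
```

Send a `GET` to the resulting URL with a bearer token in the `Authorization` header.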
--
-### Response status codes
-
-|HTTP code|HTTP code description|Type|Description|Response|
-|||-|--|--|
-|200|Success|Success|The request was processed successfully|Policy data|
-|401|Unauthenticated|Error|No bearer token was passed in the request, or the token is invalid|Error data|
-|403|Forbidden|Error|Other authentication or authorization errors|Error data|
-|404|Not found|Error|The request path is invalid or not registered|Error data|
-|500|Internal server error|Error|The back-end service encountered an internal error|Error data|
-|503|Backend service unavailable|Error|The back-end service is unavailable|Error data|
-
-### Example for SQL Server (Azure SQL Database)
-
-**Example parameters**:
-- Microsoft Purview account: `relecloud-pv`
-- Data source resource ID: `/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg/providers/Microsoft.Sql/servers/relecloud-sql-srv1`
-
-**Example request**:
-
-```
-GET https://relecloud-pv.purview.azure.com/pds/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg/providers/Microsoft.Sql/servers/relecloud-sql-srv1/policyElements?api-version=2021-01-01-preview&$filter=atScope
-```
-**Example response**:
-
-`200 OK`
-
-```json
-{
- "count": 2,
- "syncToken": "820:0",
- "elements": [
- {
- "id": "9912572d-58bc-4835-a313-b913ac5bef97",
- "kind": "policy",
- "updatedAt": "2022-11-04T20:57:20.9389522Z",
- "version": 1,
- "elementJson": "{\"id\":\"9912572d-58bc-4835-a313-b913ac5bef97\",\"name\":\"marketing-rg_sqlsecurityauditor\",\"kind\":\"policy\",\"version\":1,\"updatedAt\":\"2022-11-04T20:57:20.9389522Z\",\"decisionRules\":[{\"kind\":\"decisionrule\",\"effect\":\"Permit\",\"updatedAt\":\"11/04/2022 20:57:20\",\"cnfCondition\":[[{\"attributeName\":\"resource.azure.path\",\"attributeValueIncludedIn\":[\"/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg/**\"]}],[{\"fromRule\":\"purviewdatarole_builtin_sqlsecurityauditor\",\"attributeName\":\"derived.purview.role\",\"attributeValueIncludes\":\"purviewdatarole_builtin_sqlsecurityauditor\"}],[{\"attributeName\":\"principal.microsoft.groups\",\"attributeValueIncludedIn\":[\"b29c1676-8d2c-4a81-b7e1-365b79088375\"]}]]},{\"kind\":\"decisionrule\",\"effect\":\"Permit\",\"id\":\"auto_0235e4df-0d3f-41ca-98ed-edf1b8bfcf9f\",\"updatedAt\":\"11/04/2022 20:57:20\",\"cnfCondition\":[[{\"attributeName\":\"resource.azure.path\",\"attributeValueIncludedIn\":[\"/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg/**\"]}],[{\"attributeName\":\"request.azure.dataAction\",\"attributeValueIncludedIn\":[\"Microsoft.Sql/sqlservers/Connect\"]}],[{\"attributeName\":\"principal.microsoft.groups\",\"attributeValueIncludedIn\":[\"b29c1676-8d2c-4a81-b7e1-365b79088375\"]}]]},{\"kind\":\"decisionrule\",\"effect\":\"Permit\",\"id\":\"auto_45fa5236-a2a3-4291-9f0a-813b2883f118\",\"updatedAt\":\"11/04/2022 20:57:20\",\"cnfCondition\":[[{\"attributeName\":\"resource.azure.path\",\"attributeValueIncludedIn\":[\"/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg/**\"]}],[{\"attributeName\":\"request.azure.dataAction\",\"attributeValueIncludedIn\":[\"Microsoft.Sql/sqlservers/databases/Connect\"]}],[{\"attributeName\":\"principal.microsoft.groups\",\"attributeValueIncludedIn\":[\"b29c1676-8d2c-4a81-b7e1-365b79088375\"]}]]}]}"
- },
- {
- "id": "f1f2ecc0-c8fa-473f-9adf-7f7bd53ffdb4",
- "scopes": [
- "/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg"
- ],
- "kind": "policyset",
- "updatedAt": "2022-11-04T20:57:20.9389456Z",
- "version": 1,
- "elementJson": "{\"id\":\"f1f2ecc0-c8fa-473f-9adf-7f7bd53ffdb4\",\"name\":\"f1f2ecc0-c8fa-473f-9adf-7f7bd53ffdb4\",\"kind\":\"policyset\",\"version\":1,\"updatedAt\":\"2022-11-04T20:57:20.9389456Z\",\"preconditionRules\":[{\"dnfCondition\":[[{\"attributeName\":\"resource.azure.path\",\"attributeValueIncludedIn\":[\"/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg/**\"]}]]}],\"policyRefs\":[\"9912572d-58bc-4835-a313-b913ac5bef97\"]}"
- }
- ]
-}
-```
-
-## Delta pull
-
-A delta pull provides an incremental view of policies (that is, the changes since the last pull request), regardless of whether the last pull was a full or a delta pull. A full pull is required prior to issuing the first delta pull.
-
-### Request
-
-To fetch policies via delta pull, send a `GET` request to `/policyEvents`, as follows:
-
-```
-GET {{endpoint}}/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProvider}/{resourceType}/{resourceName}/policyEvents?api-version={apiVersion}&syncToken={syncToken}
-```
-
-Provide the `syncToken` you got from the prior pull in any successive delta pulls.
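As with the full pull, the delta-pull URL can be sketched in Python (helper name is illustrative):

```python
def delta_pull_url(endpoint: str, resource_id: str, sync_token: str,
                   api_version: str = "2021-01-01-preview") -> str:
    """Compose the delta-pull request URL, carrying the prior syncToken."""
    return (f"{endpoint}{resource_id}/policyEvents"
            f"?api-version={api_version}&syncToken={sync_token}")
```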
-
-### Response status codes
-
-|HTTP code|HTTP code description|Type|Description|Response|
-|||-|--|--|
-|200|Success|Success|The request was processed successfully|Policy data|
-|304|Not modified|Success|No events were received since the last delta pull call|None|
-|401|Unauthenticated|Error|No bearer token was passed in the request, or the token is invalid|Error data|
-|403|Forbidden|Error|Other authentication or authorization errors|Error data|
-|404|Not found|Error|The request path is invalid or not registered|Error data|
-|500|Internal server error|Error|The back-end service encountered an internal error|Error data|
-|503|Backend service unavailable|Error| The back-end service is unavailable|Error data|
-
-### Examples for SQL Server (Azure SQL Database)
-
-**Example parameters**:
-- Microsoft Purview account: `relecloud-pv`
-- Data source resource ID: `/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg/providers/Microsoft.Sql/servers/relecloud-sql-srv1`
-- syncToken: `820:0`
-
-**Example request**:
-```
-GET https://relecloud-pv.purview.azure.com/pds/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg/providers/Microsoft.Sql/servers/relecloud-sql-srv1/policyEvents?api-version=2021-01-01-preview&syncToken=820:0
-```
-
-**Example response**:
-
-`200 OK`
-
-```json
-{
- "count": 2,
- "syncToken": "822:0",
- "elements": [
- {
- "eventType": "Microsoft.Purview/PolicyElements/Delete",
- "id": "f1f2ecc0-c8fa-473f-9adf-7f7bd53ffdb4",
- "scopes": [
- "/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg"
- ],
- "kind": "policyset",
- "updatedAt": "2022-11-04T20:57:20.9389456Z",
- "version": 1,
- "elementJson": "{\"id\":\"f1f2ecc0-c8fa-473f-9adf-7f7bd53ffdb4\",\"name\":\"f1f2ecc0-c8fa-473f-9adf-7f7bd53ffdb4\",\"kind\":\"policyset\",\"version\":1,\"updatedAt\":\"2022-11-04T20:57:20.9389456Z\",\"preconditionRules\":[{\"dnfCondition\":[[{\"attributeName\":\"resource.azure.path\",\"attributeValueIncludedIn\":[\"/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg/**\"]}]]}],\"policyRefs\":[\"9912572d-58bc-4835-a313-b913ac5bef97\"]}"
- },
- {
- "eventType": "Microsoft.Purview/PolicyElements/Delete",
- "id": "9912572d-58bc-4835-a313-b913ac5bef97",
- "scopes": [
- "/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg"
- ],
- "kind": "policy",
- "updatedAt": "2022-11-04T20:57:20.9389522Z",
- "version": 1,
- "elementJson": "{\"id\":\"9912572d-58bc-4835-a313-b913ac5bef97\",\"name\":\"marketing-rg_sqlsecurityauditor\",\"kind\":\"policy\",\"version\":1,\"updatedAt\":\"2022-11-04T20:57:20.9389522Z\",\"decisionRules\":[{\"kind\":\"decisionrule\",\"effect\":\"Permit\",\"updatedAt\":\"11/04/2022 20:57:20\",\"cnfCondition\":[[{\"attributeName\":\"resource.azure.path\",\"attributeValueIncludedIn\":[\"/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg/**\"]}],[{\"fromRule\":\"purviewdatarole_builtin_sqlsecurityauditor\",\"attributeName\":\"derived.purview.role\",\"attributeValueIncludes\":\"purviewdatarole_builtin_sqlsecurityauditor\"}],[{\"attributeName\":\"principal.microsoft.groups\",\"attributeValueIncludedIn\":[\"b29c1676-8d2c-4a81-b7e1-365b79088375\"]}]]},{\"kind\":\"decisionrule\",\"effect\":\"Permit\",\"id\":\"auto_0235e4df-0d3f-41ca-98ed-edf1b8bfcf9f\",\"updatedAt\":\"11/04/2022 20:57:20\",\"cnfCondition\":[[{\"attributeName\":\"resource.azure.path\",\"attributeValueIncludedIn\":[\"/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg/**\"]}],[{\"attributeName\":\"request.azure.dataAction\",\"attributeValueIncludedIn\":[\"Microsoft.Sql/sqlservers/Connect\"]}],[{\"attributeName\":\"principal.microsoft.groups\",\"attributeValueIncludedIn\":[\"b29c1676-8d2c-4a81-b7e1-365b79088375\"]}]]},{\"kind\":\"decisionrule\",\"effect\":\"Permit\",\"id\":\"auto_45fa5236-a2a3-4291-9f0a-813b2883f118\",\"updatedAt\":\"11/04/2022 20:57:20\",\"cnfCondition\":[[{\"attributeName\":\"resource.azure.path\",\"attributeValueIncludedIn\":[\"/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg/**\"]}],[{\"attributeName\":\"request.azure.dataAction\",\"attributeValueIncludedIn\":[\"Microsoft.Sql/sqlservers/databases/Connect\"]}],[{\"attributeName\":\"principal.microsoft.groups\",\"attributeValueIncludedIn\":[\"b29c1676-8d2c-4a81-b7e1-365b79088375\"]}]]}]}"
- }
- ]
-}
-```
-
-In this example, the delta pull communicates the event that the policy on the resource group *marketing-rg*, which has the scope ```"scopes": ["/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg"]```, was deleted, per the ```"eventType": "Microsoft.Purview/PolicyElements/Delete"```.
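A client that caches policies locally can apply delta-pull events to its cache. The following Python sketch is illustrative; only the Delete event type appears in the example above, so treating other event types as upserts is an assumption:

```python
def apply_events(cache: dict, events: list) -> dict:
    """Apply delta-pull events to a local {element id: element} cache."""
    for event in events:
        if event["eventType"].endswith("/Delete"):
            cache.pop(event["id"], None)     # remove deleted elements
        else:
            cache[event["id"]] = event       # assumed upsert semantics
    return cache

cache = {"9912572d-58bc-4835-a313-b913ac5bef97": {"kind": "policy"}}
events = [{"eventType": "Microsoft.Purview/PolicyElements/Delete",
           "id": "9912572d-58bc-4835-a313-b913ac5bef97"}]
apply_events(cache, events)
```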
--
-## Policy constructs
-Three top-level policy constructs are used within the responses to the full pull (`/policyElements`) and delta pull (`/policyEvents`) requests: `Policy`, `PolicySet`, and `AttributeRule`.
-
-### Policy
-
-`Policy` specifies the decision that the data source must enforce (*permit* or *deny*) when an Azure AD principal attempts access via a client, provided that the request context attributes satisfy the attribute predicates, as specified in the policy (for example: *scope*, *requested action*, and so on). An evaluation of the policy triggers an evaluation of `AttributeRules`, as referenced in the policy.
-
-|Member|Value|Type|Cardinality|Description|
-||--|-|--|--|
-|ID| |string|1||
-|name| |string|1||
-|kind| |string|1||
-|version|1|number|1||
-|updatedAt| |string|1| A string representation of time, in the format yyyy-MM-ddTHH:mm:ss.fffffffZ (for example: "2022-01-11T09:55:52.6472858Z")|
-|preconditionRules| |array[Object:Rule]|0..1|All the rules are 'anded'|
-|decisionRules| |array[Object:DecisionRule]|1||
-
-### PolicySet
-
-`PolicySet` associates an array of policy IDs with a resource scope, where they need to be enforced.
-
-|Member|Value|Type|Cardinality|Description|
-||--|-|--|--|
-|ID| |string|1||
-|name| |string|1||
-|kind| |string|1||
-|version|1|number|1||
-|updatedAt| |string|1| A string representation of time in the format yyyy-MM-ddTHH:mm:ss.fffffffZ (for example: "2022-01-11T09:55:52.6472858Z")|
-|preconditionRules| |array[Object:Rule]|0..1||
-|policyRefs| |array[string]|1|A list of policy IDs|
--
-### AttributeRule
-
-`AttributeRule` produces derived attributes and adds them to the request context attributes. An evaluation of `AttributeRule` triggers an evaluation of additional `AttributeRules`, as referenced in `AttributeRule`.
-
-|Member|Value|Type|Cardinality|Description|
-||--|-|--|--|
-|ID| |string|1||
-|name| |string|1||
-|kind|AttributeRule|string|1||
-|version|1|number|1||
-|dnfCondition| |array[array[Object:AttributePredicate]]|0..1||
-|cnfCondition| |array[array[Object:AttributePredicate]]|0..1||
-|condition| |Object: Condition|0..1||
-|derivedAttributes| |array[Object:DerivedAttribute]|1||
-
-## Common subconstructs used in PolicySet, Policy, and AttributeRule
-
-### AttributePredicate
-`AttributePredicate` checks to see whether the predicate that's specified on an attribute is satisfied. `AttributePredicate` can specify the following properties:
-- `attributeName`: Specifies the attribute name on which an attribute predicate needs to be evaluated.
-- `matcherId`: The ID of a matcher function that's used to compare the attribute value that's looked up in the request context by attribute name to the attribute value literal that's specified in the predicate. At present, we support two `matcherId` values: `ExactMatcher` and `GlobMatcher`. If `matcherId` isn't specified, it defaults to `GlobMatcher`.
-- `fromRule`: An optional property that specifies the ID of the `AttributeRule` that needs to be evaluated to populate the request context with attribute values that would be compared in this predicate.
-- `attributeValueIncludes`: A scalar literal value that should match the request context attribute values.
-- `attributeValueIncludedIn`: An array of literal values that should match the request context attribute values.
-- `attributeValueExcluded`: A scalar literal value that should *not* match the request context attribute values.
-- `attributeValueExcludedIn`: An array of literal values that should *not* match the request context attribute values.
-
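As an illustration of these predicate semantics, the following Python sketch approximates `GlobMatcher` with `fnmatch`-style wildcards; the service's actual glob semantics (for example, `**` handling across path segments) may differ, and the function name is illustrative:

```python
from fnmatch import fnmatch

def predicate_satisfied(predicate: dict, context: dict) -> bool:
    """Evaluate one AttributePredicate against request context attributes."""
    values = context.get(predicate["attributeName"], [])
    matcher = predicate.get("matcherId", "GlobMatcher")  # GlobMatcher is the default

    def matches(value: str, literal: str) -> bool:
        return fnmatch(value, literal) if matcher == "GlobMatcher" else value == literal

    if "attributeValueIncludes" in predicate:
        return any(matches(v, predicate["attributeValueIncludes"]) for v in values)
    if "attributeValueIncludedIn" in predicate:
        return any(matches(v, lit) for v in values
                   for lit in predicate["attributeValueIncludedIn"])
    if "attributeValueExcluded" in predicate:
        return all(not matches(v, predicate["attributeValueExcluded"]) for v in values)
    if "attributeValueExcludedIn" in predicate:
        return all(not matches(v, lit) for v in values
                   for lit in predicate["attributeValueExcludedIn"])
    return False

# Context attribute taken from the example responses earlier in this tutorial:
groups_ctx = {"principal.microsoft.groups": ["b29c1676-8d2c-4a81-b7e1-365b79088375"]}
```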
-### CNFCondition
-An array of `AttributePredicates` that have to be satisfied with the semantics of ANDofORs.
-
-### DNFCondition
-An array of `AttributePredicates` that have to be satisfied with the semantics of ORofANDs.
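The two condition semantics can be sketched over already-evaluated predicate results (each inner list holds the boolean outcomes of one group of predicates):

```python
def cnf_satisfied(clauses) -> bool:
    """CNF (ANDofORs): every clause must contain a satisfied predicate."""
    return all(any(clause) for clause in clauses)

def dnf_satisfied(clauses) -> bool:
    """DNF (ORofANDs): some clause must have all predicates satisfied."""
    return any(all(clause) for clause in clauses)
```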
-
-### PreConditionRule
-- A `PreConditionRule` can specify at most one each of `CNFCondition`, `DNFCondition`, or `Condition`.
-- All of the specified `CNFCondition`, `DNFCondition`, and `Condition` should evaluate to `true` for the `PreConditionRule` to be satisfied for the current request.
-- If any of the precondition rules isn't satisfied, the `PolicySet` or `Policy` is considered not applicable for the current request and skipped.
-
-### Condition
-- `condition` allows you to specify a complex condition of predicates that can nest functions from a library of functions.
-- At decision compute time, `condition` evaluates to `true` or `false` and can also emit optional obligations.
-- If `condition` evaluates to `false`, the containing `DecisionRule` is considered not applicable to the current request.
-
-## Next steps
-
-Concept guides for Microsoft Purview access policies:
-- [DevOps policies](concept-policies-devops.md)
-- [Self-service access policies](concept-self-service-data-access-policy.md)
-- [Data owner policies](concept-policies-data-owner.md)
search Search Capacity Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-capacity-planning.md
When scaling a search service, you can choose from the following tools and appro
+ [Azure portal](#adjust-capacity) + [Azure PowerShell](search-manage-powershell.md) + [Azure CLI](/cli/azure/search)
-+ [Management REST API](/rest/api/searchmanagement/2020-08-01/services)
++ [Management REST API](/rest/api/searchmanagement/2022-09-01/services) ## Concepts: search units, replicas, partitions, shards
The error message "Service update operations aren't allowed at this time because
Resolve this error by checking service status to verify provisioning status:
-1. Use the [Management REST API](/rest/api/searchmanagement/2020-08-01/services), [Azure PowerShell](search-manage-powershell.md), or [Azure CLI](/cli/azure/search) to get service status.
-1. Call [Get Service (REST)](/rest/api/searchmanagement/2020-08-01/services/get) or equivalent for PowerShell or the CLI.
-1. Check the response for ["provisioningState": "provisioning"](/rest/api/searchmanagement/2020-08-01/services/get#provisioningstate)
+1. Use the [Management REST API](/rest/api/searchmanagement/2022-09-01/services), [Azure PowerShell](search-manage-powershell.md), or [Azure CLI](/cli/azure/search) to get service status.
+1. Call [Get Service (REST)](/rest/api/searchmanagement/2022-09-01/services/get) or equivalent for PowerShell or the CLI.
+1. Check the response for ["provisioningState": "provisioning"](/rest/api/searchmanagement/2022-09-01/services/get#provisioningstate)
If status is "Provisioning", wait for the request to complete. Status should be either "Succeeded" or "Failed" before another request is attempted. There's no status for backup. Backup is an internal operation and it's unlikely to be a factor in any disruption of a scale exercise.
search Search Howto Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-aad.md
This article shows you how to configure your client for Azure AD:
+ For authorization, you'll assign an Azure role to the managed identity that grants permissions to run queries or manage indexing jobs.
-+ Update your client code to call [DefaultAzureCredential()](/dotnet/api/azure.identity.defaultazurecredential)
++ Update your client code to call [`TokenCredential()`](/dotnet/api/azure.core.tokencredential). For example, you can get started with `new SearchClient(endpoint, new DefaultAzureCredential())` to authenticate via Azure AD using [Azure.Identity](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/identity/Azure.Identity/README.md). ## Configure role-based access for data plane
In this step, configure your search service to recognize an **authorization** he
1. Choose an **API access control** option. We recommend **Both** if you want flexibility or need to migrate apps.
- | Option | Status | Description |
- |--|--|-|
- | API Key | Generally available (default) | Requires an [admin or query API keys](search-security-api-keys.md) on the request header for authorization. No roles are used. |
- | Role-based access control | Preview | Requires membership in a role assignment to complete the task, described in the next step. It also requires an authorization header. |
- | Both | Preview | Requests are valid using either an API key or role-based access control. |
+ | Option | Description |
+ |--||
+ | API Key | (default) Requires an [admin or query API keys](search-security-api-keys.md) on the request header for authorization. No roles are used. |
+ | Role-based access control | Requires membership in a role assignment to complete the task, described in the next step. It also requires an authorization header. |
+ | Both | Requests are valid using either an API key or role-based access control. |
The change is effective immediately, but wait a few seconds before testing.
When you enable role-based access control in the portal, the failure mode will b
### [**REST API**](#tab/config-svc-rest)
-Use the Management REST API version 2021-04-01-Preview, [Create or Update Service](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update), to configure your service.
+Use the Management REST API version 2022-09-01, [Create or Update Service](/rest/api/searchmanagement/2022-09-01/services/create-or-update), to configure your service.
All calls to the Management REST API are authenticated through Azure Active Directory, with Contributor or Owner permissions. For help setting up authenticated requests in Postman, see [Manage Azure Cognitive Search using REST](search-manage-rest.md). 1. Get service settings so that you can review the current configuration. ```http
- GET https://management.azure.com/subscriptions/{{subscriptionId}}/providers/Microsoft.Search/searchServices?api-version=2021-04-01-preview
+ GET https://management.azure.com/subscriptions/{{subscriptionId}}/providers/Microsoft.Search/searchServices?api-version=2022-09-01
``` 1. Use PATCH to update service configuration. The following modifications enable both keys and role-based access. If you want a roles-only configuration, see [Disable API keys](search-security-rbac.md#disable-api-key-authentication).
- Under "properties", set ["authOptions"](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#dataplaneauthoptions) to "aadOrApiKey". The "disableLocalAuth" property must be false to set "authOptions".
+ Under "properties", set ["authOptions"](/rest/api/searchmanagement/2022-09-01/services/create-or-update#dataplaneauthoptions) to "aadOrApiKey". The "disableLocalAuth" property must be false to set "authOptions".
- Optionally, set ["aadAuthFailureMode"](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#aadauthfailuremode) to specify whether 401 is returned instead of 403 when authentication fails. Valid values are "http401WithBearerChallenge" or "http403".
+ Optionally, set ["aadAuthFailureMode"](/rest/api/searchmanagement/2022-09-01/services/create-or-update#aadauthfailuremode) to specify whether 401 is returned instead of 403 when authentication fails. Valid values are "http401WithBearerChallenge" or "http403".
```http
- PATCH https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2021-04-01-Preview
+ PATCH https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2022-09-01
{ "properties": { "disableLocalAuth": false,
In this step, create a [managed identity](../active-directory/managed-identities
Next, you need to grant your managed identity access to your search service. Azure Cognitive Search has various [built-in roles](search-security-rbac.md#built-in-roles-used-in-search). You can also create a [custom role](search-security-rbac.md#create-a-custom-role).
-It's a best practice to grant minimum permissions. If your application only needs to handle queries, you should assign the [Search Index Data Reader (preview)](../role-based-access-control/built-in-roles.md#search-index-data-reader) role. Alternatively, if it needs both read and write access on a search index, you should use the [Search Index Data Contributor (preview)](../role-based-access-control/built-in-roles.md#search-index-data-contributor) role.
+It's a best practice to grant minimum permissions. If your application only needs to handle queries, you should assign the [Search Index Data Reader](../role-based-access-control/built-in-roles.md#search-index-data-reader) role. Alternatively, if it needs both read and write access on a search index, you should use the [Search Index Data Contributor](../role-based-access-control/built-in-roles.md#search-index-data-contributor) role.
1. Sign in to the [Azure portal](https://portal.azure.com).
search Search Indexer Howto Access Private https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-howto-access-private.md
Because it's easy and quick, this section uses Azure CLI steps for getting a bea
az account get-access-token ```
-1. Switch to a REST client and set up a [GET Shared Private Link Resource](/rest/api/searchmanagement/2020-08-01/shared-private-link-resources/get). This step allows you to review existing shared private links to ensure you're not duplicating a link. There can be only one shared private link for each resource and sub-resource combination.
+1. Switch to a REST client and set up a [GET Shared Private Link Resource](/rest/api/searchmanagement/2022-09-01/shared-private-link-resources/get). This step allows you to review existing shared private links to ensure you're not duplicating a link. There can be only one shared private link for each resource and sub-resource combination.
 ```http GET https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups/{{rg-name}}/providers/Microsoft.Search/searchServices/{{service-name}}/sharedPrivateLinkResources?api-version={{api-version}}
Because it's easy and quick, this section uses Azure CLI steps for getting a bea
1. Send the request. You should get a list of all shared private link resources that exist for your search service. Make sure there's no existing shared private link for the resource and sub-resource combination.
-1. Formulate a PUT request to [Create or Update Shared Private Link](/rest/api/searchmanagement/2020-08-01/shared-private-link-resources/create-or-update) for the Azure PaaS resource. Provide a URI and request body similar to the following example:
+1. Formulate a PUT request to [Create or Update Shared Private Link](/rest/api/searchmanagement/2022-09-01/shared-private-link-resources/create-or-update) for the Azure PaaS resource. Provide a URI and request body similar to the following example:
 ```http PUT https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups/{{rg-name}}/providers/Microsoft.Search/searchServices/{{service-name}}/sharedPrivateLinkResources/{{shared-private-link-name}}?api-version={{api-version}}
A `202 Accepted` response is returned on success. The process of creating an out
<!-- 1. Check the response. The `PUT` call to create the shared private endpoint returns an `Azure-AsyncOperation` header value that looks like the following:
- `"Azure-AsyncOperation": "https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.Search/searchServices/contoso-search/sharedPrivateLinkResources/blob-pe/operationStatuses/08586060559526078782?api-version=2020-08-01"`
+ `"Azure-AsyncOperation": "https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.Search/searchServices/contoso-search/sharedPrivateLinkResources/blob-pe/operationStatuses/08586060559526078782?api-version=2022-09-01"`
You can poll for the status by manually querying the `Azure-AsyncOperationHeader` value. ```azurecli
- az rest --method get --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.Search/searchServices/contoso-search/sharedPrivateLinkResources/blob-pe/operationStatuses/08586060559526078782?api-version=2020-08-01
+ az rest --method get --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.Search/searchServices/contoso-search/sharedPrivateLinkResources/blob-pe/operationStatuses/08586060559526078782?api-version=2022-09-01
``` -->
On the Azure Cognitive Search side, you can confirm request approval by revisiti
Alternatively, you can obtain connection state by using the [GET Shared Private Link API](/rest/api/searchmanagement/2021-04-01-preview/shared-private-link-resources/get). ```dotnetcli
-az rest --method get --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.Search/searchServices/contoso-search/sharedPrivateLinkResources/blob-pe?api-version=2020-08-01
+az rest --method get --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.Search/searchServices/contoso-search/sharedPrivateLinkResources/blob-pe?api-version=2022-09-01
``` This would return a JSON, where the connection state shows up as "status" under the "properties" section. Following is an example for a storage account.
search Search Indexer Securing Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-securing-resources.md
This section summarizes the main steps for setting up a private endpoint for out
#### Step 1: Create a private endpoint to the secure resource
-You'll create a shared private link using either the portal pages of your search service or through the [Management API](/rest/api/searchmanagement/2020-08-01/shared-private-link-resources/create-or-update).
+You'll create a shared private link using either the portal pages of your search service or through the [Management API](/rest/api/searchmanagement/2022-09-01/shared-private-link-resources/create-or-update).
In Azure Cognitive Search, your search service must be at least the Basic tier for text-based indexers, and S2 for indexers with skillsets.
search Search Limits Quotas Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-limits-quotas-capacity.md
Maximum running times exist to provide balance and stability to the service as a
## Shared private link resource limits
-Indexers can access other Azure resources [over private endpoints](search-indexer-howto-access-private.md) managed via the [shared private link resource API](/rest/api/searchmanagement/2020-08-01/shared-private-link-resources). This section describes the limits associated with this capability.
+Indexers can access other Azure resources [over private endpoints](search-indexer-howto-access-private.md) managed via the [shared private link resource API](/rest/api/searchmanagement/2022-09-01/shared-private-link-resources). This section describes the limits associated with this capability.
| Resource | Free | Basic | S1 | S2 | S3 | S3 HD | L1 | L2 | | | | | | | | | |
search Search Manage Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-manage-rest.md
Previously updated : 01/11/2023 Last updated : 05/09/2023 # Manage your Azure Cognitive Search service with REST APIs
The Management REST API is available in stable and preview versions. Be sure to
> [!div class="checklist"] > * [List search services](#list-search-services) > * [Create or update a service](#create-or-update-a-service)
-> * [(preview) Enable Azure role-based access control for data plane](#enable-rbac)
+> * [Enable Azure role-based access control for data plane](#enable-rbac)
> * [(preview) Enforce a customer-managed key policy](#enforce-cmk) > * [(preview) Disable semantic search](#disable-semantic-search) > * [(preview) Disable workloads that push data to external resources](#disable-external-access)
Now that Postman is set up, you can send REST calls similar to the ones describe
Returns all search services under the current subscription, including detailed service information: ```rest
-GET https://management.azure.com/subscriptions/{{subscriptionId}}/providers/Microsoft.Search/searchServices?api-version=2020-08-01
+GET https://management.azure.com/subscriptions/{{subscriptionId}}/providers/Microsoft.Search/searchServices?api-version=2022-09-01
``` ## Create or update a service
GET https://management.azure.com/subscriptions/{{subscriptionId}}/providers/Micr
Creates or updates a search service under the current subscription. This example uses variables for the search service name and region, which haven't been defined yet. Either provide the names directly, or add new variables to the collection. ```rest
-PUT https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2020-08-01
+PUT https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2022-09-01
{ "location": "{{region}}", "sku": {
PUT https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups
To create an [S3HD](search-sku-tier.md#tier-descriptions) service, use a combination of `-Sku` and `-HostingMode` properties. Set "sku" to `Standard3` and "hostingMode" to `HighDensity`. ```rest
-PUT https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2020-08-01
+PUT https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2022-09-01
{ "location": "{{region}}", "sku": {
PUT https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups
<a name="enable-rbac"></a>
-## (preview) Configure role-based access for data plane
+## Configure role-based access for data plane
**Applies to:** Search Index Data Contributor, Search Index Data Reader, Search Service Contributor
To use Azure role-based access control (Azure RBAC) for data plane operations, s
If you want to use Azure RBAC exclusively, [turn off API key authentication](search-security-rbac.md#disable-api-key-authentication) by following up with a second request, this time setting "disableLocalAuth" to "true". ```rest
-PATCH https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2021-04-01-preview
+PATCH https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2022-09-01
{ "properties": { "disableLocalAuth": false,
search Search Security Api Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-api-keys.md
az search query-key list --resource-group <myresourcegroup> --service-name <myse
### [**REST API**](#tab/rest-find)
-Use [List Admin Keys](/rest/api/searchmanagement/2020-08-01/admin-keys) or [List Query Keys](/rest/api/searchmanagement/2020-08-01/query-keys/list-by-search-service) in the Management REST API to return API keys.
+Use [List Admin Keys](/rest/api/searchmanagement/2022-09-01/admin-keys) or [List Query Keys](/rest/api/searchmanagement/2022-09-01/query-keys/list-by-search-service) in the Management REST API to return API keys.
You must have a [valid role assignment](#permissions-to-view-or-manage-api-keys) to return or update API keys. See [Manage your Azure Cognitive Search service with REST APIs](search-manage-rest.md) for guidance on meeting role requirements using the REST APIs. ```rest
-POST https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups/{{resource-group}}/providers//Microsoft.Search/searchServices/{{search-service-name}}/listAdminKeys?api-version=2021-04-01-preview
+POST https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}/listAdminKeys?api-version=2022-09-01
```
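A companion request for query keys might look like the following sketch. The `listQueryKeys` path and POST verb are taken from the List Query Keys operation referenced above; treat the exact shape as an assumption and confirm against the operation's reference page:

```rest
POST https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}/listQueryKeys?api-version=2022-09-01
```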
A script example showing query key usage can be found at [Create or delete query
### [**REST API**](#tab/rest-query)
-Use [Create Query Keys](/rest/api/searchmanagement/2020-08-01/query-keys/create) in the Management REST API.
+Use [Create Query Keys](/rest/api/searchmanagement/2022-09-01/query-keys/create) in the Management REST API.
You must have a [valid role assignment](#permissions-to-view-or-manage-api-keys) to create or manage API keys. See [Manage your Azure Cognitive Search service with REST APIs](search-manage-rest.md) for guidance on meeting role requirements using the REST APIs. ```rest
-POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Search/searchServices/{searchServiceName}/createQueryKey/{name}?api-version=2020-08-01
+POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Search/searchServices/{searchServiceName}/createQueryKey/{name}?api-version=2022-09-01
```
search Search Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-overview.md
You can review the [REST APIs](/rest/api/searchservice/) to understand the full
At a minimum, all inbound requests must be authenticated: + Key-based authentication is the default. Inbound requests that include a valid API key are accepted by the search service as originating from a trusted party.
-+ Alternatively, you can use Azure Active Directory and role-based access control for data plane operations (currently in preview).
++ Alternatively, you can use Azure Active Directory and role-based access control for data plane operations. Additionally, you can add [network security features](#service-access-and-authentication) to further restrict access to the endpoint. You can create either inbound rules in an IP firewall, or create private endpoints that fully shield your search service from the public internet.
A search service is provisioned with a public endpoint that allows access using
You can use the portal to [configure firewall access](service-configure-firewall.md).
-Alternatively, you can use the management REST APIs. Starting with API version 2020-03-13, with the [IpRule](/rest/api/searchmanagement/2020-08-01/services/create-or-update#iprule) parameter, you can restrict access to your service by identifying IP addresses, individually or in a range, that you want to grant access to your search service.
+Alternatively, you can use the management REST APIs. Starting with API version 2020-03-13, with the [IpRule](/rest/api/searchmanagement/2022-09-01/services/create-or-update#iprule) parameter, you can restrict access to your service by identifying IP addresses, individually or in a range, that you want to grant access to your search service.
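As a sketch of what such a request might look like, the following body assumes the `networkRuleSet`/`ipRules` property shape from the management API's service definition; the addresses are placeholder values from the TEST-NET documentation ranges, shown as both a single address and a CIDR range:

```rest
PATCH https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2022-09-01

{
    "properties": {
        "networkRuleSet": {
            "ipRules": [
                { "value": "203.0.113.10" },
                { "value": "198.51.100.0/24" }
            ]
        }
    }
}
```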
### Inbound connection to a private endpoint (network isolation, no Internet traffic)
While this solution is the most secure, using additional services is an added co
## Authentication
-Once a request is admitted, it must still undergo authentication and authorization that determines whether the request is permitted. Cognitive Search supports two approaches:
+Once a request is admitted to the search service, it must still undergo authentication and authorization that determines whether the request is permitted. Cognitive Search supports two approaches:
+ [Key-based authentication](search-security-api-keys.md) is performed on the request (not the calling app or user) through an API key, where the key is a string composed of randomly generated numbers and letters that prove the request is from a trustworthy source. Keys are required on every request. Submission of a valid key is considered proof the request originates from a trusted entity.
search Search Security Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-rbac.md
When you enable role-based access control in the portal, the failure mode will b
### [**REST API**](#tab/config-svc-rest)
-Use the Management REST API version 2021-04-01-Preview, [Create or Update Service](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update), to configure your service.
+Use the Management REST API version 2022-09-01, [Create or Update Service](/rest/api/searchmanagement/2022-09-01/services/create-or-update), to configure your service.
All calls to the Management REST API are authenticated through Azure Active Directory, with Contributor or Owner permissions. For help setting up authenticated requests in Postman, see [Manage Azure Cognitive Search using REST](search-manage-rest.md). 1. Get service settings so that you can review the current configuration. ```http
- GET https://management.azure.com/subscriptions/{{subscriptionId}}/providers/Microsoft.Search/searchServices?api-version=2021-04-01-preview
+ GET https://management.azure.com/subscriptions/{{subscriptionId}}/providers/Microsoft.Search/searchServices?api-version=2022-09-01
``` 1. Use PATCH to update service configuration. The following modifications enable both keys and role-based access. If you want a roles-only configuration, see [Disable API keys](#disable-api-key-authentication).
- Under "properties", set ["authOptions"](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#dataplaneauthoptions) to "aadOrApiKey". The "disableLocalAuth" property must be false to set "authOptions".
+ Under "properties", set ["authOptions"](/rest/api/searchmanagement/2022-09-01/services/create-or-update#dataplaneauthoptions) to "aadOrApiKey". The "disableLocalAuth" property must be false to set "authOptions".
- Optionally, set ["aadAuthFailureMode"](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#aadauthfailuremode) to specify whether 401 is returned instead of 403 when authentication fails. Valid values are "http401WithBearerChallenge" or "http403".
+ Optionally, set ["aadAuthFailureMode"](/rest/api/searchmanagement/2022-09-01/services/create-or-update#aadauthfailuremode) to specify whether 401 is returned instead of 403 when authentication fails. Valid values are "http401WithBearerChallenge" or "http403".
```http
- PATCH https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2021-04-01-Preview
+ PATCH https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2022-09-01
{ "properties": { "disableLocalAuth": false,
Role assignments in the portal are service-wide. If you want to [grant permissio
When [using PowerShell to assign roles](../role-based-access-control/role-assignments-powershell.md), call [New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment), providing the Azure user or group name, and the scope of the assignment.
-Before you start, make sure you load the Az and AzureAD modules and connect to Azure:
+Before you start, make sure you load the **Az** and **AzureAD** modules and connect to Azure:
```powershell Import-Module -Name Az
This approach assumes Postman as the REST client and uses a Postman collection a
az login ```
-1. Get your subscription ID. You'll provide this value as variable in a future step.
+1. Get your subscription ID. The ID is used as a variable in a future step.
```azurecli az account show --query id -o tsv
This approach assumes Postman as the REST client and uses a Postman collection a
az group create -l westus -n MyResourceGroup ```
-1. Create the service principal, replacing the placeholder values with valid values. You'll need a descriptive security principal name, subscription ID, and resource group name. This example uses the "Search Index Data Reader" (quote enclosed) role.
+1. Create the service principal, replacing the placeholder values with valid values for a security principal name, subscription ID, and resource group name. This example uses the "Search Index Data Reader" (quote enclosed) role.
```azurecli az ad sp create-for-rbac --name mySecurityPrincipalName --role "Search Index Data Reader" --scopes /subscriptions/mySubscriptionID/resourceGroups/myResourceGroupName
This approach assumes Postman as the REST client and uses a Postman collection a
For more information on how to acquire a token for a specific environment, see [Microsoft identity platform authentication libraries](../active-directory/develop/reference-v2-libraries.md).
-### [**.NET SDK**](#tab/test-csharp)
+### [**.NET**](#tab/test-csharp)
See [Authorize access to a search app using Azure Active Directory](search-howto-aad.md) for instructions that create an identity for your client app, assign a role, and call [DefaultAzureCredential()](/dotnet/api/azure.identity.defaultazurecredential).
To disable key-based authentication, set "disableLocalAuth" to true.
1. Get service settings so that you can review the current configuration. ```http
- GET https://management.azure.com/subscriptions/{{subscriptionId}}/providers/Microsoft.Search/searchServices?api-version=2021-04-01-preview
+ GET https://management.azure.com/subscriptions/{{subscriptionId}}/providers/Microsoft.Search/searchServices?api-version=2022-09-01
``` 1. Use PATCH to update service configuration. The following modification will set "authOptions" to null. ```http
- PATCH https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2021-04-01-Preview
+ PATCH https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2022-09-01
{ "properties": { "disableLocalAuth": true
sentinel Data Transformation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-transformation.md
Only the following tables are currently supported for custom log ingestion:
- [**SecurityEvent**](/azure/azure-monitor/reference/tables/securityevent) - [**CommonSecurityLog**](/azure/azure-monitor/reference/tables/commonsecuritylog) - [**Syslog**](/azure/azure-monitor/reference/tables/syslog)
+- [**ASimAuditEventLogs**](/azure/azure-monitor/reference/tables/asimauditeventlogs)
+- **ASimAuthenticationEventLogs**
- [**ASimDnsActivityLogs**](/azure/azure-monitor/reference/tables/asimdnsactivitylogs) - [**ASimNetworkSessionLogs**](/azure/azure-monitor/reference/tables/asimnetworksessionlogs)
+- [**ASimWebSessionLogs**](/azure/azure-monitor/reference/tables/asimwebsessionlogs)
## Known issues
sentinel Normalization Ingest Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-ingest-time.md
Ingest time parsing enables transforming events to a normalized schema as they a
Normalized data can be stored in Microsoft Sentinel's native normalized tables, or in a custom table that uses an ASIM schema. A custom table that has a schema close to, but not identical to, an ASIM schema also provides the performance benefits of ingest time normalization. Currently, ASIM supports the following native normalized tables as a destination for ingest time normalization:
+- [**ASimAuditEventLogs**](/azure/azure-monitor/reference/tables/asimauditeventlogs) for the [Audit Event](normalization-schema-audit.md) schema.
+- **ASimAuthenticationEventLogs** for the [Authentication](normalization-schema-authentication.md) schema.
- [**ASimDnsActivityLogs**](/azure/azure-monitor/reference/tables/asimdnsactivitylogs) for the [DNS](normalization-schema-dns.md) schema. - [**ASimNetworkSessionLogs**](/azure/azure-monitor/reference/tables/asimnetworksessionlogs) for the [Network Session](normalization-schema-network.md) schema
+- [**ASimWebSessionLogs**](/azure/azure-monitor/reference/tables/asimwebsessionlogs) for the [Web Session](normalization-schema-web.md) schema.
The advantage of native normalized tables is that they are included by default in the ASIM unifying parsers. Custom normalized tables can be included in the unifying parsers, as discussed in [Manage Parsers](normalization-manage-parsers.md).
sentinel Normalization Parsers List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-parsers-list.md
ASIM Network Session parsers are available in every workspace. Microsoft Sentine
| **Cisco Meraki** | Collected using the Cisco Meraki API connector. | `_Im_NetworkSession_CiscoMerakiVxx` | | **Corelight Zeek** | Collected using the Corelight Zeek connector. | `_im_NetworkSession_CorelightZeekVxx` | | **Fortigate FortiOS** | IP connection logs collected using Syslog. | `_Im_NetworkSession_FortinetFortiGateVxx` |
+| **ForcePoint Firewall** | | `_Im_NetworkSession_ForcePointFirewallVxx` |
| **Microsoft 365 Defender for Endpoint** | | `_Im_NetworkSession_Microsoft365DefenderVxx`| | **Microsoft Defender for IoT micro agent** | | `_Im_NetworkSession_MD4IoTAgentVxx` | | **Microsoft Defender for IoT sensor** | | `_Im_NetworkSession_MD4IoTSensorVxx` |
ASIM Web Session parsers are available in every workspace. Microsoft Sentinel pr
| **Source** | **Notes** | **Parser** | | | | |
-| **Palo Alto PanOS threat logs** | Collected using CEF. | `_Im_WebSession_PaloAltoCEF` |
+| **Normalized Web Session Logs** | Any event normalized at ingestion to the `ASimWebSessionLogs` table. | `_Im_WebSession_NativeVxx` |
+| **Internet Information Services (IIS) Logs** | Collected using the AMA or Log Analytics Agent based IIS connectors. | `_Im_WebSession_IISVxx` |
+| **Palo Alto PanOS threat logs** | Collected using CEF. | `_Im_WebSession_PaloAltoCEFVxx` |
| **Squid Proxy** | | `_Im_WebSession_SquidProxyVxx` | | **Vectra AI Streams** | Supports the [pack](normalization-about-parsers.md#the-pack-parameter) parameter. | `_Im_WebSession_VectraAIVxx` | | **Zscaler ZIA** | Collected using CEF. | `_Im_WebSession_ZscalerZIAVxx` |
sentinel Normalization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization.md
Query time parsers have many advantages:
On the other hand, while ASIM parsers are optimized, query time parsing can slow down queries, especially on large data sets. To resolve this, Microsoft Sentinel complements query time parsing with ingest time parsing. Using ingest transformations, events are normalized to a normalized table, accelerating queries that use normalized data.
-Currently, ASIM supports the following normalized tables as a destination for ingest time normalization:
+Currently, ASIM supports the following native normalized tables as a destination for ingest time normalization:
+- [**ASimAuditEventLogs**](/azure/azure-monitor/reference/tables/asimauditeventlogs) for the [Audit Event](normalization-schema-audit.md) schema.
+- **ASimAuthenticationEventLogs** for the [Authentication](normalization-schema-authentication.md) schema.
- [**ASimDnsActivityLogs**](/azure/azure-monitor/reference/tables/asimdnsactivitylogs) for the [DNS](normalization-schema-dns.md) schema. - [**ASimNetworkSessionLogs**](/azure/azure-monitor/reference/tables/asimnetworksessionlogs) for the [Network Session](normalization-schema-network.md) schema
+- [**ASimWebSessionLogs**](/azure/azure-monitor/reference/tables/asimwebsessionlogs) for the [Web Session](normalization-schema-web.md) schema.
For more information, see [Ingest Time Normalization](normalization-ingest-time.md).
storage Storage Files Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-faq.md
* <a id="afs-resource-move"></a> **Can I move the storage sync service and/or storage account to a different resource group, subscription, or Azure AD tenant?**
- Yes, you can move the storage sync service and/or storage account to a different resource group, subscription, or Azure AD tenant. After you move the storage sync service or storage account, you need to give the Microsoft.StorageSync application access to the storage account (see [Ensure Azure File Sync has access to the storage account](../file-sync/file-sync-troubleshoot-sync-errors.md#troubleshoot-rbac)).
+ Yes, you can move the storage sync service and/or storage account to a different resource group, subscription, or Azure AD tenant. After you move the storage sync service or storage account, you need to give the Microsoft.StorageSync application access to the storage account (see **Ensure Azure File Sync has access to the storage account** under [Common troubleshooting steps](../file-sync/file-sync-troubleshoot-sync-errors.md#common-troubleshooting-steps)).
> [!Note] > When creating the cloud endpoint, the storage sync service and storage account must be in the same Azure AD tenant. Once the cloud endpoint is created, the storage sync service and storage account can be moved to different Azure AD tenants.
storage Storage Files Migration Storsimple 8000 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-storsimple-8000.md
The StorSimple 8000 series is represented by either the 8100 or the 8600 physica
:::column::: This video provides an overview of: - Azure Files
- - Azure Files Sync
+ - Azure File Sync
- Comparison of StorSimple & Azure Files - StorSimple Data Manager migration tool and process overview :::column-end:::