Updates from: 03/16/2023 02:14:50
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Custom Policies Series Call Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-call-rest-api.md
Previously updated : 01/30/2023 Last updated : 03/16/2023
You need to deploy an app, which will serve as your external app. Your custom po
1. To test that the app works as expected, use the following steps:
    1. In your terminal, run the `node index.js` command to start your app server.
- 1. To make a POST request similar to the one shown below, you can use an HTTP client such as [Microsoft PowerShell](https://learn.microsoft.com/powershell/scripting/overview) or [Postman](https://www.postman.com/):
+ 1. To make a POST request similar to the one shown below, you can use an HTTP client such as [Microsoft PowerShell](/powershell/scripting/overview) or [Postman](https://www.postman.com/):
```http
POST http://localhost/validate-accesscode HTTP/1.1
```
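For illustration, a fuller version of that request might look like the following sketch; the headers and the `accessCode` field and value are assumptions about the sample app's contract rather than part of the quoted change:

```http
POST http://localhost/validate-accesscode HTTP/1.1
Host: localhost
Content-Type: application/json

{
    "accessCode": "88888"
}
```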
Follow the steps in [Deploy your app to Azure](../app-service/quickstart-nodejs.
- Service endpoint looks similar to `https://custompolicyapi.azurewebsites.net/validate-accesscode`.
-You can test the app you've deployed by using an HTTP client such as [Microsoft PowerShell](https://learn.microsoft.com/powershell/scripting/overview) or [Postman](https://www.postman.com/). This time, use `https://custompolicyapi.azurewebsites.net/validate-accesscode` URL as the endpoint.
+You can test the app you've deployed by using an HTTP client such as [Microsoft PowerShell](/powershell/scripting/overview) or [Postman](https://www.postman.com/). This time, use the `https://custompolicyapi.azurewebsites.net/validate-accesscode` URL as the endpoint.
## Step 2 - Call the REST API
Then, update the *Metadata*, *InputClaimsTransformations*, and *InputClaims* of
</InputClaims>
```
+## Receive data from REST API
+
+If your REST API returns data that you want to include as claims in your policy, you can receive it by specifying claims in the `OutputClaims` element of the RESTful technical profile. If the name of a claim defined in your policy differs from the name defined in the REST API, map the names by using the `PartnerClaimType` attribute, as in the sketch below.
+
+Use the steps in [Receiving data](api-connectors-overview.md?pivots=b2c-custom-policy#receiving-data) to learn how to format the data the custom policy expects, how to handle null values, and how to parse the REST API's nested JSON body.
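As a rough sketch of how such a mapping can look in a RESTful technical profile (the claim names, endpoint, and metadata values below are hypothetical), `PartnerClaimType` ties the policy claim to the differently named field in the API response:

```xml
<TechnicalProfile Id="RestApiValidateAccessCode">
  <DisplayName>Validate access code by using a REST API</DisplayName>
  <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.RestfulProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
  <Metadata>
    <!-- Hypothetical endpoint; use your deployed app's URL. -->
    <Item Key="ServiceUrl">https://custompolicyapi.azurewebsites.net/validate-accesscode</Item>
    <Item Key="SendClaimsIn">Body</Item>
    <Item Key="AuthenticationType">None</Item>
  </Metadata>
  <InputClaims>
    <InputClaim ClaimTypeReferenceId="accessCode" />
  </InputClaims>
  <OutputClaims>
    <!-- The policy claim name differs from the API's field name, so PartnerClaimType maps them. -->
    <OutputClaim ClaimTypeReferenceId="statusOfAccessCode" PartnerClaimType="accessCodeStatus" />
  </OutputClaims>
</TechnicalProfile>
```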
## Next steps

Next, learn:
active-directory-b2c Custom Policies Series Hello World https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-hello-world.md
Previously updated : 01/30/2023 Last updated : 03/16/2023
If you haven't already done so, create the following encryption keys. To automat
<BuildingBlocks> <!-- Building Blocks Here-->
- <BuildingBlocks>
+ </BuildingBlocks>
<ClaimsProviders> <!-- Claims Providers Here-->
After the policy finishes execution, you're redirected to `https://jwt.ms`, and
}.[Signature]
```
-Notice the `message` and `sub` claims, which we set as output claims](relyingparty.md#outputclaims) in the `RelyingParty` section.
+Notice the `message` and `sub` claims, which we set as [output claims](relyingparty.md#outputclaims) in the `RelyingParty` section.
## Next steps
Next, learn:
- About custom policy [claims data type](claimsschema.md#datatype).
-- About custom policy [user input types](claimsschema.md#userinputtype).
+- About custom policy [user input types](claimsschema.md#userinputtype).
active-directory-b2c Custom Policies Series Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-overview.md
In Azure Active Directory B2C (Azure AD B2C), you can create user experiences by
User flows are already customizable: you can [change the UI](customize-ui.md), [customize language](language-customization.md), and use [custom attributes](user-flow-custom-attributes.md). However, these customizations might not cover all your business-specific needs, which is why you need custom policies.
-While you can use pre-made [custom policy starter pack](/azure/active-directory-b2c/tutorial-create-user-flows?pivots=b2c-custom-policy#custom-policy-starter-pack), it's important for you understand how custom policy is built from scratch. In this how-to guide series, you'll learn what you need to understand for you to customize the behavior of your user experience by using custom policies. At the end of this how-to guide series, you should be able to read and understand existing custom policies or write your own from scratch.
+While you can use the pre-made [custom policy starter pack](./tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack), it's important that you understand how a custom policy is built from scratch. In this how-to guide series, you'll learn what you need to know to customize the behavior of your user experience by using custom policies. At the end of this how-to guide series, you should be able to read and understand existing custom policies or write your own from scratch.
## Prerequisites
This how-to guide series consists of multiple articles. We recommend that you st
- Learn about [Azure AD B2C TrustFrameworkPolicy BuildingBlocks](buildingblocks.md)
-- [Write your first Azure Active Directory B2C custom policy - Hello World!](custom-policies-series-hello-world.md)
+- [Write your first Azure Active Directory B2C custom policy - Hello World!](custom-policies-series-hello-world.md)
active-directory-b2c Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/data-residency.md
The following locations are in the process of being added to the list. For now,
## EU Data Boundary

> [!IMPORTANT]
-> For comprehensive details about Microsoft's EU Data Boundary commitment, see [Microsoft's EU Data Boundary documentation](https://learn.microsoft.com/privacy/eudb/eu-data-boundary-learn).
+> For comprehensive details about Microsoft's EU Data Boundary commitment, see [Microsoft's EU Data Boundary documentation](/privacy/eudb/eu-data-boundary-learn).
## Remote profile solution
active-directory-domain-services Migrate From Classic Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/migrate-from-classic-vnet.md
Before you begin the migration process, complete the following initial checks an
| Service tag | AzureActiveDirectoryDomainServices | * | Any | WinRM | 5986 | TCP | Allow | Yes | Management of your domain |
| Service tag | CorpNetSaw | * | Any | RDP | 3389 | TCP | Allow | Optional | Debugging for support |
- Make a note of this target resource group, target virtual network, and target virtual network subnet. These resource names are used during the migration process.
+ Make a note of the target resource group, target virtual network, and target virtual network subnet. These resource names are used during the migration process.
- Please note that the **CorpNetSaw** service tag isn't available by using Azure portal, and the network security group rule for **CorpNetSaw** has to be added by using PowerShell (powershell-create-instance.md#create-a-network-security-group).
+ Note that the **CorpNetSaw** service tag isn't available in the Azure portal; the network security group rule for **CorpNetSaw** has to be added by using [PowerShell](powershell-create-instance.md#create-a-network-security-group).
1. Check the managed domain health in the Azure portal. If you have any alerts for the managed domain, resolve them before you start the migration process.
1. Optionally, if you plan to move other resources to the Resource Manager deployment model and virtual network, confirm that those resources can be migrated. For more information, see [Platform-supported migration of IaaS resources from Classic to Resource Manager][migrate-iaas].
active-directory-domain-services Network Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/network-considerations.md
Previously updated : 01/29/2023 Last updated : 03/14/2023

# Virtual network design considerations and configuration options for Azure Active Directory Domain Services
The following sections cover network security groups and Inbound and Outbound po
The following network security group Inbound rules are required for the managed domain to provide authentication and management services. Don't edit or delete these network security group rules for the virtual network subnet for your managed domain.
-| Inbound port number | Protocol | Source | Destination | Action | Required | Purpose |
-|:--:|:--:|:-:|:--:|::|:--:|:--|
-| 5986 | TCP | AzureActiveDirectoryDomainServices | Any | Allow | Yes | Management of your domain. |
-| 3389 | TCP | CorpNetSaw | Any | Allow | Optional | Debugging for support. |
+| Source | Source service tag | Source port ranges | Destination | Service | Destination port ranges | Protocol | Action | Required | Purpose |
+|:--:|:-:|:-:|:-:|:-:|:--:|:--:|:-:|:--:|:--|
+| Service tag | AzureActiveDirectoryDomainServices | * | Any | WinRM | 5986 | TCP | Allow | Yes | Management of your domain. |
+| Service tag | CorpNetSaw | * | Any | RDP | 3389 | TCP | Allow | Optional | Debugging for support |
++
+Note that the **CorpNetSaw** service tag isn't available in the Azure portal; the network security group rule for **CorpNetSaw** has to be added by using [PowerShell](powershell-create-instance.md#create-a-network-security-group).
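A minimal Az PowerShell sketch of adding that rule (the NSG and resource group names are placeholders, and the priority must not collide with your existing rules):

```powershell
# Get the network security group attached to the managed domain's subnet (hypothetical names).
$nsg = Get-AzNetworkSecurityGroup -Name "aadds-nsg" -ResourceGroupName "aadds-rg"

# Allow inbound RDP from the CorpNetSaw service tag for support debugging.
Add-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg `
    -Name "AllowRDP" -Access Allow -Protocol Tcp -Direction Inbound -Priority 201 `
    -SourceAddressPrefix "CorpNetSaw" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange "3389"

# Commit the updated rule set to the network security group.
Set-AzNetworkSecurityGroup -NetworkSecurityGroup $nsg
```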
Azure AD DS also relies on the Default Security rules AllowVnetInBound and AllowAzureLoadBalancerInBound.
active-directory-domain-services Tutorial Configure Ldaps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-configure-ldaps.md
Previously updated : 01/29/2023 Last updated : 03/15/2023

#Customer intent: As an identity administrator, I want to secure access to an Azure Active Directory Domain Services managed domain using secure lightweight directory access protocol (LDAPS)
active-directory Sap Successfactors Integration Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/sap-successfactors-integration-reference.md
The SuccessFactors connector supports expansion of the position object. To expan
| positionNameDE | $.employmentNav.results[0].jobInfoNav.results[0].positionNav.externalName_de_DE |

### Provisioning users in the Onboarding module
-Inbound user provisioning from SAP SuccessFactors to on-premises Active Directory and Azure AD now supports advance provisioning of pre-hires present in the SAP SuccessFactors Onboarding 2.0 module. Upon encountering a new hire profile with future start date, the Azure AD provisioning service queries SAP SuccessFactors to get new hires with one of the following status codes: `active`, `inactive`, `active_external`. The status code `active_external` corresponds to pre-hires present in the SAP SuccessFactors Onboarding 2.0 module. For a description of these status codes, refer to [SAP support note 2736579](https://launchpad.support.sap.com/#/notes/0002736579).
+Inbound user provisioning from SAP SuccessFactors to on-premises Active Directory and Azure AD now supports advance provisioning of pre-hires present in the SAP SuccessFactors Onboarding 2.0 module. Upon encountering a new hire profile with future start date, the Azure AD provisioning service queries SAP SuccessFactors to get new hires with one of the following status codes: `active`, `inactive`, `active_external_suite`. The status code `active_external_suite` corresponds to pre-hires present in the SAP SuccessFactors Onboarding 2.0 module. For a description of these status codes, refer to [SAP support note 2736579](https://launchpad.support.sap.com/#/notes/0002736579).
The default behavior of the provisioning service is to process pre-hires in the Onboarding module.
If you want to exclude processing of pre-hires in the Onboarding module, update
1. Under show advanced options, edit the SuccessFactors attribute list to add a new attribute called `userStatus`.
1. Set the JSONPath API expression for this attribute as: `$.employmentNav.results[0].userNav.status`
1. Save the schema to return back to the attribute mapping blade.
-1. Edit the Source Object scope to apply a scoping filter `userStatus NOT EQUALS active_external`
+1. Edit the Source Object scope to apply a scoping filter `userStatus NOT EQUALS active_external_suite`
1. Save the mapping and validate that the scoping filter works using provisioning on demand.

### Enabling OData API Audit logs in SuccessFactors
active-directory Use Scim To Provision Users And Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md
Previously updated : 03/14/2023 Last updated : 03/15/2023
In the example of a request, to retrieve the current state of a user, the values
***Example 4. Query the value of a reference attribute to be updated***
-If a reference attribute is to be updated, then Azure AD queries the service to determine whether the current value of the reference attribute in the identity store fronted by the service already matches the value of that attribute in Azure AD. For users, the only attribute of which the current value is queried in this way is the manager attribute. Here's an example of a request to determine whether the manager attribute of a user object currently has a certain value:
+Azure AD checks the current value of a reference attribute in the identity store before updating it. For users, the manager attribute is the only attribute checked in this way. Here's an example of a request to determine whether the manager attribute of a user object currently has a certain value:
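Such a request might look like the following sketch; the filter uses standard SCIM (RFC 7644) syntax, and the identifier values are hypothetical:

```http
GET /scim/Users?filter=id eq "54D382A4-2050-4C03-94D1-E769F1D15682" and manager eq "036CC233-9187-4D51-95BD-591E2F89A52F" HTTP/1.1
Authorization: Bearer <token>
```

The two AND-ed clauses are why `parameters.AlternateFilters.Count` is 2 in the sample code described next.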
In the sample code, the request is translated into a call to the QueryAsync method of the service's provider. The values of the properties of the object provided as the parameters argument are as follows:

* parameters.AlternateFilters.Count: 2
Check with your application provider, or your application provider's documentati
### Getting started
-Applications that support the SCIM profile described in this article can be connected to Azure AD using the "non-gallery application" feature in the Azure AD application gallery. Once connected, Azure AD runs a synchronization process every 40 minutes where it queries the application's SCIM endpoint for assigned users and groups, and creates or modifies them according to the assignment details.
+Applications that support the SCIM profile described in this article can be connected to Azure AD using the "non-gallery application" feature in the Azure AD application gallery. Once connected, Azure AD runs a synchronization process every 40 minutes. The process queries the application's SCIM endpoint for assigned users and groups, and creates or modifies them according to the assignment details.
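For example, when an assigned user doesn't yet exist in the application, the synchronization process issues a create roughly like this sketch (a minimal RFC 7643 user resource; the endpoint root and values are hypothetical):

```http
POST /scim/Users HTTP/1.1
Content-Type: application/scim+json
Authorization: Bearer <token>

{
  "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
  "externalId": "alice",
  "userName": "alice@contoso.com",
  "active": true
}
```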
**To connect an application that supports SCIM:**
The provisioning service supports the [authorization code grant](https://tools.i
> [!NOTE]
> OAuth v1 is not supported due to exposure of the client secret. OAuth v2 is supported.
-It is recommended, but not required, that you support multiple secrets for easy renewal without downtime.
+It's recommended, but not required, that you support multiple secrets for easy renewal without downtime.
#### How to set up OAuth code grant flow
active-directory Concept Authentication Default Enablement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-default-enablement.md
Previously updated : 02/24/2023 Last updated : 03/12/2023
As MFA fatigue attacks rise, number matching becomes more critical to sign-in se
>[!NOTE]
>Number matching will begin to be enabled for all users of Microsoft Authenticator starting May 08, 2023.
-<!Add link to Mayur Blog post here>
## Microsoft managed settings

In addition to configuring Authentication methods policy settings to be either **Enabled** or **Disabled**, IT admins can configure some settings in the Authentication methods policy to be **Microsoft managed**. A setting that is configured as **Microsoft managed** allows Azure AD to enable or disable the setting.
The following table lists each setting that can be set to Microsoft managed and
| [Location in Microsoft Authenticator notifications](how-to-mfa-additional-context.md) | Disabled |
| [Application name in Microsoft Authenticator notifications](how-to-mfa-additional-context.md) | Disabled |
| [System-preferred MFA](concept-system-preferred-multifactor-authentication.md) | Disabled |
+| [Authenticator Lite](how-to-mfa-authenticator-lite.md) | Disabled |
As threat vectors change, Azure AD may announce default protection for a **Microsoft managed** setting in [release notes](../fundamentals/whats-new.md) and on commonly read forums like [Tech Community](https://techcommunity.microsoft.com/).
active-directory Concept Authentication Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-methods.md
Previously updated : 09/17/2022 Last updated : 03/13/2023
The following table outlines the security considerations for the available authe
| Authentication method | Security | Usability | Availability |
|--|:--:|:--:|:--:|
| Windows Hello for Business | High | High | High |
-| Microsoft Authenticator app | High | High | High |
+| Microsoft Authenticator | High | High | High |
+| Authenticator Lite | High | High | High |
| FIDO2 security key | High | High | High |
| Certificate-based authentication (preview)| High | High | High |
| OATH hardware tokens (preview) | Medium | Medium | High |
The following table outlines when an authentication method can be used during a
| Method | Primary authentication | Secondary authentication |
|--|:-:|:-:|
-| Windows Hello for Business | Yes | MFA\* |
-| Microsoft Authenticator app | Yes | MFA and SSPR |
+| Windows Hello for Business | Yes | MFA\* |
+| Microsoft Authenticator | Yes | MFA and SSPR |
+| Authenticator Lite | No | MFA |
| FIDO2 security key | Yes | MFA |
-| Certificate-based authentication (preview) | Yes | No |
+| Certificate-based authentication | Yes | No |
| OATH hardware tokens (preview) | No | MFA and SSPR |
| OATH software tokens | No | MFA and SSPR |
| SMS | Yes | MFA and SSPR |
active-directory Concept Mfa Howitworks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-mfa-howitworks.md
Previously updated : 01/29/2023 Last updated : 03/13/2023
When users sign in to an application or service and receive an MFA prompt, they
The following additional forms of verification can be used with Azure AD Multi-Factor Authentication:
-* Microsoft Authenticator app
+* Microsoft Authenticator
+* Authenticator Lite (in Outlook)
* Windows Hello for Business
* FIDO2 security key
* OATH hardware token (preview)
active-directory How To Mfa Authenticator Lite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-authenticator-lite.md
+
+ Title: How to enable Microsoft Authenticator Lite for Outlook mobile (preview)
+description: Learn how to set up Microsoft Authenticator Lite for Outlook mobile to help users validate their identity
+++++ Last updated : 03/15/2023++++++++
+# Customer intent: As an identity administrator, I want to encourage users to understand how default protection can improve our security posture.
+
+# How to enable Microsoft Authenticator Lite for Outlook mobile (preview)
+
+Microsoft Authenticator Lite is another surface for Azure Active Directory (Azure AD) users to complete multifactor authentication by using push notifications or time-based one-time passcodes (TOTP) on their Android or iOS device. With Authenticator Lite, users can satisfy a multifactor authentication requirement from the convenience of a familiar app. Authenticator Lite is currently enabled in [Outlook mobile](https://www.microsoft.com/microsoft-365/outlook-mobile-for-android-and-ios).
+
+Users receive a notification in Outlook mobile to approve or deny sign-in, or they can copy a TOTP to use during sign-in.
+
+>[!NOTE]
+>This is an important security enhancement for users authenticating via telecom transports. The 'Microsoft managed' setting for this feature will be set to enabled on May 26th, 2023. This will enable the feature for all users in tenants where the feature is set to Microsoft managed. If you wish to change the state of this feature, please do so before May 26th, 2023.
+
+## Prerequisites
+
+- Your organization needs to enable Microsoft Authenticator (second factor) push notifications for some users or groups by using the Authentication methods policy. You can edit the Authentication methods policy by using the Azure portal or Microsoft Graph API.
+- If your organization is using the Active Directory Federation Services (AD FS) adapter or Network Policy Server (NPS) extensions, upgrade to the latest versions for a consistent experience.
+- Users enabled for shared device mode on Outlook mobile aren't eligible for Authenticator Lite.
+- Users must run a minimum version of Outlook mobile:
+
+ | Operating system | Outlook version |
+ |:-:|::|
+ |Android | 4.2308.0 |
+ |iOS | 4.2309.0 |
+
+## Enable Authenticator Lite
+
+By default, Authenticator Lite is [Microsoft managed](concept-authentication-default-enablement.md#microsoft-managed-settings) and disabled during preview. After general availability, the Microsoft managed state default value will change to enable Authenticator Lite.
+
+| Property | Type | Description |
+|-||-|
+| excludeTarget | featureTarget | A single entity that is excluded from this feature. <br>You can only exclude one group from Authenticator Lite, which can be a dynamic or nested group.|
+| includeTarget | featureTarget | A single entity that is included in this feature. <br>You can only include one group for Authenticator Lite, which can be a dynamic or nested group.|
+| State | advancedConfigState | Possible values are:<br>**enabled** explicitly enables the feature for the selected group.<br>**disabled** explicitly disables the feature for the selected group.<br>**default** allows Azure AD to manage whether the feature is enabled or not for the selected group. |
+
+Once you identify the single target group, use the following API endpoint to change the **companionAppAllowedState** property under **featureSettings**.
+
+```http
+https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
+```
+
+>[!NOTE]
+>In Graph Explorer, you need to consent to the **Policy.ReadWrite.AuthenticationMethod** permission.
+
+### Request
+
+```JSON
+//Retrieve your existing policy via a GET.
+//Use the response body as the starting point for your request body, then update it to match the example shown below.
+//Change the Query to PATCH and Run query
+
+{
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
+ "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
+ "id": "MicrosoftAuthenticator",
+ "state": "enabled",
+ "isSoftwareOathEnabled": false,
+ "excludeTargets": [],
+ "featureSettings": {
+ "companionAppAllowedState": {
+ "state": "enabled",
+ "includeTarget": {
+ "targetType": "group",
+ "id": "s4432809-3bql-5m2l-0p42-8rq4707rq36m"
+ },
+ "excludeTarget": {
+ "targetType": "group",
+ "id": "00000000-0000-0000-0000-000000000000"
+ }
+ }
+ },
+ "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
+ "includeTargets": [
+ {
+ "targetType": "group",
+ "id": "all_users",
+ "isRegistrationRequired": false,
+ "authenticationMode": "any"
+ }
+ ]
+}
+```
++
+## User registration
+If enabled for Authenticator Lite, users are prompted to register their account directly from Outlook mobile. Authenticator Lite registration isn't available from [MySignIns](https://aka.ms/mysignins). Users can also enable or disable Authenticator Lite from within Outlook mobile. For more information about the user experience, see [Authenticator Lite support](https://aka.ms/authappliteuserdocs).
+++
+## Monitoring Authenticator Lite usage
+[Sign-in logs](/graph/api/signin-list) can show which app was used to complete user authentication. To view the latest sign-ins, use the following call on the beta API endpoint:
+
+```http
+GET auditLogs/signIns
+```
+
+If the sign-in was done by phone app notification, under **authenticationAppDeviceDetails** the **clientApp** field returns **microsoftAuthenticator** or **Outlook**.
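Trimmed to the relevant fields, a matching log entry might look like this sketch (the property shape is an assumption based on the beta sign-in log schema):

```json
{
  "appDisplayName": "Azure Portal",
  "authenticationAppDeviceDetails": {
    "clientApp": "Outlook",
    "operatingSystem": "Android"
  }
}
```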
+
+If a user has registered Authenticator Lite, the user's registered authentication methods include **Microsoft Authenticator (in Outlook)**.
+
+## Push notifications in Authenticator Lite
+Push notifications sent by Authenticator Lite aren't configurable and don't depend on the Authenticator feature settings. The settings for features included in the Authenticator Lite experience are listed in the following table.
+
+| Authenticator Feature | Authenticator Lite Experience|
+|::|:-:|
+| Number Matching | Enabled |
+| Location Context | Disabled |
+| Application Context | Disabled |
+
+The following screenshots show what users see when Authenticator Lite sends a push notification.
++
+## AD FS adapter and NPS extension
+
+Authenticator Lite enforces number matching in every authentication. If your tenant is using an AD FS adapter or an NPS extension, your users may not be able to complete Authenticator Lite notifications. For more information, see [AD FS adapter](how-to-mfa-number-match.md#ad-fs-adapter) and [NPS extension](how-to-mfa-number-match.md#nps-extension).
+
+To learn more about verification notifications, see [Microsoft Authenticator authentication method](concept-authentication-authenticator-app.md).
+
+## Common questions
+
+### Does Authenticator Lite work as a broker app?
+No, Authenticator Lite is only available for push notifications and TOTP.
+
+### Can Authenticator Lite be used for SSPR?
+No, Authenticator Lite is only available for push notifications and TOTP.
+
+### Is this available in Outlook desktop app?
+No, Authenticator Lite is only available on Outlook mobile.
+
+### Where can users register for Authenticator Lite?
+Users can only register for Authenticator Lite from Outlook mobile. Authenticator Lite registration can be managed from [aka.ms/mysignins](https://aka.ms/mysignins).
+
+### Can users register Microsoft Authenticator and Authenticator Lite?
+
+Users that have Microsoft Authenticator on their device can't register Authenticator Lite. If a user has an Authenticator Lite registration and then later downloads Microsoft Authenticator, they can register both. If a user has two devices, they can register Authenticator Lite on one and Microsoft Authenticator on the other.
+
+## Next steps
+
+[Authentication methods in Azure Active Directory](concept-authentication-authenticator-app.md)
active-directory Howto Mfa Mfasettings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-mfasettings.md
To unblock a user, complete the following steps:
## Report suspicious activity
-A preview of **Report Suspicious Activity**, the updated MFA **Fraud Alert** feature, is now available. When an unknown and suspicious MFA prompt is received, users can report the fraud attempt by using Microsoft Authenticator or through their phone. These alerts are integrated with [Identity Protection](/azure/active-directory/identity-protection/overview-identity-protection) for more comprehensive coverage and capability.
+A preview of **Report Suspicious Activity**, the updated MFA **Fraud Alert** feature, is now available. When an unknown and suspicious MFA prompt is received, users can report the fraud attempt by using Microsoft Authenticator or through their phone. These alerts are integrated with [Identity Protection](../identity-protection/overview-identity-protection.md) for more comprehensive coverage and capability.
-Users who report an MFA prompt as suspicious are set to **High User Risk**. Administrators can use risk-based policies to limit access for these users, or enable self-service password reset (SSPR) for users to remediate problems on their own. If you previously used the **Fraud Alert** automatic blocking feature and don't have an Azure AD P2 license for risk-based policies, you can use risk detection events to identify and disable impacted users and automatically prevent their sign-in. For more information about using risk-based policies, see [Risk-based access policies](/azure/active-directory/identity-protection/concept-identity-protection-policies).
+Users who report an MFA prompt as suspicious are set to **High User Risk**. Administrators can use risk-based policies to limit access for these users, or enable self-service password reset (SSPR) for users to remediate problems on their own. If you previously used the **Fraud Alert** automatic blocking feature and don't have an Azure AD P2 license for risk-based policies, you can use risk detection events to identify and disable impacted users and automatically prevent their sign-in. For more information about using risk-based policies, see [Risk-based access policies](../identity-protection/concept-identity-protection-policies.md).
To enable **Report Suspicious Activity** from the Authentication Methods Settings:
When a user reports an MFA prompt as suspicious, the event shows up in the Sign-i
### Manage suspicious activity events
-Once a user has reported a prompt as suspicious, the risk should be investigated and remediated with [Identity Protection](/azure/active-directory/identity-protection/howto-identity-protection-remediate-unblock).
+Once a user has reported a prompt as suspicious, the risk should be investigated and remediated with [Identity Protection](../identity-protection/howto-identity-protection-remediate-unblock.md).
### Report suspicious activity and fraud alert
After you enable the **remember multi-factor authentication** feature, users can
## Next steps
-To learn more, see [What authentication and verification methods are available in Azure Active Directory?](concept-authentication-methods.md)
+To learn more, see [What authentication and verification methods are available in Azure Active Directory?](concept-authentication-methods.md)
active-directory Howto Conditional Access Session Lifetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-session-lifetime.md
Sign-in frequency previously applied only to the first factor authentication
### User sign-in frequency and device identities
-On Azure AD joined and hybrid Azure AD joined devices, unlocking the device, or signing in interactively will only refresh the Primary Refresh Token (PRT) every 4 hours. The last refresh timestamp recorded for PRT compared with the current timestamp must be within the time allotted in SIF policy for PRT to satisfy SIF and grant access to a PRT that has an existing MFA claim. On [Azure AD registered devices](/azure/active-directory/devices/concept-azure-ad-register), unlock/sign-in would not satisfy the SIF policy because the user is not accessing an Azure AD registered device via an Azure AD account. However, the [Azure AD WAM](../develop/scenario-desktop-acquire-token-wam.md) plugin can refresh a PRT during native application authentication using WAM.
+On Azure AD joined and hybrid Azure AD joined devices, unlocking the device, or signing in interactively will only refresh the Primary Refresh Token (PRT) every 4 hours. The last refresh timestamp recorded for PRT compared with the current timestamp must be within the time allotted in SIF policy for PRT to satisfy SIF and grant access to a PRT that has an existing MFA claim. On [Azure AD registered devices](../devices/concept-azure-ad-register.md), unlock/sign-in would not satisfy the SIF policy because the user is not accessing an Azure AD registered device via an Azure AD account. However, the [Azure AD WAM](../develop/scenario-desktop-acquire-token-wam.md) plugin can refresh a PRT during native application authentication using WAM.
Note: The timestamp captured from user log-in is not necessarily the same as the last recorded timestamp of PRT refresh because of the 4-hour refresh cycle. They're the same only when a PRT has expired and a user log-in refreshes it for 4 hours. In the following examples, assume SIF policy is set to 1 hour and PRT is refreshed at 00:00.
We factor for five minutes of clock skew, so that we don't prompt users more o
## Next steps
-* If you're ready to configure Conditional Access policies for your environment, see the article [Plan a Conditional Access deployment](plan-conditional-access.md).
+* If you're ready to configure Conditional Access policies for your environment, see the article [Plan a Conditional Access deployment](plan-conditional-access.md).
active-directory App Only Access Primer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/app-only-access-primer.md
+
+ Title: Microsoft identity platform app-only access scenario
+description: Learn about when and how to use app-only access in the Microsoft identity platform endpoint.
+++++++ Last updated : 03/15/2023+++++
+# Understanding application-only access
+
+When an application directly accesses a resource, like Microsoft Graph, its access isn't limited to the files or operations available to any single user. The app calls APIs directly using its own identity, and a user or app with admin rights must authorize it to access the resources. This scenario is application-only access.
+
+> [!VIDEO https://www.youtube.com/embed/6R3W9T01gdE]
+
+## When should I use application-only access?
+
+In most cases, application-only access is broader and more powerful than [delegated access](delegated-access-primer.md), so you should only use app-only access where needed. It's usually the right choice if:
+
+- The application needs to run in an automated way, without user input. For example, a daily script that checks emails from certain contacts and sends automated responses.
+- The application needs to access resources belonging to multiple different users. For example, a backup or data loss prevention app might need to retrieve messages from many different chat channels, each with different participants.
+- You find yourself tempted to store credentials locally and allow the app to sign in "as" the user or admin.
+
+In contrast, you should never use application-only access where a user would normally sign in to manage their own resources. These types of scenarios must use delegated access to be least privileged.
+
+![Diagram shows illustration of application permissions vs delegated permissions.](./media/permissions-consent-overview/delegated-app-only-permissions.png)
+++
+## Authorizing an app to make application-only calls
+
+To make app-only calls, you need to assign your client app the appropriate app roles. App roles are also referred to as application-only permissions. They're *app* roles because they grant access only in the context of the resource app that defines the role.
+
+For example, to read a list of all teams created in an organization, you need to assign your application the Microsoft Graph `Team.ReadBasic.All` app role. This app role grants the ability to read this data when Microsoft Graph is the resource app. This assignment doesn't assign your client application to a Teams role that might allow it to view this data through other services.
+
+As a developer, you need to configure all required app-only permissions, also referred to as app roles on your application registration. You can configure your app's requested app-only permissions through the Azure portal or Microsoft Graph. App-only access doesn't support dynamic consent, so you can't request individual permissions or sets of permissions at runtime.
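In the app registration's manifest, configured app-only permissions show up under `requiredResourceAccess` with `"type": "Role"` (delegated scopes use `"Scope"`). A sketch for the `Team.ReadBasic.All` example follows; the `resourceAppId` is Microsoft Graph's well-known app ID, but the role GUID here is a placeholder, not the permission's real ID:

```json
"requiredResourceAccess": [
  {
    "resourceAppId": "00000003-0000-0000-c000-000000000000",
    "resourceAccess": [
      {
        "id": "00000000-0000-0000-0000-000000000000",
        "type": "Role"
      }
    ]
  }
]
```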
+
+Once you've configured all the permissions your app needs, it must get [admin consent](../manage-apps/grant-admin-consent.md) to access the resources. For example, only users with the global admin role can grant app-only permissions (app roles) for the Microsoft Graph API. Users with other admin roles, like application admin and cloud app admin, are able to grant app-only permissions for other resources.
+
+Admin users can grant app-only permissions by using the Azure portal or by creating grants programmatically through the Microsoft Graph API. You can also prompt for interactive consent from within your app, but this option isn't preferable since app-only access doesn't require a user.
+
+Consumer users with Microsoft Accounts, like Outlook.com or Xbox Live accounts, can never authorize application-only access.
+Always follow the principle of least privilege: you should never request app roles that your app doesn't need. This principle helps limit the security risk if your app is compromised and makes it easier for administrators to grant your app access. For example, if your app only needs to identify users without reading their detailed profile information, you should request the more limited Microsoft Graph `User.ReadBasic.All` app role instead of `User.Read.All`.
+
+## Designing and publishing app roles for a resource service
+
+If you're building a service on Azure AD that exposes APIs for other clients to call, you may wish to support automated access with app roles (app-only permissions). You can define the app roles for your application in the **App roles** section of your app registration in Azure AD portal. For more information on how to create app roles, see [Declare roles for an application](howto-add-app-roles-in-azure-ad-apps.md#declare-roles-for-an-application).
+
+When exposing app roles for others to use, provide clear descriptions of the scenario to the admin who is going to assign them. App roles should generally be as narrow as possible and support specific functional scenarios, since app-only access isn't constrained by user rights. Avoid exposing a single role that grants full `read` or full `read/write` access to all APIs and resources your service contains.
+
+> [!NOTE]
+> App roles (app-only permissions) can also be configured to support assignment to users and groups. Be sure that you configure your app roles correctly for your intended access scenario. If you intend for your API's app roles to be used for app-only access, select applications as the only allowed member types when creating the app roles.
+
+## How does application-only access work?
+
+The most important thing to remember about app-only access is that the calling app acts on its own behalf and as its own identity. There's no user interaction. If the app has been assigned to a given app role for a resource, then the app has fully unconstrained access to all resources and operations governed by that app role.
+
+Once an app has been assigned to one or more app roles (app-only permissions), it can request an app-only token from Azure AD using the [client credentials flow](v2-oauth2-client-creds-grant-flow.md) or any other supported authentication flow. The assigned roles are added to the `roles` claim of the app's access token.
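A token request in that flow has roughly this shape (the standard v2.0 client credentials request; `{tenant}`, the client ID, and the secret are placeholders):

```http
POST https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token HTTP/1.1
Content-Type: application/x-www-form-urlencoded

client_id=00000000-0000-0000-0000-000000000000
&scope=https://graph.microsoft.com/.default
&client_secret=<client-secret>
&grant_type=client_credentials
```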
+
+In some scenarios, the application identity may determine whether access is granted, similarly to user rights in a delegated call. For example, the `Application.ReadWrite.OwnedBy` app role grants an app the ability to manage service principals that the app itself owns.
+
+## Application-only access example - Automated email notification via Microsoft Graph
+
+The following example illustrates a realistic automation scenario.
+
+Alice wants to notify a team by email every time the division reporting folder that resides in a Windows file share registers a new document. Alice creates a scheduled task that runs a PowerShell script to examine the folder and find new files. The script then sends an email using a mailbox protected by a resource API, Microsoft Graph.
+
+The script runs without any user interaction, therefore the authorization system only checks the application authorization. Exchange Online checks whether the client making the call has been granted the application permission (app role), `Mail.Send`, by the administrator. If `Mail.Send` isn't granted to the app, then Exchange Online fails the request.
+
+| POST /users/{id}/{userPrincipalName}/sendMail | Client app granted Mail.Send | Client app not granted Mail.Send |
+| -- | -- | -- |
| The script uses Alice's mailbox to send emails. | 200 - Access granted. Admin allowed the app to send mail as any user. | 403 - Unauthorized. Admin hasn't allowed this client to send emails. |
| The script creates a dedicated mailbox to send emails. | 200 - Access granted. Admin allowed the app to send mail as any user. | 403 - Unauthorized. Admin hasn't allowed this client to send emails. |
+
+The example given is a simple illustration of application authorization. The production Exchange Online service supports many other access scenarios, such as limiting application permissions to specific Exchange Online mailboxes.
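As a concrete sketch, the script's call to Microsoft Graph might look like the following; the addresses and message content are hypothetical:

```http
POST https://graph.microsoft.com/v1.0/users/alice@contoso.com/sendMail HTTP/1.1
Authorization: Bearer <app-only-access-token>
Content-Type: application/json

{
  "message": {
    "subject": "New document in the division reporting folder",
    "body": { "contentType": "Text", "content": "A new file was added today." },
    "toRecipients": [
      { "emailAddress": { "address": "division-team@contoso.com" } }
    ]
  }
}
```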
+
+## Next steps
+
+- [Learn how to create and assign app roles in Azure AD](howto-add-app-roles-in-azure-ad-apps.md)
+- [Overview of permissions in Microsoft Graph](/graph/permissions-overview)
+- [Microsoft Graph permissions reference](/graph/permissions-reference)
active-directory Custom Extension Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/custom-extension-get-started.md
In this step, you create an HTTP trigger function API in the Azure portal. The f
| Setting | Suggested value | Description |
| --- | --- | --- |
| **Subscription** | Your subscription | The subscription under which the new function app is created. |
- | **[Resource Group](/azure/azure-resource-manager/management/overview)** | *myResourceGroup* | Select and existing resource group, or name for the new one in which you'll create your function app. |
+ | **[Resource Group](../../azure-resource-manager/management/overview.md)** | *myResourceGroup* | Select an existing resource group, or enter a name for the new one in which you'll create your function app. |
| **Function App name** | Globally unique name | A name that identifies the new function app. Valid characters are `a-z` (case insensitive), `0-9`, and `-`. |
|**Publish**| Code | Option to publish code files or a Docker container. For this tutorial, select **Code**. |
| **Runtime stack** | .NET | Your preferred programming language. For this tutorial, select **.NET**. |
To test your custom claim provider, follow these steps:
- Learn more about custom claims providers with the [custom claims provider reference](custom-claims-provider-reference.md) article.
-- Learn how to [troubleshoot your custom extensions API](custom-extension-troubleshoot.md).
+- Learn how to [troubleshoot your custom extensions API](custom-extension-troubleshoot.md).
active-directory Custom Extension Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/custom-extension-troubleshoot.md
In order to troubleshoot issues with your custom claims provider REST API endpoi
## Azure AD sign-in logs
-You can also use [Azure AD sign-in logs](/azure/active-directory/reports-monitoring/concept-sign-ins) in addition to your REST API logs, and hosting environment diagnostics solutions. Using Azure AD sign-in logs, you can find errors, which may affect the users' sign-ins. The Azure AD sign-in logs provide information about the HTTP status, error code, execution duration, and number of retries that occurred the API was called by Azure AD.
+You can also use [Azure AD sign-in logs](../reports-monitoring/concept-sign-ins.md) in addition to your REST API logs, and hosting environment diagnostics solutions. Using Azure AD sign-in logs, you can find errors that may affect users' sign-ins. The Azure AD sign-in logs provide information about the HTTP status, error code, execution duration, and number of retries that occurred when the API was called by Azure AD.
-Azure AD sign-in logs also integrate with [Azure Monitor](/azure/azure-monitor/). You can set up alerts and monitoring, visualize the data, and integrate with security information and event management (SIEM) tools. For example, you can set up notifications if the number of errors exceed a certain threshold that you choose.
+Azure AD sign-in logs also integrate with [Azure Monitor](../../azure-monitor/index.yml). You can set up alerts and monitoring, visualize the data, and integrate with security information and event management (SIEM) tools. For example, you can set up notifications if the number of errors exceeds a certain threshold that you choose.
To access the Azure AD sign-in logs:
One of the most common issues is that your custom claims provider API doesn't re
- Learn how to [create and register a custom claims provider](custom-extension-get-started.md) with a sample Open ID Connect application.
- If you already have a custom claims provider registered, you can configure a [SAML application](custom-extension-configure-saml-app.md) to receive tokens with claims sourced from an external store.
-- Learn more about custom claims providers with the [custom claims provider reference](custom-claims-provider-reference.md) article.
+- Learn more about custom claims providers with the [custom claims provider reference](custom-claims-provider-reference.md) article.
active-directory Delegated Access Primer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/delegated-access-primer.md
Title: Microsoft identity platform delegated access scenario
-description: Learn about delegated access in the Microsoft identity platform endpoint.
+description: Learn about when and how to use delegated access in the Microsoft identity platform endpoint.
Previously updated : 11/01/2022 Last updated : 03/15/2023

# Understanding delegated access
People frequently use different applications to access their data from cloud ser
Use delegated access whenever you want to let a signed-in user work with their own resources or resources they can access. Whether it's an admin setting up policies for their entire organization or a user deleting an email in their inbox, all scenarios involving user actions should use delegated access.
+![Diagram shows illustration of delegated permissions vs application permissions.](./media/permissions-consent-overview/delegated-app-only-permissions.png)
In contrast, delegated access is usually a poor choice for scenarios that must run without a signed-in user, like automation. It may also be a poor choice for scenarios that involve accessing many users' resources, like data loss prevention or backups. Consider using [application-only access](permissions-consent-overview.md) for these types of operations.

## Requesting scopes as a client app
active-directory Permissions Consent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/permissions-consent-overview.md
# Introduction to permissions and consent
-To _access_ a protected resource like email or calendar data, your application needs the resource owner's _authorization_. The resource owner can _consent_ to or deny your app's request. Understanding these foundational concepts will help you build more secure and trustworthy applications that request only the access they need, when they need it, from its users and administrators.
+To *access* a protected resource like email or calendar data, your application needs the resource owner's *authorization*. The resource owner can *consent* to or deny your app's request. Understanding these foundational concepts will help you build more secure and trustworthy applications that request only the access they need, when they need it, from users and administrators.
## Access scenarios
For the user, the authorization relies on the privileges that the user has been
### App-only access (Access without a user)
-In this access scenario, the application acts on its own with no user signed in. Application access is used in scenarios such as automation, and backup. This scenario includes apps that run as background services or daemons. It's appropriate when it's undesirable to have a specific user signed in, or when the data required can't be scoped to a single user.
+In this access scenario, the application acts on its own with no user signed in. Application access is used in scenarios such as automation, and backup. This scenario includes apps that run as background services or daemons. It's appropriate when it's undesirable to have a specific user signed in, or when the data required can't be scoped to a single user. For more information about the app-only access scenario, see [App-only-access](app-only-access-primer.md).
App-only access uses app roles instead of delegated scopes. When granted through consent, app roles may also be called application permissions. For app-only access, the client app must be granted appropriate app roles of the resource app it's calling in order to access the requested data. For more information about assigning app roles to client applications, see [Assigning app roles to applications](howto-add-app-roles-in-azure-ad-apps.md#assign-app-roles-to-applications).
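For example, an admin can create such a grant programmatically by posting an `appRoleAssignment` to Microsoft Graph; the GUIDs below are placeholders:

```http
POST https://graph.microsoft.com/v1.0/servicePrincipals/{resource-sp-id}/appRoleAssignedTo HTTP/1.1
Content-Type: application/json

{
  "principalId": "00000000-0000-0000-0000-000000000001",
  "resourceId": "00000000-0000-0000-0000-000000000002",
  "appRoleId": "00000000-0000-0000-0000-000000000003"
}
```

Here `principalId` is the client service principal receiving the app role, `resourceId` is the resource service principal that defines it, and `appRoleId` is the role being granted.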
There are other ways in which applications can be granted authorization for app-
| Result of consent (specific to Microsoft Graph) | [oAuth2PermissionGrant](/graph/api/resources/oauth2permissiongrant) | [appRoleAssignment](/graph/api/resources/approleassignment) |

## Consent

One way that applications are granted permissions is through consent. Consent is a process where users or admins authorize an application to access a protected resource. For example, when a user attempts to sign into an application for the first time, the application can request permission to see the user's profile and read the contents of the user's mailbox. The user sees the list of permissions the app is requesting through a consent prompt. Other scenarios where users may see a consent prompt include:

- When previously granted consent is revoked.
+- When the application is coded to specifically prompt for consent during sign-in.
- When the application uses dynamic consent to ask for new permissions as needed at run time.

The key details of a consent prompt are the list of permissions the application requires and the publisher information. For more information about the consent prompt and the consent experience for both admins and end-users, see [application consent experience](application-consent-experience.md).
Depending on the permissions they require, some applications might require an ad
Preauthorization allows a resource application owner to grant permissions without requiring users to see a consent prompt for the same set of permissions that have been preauthorized. This way, an application that has been preauthorized won't ask users to consent to permissions. Resource owners can preauthorize client apps in the Azure portal or by using PowerShell and APIs, like Microsoft Graph.

## Next steps

- [Delegated access scenario](delegated-access-primer.md)
- [User and admin consent overview](../manage-apps/consent-and-permissions-overview.md)
- [OpenID connect scopes](scopes-oidc.md)
active-directory Reference Aadsts Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-aadsts-error-codes.md
Previously updated : 10/10/2022 Last updated : 03/14/2023
The `error` field has several possible values - review the protocol documentatio
| AADSTS700011 | UnauthorizedClientAppNotFoundInOrgIdTenant - Application with identifier {appIdentifier} was not found in the directory. A client application requested a token from your tenant, but the client app doesn't exist in your tenant, so the call failed. |
| AADSTS70002 | InvalidClient - Error validating the credentials. The specified client_secret does not match the expected value for this client. Correct the client_secret and try again. For more info, see [Use the authorization code to request an access token](v2-oauth2-auth-code-flow.md#redeem-a-code-for-an-access-token). |
| AADSTS700025 | InvalidClientPublicClientWithCredential - Client is public so neither 'client_assertion' nor 'client_secret' should be presented. |
+| AADSTS700027| Client assertion failed signature validation. Developer error - the app is attempting to sign in without the necessary or correct authentication parameters.|
| AADSTS70003 | UnsupportedGrantType - The app returned an unsupported grant type. |
| AADSTS700030 | Invalid certificate - subject name in certificate isn't authorized. SubjectNames/SubjectAlternativeNames (up to 10) in token certificate are: {certificateSubjects}. |
| AADSTS70004 | InvalidRedirectUri - The app returned an invalid redirect URI. The redirect address specified by the client does not match any configured addresses or any addresses on the OIDC approve list. |
active-directory Scenario Web App Call Api Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-acquire-token.md
public async Task<ActionResult> ReadMail()
}
```
-For details see the code for [BuildConfidentialClientApplication()](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect/blob/master/WebApp/Utils/MsalAppBuilder.cs) and [GetMsalAccountId](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect/blob/257c8f96ec3ff875c351d1377b36403eed942a18/WebApp/Utils/ClaimPrincipalExtension.cs#L38) in the code sample
+For details, see the code for [GetMsalAccountId](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect/blob/257c8f96ec3ff875c351d1377b36403eed942a18/WebApp/Utils/ClaimPrincipalExtension.cs#L38) in the code sample.
# [Java](#tab/java)
active-directory Troubleshoot Publisher Verification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/troubleshoot-publisher-verification.md
The target application (`AppId`) must have a Publisher Domain set. Set a Publish
Occurs when a [Publisher Domain](howto-configure-publisher-domain.md) isn't configured on the app.

**Remediation Steps**
-1. Follow the directions [here](/azure/active-directory/develop/howto-configure-publisher-domain#set-a-publisher-domain-in-the-azure-portal) to set a Publisher Domain
+1. Follow the directions [here](./howto-configure-publisher-domain.md#set-a-publisher-domain-in-the-azure-portal) to set a Publisher Domain
### PublisherDomainMismatch
If you've reviewed all of the previous information and are still receiving an er
- TenantId where app is registered
- MPN ID
- REST request being made
-- Error code and message being returned
+- Error code and message being returned
active-directory Userinfo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/userinfo.md
# Microsoft identity platform UserInfo endpoint
-As part of the OpenID Connect (OIDC) standard, the [UserInfo endpoint](https://openid.net/specs/openid-connect-core-1_0.html#UserInfo) returns information about an authenticated user. In the Microsoft identity platform, the UserInfo endpoint is hosted by Microsoft Graph at https://graph.microsoft.com/oidc/userinfo.
+As part of the OpenID Connect (OIDC) standard, the [UserInfo endpoint](https://openid.net/specs/openid-connect-core-1_0.html#UserInfo) returns information about an authenticated user.
## Find the .well-known configuration endpoint
You can't add to or customize the information returned by the UserInfo endpoint.
To customize the information returned by the identity platform during authentication and authorization, use [claims mapping](active-directory-claims-mapping.md) and [optional claims](active-directory-optional-claims.md) to modify security token configuration.
-## Next Steps
+## Next steps
* [Review the contents of ID tokens](id-tokens.md). * [Customize the contents of an ID token using optional claims](active-directory-optional-claims.md).
active-directory Workload Identities Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identities-faqs.md
[Workload identities](workload-identities-overview.md) is now available in two editions: **Free** and **Workload Identities Premium**. The free edition of workload identities is included with a subscription of a commercial online service such as [Azure](https://azure.microsoft.com/) and [Power Platform](https://powerplatform.microsoft.com/). The Workload Identities Premium offering is available through a Microsoft representative, the [Open Volume License
-Program](https://www.microsoft.com/licensing/how-to-buy/how-to-buy), and the [Cloud Solution Providers program](/azure/lighthouse/concepts/cloud-solution-provider). Azure and Microsoft 365 subscribers can also purchase Workload
+Program](https://www.microsoft.com/licensing/how-to-buy/how-to-buy), and the [Cloud Solution Providers program](../../lighthouse/concepts/cloud-solution-provider.md). Azure and Microsoft 365 subscribers can also purchase Workload
Identities Premium online. For more information, see [what are workload identities?](workload-identities-overview.md)
Yes, it's available.
## Is it possible to have a mix of Azure AD Premium P1, Azure AD Premium P2 and Workload Identities Premium licenses in one tenant?
-Yes, customers can have a mixture of license plans in one tenant.
+Yes, customers can have a mixture of license plans in one tenant.
active-directory Workload Identity Federation Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-considerations.md
Creating multiple federated identity credentials under the same user-assigned ma
When you use automation or Azure Resource Manager templates (ARM templates) to create federated identity credentials under the same parent identity, create the federated credentials sequentially. Federated identity credentials under different managed identities can be created in parallel without any restrictions.
-If federated identity credentials are provisioned in a loop, you can [provision them serially](/azure/azure-resource-manager/templates/copy-resources#serial-or-parallel) by setting *"mode": "serial"*.
+If federated identity credentials are provisioned in a loop, you can [provision them serially](../../azure-resource-manager/templates/copy-resources.md#serial-or-parallel) by setting *"mode": "serial"*.
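For illustration, a serial copy loop might look like the following minimal sketch (the identity name, issuer, subject, and API version are hypothetical placeholders):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials",
      "apiVersion": "2023-01-31",
      "name": "[format('myIdentity/fic-{0}', copyIndex())]",
      "copy": {
        "name": "ficLoop",
        "count": 3,
        "mode": "serial",
        "batchSize": 1
      },
      "properties": {
        "issuer": "https://token.actions.githubusercontent.com",
        "subject": "[format('repo:contoso/app:environment:env-{0}', copyIndex())]",
        "audiences": [ "api://AzureADTokenExchange" ]
      }
    }
  ]
}
```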
You can also provision multiple new federated identity credentials sequentially using the *dependsOn* property. The following Azure Resource Manager template (ARM template) example creates three new federated identity credentials sequentially on a user-assigned managed identity by using the *dependsOn* property:
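A hedged sketch of that *dependsOn* pattern, shortened here to two credentials (the identity name, issuer, and subject values are hypothetical):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials",
      "apiVersion": "2023-01-31",
      "name": "myIdentity/fic-01",
      "properties": {
        "issuer": "https://token.actions.githubusercontent.com",
        "subject": "repo:contoso/app:environment:prod",
        "audiences": [ "api://AzureADTokenExchange" ]
      }
    },
    {
      "type": "Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials",
      "apiVersion": "2023-01-31",
      "name": "myIdentity/fic-02",
      "dependsOn": [
        "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials', 'myIdentity', 'fic-01')]"
      ],
      "properties": {
        "issuer": "https://token.actions.githubusercontent.com",
        "subject": "repo:contoso/app:environment:staging",
        "audiences": [ "api://AzureADTokenExchange" ]
      }
    }
  ]
}
```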
The following error codes may be returned when creating, updating, getting, list
| 400 | Federated Identity Credential name '{ficName}' is invalid. | Alphanumeric, dash, underscore, no more than 3-120 symbols. First symbol is alphanumeric. |
| 404 | The parent user-assigned identity doesn't exist. | Check user assigned identity name in federated identity credentials resource path. |
| 400 | Issuer and subject combination already exists for this Managed Identity. | This is a constraint. List all federated identity credentials associated with the user-assigned identity to find existing federated identity credential. |
-| 409 | Conflict | Concurrent write request to federated identity credential resources under the same user-assigned identity has been denied.
+| 409 | Conflict | Concurrent write request to federated identity credential resources under the same user-assigned identity has been denied.
active-directory Assign Local Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/assign-local-admin.md
Device administrators are assigned to all Azure AD joined devices. You can't s
- Up to 4 hours have passed for Azure AD to issue a new Primary Refresh Token with the appropriate privileges.
- User signs out and signs back in, not lock/unlock, to refresh their profile.
-- Users won't be listed in the local administrator group, the permissions are received through the Primary Refresh Token.
+
+Users won't be listed in the local administrator group; the permissions are received through the Primary Refresh Token.
> [!NOTE] > The above actions are not applicable to users who have not signed in to the relevant device previously. In this case, the administrator privileges are applied immediately after their first sign-in to the device.
active-directory Manage Stale Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/manage-stale-devices.md
If your device is under control of Intune or any other MDM solution, retire the
### System-managed devices
-Don't delete system-managed devices. These devices are generally devices such as Autopilot. Once deleted, these devices can't be reprovisioned. The new `Get-AzureADDevice` cmdlet excludes system-managed devices by default.
+Don't delete system-managed devices. These are generally devices such as Autopilot devices. Once deleted, they can't be reprovisioned.
### Hybrid Azure AD joined devices
active-directory Cross Cloud Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-cloud-settings.md
After each organization has completed these steps, Azure AD B2B collaboration be
- **Obtain any required object IDs or app IDs.** If you want to apply access settings to specific users, groups, or applications in the partner organization, you'll need to contact the organization for information before configuring your settings. Obtain their user object IDs, group object IDs, or application IDs (*client app IDs* or *resource app IDs*) so you can target your settings correctly. > [!NOTE]
-> Users from another Microsoft cloud must be invited using their user principal name (UPN). [Email as sign-in](/azure/active-directory/authentication/howto-authentication-use-email-signin#b2b-guest-user-sign-in-with-an-email-address) is not currently supported when collaborating with users from another Microsoft cloud.
+> Users from another Microsoft cloud must be invited using their user principal name (UPN). [Email as sign-in](../authentication/howto-authentication-use-email-signin.md#b2b-guest-user-sign-in-with-an-email-address) is not currently supported when collaborating with users from another Microsoft cloud.
## Enable the cloud in your Microsoft cloud settings
The following scenarios are supported when collaborating with an organization fr
## Next steps
-See [Configure external collaboration settings](external-collaboration-settings-configure.md) for B2B collaboration with non-Azure AD identities, social identities, and non-IT managed external accounts.
+See [Configure external collaboration settings](external-collaboration-settings-configure.md) for B2B collaboration with non-Azure AD identities, social identities, and non-IT managed external accounts.
active-directory Direct Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/direct-federation.md
Previously updated : 01/20/2023 Last updated : 03/15/2023
This article describes how to set up federation with any organization whose iden
> [!IMPORTANT] > >- We no longer support an allowlist of IdPs for new SAML/WS-Fed IdP federations. When you're setting up a new external federation, refer to [Step 1: Determine if the partner needs to update their DNS text records](#step-1-determine-if-the-partner-needs-to-update-their-dns-text-records).
->- In the SAML request sent by Azure AD for external federations, the Issuer URL is a tenanted endpoint. For any new federations, we recommend that all our partners set the audience of the SAML or WS-Fed based IdP to a tenanted endpoint. Refer to the [SAML 2.0](#required-saml-20-attributes-and-claims) and [WS-Fed](#required-ws-fed-attributes-and-claims) required attributes and claims sections below. Any existing federations configured with the global endpoint will continue to work, but new federations will stop working if your external IdP is expecting a global issuer URL in the SAML request.
+>- In the SAML request sent by Azure AD for external federations, the Issuer URL is a tenanted endpoint. For any new federations, we recommend that all our partners set the audience of the SAML or WS-Fed based IdP to a tenanted endpoint. Refer to the [SAML 2.0](#required-saml-20-attributes-and-claims) and [WS-Fed](#required-ws-fed-attributes-and-claims) required attributes and claims sections. Any existing federations configured with the global endpoint will continue to work, but new federations will stop working if your external IdP is expecting a global issuer URL in the SAML request.
> - We've removed the single domain limitation. You can now associate multiple domains with an individual federation configuration. > - We've removed the limitation that required the authentication URL domain to match the target domain or be from an allowed IdP. For details, see [Step 1: Determine if the partner needs to update their DNS text records](#step-1-determine-if-the-partner-needs-to-update-their-dns-text-records).
This article describes how to set up federation with any organization whose iden
After you set up federation with an organization's SAML/WS-Fed IdP, any new guest users you invite will be authenticated using that SAML/WS-Fed IdP. It's important to note that setting up federation doesn't change the authentication method for guest users who have already redeemed an invitation from you. Here are some examples:
+ - Guest users have already redeemed invitations from you, and then later you set up federation with the organization's SAML/WS-Fed IdP. These guest users continue to use the same authentication method they used before you set up federation.
+ - You set up federation with an organization's SAML/WS-Fed IdP and invite guest users, and then the partner organization later moves to Azure AD. The guest users who have already redeemed invitations continue to use the federated SAML/WS-Fed IdP, as long as the federation policy in your tenant exists.
+ - You delete federation with an organization's SAML/WS-Fed IdP. Any guest users currently using the SAML/WS-Fed IdP are unable to sign in.
In any of these scenarios, you can update a guest user's authentication method by [resetting their redemption status](reset-redemption-status.md).
SAML/WS-Fed IdP federation is tied to domain namespaces, such as contoso.com and
## End-user experience
-With SAML/WS-Fed IdP federation, guest users sign into your Azure AD tenant using their own organizational account. When they are accessing shared resources and are prompted for sign-in, users are redirected to their IdP. After successful sign-in, users are returned to Azure AD to access resources. Their refresh tokens are valid for 12 hours, the [default length for passthrough refresh token](../develop/active-directory-configurable-token-lifetimes.md#configurable-token-lifetime-properties) in Azure AD. If the federated IdP has SSO enabled, the user will experience SSO and will not see any sign-in prompt after initial authentication.
+With SAML/WS-Fed IdP federation, guest users sign in to your Azure AD tenant using their own organizational account. When they're accessing shared resources and are prompted for sign-in, users are redirected to their IdP. After successful sign-in, users are returned to Azure AD to access resources. If the Azure AD session expires or becomes invalid and the federated IdP has SSO enabled, the user experiences SSO. If the federated user's session is valid, the user isn't prompted to sign in again. Otherwise, the user is redirected to their IdP for sign-in.
## Sign-in endpoints
You can also give guest users a direct link to an application or resource by inc
**Can I set up SAML/WS-Fed IdP federation with Azure AD verified domains?**
-No, we block SAML/WS-Fed IdP federation for Azure AD verified domains in favor of native Azure AD managed domain capabilities. If you try to set up SAML/WS-Fed IdP federation with a domain that is DNS-verified in Azure AD, you'll see an error.
+No, we block SAML/WS-Fed IdP federation for Azure AD verified domains in favor of native Azure AD managed domain capabilities. If you try to set up SAML/WS-Fed IdP federation with a domain that is DNS-verified in Azure AD, an error occurs.
**Can I set up SAML/WS-Fed IdP federation with a domain for which an unmanaged (email-verified) tenant exists?**
Yes, we now support SAML/WS-Fed IdP federation with multiple domains from the sa
**Do I need to renew the signing certificate when it expires?**
-If you specify the metadata URL in the IdP settings, Azure AD will automatically renew the signing certificate when it expires. However, if the certificate is rotated for any reason before the expiration time, or if you don't provide a metadata URL, Azure AD will be unable to renew it. In this case, you'll need to update the signing certificate manually.
+If you specify the metadata URL in the IdP settings, Azure AD automatically renews the signing certificate when it expires. However, if the certificate is rotated for any reason before the expiration time, or if you don't provide a metadata URL, Azure AD is unable to renew it. In this case, you need to update the signing certificate manually.
**If SAML/WS-Fed IdP federation and email one-time passcode authentication are both enabled, which method takes precedence?**
-When SAML/WS-Fed IdP federation is established with a partner organization, it takes precedence over email one-time passcode authentication for new guest users from that organization. If a guest user redeemed an invitation using one-time passcode authentication before you set up SAML/WS-Fed IdP federation, they'll continue to use one-time passcode authentication.
+When SAML/WS-Fed IdP federation is established with a partner organization, it takes precedence over email one-time passcode authentication for new guest users from that organization. If a guest user redeemed an invitation using one-time passcode authentication before you set up SAML/WS-Fed IdP federation, they continue to use one-time passcode authentication.
**Does SAML/WS-Fed IdP federation address sign-in issues due to a partially synced tenancy?**
-No, the [email one-time passcode](one-time-passcode.md) feature should be used in this scenario. A "partially synced tenancy" refers to a partner Azure AD tenant where on-premises user identities aren't fully synced to the cloud. A guest whose identity doesn't yet exist in the cloud but who tries to redeem your B2B invitation won't be able to sign in. The one-time passcode feature would allow this guest to sign in. The SAML/WS-Fed IdP federation feature addresses scenarios where the guest has their own IdP-managed organizational account, but the organization has no Azure AD presence at all.
+No, the [email one-time passcode](one-time-passcode.md) feature should be used in this scenario. A "partially synced tenancy" refers to a partner Azure AD tenant where on-premises user identities aren't fully synced to the cloud. A guest whose identity doesn't yet exist in the cloud but who tries to redeem your B2B invitation isn't able to sign in. The one-time passcode feature would allow this guest to sign in. The SAML/WS-Fed IdP federation feature addresses scenarios where the guest has their own IdP-managed organizational account, but the organization has no Azure AD presence at all.
**Once SAML/WS-Fed IdP federation is configured with an organization, does each guest need to be sent and redeem an individual invitation?**
Depending on the partner's IdP, the partner might need to update their DNS recor
1. Check the partner's IdP passive authentication URL to see if the domain matches the target domain or a host within the target domain. In other words, when setting up federation for `fabrikam.com`:
   - If the passive authentication endpoint is `https://fabrikam.com` or `https://sts.fabrikam.com/adfs` (a host in the same domain), no DNS changes are needed.
- - If the passive authentication endpoint is `https://fabrikamconglomerate.com/adfs` or `https://fabrikam.com.uk/adfs`, the domain doesn't match the fabrikam.com domain, so the partner will need to add a text record for the authentication URL to their DNS configuration.
+ - If the passive authentication endpoint is `https://fabrikamconglomerate.com/adfs` or `https://fabrikam.com.uk/adfs`, the domain doesn't match the fabrikam.com domain, so the partner needs to add a text record for the authentication URL to their DNS configuration.
1. If DNS changes are needed based on the previous step, ask the partner to add a TXT record to their domain's DNS records, like the following example:
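Continuing the fabrikam example, the record might look roughly like this (the `DirectFedAuthUrl` key name is an assumption here, not a confirmed value):

```text
fabrikam.com.  IN   TXT   DirectFedAuthUrl=https://fabrikamconglomerate.com/adfs
```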
Next, your partner organization needs to configure their IdP with the required c
### SAML 2.0 configuration
-Azure AD B2B can be configured to federate with IdPs that use the SAML protocol with specific requirements listed below. For more information about setting up a trust between your SAML IdP and Azure AD, see [Use a SAML 2.0 Identity Provider (IdP) for Single Sign-On](../hybrid/how-to-connect-fed-saml-idp.md).
+Azure AD B2B can be configured to federate with IdPs that use the SAML protocol, subject to the specific requirements listed in this section. For more information about setting up a trust between your SAML IdP and Azure AD, see [Use a SAML 2.0 Identity Provider (IdP) for SSO](../hybrid/how-to-connect-fed-saml-idp.md).
> [!NOTE] > The target domain for SAML/WS-Fed IdP federation must not be DNS-verified in Azure AD. See the [Frequently asked questions](#frequently-asked-questions) section for details.
Azure AD B2B can be configured to federate with IdPs that use the SAML protocol
The following tables show requirements for specific attributes and claims that must be configured at the third-party IdP. To set up federation, the following attributes must be received in the SAML 2.0 response from the IdP. These attributes can be configured by linking to the online security token service XML file or by entering them manually. > [!NOTE]
-> Ensure the value below matches the cloud for which you're setting up external federation.
+> Ensure the value matches the cloud for which you're setting up external federation.
Required attributes for the SAML 2.0 response from the IdP:
Required claims for the SAML 2.0 token issued by the IdP:
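As a hedged illustration only (the issuer and values are hypothetical), an assertion meeting requirements of this kind might carry a persistent `NameID` and an `emailaddress` claim:

```xml
<!-- Illustrative fragment; namespaces abbreviated and all values hypothetical. -->
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:Issuer>https://idp.fabrikam.com</saml:Issuer>
  <saml:Subject>
    <!-- A stable, persistent identifier for the federated user -->
    <saml:NameID Format="urn:oasis:names:tc:SAML:2.0:nameid-format:persistent">jdoe-persistent-id</saml:NameID>
  </saml:Subject>
  <saml:AttributeStatement>
    <saml:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress">
      <saml:AttributeValue>jdoe@fabrikam.com</saml:AttributeValue>
    </saml:Attribute>
  </saml:AttributeStatement>
</saml:Assertion>
```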
### WS-Fed configuration
-Azure AD B2B can be configured to federate with IdPs that use the WS-Fed protocol with some specific requirements as listed below. Currently, the two WS-Fed providers have been tested for compatibility with Azure AD include AD FS and Shibboleth. For more information about establishing a relying party trust between a WS-Fed compliant provider with Azure AD, see the "STS Integration Paper using WS Protocols" available in the [Azure AD Identity Provider Compatibility Docs](https://www.microsoft.com/download/details.aspx?id=56843).
+Azure AD B2B can be configured to federate with IdPs that use the WS-Fed protocol. This section discusses the requirements. Currently, the two WS-Fed providers that have been tested for compatibility with Azure AD are AD FS and Shibboleth. For more information about establishing a relying party trust between a WS-Fed compliant provider and Azure AD, see the "STS Integration Paper using WS Protocols" available in the [Azure AD Identity Provider Compatibility Docs](https://www.microsoft.com/download/details.aspx?id=56843).
> [!NOTE] > The target domain for federation must not be DNS-verified on Azure AD. See the [Frequently asked questions](#frequently-asked-questions) section for details.
Azure AD B2B can be configured to federate with IdPs that use the WS-Fed protoco
The following tables show requirements for specific attributes and claims that must be configured at the third-party WS-Fed IdP. To set up federation, the following attributes must be received in the WS-Fed message from the IdP. These attributes can be configured by linking to the online security token service XML file or by entering them manually. > [!NOTE]
-> Ensure the value below matches the cloud for which you're setting up external federation.
+> Ensure the value matches the cloud for which you're setting up external federation.
Required attributes in the WS-Fed message from the IdP:
Required claims for the WS-Fed token issued by the IdP:
## Step 3: Configure SAML/WS-Fed IdP federation in Azure AD
-Next, you'll configure federation with the IdP configured in step 1 in Azure AD. You can use either the Azure portal or the [Microsoft Graph API](/graph/api/resources/samlorwsfedexternaldomainfederation?view=graph-rest-beta&preserve-view=true). It might take 5-10 minutes before the federation policy takes effect. During this time, don't attempt to redeem an invitation for the federation domain. The following attributes are required:
+Next, configure federation with the IdP configured in step 1 in Azure AD. You can use either the Azure portal or the [Microsoft Graph API](/graph/api/resources/samlorwsfedexternaldomainfederation?view=graph-rest-beta&preserve-view=true). It might take 5-10 minutes before the federation policy takes effect. During this time, don't attempt to redeem an invitation for the federation domain. The following attributes are required:
- Issuer URI of the partner's IdP
- Passive authentication endpoint of partner IdP (only https is supported)
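If you use the Graph beta API, the create request might look roughly like the following sketch (domain, URLs, and certificate are placeholders; the property names follow the beta `samlOrWsFedExternalDomainFederation` resource and may change):

```http
POST https://graph.microsoft.com/beta/directory/federationConfigurations/graph.samlOrWsFedExternalDomainFederation
Content-Type: application/json

{
  "issuerUri": "https://idp.fabrikam.com",
  "displayName": "Fabrikam IdP",
  "passiveSignInUri": "https://idp.fabrikam.com/saml/sso",
  "preferredAuthenticationProtocol": "saml",
  "domains": [
    { "@odata.type": "microsoft.graph.externalDomainName", "id": "fabrikam.com" }
  ],
  "signingCertificate": "MIIDADCC...<truncated>..."
}
```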
Next, you'll configure federation with the IdP configured in step 1 in Azure AD.
4. On the **New SAML/WS-Fed IdP** page, enter the following:
   - **Display name** - Enter a name to help you identify the partner's IdP.
   - **Identity provider protocol** - Select **SAML** or **WS-Fed**.
- **Domain name of federating IdP** - Enter your partner's IdP target domain name for federation. During this initial configuration, enter just one domain name. You'll be able to add more domains later.
+ - **Domain name of federating IdP** - Enter your partner's IdP target domain name for federation. During this initial configuration, enter just one domain name. You can add more domains later.
![Screenshot showing the new SAML or WS-Fed IdP page.](media/direct-federation/new-saml-wsfed-idp-parse.png)
On the **All identity providers** page, you can view the list of SAML/WS-Fed ide
## How do I remove federation?
-You can remove your federation configuration. If you do, federation guest users who have already redeemed their invitations won't be able to sign in. But you can give them access to your resources again by [resetting their redemption status](reset-redemption-status.md).
+You can remove your federation configuration. If you do, federation guest users who have already redeemed their invitations can no longer sign in. But you can give them access to your resources again by [resetting their redemption status](reset-redemption-status.md).
To remove a configuration for an IdP in the Azure portal: 1. Go to the [Azure portal](https://portal.azure.com/). In the left pane, select **Azure Active Directory**.
active-directory External Identities Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/external-identities-pricing.md
Previously updated : 03/29/2022 Last updated : 03/15/2023 -+ # Billing model for Azure AD External Identities
To take advantage of MAU billing, your Azure AD tenant must be linked to an Azur
|||
| An Azure AD tenant already linked to a subscription | Do nothing. When you use External Identities features to collaborate with guest users, you'll be automatically billed using the MAU model. |
| An Azure AD tenant not yet linked to a subscription | [Link your Azure AD tenant to a subscription](#link-your-azure-ad-tenant-to-a-subscription) to activate MAU billing. |
-| | |
## About monthly active users (MAU) billing
The pricing tier that applies to your guest users is based on the highest pricin
An Azure AD tenant must be linked to a resource group within an Azure subscription for proper billing and access to features.
-1. Sign in to the [Azure portal](https://portal.azure.com/) with an Azure account that's been assigned at least the [Contributor](../../role-based-access-control/built-in-roles.md) role within the subscription or a resource group within the subscription.
+1. Sign in to the [Azure portal](https://portal.azure.com/) with an Azure account that's been assigned at least the Contributor role within the subscription or a resource group within the subscription.
2. Select the directory you want to link: In the Azure portal toolbar, select the **Directories + subscriptions** icon in the portal toolbar. Then on the **Portal settings | Directories + subscriptions** page, find your directory in the **Directory name** list, and then select **Switch**.
An Azure AD tenant must be linked to a resource group within an Azure subscripti
6. In the tenant list, select the checkbox next to the tenant, and then select **Link subscription**.
- ![Select the tenant and link a subscription](media/external-identities-pricing/linked-subscriptions.png)
+ :::image type="content" source="media/external-identities-pricing/linked-subscriptions.png" alt-text="Screenshot of the link a subscription option.":::
7. In the **Link a subscription** pane, select a **Subscription** and a **Resource group**. Then select **Apply**. (If there are no subscriptions listed, see [What if I can't find a subscription?](#what-if-i-cant-find-a-subscription).)
- ![Select a subscription and resource group](media/external-identities-pricing/link-subscription-resource.png)
+ :::image type="content" source="media/external-identities-pricing/link-subscription-resource.png" alt-text="Screenshot of how to link a subscription.":::
After you complete these steps, your Azure subscription is billed based on your Azure Direct or Enterprise Agreement details, if applicable.
After you complete these steps, your Azure subscription is billed based on your
If no subscriptions are available in the **Link a subscription** pane, here are some possible reasons:
-- You don't have the appropriate permissions. Be sure to sign in with an Azure account that's been assigned at least the [Contributor](../../role-based-access-control/built-in-roles.md) role within the subscription or a resource group within the subscription.
+- You don't have the appropriate permissions. Be sure to sign in with an Azure account that's been assigned at least the Contributor role within the subscription or a resource group within the subscription.
- A subscription exists, but it hasn't been associated with your directory yet. You can [associate an existing subscription to your tenant](../fundamentals/active-directory-how-subscriptions-associated-directory.md) and then repeat the steps for [linking it to your tenant](#link-your-azure-ad-tenant-to-a-subscription).
If no subscriptions are available in the **Link a subscription** pane, here are
## Next steps
-For the latest pricing information, see [Azure Active Directory pricing](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
-Learn more about [managing Azure resources](../../azure-resource-manager/management/overview.md).
+For the latest pricing information, see [Azure Active Directory pricing](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
active-directory 5 Secure Access B2b https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/5-secure-access-b2b.md
You can enable collaboration across Microsoft clouds, such as Microsoft Azure op
Learn more: * [Microsoft Azure operated by 21Vianet](/azure/china/overview-operations)
-* [Azure Government developer guide](/azure-government/documentation-government-developer-guide)
+* [Azure Government developer guide](/azure/azure-government/documentation-government-developer-guide)
* [Configure Microsoft cloud settings for B2B collaboration (Preview)](../external-identities/cross-cloud-settings.md). You can allow inbound access to specific tenants (allowlist), and set the default policy to block access. Then, create organizational policies that allow access by user, group, or application.
active-directory Whats New Sovereign Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-sovereign-clouds.md
This page is updated monthly, so revisit it regularly.
**Service category:** Azure AD Domain Services **Product capability:** Azure AD Domain Services
-Now within the Azure portal you have access to view key data for your Azure AD-DS Domain Controllers such as: LDAP Searches/sec, Total Query Received/sec, DNS Total Response Sent/sec, LDAP Successful Binds/sec, memory usage, processor time, Kerberos Authentications, and NTLM Authentications. For more information, see: [Check fleet metrics of Azure Active Directory Domain Services](/azure/active-directory-domain-services/fleet-metrics).
+Now within the Azure portal you have access to view key data for your Azure AD-DS Domain Controllers such as: LDAP Searches/sec, Total Query Received/sec, DNS Total Response Sent/sec, LDAP Successful Binds/sec, memory usage, processor time, Kerberos Authentications, and NTLM Authentications. For more information, see: [Check fleet metrics of Azure Active Directory Domain Services](../../active-directory-domain-services/fleet-metrics.md).
active-directory How To Connect Sync Change The Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-change-the-configuration.md
String attributes are indexable by default, and the maximum length is 448 charac
The userPrincipalName attribute in Active Directory is not always known by the users and might not be suitable as the sign-in ID. With the Azure AD Connect sync installation wizard, you can choose a different attribute--for example, *mail*. But in some cases, the attribute must be calculated. For example, the company Contoso has two Azure AD directories, one for production and one for testing. They want the users in their test tenant to use another suffix in the sign-in ID:
-`userPrincipalName` <- `Word([userPrincipalName],1,"@") & "@contosotest.com"`.
+`Word([userPrincipalName],1,"@") & "@contosotest.com"`.
In this expression, take everything left of the first @-sign (Word) and concatenate with a fixed string.
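For example, with a source value of `jdoe@contoso.com`:

```
Word("jdoe@contoso.com", 1, "@") & "@contosotest.com"
  → "jdoe" & "@contosotest.com"
  → "jdoe@contosotest.com"
```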
active-directory Migrate Okta Sign On Policies To Azure Active Directory Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access.md
Title: Tutorial to migrate Okta sign-on policies to Azure Active Directory Conditional Access
-description: In this tutorial, you learn how to migrate Okta sign-on policies to Azure Active Directory Conditional Access.
+description: Learn how to migrate Okta sign-on policies to Azure Active Directory Conditional Access.
- Previously updated : 09/01/2021 Last updated : 01/13/2023 # Tutorial: Migrate Okta sign-on policies to Azure Active Directory Conditional Access
-In this tutorial, you'll learn how your organization can migrate from global or application-level sign-on policies in Okta to Azure Active Directory (Azure AD) Conditional Access policies to secure user access in Azure AD and connected applications.
+In this tutorial, learn how to migrate an organization from global or application-level sign-on policies in Okta to Conditional Access in Azure Active Directory (Azure AD). Conditional Access policies secure user access in Azure AD and connected applications.
+
+Learn more: [What is Conditional Access?](/azure/active-directory/conditional-access/overview)
-This tutorial assumes you have an Office 365 tenant federated to Okta for sign-in and multifactor authentication (MFA). You should also have Azure AD Connect server or Azure AD Connect cloud provisioning agents configured for user provisioning to Azure AD.
+This tutorial assumes you have:
+
+* Office 365 tenant federated to Okta for sign-in and multi-factor authentication
+* Azure AD Connect server, or Azure AD Connect cloud provisioning agents configured for user provisioning to Azure AD
## Prerequisites
-When you switch from Okta sign-on to Conditional Access, it's important to understand licensing requirements. Conditional Access requires users to have an Azure AD Premium P1 license assigned before registration for Azure AD Multi-Factor Authentication.
+See the following two sections for licensing and credentials prerequisites.
-Before you do any of the steps for hybrid Azure AD join, you'll need an enterprise administrator credential in the on-premises forest to configure the service connection point (SCP) record.
+### Licensing
-## Catalog current Okta sign-on policies
+There are licensing requirements if you switch from Okta sign-on to Conditional Access. The process requires an Azure AD Premium P1 license to enable registration for Azure AD Multi-Factor Authentication (MFA).
-To complete a successful transition to Conditional Access, evaluate the existing Okta sign-on policies to determine use cases and requirements that will be transitioned to Azure AD.
+Learn more: [Assign or remove licenses in the Azure Active Directory portal](/azure/active-directory/fundamentals/license-users-groups)
-1. Check the global sign-on policies by going to **Security** > **Authentication** > **Sign On**.
+### Enterprise Administrator credentials
- ![Screenshot that shows global sign-on policies.](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/global-sign-on-policies.png)
+To configure the service connection point (SCP) record, ensure you have Enterprise Administrator credentials in the on-premises forest.
- In this example, the global sign-on policy enforces MFA on all sessions outside of our configured network zones.
+## Evaluate Okta sign-on policies for transition
- ![Screenshot that shows global sign-on policies enforcing MFA.](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/global-sign-on-policies-enforce-mfa.png)
+Locate and evaluate Okta sign-on policies to determine what will be transitioned to Azure AD.
-2. Go to **Applications** and check the application-level sign-on policies. Select **Applications** from the submenu, and then select your Office 365 connected instance from the **Active apps list**.
+1. In Okta, go to **Security** > **Authentication** > **Sign On**.
-3. Select **Sign On** and scroll to the bottom of the page.
+ ![Screenshot of Global MFA Sign On Policy entries on the Authentication page.](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/global-sign-on-policies.png)
-In the following example, the Office 365 application sign-on policy has four separate rules:
+2. Go to **Applications**.
+3. From the submenu, select **Applications**.
+4. From the **Active apps list**, select the Microsoft Office 365 connected instance.
-- **Enforce MFA for Mobile Sessions**: Requires MFA from every modern authentication or browser session on iOS or Android.
-- **Allow Trusted Windows Devices**: Prevents your trusted Okta devices from being prompted for more verification or factors.
-- **Require MFA from Untrusted Windows Devices**: Requires MFA from every modern authentication or browser session on untrusted Windows devices.
-- **Block Legacy Authentication**: Prevents any legacy authentication clients from connecting to the service.
+ ![Screenshot of settings under Sign On, for Microsoft Office 365.](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/global-sign-on-policies-enforce-mfa.png)
- ![Screenshot that shows Office 365 sign-on rules.](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/sign-on-rules.png)
+5. Select **Sign On**.
+6. Scroll to the bottom of the page.
-## Configure condition prerequisites
+The Microsoft Office 365 application sign-on policy has four rules:
-Conditional Access policies can be configured to match Okta's conditions for most scenarios without more configuration.
+- **Enforce MFA for mobile sessions** - requires MFA from modern authentication or browser sessions on iOS or Android
+- **Allow trusted Windows devices** - prevents unnecessary verification or factor prompts for trusted Okta devices
+- **Require MFA from untrusted Windows devices** - requires MFA from modern authentication or browser sessions on untrusted Windows devices
+- **Block legacy authentication** - prevents legacy authentication clients from connecting to the service
-In some scenarios, you might need more setup before you configure the Conditional Access policies. The two known scenarios at the time of writing this article are:
+The following screenshot shows the conditions and actions for the four rules on the Sign On Policy screen.
-- **Okta network locations to named locations in Azure AD**: Follow the instructions in [Using the location condition in a Conditional Access policy](../conditional-access/location-condition.md#named-locations) to configure named locations in Azure AD.
-- **Okta device trust to device-based CA**: Conditional Access offers two possible options when you evaluate a user's device:
+ ![Screenshot of conditions and actions for the four rules, on the Sign On Policy screen.](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/sign-on-rules.png)
- - [Use hybrid Azure AD join](#hybrid-azure-ad-join-configuration), which is a feature enabled within the Azure AD Connect server that synchronizes Windows current devices, such as Windows 10, Windows Server 2016, and Windows Server 2019, to Azure AD.
- - [Enroll the device in Microsoft Intune](#configure-device-compliance) and assign a compliance policy.
+## Configure Conditional Access policies
-### Hybrid Azure AD join configuration
+Configure Conditional Access policies to match Okta conditions. However, in some scenarios, you might need more setup:
-To enable hybrid Azure AD join on your Azure AD Connect server, run the configuration wizard. You'll need to take steps post-configuration to automatically enroll devices.
+* Okta network locations to named locations in Azure AD (see the sketch after this list)
+ * [Using the location condition in a Conditional Access policy](../conditional-access/location-condition.md)
+* Okta device trust to device-based Conditional Access (two options to evaluate user devices):
+ * See the following section, **Hybrid Azure AD join configuration** to synchronize Windows devices, such as Windows 10, Windows Server 2016 and 2019, to Azure AD
+ * See the following section, **Configure device compliance**
+ * See [Use hybrid Azure AD join](#hybrid-azure-ad-join-configuration), a feature in Azure AD Connect server that synchronizes Windows devices, such as Windows 10, Windows Server 2016, and Windows Server 2019, to Azure AD
+ * See [Enroll the device in Microsoft Intune](#configure-device-compliance) and assign a compliance policy
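For the network-locations bullet above, a hedged sketch of creating an equivalent named location through Microsoft Graph (the display name and CIDR range are stand-ins for your Okta network zone values):

```http
POST https://graph.microsoft.com/v1.0/identity/conditionalAccess/namedLocations
Content-Type: application/json

{
  "@odata.type": "#microsoft.graph.ipNamedLocation",
  "displayName": "Corporate network (migrated from Okta zone)",
  "isTrusted": true,
  "ipRanges": [
    {
      "@odata.type": "#microsoft.graph.iPv4CidrRange",
      "cidrAddress": "203.0.113.0/24"
    }
  ]
}
```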
->[!NOTE]
->Hybrid Azure AD join isn't supported with the Azure AD Connect cloud provisioning agents.
+### Hybrid Azure AD join configuration
-1. To enable hybrid Azure AD join, follow these [instructions](../devices/hybrid-azuread-join-managed-domains.md#configure-hybrid-azure-ad-join).
+To enable hybrid Azure AD join on your Azure AD Connect server, run the configuration wizard. After configuration, enroll devices.
-1. On the **SCP configuration** page, select the **Authentication Service** dropdown. Choose your Okta federation provider URL and select **Add**. Enter your on-premises enterprise administrator credentials and then select **Next**.
+ >[!NOTE]
+ >Hybrid Azure AD join isn't supported with the Azure AD Connect cloud provisioning agents.
- ![Screenshot that shows SCP configuration.](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/scp-configuration.png)
+1. [Configure hybrid Azure AD join](../devices/howto-hybrid-azure-ad-join.md).
+2. On the **SCP configuration** page, select the **Authentication Service** dropdown.
-1. If you've blocked legacy authentication on Windows clients in either the global or app-level sign-on policy, make a rule to allow the hybrid Azure AD join process to finish.
+ ![Screenshot of the Authentication Service dropdown on the Microsoft Azure Active Directory Connect dialog.](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/scp-configuration.png)
-1. Allow the entire legacy authentication stack through for all Windows clients. You can also contact Okta support to enable its custom client string on your existing app policies.
+4. Select an Okta federation provider URL.
+5. Select **Add**.
+6. Enter your on-premises Enterprise Administrator credentials.
+7. Select **Next**.
+
+ > [!TIP]
+ > If you blocked legacy authentication on Windows clients in the global or app-level sign-on policy, make a rule that enables the hybrid Azure AD join process to finish. Allow the legacy authentication stack for Windows clients. </br>To enable custom client strings on app policies, contact the [Okta Help Center](https://support.okta.com/help/).
### Configure device compliance
-Hybrid Azure AD join is a direct replacement for Okta device trust on Windows. Conditional Access policies can also look at device compliance for devices that have fully enrolled in Microsoft Intune:
+Hybrid Azure AD join is a replacement for Okta device trust on Windows. Conditional Access policies recognize compliance for devices enrolled in Microsoft Intune.
-- **Compliance overview**: Refer to [device compliance policies in Intune](/mem/intune/protect/device-compliance-get-started#:~:text=Reference%20for%20non-compliance%20and%20Conditional%20Access%20on%20the,applicable%20%20...%20%203%20more%20rows).
-- **Device compliance**: Create [policies in Intune](/mem/intune/protect/create-compliance-policy).
-- **Windows enrollment**: If you've opted to deploy hybrid Azure AD join, you can deploy another group policy to complete the [auto-enrollment process of these devices in Intune](/windows/client-management/mdm/enroll-a-windows-10-device-automatically-using-group-policy).
-- **iOS/iPadOS enrollment**: Before you enroll an iOS device, you must make [more configurations](/mem/intune/enrollment/ios-enroll) in the Endpoint Management console.
-- **Android enrollment**: Before you enroll an Android device, you must make [more configurations](/mem/intune/enrollment/android-enroll) in the Endpoint Management console.
+#### Device compliance policy
-## Configure Azure AD Multi-Factor Authentication tenant settings
+* [Use compliance policies to set rules for devices you manage with Intune](/mem/intune/protect/device-compliance-get-started)
+* [Create a compliance policy in Microsoft Intune](/mem/intune/protect/create-compliance-policy)
-Before you convert to Conditional Access, confirm the base Azure AD Multi-Factor Authentication tenant settings for your organization.
+#### Windows 10/11, iOS, iPadOS, and Android enrollment
-1. Go to the [Azure portal](https://portal.azure.com) and sign in with a global administrator account.
+If you deployed hybrid Azure AD join, you can deploy another group policy to complete auto-enrollment of these devices in Intune.
-1. Select **Azure Active Directory** > **Users** > **Multi-Factor Authentication** to go to the legacy Azure AD Multi-Factor Authentication portal.
+* [Enrollment in Microsoft Intune](/mem/intune/enrollment/)
+* [Quickstart: Set up automatic enrollment for Windows 10/11 devices](/mem/intune/enrollment/quickstart-setup-auto-enrollment)
+* [Enroll Android devices](/mem/intune/enrollment/android-enroll)
+* [Enroll iOS/iPadOS devices in Intune](/mem/intune/enrollment/ios-enroll)
- ![Screenshot that shows the legacy Azure AD Multi-Factor Authentication portal.](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/legacy-azure-ad-portal.png)
+## Configure Azure AD Multi-Factor Authentication tenant settings
- You can also use the legacy link to the [Azure AD Multi-Factor Authentication portal](https://aka.ms/mfaportal).
+Before you convert to Conditional Access, confirm the base MFA tenant settings for your organization.
-1. On the legacy **multi-factor authentication** menu, change the status menu through **Enabled** and **Enforced** to confirm you have no users enabled for legacy MFA. If your tenant has users in the following views, you must disable them in the legacy menu. Only then will Conditional Access policies take effect on their account.
+1. Go to the [Azure portal](https://portal.azure.com).
+2. Sign in as a Global Administrator.
+3. Select **Azure Active Directory** > **Users** > **Multi-Factor Authentication**.
+4. The legacy Azure AD Multi-Factor Authentication portal appears. Or select [Azure AD MFA portal](https://aka.ms/mfaportal).
- ![Screenshot that shows disabling a user in the legacy Azure AD Multi-Factor Authentication portal.](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/disable-user-legacy-azure-ad-portal.png)
+ ![Screenshot of the multi-factor authentication screen.](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/legacy-azure-ad-portal.png)
- The **Enforced** field should also be empty.
+5. Confirm there are no users enabled for legacy MFA: On the **multi-factor authentication** menu, on **Multi-Factor Auth status**, select **Enabled** and **Enforced**. If the tenant has users in the following views, disable them in the legacy menu.
-1. Select the **Service settings** option. Change the **App passwords** selection to **Do not allow users to create app passwords to sign in to non-browser apps**.
+ ![Screenshot of the multi-factor authentication screen with the search feature highlighted.](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/disable-user-legacy-azure-ad-portal.png)
- ![Screenshot that shows the application password settings not allowing users to create app passwords.](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/app-password-selection.png)
+6. Ensure the **Enforced** field is empty.
+7. Select the **Service settings** option.
+8. Change the **App passwords** selection to **Do not allow users to create app passwords to sign in to non-browser apps**.
-1. Ensure the **Skip multi-factor authentication for requests from federated users on my intranet** and **Allow users to remember multi-factor authentication on devices they trust (between one to 365 days)** checkboxes are cleared, and then select **Save**.
+ ![Screenshot of the multi-factor authentication screen with service settings highlighted.](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/app-password-selection.png)
- >[!NOTE]
- >See [best practices for configuring the MFA prompt settings](../authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md).
+9. Clear the checkboxes for **Skip multi-factor authentication for requests from federated users on my intranet** and **Allow users to remember multi-factor authentication on devices they trust (between one to 365 days)**.
+10. Select **Save**.
-![Screenshot that shows cleared checkboxes in the legacy Azure AD Multi-Factor Authentication portal.](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/uncheck-fields-legacy-azure-ad-portal.png)
+ ![Screenshot of cleared checkboxes on the Require Trusted Devices for Access screen.](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/uncheck-fields-legacy-azure-ad-portal.png)
-## Configure Conditional Access policies
+ >[!NOTE]
+ >See [Optimize reauthentication prompts and understand session lifetime for Azure AD MFA](../authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md).
-After you've configured the prerequisites and established the base settings, it's time to build the first Conditional Access policy.
-1. To configure Conditional Access policies in Azure AD, go to the [Azure portal](https://portal.azure.com). On **Manage Azure Active Directory**, select **View**.
+## Build a Conditional Access policy
- Configure Conditional Access policies by following [best practices for deploying and designing Conditional Access](../conditional-access/plan-conditional-access.md#conditional-access-policy-components).
+To configure Conditional Access policies, see [Best practices for deploying and designing Conditional Access](../conditional-access/plan-conditional-access.md#conditional-access-policy-components).
-1. To mimic the global sign-on MFA policy from Okta, [create a policy](../conditional-access/howto-conditional-access-policy-all-users-mfa.md).
+After you configure the prerequisites and establish base settings, you can build a Conditional Access policy. A policy can be targeted to an application, a test group of users, or both.
-1. Create a [device trust-based Conditional Access rule](../conditional-access/require-managed-devices.md).
+Before you get started:
- This policy as any other in this tutorial can be targeted to a specific application, a test group of users, or both.
+* [Understand Conditional Access policy components](../conditional-access/plan-conditional-access.md)
+* [Building a Conditional Access policy](../conditional-access/concept-conditional-access-policies.md)
- ![Screenshot that shows testing a user.](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/test-user.png)
+1. Go to the [Azure portal](https://portal.azure.com).
+2. On **Manage Azure Active Directory**, select **View**.
+3. Create a policy. See, [Common Conditional Access policy: Require MFA for all users](../conditional-access/howto-conditional-access-policy-all-users-mfa.md).
+4. Create a device trust-based Conditional Access rule.
- ![Screenshot that shows success in testing a user.](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/success-test-user.png)
+ ![Screenshot of entries for Require Trusted Devices for Access, under Conditional Access.](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/test-user.png)
-1. After you've configured the location-based policy and device trust policy, it's time to configure the equivalent [block legacy authentication](../conditional-access/howto-conditional-access-policy-block-legacy.md) policy.
+ ![Screenshot of the Keep you account secure dialog with the success message.](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/success-test-user.png)
-With these three Conditional Access policies, the original Okta sign-on policies experience has been replicated in Azure AD. Next steps involve enrolling the user via Azure AD Multi-Factor Authentication and testing the policies.
+5. After you configure the location-based policy and device trust policy, [Block legacy authentication with Azure AD with Conditional Access](/azure/active-directory/conditional-access/block-legacy-authentication).
-## Enroll pilot members in Azure AD Multi-Factor Authentication
+With these three Conditional Access policies, the original Okta sign-on policies experience is replicated in Azure AD.
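As a hedged illustration of the third of these policies (blocking legacy authentication), a Microsoft Graph request of roughly this shape creates the policy in report-only mode (the targeting values are placeholders, not a recommendation):

```http
POST https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies
Content-Type: application/json

{
  "displayName": "Block legacy authentication (migrated from Okta)",
  "state": "enabledForReportingButNotEnforced",
  "conditions": {
    "users": { "includeUsers": [ "All" ] },
    "applications": { "includeApplications": [ "All" ] },
    "clientAppTypes": [ "exchangeActiveSync", "other" ]
  },
  "grantControls": {
    "operator": "OR",
    "builtInControls": [ "block" ]
  }
}
```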
-After you configure the Conditional Access policies, users must register for Azure AD Multi-Factor Authentication methods. Users can be required to register through several different methods.
+## Enroll pilot members in MFA
-1. For individual registration, direct users to the [Microsoft Sign-in pane](https://aka.ms/mfasetup) to manually enter the registration information.
+Users register for MFA methods.
-1. Users can go to the [Microsoft Security info page](https://aka.ms/mysecurityinfo) to enter information or manage the form of MFA registration.
+For individual registration, users go to [Microsoft Sign-in pane](https://aka.ms/mfasetup).
-See [this guide](../authentication/howto-registration-mfa-sspr-combined.md) to fully understand the MFA registration process.
+To manage registration, users go to [Microsoft My Sign-Ins | Security Info](https://aka.ms/mysecurityinfo).
-Go to the [Microsoft Sign-in pane](https://aka.ms/mfasetup). After you sign in with Okta MFA, you're instructed to register for MFA with Azure AD.
+Learn more: [Enable combined security information registration in Azure Active Directory](../authentication/howto-registration-mfa-sspr-combined.md).
->[!NOTE]
->If registration already happened in the past for a user, they're taken to the **My Security** information page after they satisfy the MFA prompt.
-See the [user documentation for MFA enrollment](../user-help/security-info-setup-signin.md).
+ >[!NOTE]
+ >If users have already registered, they're redirected to the **My Security** page after they satisfy the MFA prompt.
## Enable Conditional Access policies
-1. To roll out testing, change the policies created in the earlier examples to **Enabled test user login**.
+1. To test, change the created policies to **Enabled test user login**.
- ![Screenshot that shows enabling a test user.](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/enable-test-user.png)
+ ![Screenshot of policies on the Conditional Access, Policies screen.](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/enable-test-user.png)
-1. On the next Office 365 **Sign-In** pane, the test user John Smith is prompted to sign in with Okta MFA and Azure AD Multi-Factor Authentication.
+2. On the Office 365 **Sign-In** pane, the test user John Smith is prompted to sign in with Okta MFA and Azure AD MFA.
- ![Screenshot that shows the Azure Sign-In pane.](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/sign-in-through-okta.png)
+ ![Screenshot of the Azure Sign-In pane.](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/sign-in-through-okta.png)
-1. Complete the MFA verification through Okta.
+3. Complete the MFA verification through Okta.
- ![Screenshot that shows MFA verification through Okta.](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/mfa-verification-through-okta.png)
+ ![Screenshot of MFA verification through Okta.](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/mfa-verification-through-okta.png)
-1. After the user completes the Okta MFA prompt, the user is prompted for Conditional Access. Ensure that the policies were configured appropriately and are within conditions to be triggered for MFA.
+4. The user is prompted for Conditional Access.
+5. Ensure the policies are configured to be triggered for MFA.
- ![Screenshot that shows MFA verification through Okta prompted for Conditional Access.](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/mfa-verification-through-okta-prompted-ca.png)
+ ![Screenshot of MFA verification through Okta prompted for Conditional Access.](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/mfa-verification-through-okta-prompted-ca.png)
-## Cut over from sign-on to Conditional Access policies
+## Add organization members to Conditional Access policies
-After you conduct thorough testing on the pilot members to ensure that Conditional Access is in effect as expected, the remaining organization members can be added to Conditional Access policies after registration has been completed.
+After you conduct testing on pilot members and they complete registration, add the remaining organization members to Conditional Access policies.
-To avoid double-prompting between Azure AD Multi-Factor Authentication and Okta MFA, opt out from Okta MFA by modifying sign-on policies.
+To avoid double-prompting between Azure AD MFA and Okta MFA, opt out of Okta MFA by modifying sign-on policies.
-The final migration step to Conditional Access can be done in a staged or cut-over fashion.
-1. Go to the Okta admin console, select **Security** > **Authentication**, and then go to **Sign-on Policy**.
+1. Go to the Okta admin console.
+2. Select **Security** > **Authentication**.
+3. Go to **Sign-on Policy**.
>[!NOTE]
- > Set global policies to **Inactive** only if all applications from Okta are protected by their own application sign-on policies.
-1. Set the **Enforce MFA** policy to **Inactive**. You can also assign the policy to a new group that doesn't include the Azure AD users.
+ > Set global policies to **Inactive** if all applications from Okta are protected by application sign-on policies.
- ![Screenshot that shows Global MFA Sign On Policy as Inactive.](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/mfa-policy-inactive.png)
+4. Set the **Enforce MFA** policy to **Inactive**. You can assign the policy to a new group that doesn't include the Azure AD users.
-1. On the application-level sign-on policy pane, update the policies to **Inactive** by selecting the **Disable Rule** option. You can also assign the policy to a new group that doesn't include the Azure AD users.
+ ![Screenshot of Global MFA Sign On Policy as Inactive.](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/mfa-policy-inactive.png)
-1. Ensure there's at least one application-level sign-on policy that's enabled for the application that allows access without MFA.
+5. On the application-level sign-on policy pane, select the **Disable Rule** option.
+6. Select **Inactive**. You can assign the policy to a new group that doesn't include the Azure AD users.
+7. Ensure there's at least one application-level sign-on policy enabled for the application that allows access without MFA.
- ![Screenshot that shows application access without MFA.](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/application-access-without-mfa.png)
+ ![Screenshot of application access without MFA.](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/application-access-without-mfa.png)
-1. After you disable the Okta sign-on policies or exclude the migrated Azure AD users from the enforcement groups, users are prompted *only* for Conditional Access the next time they sign in.
+8. Users are prompted for Conditional Access the next time they sign in.
## Next steps
-For more information about migrating from Okta to Azure AD, see:
-
-- [Migrate applications from Okta to Azure AD](migrate-applications-from-okta-to-azure-active-directory.md)
-- [Migrate Okta federation to Azure AD](migrate-okta-federation-to-azure-active-directory.md)
-- [Migrate Okta sync provisioning to Azure AD Connect-based synchronization](migrate-okta-sync-provisioning-to-azure-active-directory.md)
+- [Tutorial: Migrate your applications from Okta to Azure Active Directory](migrate-applications-from-okta-to-azure-active-directory.md)
+- [Tutorial: Migrate Okta federation to Azure Active Directory-managed authentication](migrate-okta-federation-to-azure-active-directory.md)
+- [Tutorial: Migrate Okta sync provisioning to Azure AD Connect-based synchronization](migrate-okta-sync-provisioning-to-azure-active-directory.md)
active-directory Cross Tenant Synchronization Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-overview.md
Here are the primary goals of cross-tenant synchronization:
- Automate lifecycle management of B2B collaboration users in a multi-tenant organization
- Automatically remove B2B accounts when a user leaves the organization
+> [!VIDEO https://www.youtube.com/embed/7B-PQwNfGBc]
+
## Why use cross-tenant synchronization?

Cross-tenant synchronization automates creating, updating, and deleting B2B collaboration users. Users created with cross-tenant synchronization are able to access both Microsoft applications (such as Teams and SharePoint) and non-Microsoft applications (such as [ServiceNow](../saas-apps/servicenow-provisioning-tutorial.md), [Adobe](../saas-apps/adobe-identity-management-provisioning-tutorial.md), and many more), regardless of which tenant the apps are integrated with. These users continue to benefit from the security capabilities in Azure AD, such as [Azure AD Conditional Access](../conditional-access/overview.md) and [cross-tenant access settings](../external-identities/cross-tenant-access-overview.md), and can be governed through features such as [Azure AD entitlement management](../governance/entitlement-management-overview.md).
active-directory Adpfederatedsso Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/adpfederatedsso-tutorial.md
Previously updated : 11/21/2022 Last updated : 03/07/2023
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
### Configure your ADP service(s) for federated access
->[!Important]
+> [!Important]
> Your employees who require federated access to your ADP services must be assigned to the ADP service app and subsequently, users must be reassigned to the specific ADP service. Upon receipt of confirmation from your ADP representative, configure your ADP service(s) and assign/manage users to control user access to the specific ADP service.
Upon receipt of confirmation from your ADP representative, configure your ADP se
1. On confirmation of a successful test, assign the federated ADP service to individual users or user groups, which is explained later in the tutorial, and roll it out to your employees.
+### Configure ADP to support multiple instances in the same tenant
+
+1. Go to the **Basic SAML Configuration** section and configure another test value in the **Identifier (Entity ID)** textbox.
+
+ ![Screenshot shows how to configure another test instance value.](./media/adpfederatedsso-tutorial/append.png "Test")
+
+1. To support multiple instances in the same tenant, follow these steps:
+
+ ![Screenshot shows how to configure audience claim value.](./media/adpfederatedsso-tutorial/audience.png "Claim")
+
+ 1. Navigate to the **Attributes & Claims** section > **Advanced settings** > **Advanced SAML claims options** and click **Edit**.
+
+ 1. Enable the **Append application ID to issuer** checkbox.
+
+ 1. Enable the **Override audience claim** checkbox.
+
+ 1. In the **Audience claim value** textbox, enter the **Identifier (Entity ID)** value that you copied from the **Basic SAML Configuration** section, and then click **Save**.
+
+1. Navigate to the **Properties** tab under the **Manage** section and copy the **Application ID** value from the Azure portal.
+
+ ![Screenshot shows how to copy application value from properties tab.](./media/adpfederatedsso-tutorial/app.png "Tab")
+
+1. Download and open the **Federation Metadata XML** file from the Azure portal and edit the **entityID** value by manually appending the **Application ID** at the end.
+
+ ![Screenshot shows how to add the application value in the federation file.](./media/adpfederatedsso-tutorial/federation.png "File")
+
+1. **Save** the XML file and use it on the ADP side, as illustrated in the sketch that follows.
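As an illustration only, the edited `entityID` might look like the following snippet. The GUIDs are placeholders, and the exact separator ADP expects may differ; confirm the final format with your ADP representative:

```xml
<!-- Before: entityID as downloaded from the Azure portal (placeholder tenant GUID) -->
<EntityDescriptor entityID="https://sts.windows.net/00000000-1111-2222-3333-444444444444/">

<!-- After: the Application ID (placeholder GUID) appended manually at the end -->
<EntityDescriptor entityID="https://sts.windows.net/00000000-1111-2222-3333-444444444444/aaaabbbb-cccc-dddd-eeee-ffff00001111">
```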
+
### Create ADP test user

The objective of this section is to create a user called B.Simon in ADP. Work with [ADP support team](https://www.adp.com/contact-us/overview.aspx) to add the users in the ADP account.
active-directory Cloud Academy Sso Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cloud-academy-sso-tutorial.md
Title: 'Tutorial: Azure Active Directory SSO integration with Cloud Academy'
-description: In this tutorial, you'll learn how to configure single sign-on between Azure Active Directory and Cloud Academy.
+description: In this tutorial, you learn how to configure single sign-on between Azure Active Directory and Cloud Academy.
Previously updated : 11/28/2022 Last updated : 03/15/2023
-# Tutorial: Azure Active Directory single sign-on integration with Cloud Academy
+# Tutorial: Azure Active Directory SSO integration with Cloud Academy
-In this tutorial, you'll learn how to integrate Cloud Academy with Azure Active Directory (Azure AD). When you integrate Cloud Academy with Azure AD, you can:
+In this tutorial, you learn how to integrate Cloud Academy with Azure Active Directory (Azure AD). When you integrate Cloud Academy with Azure AD, you can:
* Use Azure AD to control who can access Cloud Academy.
* Enable your users to be automatically signed in to Cloud Academy with their Azure AD accounts.
To get started, you need the following items:
## Tutorial description
-In this tutorial, you'll configure and test Azure AD SSO in a test environment.
+In this tutorial, you configure and test Azure AD SSO in a test environment.
* Cloud Academy supports **SP** initiated SSO.
* Cloud Academy supports **Just In Time** user provisioning.
To configure the integration of Cloud Academy into Azure AD, you need to add Clo
1. In the **Add from the gallery** section, enter **Cloud Academy** in the search box.
1. Select **Cloud Academy** in the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
- Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+ Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
## Configure and test Azure AD SSO for Cloud Academy
Follow these steps to enable Azure AD SSO in the Azure portal:
### Create an Azure AD test user
-In this section, you'll create a test user called B.Simon in the Azure portal.
+In this section, you create a test user called B.Simon in the Azure portal.
1. In the left menu of the Azure portal, select **Azure Active Directory**. Select **Users**, and then select **All users**.
1. Select **New user** at the top of the screen.
In this section, you'll create a test user called B.Simon in the Azure portal.
### Grant access to the test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting that user access to Cloud Academy.
+In this section, you enable B.Simon to use Azure single sign-on by granting that user access to Cloud Academy.
1. In the Azure portal, select **Enterprise applications**, and then select **All applications**.
1. In the applications list, select **Cloud Academy**.
1. On the app's overview page, in the **Manage** section, select **Users and groups**:
1. Select **Add user**, and then select **Users and groups** in the **Add Assignment** dialog box:
1. In the **Users and groups** dialog box, select **B.Simon** in the **Users** list, and then click the **Select** button at the bottom of the screen.
-1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
1. In the **Add Assignment** dialog box, select **Assign**.

## Configure single sign-on for Cloud Academy
In this section, you'll enable B.Simon to use Azure single sign-on by granting t
1. Open the downloaded Base64 certificate from the Azure portal in Notepad. Paste its contents into the **Certificate** box.
- 1. In the **Email Domains** box, enter all the domain values your company uses for user emails.
- 1. Perform the following steps in the below page: ![Screenshot that shows the Integrations in additional settings.](./media/cloud-academy-sso-tutorial/additional-settings.png)
In this section, you'll enable B.Simon to use Azure single sign-on by granting t
1. If sign-in is successful, you can activate SSO integration for the entire organization:
- :::image type="content" source="./media/cloud-academy-sso-tutorial/test-successful.png" alt-text="Screenshot that shows S S O activation is successful..":::
+ :::image type="content" source="./media/cloud-academy-sso-tutorial/test-successful.png" alt-text="Screenshot that shows S S O activation is successful.":::
> [!NOTE]
> For more information about how to configure Cloud Academy, see [Setting Up Single Sign-On](https://support.cloudacademy.com/hc/articles/360043908452-Setting-Up-Single-Sign-On).

### Create a Cloud Academy test user
-In this section, a user called B.Simon is created in Cloud Academy. Cloud Academy supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Cloud Academy, a new one is created after authentication.
+In this section, a user called B.Simon is created in Cloud Academy. Cloud Academy supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in Cloud Academy, a new one is created after authentication.
Cloud Academy also supports automatic user provisioning. For more information, see the [Cloud Academy SSO provisioning tutorial](./cloud-academy-sso-provisioning-tutorial.md).
active-directory Five9 Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/five9-tutorial.md
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
a. “Five9 Plus Adapter for Agent Desktop Toolkit” Admin Guide: [https://webapps.five9.com/assets/files/for_customers/documentation/integrations/agent-desktop-toolkit/plus-agent-desktop-toolkit-administrators-guide.pdf](https://webapps.five9.com/assets/files/for_customers/documentation/integrations/agent-desktop-toolkit/plus-agent-desktop-toolkit-administrators-guide.pdf)
- b. “Five9 Plus Adapter for Microsoft Dynamics CRM” Admin Guide: [https://webapps.five9.com/assets/files/for_customers/documentation/integrations/microsoft/microsoft-administrators-guide.pdf](https://webapps.five9.com/assets/files/for_customers/documentation/integrations/microsoft/microsoft-administrators-guide.pdf)
+ b. “Five9 Plus Adapter for Microsoft Dynamics CRM” Admin Guide: [https://manualzz.com/download/25793001](https://manualzz.com/download/25793001)
c. “Five9 Plus Adapter for Zendesk” Admin Guide: [https://webapps.five9.com/assets/files/for_customers/documentation/integrations/zendesk/zendesk-plus-administrators-guide.pdf](https://webapps.five9.com/assets/files/for_customers/documentation/integrations/zendesk/zendesk-plus-administrators-guide.pdf)
active-directory Lifesize Cloud Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lifesize-cloud-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://webapp.lifesizecloud.com/?ent=<IDENTIFIER>`

> [!NOTE]
- > These values are not real. Update these values with the actual Sign-on URL, Identifier and Relay State. Contact [Lifesize Cloud Client support team](https://legacy.lifesize.com/en/support) to get Sign-On URL, and Identifier values and you can get Relay State value from SSO Configuration that is explained later in the tutorial. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Sign-on URL, Identifier and Relay State. Contact [Lifesize Cloud Client support team](https://support.lifesize.com/) to get Sign-On URL, and Identifier values and you can get Relay State value from SSO Configuration that is explained later in the tutorial. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
active-directory Oracle Access Manager For Oracle Ebs Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/oracle-access-manager-for-oracle-ebs-tutorial.md
Complete the following steps to enable Azure AD single sign-on in the Azure port
### Create Oracle Access Manager for Oracle E-Business Suite test user
-In this section, you create a user called Britta Simon at Oracle Access Manager for Oracle E-Business Suite. Work with [Oracle Access Manager for Oracle E-Business Suite support team](https://www.oracle.com/support/advanced-customer-support/products/cloud.html) to add the users in the Oracle Access Manager for Oracle E-Business Suite platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called Britta Simon at Oracle Access Manager for Oracle E-Business Suite. Work with [Oracle Access Manager for Oracle E-Business Suite support team](https://www.oracle.com/support/advanced-customer-services/cloud/) to add the users in the Oracle Access Manager for Oracle E-Business Suite platform. Users must be created and activated before you use single sign-on.
## Test SSO
active-directory Oracle Access Manager For Oracle Retail Merchandising Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/oracle-access-manager-for-oracle-retail-merchandising-tutorial.md
Complete the following steps to enable Azure AD single sign-on in the Azure port
`https://<SUBDOMAIN>.oraclecloud.com/`

>[!NOTE]
- > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Oracle Access Manager for Oracle Retail Merchandising support team](https://www.oracle.com/support/advanced-customer-support/products/cloud.html) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Oracle Access Manager for Oracle Retail Merchandising support team](https://www.oracle.com/support/advanced-customer-services/cloud/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. Your Oracle Access Manager for Oracle Retail Merchandising application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows an example for this. The default value of **Unique User Identifier** is **user.userprincipalname** but Oracle Access Manager for Oracle Retail Merchandising expects this to be mapped with the user's email address. For that you can use **user.mail** attribute from the list or use the appropriate attribute value based on your organization configuration.
Complete the following steps to enable Azure AD single sign-on in the Azure port
## Configure Oracle Access Manager for Oracle Retail Merchandising SSO
-To configure single sign-on on Oracle Access Manager for Oracle Retail Merchandising side, you need to send the downloaded Federation Metadata XML file from Azure portal to [Oracle Access Manager for Oracle Retail Merchandising support team](https://www.oracle.com/support/advanced-customer-support/products/cloud.html). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on Oracle Access Manager for Oracle Retail Merchandising side, you need to send the downloaded Federation Metadata XML file from Azure portal to [Oracle Access Manager for Oracle Retail Merchandising support team](https://www.oracle.com/support/advanced-customer-services/cloud/). They set this setting to have the SAML SSO connection set properly on both sides.
### Create Oracle Access Manager for Oracle Retail Merchandising test user
-In this section, you create a user called Britta Simon at Oracle Access Manager for Oracle Retail Merchandising. Work with [Oracle Access Manager for Oracle Retail Merchandising support team](https://www.oracle.com/support/advanced-customer-support/products/cloud.html) to add the users in the Oracle Access Manager for Oracle Retail Merchandising platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called Britta Simon at Oracle Access Manager for Oracle Retail Merchandising. Work with [Oracle Access Manager for Oracle Retail Merchandising support team](https://www.oracle.com/support/advanced-customer-services/cloud/) to add the users in the Oracle Access Manager for Oracle Retail Merchandising platform. Users must be created and activated before you use single sign-on.
## Test SSO
active-directory Oracle Idcs For Peoplesoft Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/oracle-idcs-for-peoplesoft-tutorial.md
Complete the following steps to enable Azure AD single sign-on in the Azure port
`https://<SUBDOMAIN>.oraclecloud.com/`

>[!NOTE]
- > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Oracle IDCS for PeopleSoft support team](https://www.oracle.com/support/advanced-customer-support/products/cloud.html) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Oracle IDCS for PeopleSoft support team](https://www.oracle.com/support/advanced-customer-services/cloud/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. Your Oracle IDCS for PeopleSoft application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows an example for this. The default value of **Unique User Identifier** is **user.userprincipalname** but Oracle IDCS for PeopleSoft expects this to be mapped with the user's email address. For that you can use **user.mail** attribute from the list or use the appropriate attribute value based on your organization configuration.
Complete the following steps to enable Azure AD single sign-on in the Azure port
## Configure Oracle IDCS for PeopleSoft SSO
-To configure single sign-on on Oracle IDCS for PeopleSoft side, you need to send the downloaded Federation Metadata XML file from Azure portal to [Oracle IDCS for PeopleSoft support team](https://www.oracle.com/support/advanced-customer-support/products/cloud.html). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on Oracle IDCS for PeopleSoft side, you need to send the downloaded Federation Metadata XML file from Azure portal to [Oracle IDCS for PeopleSoft support team](https://www.oracle.com/support/advanced-customer-services/cloud/). They set this setting to have the SAML SSO connection set properly on both sides.
### Create Oracle IDCS for PeopleSoft test user
-In this section, you create a user called Britta Simon at Oracle IDCS for PeopleSoft. Work with [Oracle IDCS for PeopleSoft support team](https://www.oracle.com/support/advanced-customer-support/products/cloud.html) to add the users in the Oracle IDCS for PeopleSoft platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called Britta Simon at Oracle IDCS for PeopleSoft. Work with [Oracle IDCS for PeopleSoft support team](https://www.oracle.com/support/advanced-customer-services/cloud/) to add the users in the Oracle IDCS for PeopleSoft platform. Users must be created and activated before you use single sign-on.
## Test SSO
active-directory Pymetrics Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/pymetrics-tutorial.md
Previously updated : 11/21/2022 Last updated : 03/13/2023
For more information, see [Azure built-in roles](../roles/permissions-reference.
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* pymetrics supports **SP and IDP** initiated SSO.
+* pymetrics supports **SP** initiated SSO. If you need to configure an **IDP** initiated flow, contact [pymetrics support](mailto:solutions-engineering@pymetrics.com).
* pymetrics supports **Just In Time** user provisioning.

## Add pymetrics from the gallery
Follow these steps to enable Azure AD SSO in the Azure portal.
b. In the **Reply URL** text box, type a URL using the following pattern: `https://www.pymetrics.com/saml2-sp/<CUSTOMERNAME>/<CUSTOMERNAME>/?acs`
-1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
-
- In the **Sign-on URL** text box, type a URL using the following pattern:
+ c. In the **Sign-on URL** text box, type a URL using the following pattern:
`https://www.pymetrics.com/saml2-sp/<CUSTOMERNAME>/<CUSTOMERNAME>/?sso`

> [!NOTE]
In this section, a user called Britta Simon is created in pymetrics. pymetrics s
In this section, you test your Azure AD single sign-on configuration with the following options.
-#### SP initiated:
-
-* Click on **Test this application** in Azure portal. This will redirect to pymetrics Sign-On URL where you can initiate the login flow.
-
-* Go to pymetrics Sign-On URL directly and initiate the login flow from there.
-
-#### IDP initiated:
+* Click on **Test this application** in the Azure portal. This redirects to the pymetrics Sign-on URL where you can initiate the login flow.
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the pymetrics for which you set up the SSO.
+* Go to the pymetrics Sign-on URL directly and initiate the login flow from there.
-You can also use Microsoft My Apps to test the application in any mode. When you click the pymetrics tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the pymetrics for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+* You can use Microsoft My Apps. When you click the pymetrics tile in My Apps, you're redirected to the pymetrics Sign-on URL. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
active-directory Tanium Cloud Sso Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tanium-cloud-sso-tutorial.md
Complete the following steps to enable Azure AD single sign-on in the Azure port
1. If you wish to configure the application in **SP** initiated mode, then perform the following step: In the **Sign on URL** textbox, type a URL using the following pattern:
- https://InstanceName.cloud.tanium.com
+ `https://InstanceName.cloud.tanium.com`
> [!NOTE]
> These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Tanium Cloud SSO Client support team](mailto:integrations@tanium.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
You can also use Microsoft My Apps to test the application in any mode. When you
## Next steps
-Once you configure Tanium Cloud SSO you can enforce session control, which protects exfiltration and infiltration of your organization’s sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
+Once you configure Tanium Cloud SSO you can enforce session control, which protects exfiltration and infiltration of your organization’s sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Configure Cmmc Level 2 Identification And Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-cmmc-level-2-identification-and-authentication.md
The following table provides a list of practice statement and objectives, and Az
| IA.L2-3.5.5<br><br>**Practice statement:** Prevent reuse of identifiers for a defined period.<br><br>**Objectives:**<br>Determine if:<br>[a.] a period within which identifiers can't be reused is defined; and<br>[b.] reuse of identifiers is prevented within the defined period. | All user, group, device object globally unique identifiers (GUIDs) are guaranteed unique and non-reusable for the lifetime of the Azure AD tenant.<br>[user resource type - Microsoft Graph v1.0](/graph/api/resources/user?view=graph-rest-1.0&preserve-view=true)<br>[group resource type - Microsoft Graph v1.0](/graph/api/resources/group?view=graph-rest-1.0&preserve-view=true)<br>[device resource type - Microsoft Graph v1.0](/graph/api/resources/device?view=graph-rest-1.0&preserve-view=true) |
| IA.L2-3.5.6<br><br>**Practice statement:** Disable identifiers after a defined period of inactivity.<br><br>**Objectives:**<br>Determine if:<br>[a.] a period of inactivity after which an identifier is disabled is defined; and<br>[b.] identifiers are disabled after the defined period of inactivity. | Implement account management automation with Microsoft Graph and Azure AD PowerShell. Use Microsoft Graph to monitor sign-in activity and Azure AD PowerShell to take action on accounts within the required time frame.<br><br>**Determine inactivity**<br>[Manage inactive user accounts in Azure AD](../reports-monitoring/howto-manage-inactive-user-accounts.md)<br>[Manage stale devices in Azure AD](../devices/manage-stale-devices.md)<br><br>**Remove or disable accounts**<br>[Working with users in Microsoft Graph](/graph/api/resources/user)<br>[Get a user](/graph/api/user-get?tabs=http)<br>[Update user](/graph/api/user-update?tabs=http)<br>[Delete a user](/graph/api/user-delete?tabs=http)<br><br>**Work with devices in Microsoft Graph**<br>[Get device](/graph/api/device-get?tabs=http)<br>[Update device](/graph/api/device-update?tabs=http)<br>[Delete device](/graph/api/device-delete?tabs=http)<br><br>**[Use Azure AD PowerShell](/powershell/module/azuread/)**<br>[Get-AzureADUser](/powershell/module/azuread/get-azureaduser)<br>[Set-AzureADUser](/powershell/module/azuread/set-azureaduser)<br>[Get-AzureADDevice](/powershell/module/azuread/get-azureaddevice)<br>[Set-AzureADDevice](/powershell/module/azuread/set-azureaddevice) |
| IA.L2-3.5.7<br><br>**Practice statement:**<br><br>**Objectives:** Enforce a minimum password complexity and change of characters when new passwords are created.<br>Determine if:<br>[a.] password complexity requirements are defined;<br>[b.] password change of character requirements are defined;<br>[c.] minimum password complexity requirements as defined are enforced when new passwords are created; and<br>[d.] minimum password change of character requirements as defined are enforced when new passwords are created.<br><br>IA.L2-3.5.8<br><br>**Practice statement:** Prohibit password reuse for a specified number of generations.<br><br>**Objectives:**<br>Determine if:<br>[a.] the number of generations during which a password cannot be reused is specified; and<br>[b.] reuse of passwords is prohibited during the specified number of generations. | We **strongly encourage** passwordless strategies. 
This control is only applicable to password authenticators, so removing passwords as an available authenticator renders this control not applicable.<br><br>Per NIST SP 800-63 B Section 5.1.1: Maintain a list of commonly used, expected, or compromised passwords.<br><br>With Azure AD password protection, default global banned password lists are automatically applied to all users in an Azure AD tenant. To support your business and security needs, you can define entries in a custom banned password list. When users change or reset their passwords, these banned password lists are checked to enforce the use of strong passwords.<br>For customers that require strict password character change, password reuse and complexity requirements use hybrid accounts configured with Password-Hash-Sync. This action ensures the passwords synchronized to Azure AD inherit the restrictions configured in Active Directory password policies. Further protect on-premises passwords by configuring on-premises Azure AD Password Protection for Active Directory Domain Services.<br>[NIST Special Publication 800-63 B](https://pages.nist.gov/800-63-3/sp800-63b.html)<br>[NIST Special Publication 800-53 Revision 5 (IA-5 - Control enhancement (1)](https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r5.pdf)<br>[Eliminate bad passwords using Azure AD password protection](../authentication/concept-password-ban-bad.md)<br>[What is password hash synchronization with Azure AD?](../hybrid/whatis-phs.md) |
-| IA.L2-3.5.9<br><br>**Practice statement:** Allow temporary password use for system logons with an immediate change to a permanent password.<br><br>**Objectives:**<br>Determine if:<br>[a.] an immediate change to a permanent password is required when a temporary password is used for system sign-on. | An Azure AD user initial password is a temporary single use password that once successfully used is immediately required to be changed to a permanent password. Microsoft strongly encourages the adoption of passwordless authentication methods. Users can bootstrap Passwordless authentication methods using Temporary Access Pass (TAP). TAP is a time and use limited passcode issued by an admin that satisfies strong authentication requirements. Use of passwordless authentication along with the time and use limited TAP completely eliminates the use of passwords (and their reuse).<br>[Add or delete users - Azure Active Directory](../fundamentals/add-users-azure-active-directory.md)<br>[Configure a Temporary Access Pass in Azure AD to register Passwordless authentication methods](../authentication/howto-authentication-temporary-access-pass.md)<br>[Passwordless authentication](/azure/active-directory/authentication/concept-authentication-passwordless) |
+| IA.L2-3.5.9<br><br>**Practice statement:** Allow temporary password use for system logons with an immediate change to a permanent password.<br><br>**Objectives:**<br>Determine if:<br>[a.] an immediate change to a permanent password is required when a temporary password is used for system sign-on. | An Azure AD user initial password is a temporary single use password that once successfully used is immediately required to be changed to a permanent password. Microsoft strongly encourages the adoption of passwordless authentication methods. Users can bootstrap Passwordless authentication methods using Temporary Access Pass (TAP). TAP is a time and use limited passcode issued by an admin that satisfies strong authentication requirements. Use of passwordless authentication along with the time and use limited TAP completely eliminates the use of passwords (and their reuse).<br>[Add or delete users - Azure Active Directory](../fundamentals/add-users-azure-active-directory.md)<br>[Configure a Temporary Access Pass in Azure AD to register Passwordless authentication methods](../authentication/howto-authentication-temporary-access-pass.md)<br>[Passwordless authentication](../authentication/concept-authentication-passwordless.md) |
| IA.L2-3.5.10<br><br>**Practice statement:** Store and transmit only cryptographically protected passwords.<br><br>**Objectives:**<br>Determine if:<br>[a.] passwords are cryptographically protected in storage; and<br>[b.] passwords are cryptographically protected in transit. | **Secret Encryption at Rest**:<br>In addition to disk level encryption, when at rest, secrets stored in the directory are encrypted using the Distributed Key Manager(DKM). The encryption keys are stored in Azure AD core store and in turn are encrypted with a scale unit key. The key is stored in a container that is protected with directory ACLs, for highest privileged users and specific services. The symmetric key is typically rotated every six months. Access to the environment is further protected with operational controls and physical security.<br><br>**Encryption in Transit**:<br>To assure data security, Directory Data in Azure AD is signed and encrypted while in transit between data centers within a scale unit. The data is encrypted and unencrypted by the Azure AD core store tier, which resides inside secured server hosting areas of the associated Microsoft data centers.<br><br>Customer-facing web services are secured with the Transport Layer Security (TLS) protocol.<br>For more information, [download](https://azure.microsoft.com/resources/azure-active-directory-data-security-considerations/) *Data Protection Considerations - Data Security*. On page 15, there are more details.<br>[Demystifying Password Hash Sync (microsoft.com)](https://www.microsoft.com/security/blog/2019/05/30/demystifying-password-hash-sync/)<br>[Azure Active Directory Data Security Considerations](https://aka.ms/aaddatawhitepaper) |
|IA.L2-3.5.11<br><br>**Practice statement:** Obscure feedback of authentication information.<br><br>**Objectives:**<br>Determine if:<br>[a.] authentication information is obscured during the authentication process. | By default, Azure AD obscures all authenticator feedback. |
The following table provides a list of practice statement and objectives, and Az
* [Configure Azure Active Directory for CMMC compliance](configure-azure-active-directory-for-cmmc-compliance.md)
* [Configure CMMC Level 1 controls](configure-cmmc-level-1-controls.md)
* [Configure CMMC Level 2 Access Control (AC) controls](configure-cmmc-level-2-access-control.md)
-* [Configure CMMC Level 2 additional controls](configure-cmmc-level-2-additional-controls.md)
+* [Configure CMMC Level 2 additional controls](configure-cmmc-level-2-additional-controls.md)
aks Api Server Authorized Ip Ranges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/api-server-authorized-ip-ranges.md
For more information about the API server and other cluster components, see [Kub
## Create an AKS cluster with API server authorized IP ranges enabled
-Create a cluster using the [`az aks create`][az-aks-create] and specify the *`--api-server-authorized-ip-ranges`* parameter to provide a list of authorized IP address ranges. These IP address ranges are usually address ranges used by your on-premises networks or public IPs. When you specify a CIDR range, start with the first IP address in the range. For example, *137.117.106.90/29* is a valid range, but make sure you specify the first IP address in the range, such as *137.117.106.88/29*.
+Create a cluster using the [`az aks create`][az-aks-create] command and specify the *`--api-server-authorized-ip-ranges`* parameter to provide a list of authorized public IP address ranges. When you specify a CIDR range, make sure you specify the first IP address in the range. For example, *137.117.106.88/29* starts at the first address of its range, while *137.117.106.90/29* doesn't.
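For example, the following sketch creates a cluster with a single authorized range; the resource group name, cluster name, and CIDR are placeholder values:

```azurecli
# A minimal sketch; replace the names and the CIDR range with your own values
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --api-server-authorized-ip-ranges 137.117.106.88/29
```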
> [!IMPORTANT]
> By default, your cluster uses the [Standard SKU load balancer][standard-sku-lb] which you can use to configure the outbound gateway. When you enable API server authorized IP ranges during cluster creation, the public IP for your cluster is also allowed by default in addition to the ranges you specify. If you specify *""* or no value for *`--api-server-authorized-ip-ranges`*, API server authorized IP ranges will be disabled. Note that if you're using PowerShell, use *`--api-server-authorized-ip-ranges=""`* (with equals sign) to avoid any parsing issues.
To add another IP address to the approved ranges, use the following commands.
```azurecli
CURRENT_IP=$(dig +short "myip.opendns.com" "@resolver1.opendns.com")
```
-```azurelci
+```azurecli
# Add to AKS approved list
-az aks update -g $RG -n $AKSNAME --api-server-authorized-ip-ranges $CURRENT_IP/32
+az aks update -g $RG -n $AKSNAME --api-server-authorized-ip-ranges $CURRENT_IP/24,73.140.245.0/24
```

> [!NOTE]
-> The above example appends the API server authorized IP ranges on the cluster. To disable authorized IP ranges, use `az aks update` and specify an empty range "".
+> The above example adds another IP address to the approved ranges. Note that it still includes the IP address from [Update a cluster's API server authorized IP ranges](#update-a-clusters-api-server-authorized-ip-ranges). If you don't include your existing IP address, this command will replace it with the new one instead of adding it to the authorized ranges. To disable authorized IP ranges, use `az aks update` and specify an empty range "".
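As a sketch of the disable case mentioned in the note, reusing the same placeholder variables:

```azurecli
# A minimal sketch; an empty range disables API server authorized IP ranges
az aks update -g $RG -n $AKSNAME --api-server-authorized-ip-ranges ""
```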
Another option is to use the following command on Windows systems to get the public IPv4 address, or you can follow the steps in [Find your IP address](https://support.microsoft.com/en-gb/help/4026518/windows-10-find-your-ip-address).
aks Azure Ad Integration Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-ad-integration-cli.md
For best practices on identity and resource control, see [Best practices for aut
[operator-best-practices-identity]: operator-best-practices-identity.md
[azure-ad-rbac]: azure-ad-rbac.md
[managed-aad]: managed-aad.md
-[managed-aad-migrate]: managed-aad.md#upgrading-to-aks-managed-azure-ad-integration
+[managed-aad-migrate]: managed-aad.md#upgrade-to-aks-managed-azure-ad-integration
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
location="westcentralus"
az aks create -n $clusterName -g $resourceGroup --location $location --network-plugin azure --network-plugin-mode overlay --pod-cidr 192.168.0.0/16 ```
-This will perform a rolling upgrade of nodes in **all** nodepools simultaneously to Azure CNI overlay and should be treated like a node image upgrade. During the upgrade, traffic from an Overlay pod to a CNI v1 pod will be SNATed(Source Network Address Translation)
-
## Next steps

To learn how to utilize AKS with your own Container Network Interface (CNI) plugin, see [Bring your own Container Network Interface (CNI) plugin](use-byo-cni.md).
aks Azure Csi Disk Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-disk-storage-provision.md
Last updated 01/18/2023
A persistent volume represents a piece of storage that has been provisioned for use with Kubernetes pods. A persistent volume can be used by one or many pods, and can be dynamically or statically provisioned. This article shows you how to dynamically create persistent volumes with Azure Disks for use by a single pod in an Azure Kubernetes Service (AKS) cluster.

> [!NOTE]
-> An Azure disk can only be mounted with *Access mode* type *ReadWriteOnce*, which makes it available to one node in AKS. If you need to share a persistent volume across multiple nodes, use [Azure Files][azure-files-pvc].
+> An Azure disk can only be mounted with *Access mode* type *ReadWriteOnce*, which makes it available to one pod in AKS. If you need to share a persistent volume across multiple pods, use [Azure Files][azure-files-pvc].
This article shows you how to:
Each AKS cluster includes four pre-created storage classes, two of them configur
* The *default* storage class provisions a standard SSD Azure Disk.
  * Standard storage is backed by Standard SSDs and delivers cost-effective storage while still delivering reliable performance.
* The *managed-csi-premium* storage class provisions a premium Azure Disk.
- * Premium disks are backed by SSD-based high-performance, low-latency disk. Perfect for VMs running production workload. If the AKS nodes in your cluster use premium storage, select the *managed-premium* class.
-
-If you use one of the default storage classes, you can't update the volume size after the storage class is created. To be able to update the volume size after a storage class is created, add the line `allowVolumeExpansion: true` to one of the default storage classes, or you can create your own custom storage class. It's not supported to reduce the size of a PVC (to prevent data loss). You can edit an existing storage class by using the `kubectl edit sc` command.
+ * Premium disks are backed by SSD-based high-performance, low-latency disks. They're ideal for VMs running production workloads. When you use the Azure Disks CSI driver on AKS, you can also use the `managed-csi` storage class, which is backed by Standard SSD locally redundant storage (LRS).
+
+Reducing the size of a PVC isn't supported (this prevents data loss). You can edit an existing storage class by using the `kubectl edit sc` command, or you can create your own custom storage class.
For example, if you want to use a disk of size 4 TiB, you must create a storage class that defines `cachingmode: None` because [disk caching isn't supported for disks 4 TiB and larger][disk-host-cache-setting].
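As an illustration, a custom storage class along those lines might look like the following sketch; the class name is a placeholder, and the parameters assume the Azure Disks CSI driver:

```azurecli
# A minimal sketch of a custom storage class that disables disk caching
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-csi-nocache
provisioner: disk.csi.azure.com
parameters:
  skuName: Premium_LRS
  cachingmode: None
allowVolumeExpansion: true
EOF
```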
aks Cluster Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-autoscaler.md
To adjust to changing application demands, such as between the workday and eveni
![The cluster autoscaler and horizontal pod autoscaler often work together to support the required application demands](media/autoscaler/cluster-autoscaler.png)
-Both the horizontal pod autoscaler and cluster autoscaler can also decrease the number of pods and nodes as needed. The cluster autoscaler decreases the number of nodes when there has been unused capacity for a period of time. Pods on a node to be removed by the cluster autoscaler are safely scheduled elsewhere in the cluster. For more information about how scaling down works, see [How does scale-down work?]https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-down-work).
+Both the horizontal pod autoscaler and cluster autoscaler can also decrease the number of pods and nodes as needed. The cluster autoscaler decreases the number of nodes when there has been unused capacity for a period of time. Pods on a node to be removed by the cluster autoscaler are safely scheduled elsewhere in the cluster. For more information about how scaling down works, see [How does scale-down work?](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-down-work).
The cluster autoscaler may be unable to scale down if pods can't move, such as in the following situations:
aks Cluster Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-extensions.md
Title: Cluster extensions for Azure Kubernetes Service (AKS) description: Learn how to deploy and manage the lifecycle of extensions on Azure Kubernetes Service (AKS) Previously updated : 09/29/2022 Last updated : 03/14/2023
In this article, you'll learn about:
> * Available cluster extensions on AKS.
> * How to view, list, update, and delete extension instances.
-A conceptual overview of this feature is available in [Cluster extensions - Azure Arc-enabled Kubernetes][arc-k8s-extensions] article.
+For a conceptual overview of cluster extensions, see [Cluster extensions - Azure Arc-enabled Kubernetes][arc-k8s-extensions].
## Prerequisites
A conceptual overview of this feature is available in [Cluster extensions - Azur
### Set up the Azure CLI extension for cluster extensions

> [!NOTE]
-> The minimum supported version for the `k8s-extension` Azure CLI extension is `1.0.0`. If you are unsure what version you have installed, run `az extension show --name k8s-extension` and look for the `version` field.
+> The minimum supported version for the `k8s-extension` Azure CLI extension is `1.0.0`. If you are unsure what version you have installed, run `az extension show --name k8s-extension` and look for the `version` field. We recommend using the latest version.
You'll also need the `k8s-extension` Azure CLI extension. Install the extension by running the following command:
az extension update --name k8s-extension
| | -- |
| [Dapr][dapr-overview] | Dapr is a portable, event-driven runtime that makes it easy for any developer to build resilient, stateless and stateful applications that run on cloud and edge. |
| [Azure ML][azure-ml-overview] | Use Azure Kubernetes Service clusters to train, inference, and manage machine learning models in Azure Machine Learning. |
-| [Flux (GitOps)][gitops-overview] | Use GitOps with Flux to manage cluster configuration and application deployment. |
+| [Flux (GitOps)][gitops-overview] | Use GitOps with Flux to manage cluster configuration and application deployment. See also [supported versions of Flux (GitOps)][gitops-support] and [Tutorial: Deploy applications using GitOps with Flux v2][gitops-tutorial].|
## Supported regions and Kubernetes versions
az k8s-extension create --name aml-compute --extension-type Microsoft.AzureML.Ku
| `--configuration-protected-settings-file` | Path to the JSON file having key value pairs to be used for passing in sensitive settings to the extension. If this parameter is used in the command, then `--configuration-protected-settings` can't be used in the same command. |
| `--scope` | Scope of installation for the extension - `cluster` or `namespace` |
| `--release-namespace` | This parameter indicates the namespace within which the release is to be created. This parameter is only relevant if `scope` parameter is set to `cluster`. |
-| `--release-train` | Extension authors can publish versions in different release trains such as `Stable`, `Preview`, etc. If this parameter isn't set explicitly, `Stable` is used as default. This parameter can't be used when `autoUpgradeMinorVersion` parameter is set to `false`. |
+| `--release-train` | Extension authors can publish versions in different release trains such as `Stable`, `Preview`, etc. If this parameter isn't set explicitly, `Stable` is used as default. This parameter can't be used when `--auto-upgrade-minor-version` parameter is set to `false`. |
| `--target-namespace` | This parameter indicates the namespace within which the release will be created. Permission of the system account created for this extension instance will be restricted to this namespace. This parameter is only relevant if the `scope` parameter is set to `namespace`. |

### Show details of an extension instance
az k8s-extension delete --name azureml --cluster-name <clusterName> --resource-g
[azure-ml-overview]: ../machine-learning/how-to-attach-kubernetes-anywhere.md
[dapr-overview]: ./dapr.md
[gitops-overview]: ../azure-arc/kubernetes/conceptual-gitops-flux2.md
+[gitops-support]: ../azure-arc/kubernetes/extensions-release.md#flux-gitops
+[gitops-tutorial]: ../azure-arc/kubernetes/tutorial-use-gitops-flux2.md
[k8s-extension-reference]: /cli/azure/k8s-extension
[use-managed-identity]: ./use-managed-identity.md
[workload-identity-overview]: workload-identity-overview.md
aks Concepts Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-vulnerability-management.md
See the overview about [Upgrading Azure Kubernetes Service clusters and node poo
<!-- LINKS - internal -->
[upgrade-aks-clusters-nodes]: upgrade.md
-[microsoft-azure-fedramp-high]: /azure/azure-government/compliance/azure-services-in-fedramp-auditscope#azure-government-services-by-audit-scope
+[microsoft-azure-fedramp-high]: ../azure-government/compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope
[apply-security-kernel-updates-to-aks-nodes]: node-updates-kured.md
[aks-node-image-upgrade]: node-image-upgrade.md
[upgrade-node-pool-in-aks]: use-multiple-node-pools.md#upgrade-a-node-pool
See the overview about [Upgrading Azure Kubernetes Service clusters and node poo
[mrc-create-report]: https://aka.ms/opensource/security/create-report
[msrc-pgp-key-page]: https://aka.ms/opensource/security/pgpkey
[microsoft-security-response-center]: https://aka.ms/opensource/security/msrc
-[azure-bounty-program-overview]: https://www.microsoft.com/msrc/bounty-microsoft-azure
+[azure-bounty-program-overview]: https://www.microsoft.com/msrc/bounty-microsoft-azure
aks Keda Deploy Add On Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-deploy-add-on-arm.md
You can troubleshoot KEDA add-on problems in [this article][keda-troubleshoot].
[az-feature-show]: /cli/azure/feature#az-feature-show

<!-- LINKS - external -->
-[kubectl]: https://kubernetes.io/docs/user-guide/kubectl
+[kubectl]: https://kubernetes.io/docs/reference/kubectl/
[keda]: https://keda.sh/
[keda-scalers]: https://keda.sh/docs/scalers/
[keda-sample]: https://github.com/kedacore/sample-dotnet-worker-servicebus-queue
aks Kubernetes Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubernetes-portal.md
This article showed you how to access Kubernetes resources for your AKS cluster.
[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
[deployments]: concepts-clusters-workloads.md#deployments-and-yaml-manifests
[aks-managed-aad]: managed-aad.md
-[cli-aad-upgrade]: managed-aad.md#upgrading-to-aks-managed-azure-ad-integration
+[cli-aad-upgrade]: managed-aad.md#upgrade-to-aks-managed-azure-ad-integration
[enable-monitor]: ../azure-monitor/containers/container-insights-enable-existing-clusters.md
aks Managed Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/managed-aad.md
Title: Use Azure AD in Azure Kubernetes Service description: Learn how to use Azure AD in Azure Kubernetes Service (AKS) Last updated : 03/02/2023 Previously updated : 01/23/2023
Learn more about the Azure AD integration flow in the [Azure AD documentation](c
## Limitations

* AKS-managed Azure AD integration can't be disabled.
-* Changing an AKS-managed Azure AD integrated cluster to legacy Azure AD is not supported.
+* Changing an AKS-managed Azure AD integrated cluster to legacy Azure AD isn't supported.
* Clusters without Kubernetes RBAC enabled aren't supported with AKS-managed Azure AD integration.
-## Prerequisites
-
-Before getting started, make sure you have the following prerequisites:
-
-* Azure CLI version 2.29.0 or later.
-* `kubectl`, with a minimum version of [1.18.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1181) or [`kubelogin`](https://github.com/Azure/kubelogin).
-* If you're using [helm](https://github.com/helm/helm), you need a minimum version of helm 3.3.
-
-> [!IMPORTANT]
-> You must use `kubectl` with a minimum version of 1.18.1 or `kubelogin`. The difference between the minor versions of Kubernetes and `kubectl` shouldn't be more than 1 version. You'll experience authentication issues if you don't use the correct version.
-
-Use the following commands to install kubectl and kubelogin:
-
-```azurecli-interactive
-sudo az aks install-cli
-kubectl version --client
-kubelogin --version
-```
-
-Use [these instructions](https://kubernetes.io/docs/tasks/tools/install-kubectl/) for other operating systems.
- ## Before you begin
-You need an Azure AD group for your cluster. This group will be registered as an admin group on the cluster to grant cluster admin permissions. You can use an existing Azure AD group or create a new one. Make sure to record the object ID of your Azure AD group.
-
-```azurecli-interactive
-# List existing groups in the directory
-az ad group list --filter "displayname eq '<group-name>'" -o table
-```
-
-Use the following command to create a new Azure AD group for your cluster administrators:
-
-```azurecli-interactive
-# Create an Azure AD group
-az ad group create --display-name myAKSAdminGroup --mail-nickname myAKSAdminGroup
-```
+* Make sure Azure CLI version 2.29.0 or later is installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+* You need `kubectl`, with a minimum version of [1.18.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1181) or [`kubelogin`](https://github.com/Azure/kubelogin). The difference between the minor versions of Kubernetes and `kubectl` shouldn't be more than 1 version. You'll experience authentication issues if you don't use the correct version.
+* If you're using [helm](https://github.com/helm/helm), you need a minimum version of helm 3.3.
+* This article requires that you have an Azure AD group for your cluster. This group will be registered as an admin group on the cluster to grant cluster admin permissions. If you don't have an existing Azure AD group, you can create one using the [`az ad group create`](/cli/azure/ad/group#az_ad_group_create) command.
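  For example, a minimal sketch of creating such a group; the display name and mail nickname are placeholders:

  ```azurecli-interactive
  az ad group create --display-name myAKSAdminGroup --mail-nickname myAKSAdminGroup
  ```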
## Create an AKS cluster with Azure AD enabled
-1. Create an Azure resource group.
+1. Create an Azure resource group using the [`az group create`][az-group-create] command.
-```azurecli-interactive
-# Create an Azure resource group
-az group create --name myResourceGroup --location centralus
-```
-
-1. Create an AKS cluster and enable administration access for your Azure AD group.
+ ```azurecli-interactive
+ az group create --name myResourceGroup --location centralus
+ ```
-```azurecli-interactive
-# Create an AKS-managed Azure AD cluster
-az aks create -g myResourceGroup -n myManagedCluster --enable-aad --aad-admin-group-object-ids <id> [--aad-tenant-id <id>]
-```
+2. Create an AKS cluster and enable administration access for your Azure AD group using the [`az aks create`][az-aks-create] command.
-A successful creation of an AKS-managed Azure AD cluster has the following section in the response body:
+ ```azurecli-interactive
+ az aks create -g myResourceGroup -n myManagedCluster --enable-aad --aad-admin-group-object-ids <id> [--aad-tenant-id <id>]
+ ```
-```output
-"AADProfile": {
- "adminGroupObjectIds": [
- "5d24****-****-****-****-****afa27aed"
- ],
- "clientAppId": null,
- "managed": true,
- "serverAppId": null,
- "serverAppSecret": null,
- "tenantId": "72f9****-****-****-****-****d011db47"
- }
-```
+ A successful creation of an AKS-managed Azure AD cluster has the following section in the response body:
+
+ ```output
+ "AADProfile": {
+ "adminGroupObjectIds": [
+ "5d24****-****-****-****-****afa27aed"
+ ],
+ "clientAppId": null,
+ "managed": true,
+ "serverAppId": null,
+ "serverAppSecret": null,
+ "tenantId": "72f9****-****-****-****-****d011db47"
+ }
+ ```
## Access an Azure AD enabled cluster
-Before you access the cluster using an Azure AD defined group, you'll need the [Azure Kubernetes Service Cluster User](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-cluster-user-role) built-in role.
+Before you access the cluster using an Azure AD defined group, you need the [Azure Kubernetes Service Cluster User](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-cluster-user-role) built-in role.
-1. Get the user credentials to access the cluster.
+1. Get the user credentials to access the cluster using the [`az aks get-credentials`][az-aks-get-credentials] command.
-```azurecli-interactive
- az aks get-credentials --resource-group myResourceGroup --name myManagedCluster
-```
-
-1. Follow the instructions to sign in.
+ ```azurecli-interactive
+ az aks get-credentials --resource-group myResourceGroup --name myManagedCluster
+ ```
-1. Use the kubectl `get nodes` command to view nodes in the cluster.
+2. Follow the instructions to sign in.
-```azurecli-interactive
-kubectl get nodes
+3. Use the `kubectl get nodes` command to view nodes in the cluster.
-NAME STATUS ROLES AGE VERSION
-aks-nodepool1-15306047-0 Ready agent 102m v1.15.10
-aks-nodepool1-15306047-1 Ready agent 102m v1.15.10
-aks-nodepool1-15306047-2 Ready agent 102m v1.15.10
-```
+ ```azurecli-interactive
+ kubectl get nodes
+ ```
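
   The output lists the agent nodes. For example (node names, ages, and versions are illustrative):

   ```output
   NAME                       STATUS   ROLES   AGE    VERSION
   aks-nodepool1-15306047-0   Ready    agent   102m   v1.15.10
   aks-nodepool1-15306047-1   Ready    agent   102m   v1.15.10
   aks-nodepool1-15306047-2   Ready    agent   102m   v1.15.10
   ```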
-1. Configure [Azure role-based access control (Azure RBAC)](./azure-ad-rbac.md) to configure other security groups for your clusters.
+4. Set up [Azure role-based access control (Azure RBAC)](./azure-ad-rbac.md) to configure other security groups for your clusters.
## Troubleshooting access issues with Azure AD > [!IMPORTANT] > The steps described in this section bypass the normal Azure AD group authentication. Use them only in an emergency.
-If you're permanently blocked by not having access to a valid Azure AD group with access to your cluster, you can still obtain the admin credentials to access the cluster directly.
-
-To do these steps, you need to have access to the [Azure Kubernetes Service Cluster Admin](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-cluster-admin-role) built-in role.
+If you're permanently blocked by not having access to a valid Azure AD group with access to your cluster, you can still obtain the admin credentials to access the cluster directly. You need to have access to the [Azure Kubernetes Service Cluster Admin](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-cluster-admin-role) built-in role.
```azurecli-interactive az aks get-credentials --resource-group myResourceGroup --name myManagedCluster --admin
az aks get-credentials --resource-group myResourceGroup --name myManagedCluster
## Enable AKS-managed Azure AD integration on your existing cluster
-You can enable AKS-managed Azure AD integration on your existing Kubernetes RBAC enabled cluster. Make sure to set your admin group to keep access on your cluster.
+Enable AKS-managed Azure AD integration on your existing Kubernetes RBAC enabled cluster using the [`az aks update`][az-aks-update] command. Make sure to set your admin group to keep access on your cluster.
```azurecli-interactive az aks update -g MyResourceGroup -n MyManagedCluster --enable-aad --aad-admin-group-object-ids <id-1> [--aad-tenant-id <id>]
A successful activation of an AKS-managed Azure AD cluster has the following sec
} ```
-Download user credentials again to access your cluster by following the steps [here][access-cluster].
+Download user credentials again to access your cluster by following the steps in [access an Azure AD enabled cluster][access-cluster].
-## Upgrading to AKS-managed Azure AD integration
+## Upgrade to AKS-managed Azure AD integration
-If your cluster uses legacy Azure AD integration, you can upgrade to AKS-managed Azure AD integration by running the following command:
+If your cluster uses legacy Azure AD integration, you can upgrade to AKS-managed Azure AD integration with no downtime using the [`az aks update`][az-aks-update] command.
```azurecli-interactive az aks update -g myResourceGroup -n myManagedCluster --enable-aad --aad-admin-group-object-ids <id> [--aad-tenant-id <id>]
A successful migration of an AKS-managed Azure AD cluster has the following sect
} ```
-In order to access the cluster, follow the steps [here][access-cluster] to update kubeconfig.
+In order to access the cluster, follow the steps in [access an Azure AD enabled cluster][access-cluster] to update kubeconfig.
## Non-interactive sign in with kubelogin
When you deploy an AKS cluster, local accounts are enabled by default. Even when
### Create a new cluster without local accounts
-To create a new AKS cluster without any local accounts, use the [`az aks create`][az-aks-create] command with the `disable-local-accounts` flag.
+Create a new AKS cluster without any local accounts using the [`az aks create`][az-aks-create] command with the `disable-local-accounts` flag.
```azurecli-interactive az aks create -g <resource-group> -n <cluster-name> --enable-aad --aad-admin-group-object-ids <aad-group-id> --disable-local-accounts
Operation failed with status: 'Bad Request'. Details: Getting static credential
### Disable local accounts on an existing cluster
-To disable local accounts on an existing Azure AD integration enabled AKS cluster, use the [`az aks update`][az-aks-update] command with the `disable-local-accounts` parameter.
+Disable local accounts on an existing Azure AD integration enabled AKS cluster using the [`az aks update`][az-aks-update] command with the `disable-local-accounts` parameter.
```azurecli-interactive az aks update -g <resource-group> -n <cluster-name> --disable-local-accounts
Operation failed with status: 'Bad Request'. Details: Getting static credential
### Re-enable local accounts on an existing cluster
-AKS supports enabling a disabled local account on an existing cluster with the `enable-local-accounts` parameter.
+AKS supports enabling a disabled local account on an existing cluster using the [`az aks update`][az-aks-update] command with the `enable-local-accounts` parameter.
```azurecli-interactive az aks update -g <resource-group> -n <cluster-name> --enable-local-accounts
When integrating Azure AD with your AKS cluster, you can also use [Conditional A
> [!NOTE] > Azure AD Conditional Access is an Azure AD Premium capability.
-Complete the following steps to create an example Conditional Access policy to use with AKS:
+Create an example Conditional Access policy to use with AKS:
-1. In the Azure portal, navigate to the **Azure Active Directory** page.
-2. From the left-hand pane, select **Enterprise applications**.
-3. On the **Enterprise applications** page, from the left-hand pane select **Conditional Access**.
-4. On the **Conditional Access** page, from the left-hand pane select **Policies** and then select **New policy**.
+1. In the Azure portal, go to the **Azure Active Directory** page and select **Enterprise applications**.
+2. Select **Conditional Access** > **Policies** > **New policy**.
:::image type="content" source="./media/managed-aad/conditional-access-new-policy.png" alt-text="Adding a Conditional Access policy":::
-5. Enter a name for the policy, for example **aks-policy**.
-6. Under **Assignments** select **Users and groups**. Choose your users and groups you want to apply the policy to. In this example, choose the same Azure AD group that has administrator access to your cluster.
+3. Enter a name for the policy, for example **aks-policy**.
+4. Under **Assignments**, select **Users and groups**. Choose the users and groups you want to apply the policy to. In this example, choose the same Azure AD group that has administrator access to your cluster.
:::image type="content" source="./media/managed-aad/conditional-access-users-groups.png" alt-text="Selecting users or groups to apply the Conditional Access policy":::
-7. Under **Cloud apps or actions > Include**, select **Select apps**. Search for **Azure Kubernetes Service** and then select **Azure Kubernetes Service AAD Server**.
+5. Under **Cloud apps or actions > Include**, select **Select apps**. Search for **Azure Kubernetes Service** and select **Azure Kubernetes Service AAD Server**.
:::image type="content" source="./media/managed-aad/conditional-access-apps.png" alt-text="Selecting Azure Kubernetes Service AD Server for applying the Conditional Access policy":::
-8. Under **Access controls > Grant**, select **Grant access**, **Require device to be marked as compliant**, and select **Select**.
+6. Under **Access controls > Grant**, select **Grant access**, **Require device to be marked as compliant**, and **Require all the selected controls**.
:::image type="content" source="./media/managed-aad/conditional-access-grant-compliant.png" alt-text="Selecting to only allow compliant devices for the Conditional Access policy":::
-9. Confirm your settings and set **Enable policy** to **On**.
+7. Confirm your settings, set **Enable policy** to **On**, and then select **Create**.
:::image type="content" source="./media/managed-aad/conditional-access-enable-policy.png" alt-text="Enabling the Conditional Access policy":::
-10. Select **Create** to create and enable your policy.
-After creating the Conditional Access policy, perform the following steps to verify it has been successfully listed.
+After creating the Conditional Access policy, verify it has been successfully listed:
-11. To get the user credentials to access the cluster, run the following command:
+1. Get the user credentials to access the cluster using the [`az aks get-credentials`][az-aks-get-credentials] command.
```azurecli-interactive az aks get-credentials --resource-group myResourceGroup --name myManagedCluster ```
-12. Follow the instructions to sign in.
+2. Follow the instructions to sign in.
-13. View nodes in the cluster with the `kubectl get nodes` command:
+3. View the nodes in the cluster using the `kubectl get nodes` command.
```azurecli-interactive kubectl get nodes ```
-14. In the Azure portal, navigate to **Azure Active Directory**. From the left-hand pane select **Enterprise applications**, and then under **Activity** select **Sign-ins**.
+4. In the Azure portal, navigate to **Azure Active Directory** and select **Enterprise applications** > **Activity** > **Sign-ins**.
-15. Notice in the top of the results an event with a status of **Failed**, and under the **Conditional Access** column, a status of **Success**. Select the event and then select **Conditional Access** tab. Notice your Conditional Access policy is listed.
+5. Under the **Conditional Access** column, you should see a status of **Success**. Select the event, and then select the **Conditional Access** tab. Your Conditional Access policy is listed.
:::image type="content" source="./media/managed-aad/conditional-access-sign-in-activity.png" alt-text="Screenshot that shows failed sign-in entry due to Conditional Access policy."::: ## Configure just-in-time cluster access with Azure AD and AKS
Another option for cluster access control is to use Privileged Identity Manageme
>[!NOTE] > PIM is an Azure AD Premium capability requiring a Premium P2 SKU. For more on Azure AD SKUs, see the [pricing guide][aad-pricing].
-To integrate just-in-time access requests with an AKS cluster using AKS-managed Azure AD integration, complete the following steps:
+Integrate just-in-time access requests with an AKS cluster using AKS-managed Azure AD integration:
-1. In the Azure portal, navigate to **Azure Active Directory**.
-1. Select **Properties**. Scroll down to the **Tenant ID** field. Your tenant ID will be in the box. Note this value as it's referenced later in a step as `<tenant-id>`.
+1. In the Azure portal, go to **Azure Active Directory** and select **Properties**.
+2. Note the value listed under **Tenant ID**. It will be referenced in a later step as `<tenant-id>`.
:::image type="content" source="./media/managed-aad/jit-get-tenant-id.png" alt-text="In a web browser, the Azure portal screen for Azure Active Directory is shown with the tenant's ID highlighted.":::
-2. From the left-hand pane, under **Manage**, select **Groups** and then select **New group**.
+3. Select **Groups** > **New group**.
:::image type="content" source="./media/managed-aad/jit-create-new-group.png" alt-text="Shows the Azure portal Active Directory groups screen with the 'New Group' option highlighted.":::
-3. Verify the group type **Security** is selected and specify a group name, such as **myJITGroup**. Under the option **Azure AD roles can be assigned to this group (Preview)**, select **Yes** and then select **Create**.
+4. Verify the group type **Security** is selected and specify a group name, such as **myJITGroup**. Under the option **Azure AD roles can be assigned to this group (Preview)**, select **Yes** and then select **Create**.
:::image type="content" source="./media/managed-aad/jit-new-group-created.png" alt-text="Shows the Azure portal's new group creation screen.":::
-4. On the **Groups** page, select the group you just created and note the Object ID. This will be referenced in a later step as `<object-id>`.
+5. On the **Groups** page, select the group you just created and note the Object ID. It will be referenced in a later step as `<object-id>`.
:::image type="content" source="./media/managed-aad/jit-get-object-id.png" alt-text="Shows the Azure portal screen for the just-created group, highlighting the Object Id":::
-5. Create the AKS cluster with AKS-managed Azure AD integration using the `az aks create` command with the `--aad-admin-group-objects-ids` and `--aad-tenant-id parameters` and include the values noted in the steps earlier.
+6. Create the AKS cluster with AKS-managed Azure AD integration using the [`az aks create`][az-aks-create] command with the `--aad-admin-group-object-ids` and `--aad-tenant-id` parameters, and include the values noted in the earlier steps.
+ ```azurecli-interactive az aks create -g myResourceGroup -n myManagedCluster --enable-aad --aad-admin-group-object-ids <object-id> --aad-tenant-id <tenant-id> ```
-6. In the Azure portal, select **Activity** from the left-hand pane. Select **Privileged Access (Preview)** and then select **Enable Privileged Access**.
+
+7. In the Azure portal, select **Activity** > **Privileged Access (Preview)** > **Enable Privileged Access**.
:::image type="content" source="./media/managed-aad/jit-enabling-priv-access.png" alt-text="The Azure portal's Privileged access (Preview) page is shown, with 'Enable privileged access' highlighted":::
-7. To grant access, select **Add assignments**.
+8. To grant access, select **Add assignments**.
:::image type="content" source="./media/managed-aad/jit-add-active-assignment.png" alt-text="The Azure portal's Privileged access (Preview) screen after enabling is shown. The option to 'Add assignments' is highlighted.":::
-8. From the **Select role** drop-down list, select the users and groups you want to grant cluster access. These assignments can be modified at any time by a group administrator. Then select **Next**.
+9. From the **Select role** drop-down list, select the users and groups you want to grant cluster access. These assignments can be modified at any time by a group administrator. Then select **Next**.
:::image type="content" source="./media/managed-aad/jit-adding-assignment.png" alt-text="The Azure portal's Add assignments Membership screen is shown, with a sample user selected to be added as a member. The option 'Next' is highlighted.":::
-9. Under **Assignment type**, select **Active** and then specify the desired duration. Provide a justification and then select **Assign**. For more information about assignment types, see [Assign eligibility for a privileged access group (preview) in Privileged Identity Management][aad-assignments].
+10. Under **Assignment type**, select **Active** and then specify the desired duration. Provide a justification and then select **Assign**. For more information about assignment types, see [Assign eligibility for a privileged access group (preview) in Privileged Identity Management][aad-assignments].
:::image type="content" source="./media/managed-aad/jit-set-active-assignment-details.png" alt-text="The Azure portal's Add assignments Setting screen is shown. An assignment type of 'Active' is selected and a sample justification has been given. The option 'Assign' is highlighted.":::
-Once the assignments have been made, verify just-in-time access is working by accessing the cluster. For example:
+Once the assignments have been made, verify just-in-time access is working by accessing the cluster:
-```azurecli-interactive
- az aks get-credentials --resource-group myResourceGroup --name myManagedCluster
-```
+1. Get the user credentials to access the cluster using the [`az aks get-credentials`][az-aks-get-credentials] command.
-Follow the steps to sign in.
+ ```azurecli-interactive
+ az aks get-credentials --resource-group myResourceGroup --name myManagedCluster
+ ```
-Use the `kubectl get nodes` command to view nodes in the cluster:
+2. Follow the steps to sign in.
-```azurecli-interactive
-kubectl get nodes
-```
+3. Use the `kubectl get nodes` command to view the nodes in the cluster.
+
+ ```azurecli-interactive
+ kubectl get nodes
+ ```
-Note the authentication requirement and follow the steps to authenticate. If successful, you should see an output similar to the following output:
+4. Note the authentication requirement and follow the steps to authenticate. If successful, you should see an output similar to the following example output:
-```output
-To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code AAAAAAAAA to authenticate.
-NAME STATUS ROLES AGE VERSION
-aks-nodepool1-61156405-vmss000000 Ready agent 6m36s v1.18.14
-aks-nodepool1-61156405-vmss000001 Ready agent 6m42s v1.18.14
-aks-nodepool1-61156405-vmss000002 Ready agent 6m33s v1.18.14
-```
+ ```output
+ To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code AAAAAAAAA to authenticate.
+ NAME STATUS ROLES AGE VERSION
+ aks-nodepool1-61156405-vmss000000 Ready agent 6m36s v1.18.14
+ aks-nodepool1-61156405-vmss000001 Ready agent 6m42s v1.18.14
+ aks-nodepool1-61156405-vmss000002 Ready agent 6m33s v1.18.14
+ ```
-### Apply Just-in-Time access at the namespace level
+### Apply just-in-time access at the namespace level
1. Integrate your AKS cluster with [Azure RBAC](manage-azure-rbac.md).
-2. Associate the group you want to integrate with Just-in-Time access with a namespace in the cluster through role assignment.
+2. Associate the group you want to integrate with just-in-time access with a namespace in the cluster using the [`az role assignment create`][az-role-assignment-create] command.
-```azurecli-interactive
-az role assignment create --role "Azure Kubernetes Service RBAC Reader" --assignee <AAD-ENTITY-ID> --scope $AKS_ID/namespaces/<namespace-name>
-```
+ ```azurecli-interactive
+ az role assignment create --role "Azure Kubernetes Service RBAC Reader" --assignee <AAD-ENTITY-ID> --scope $AKS_ID/namespaces/<namespace-name>
+ ```
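
   Here, `$AKS_ID` is assumed to hold the cluster resource ID. A minimal sketch for capturing it (cluster and group names reused from the earlier examples):

   ```azurecli-interactive
   AKS_ID=$(az aks show --resource-group myResourceGroup --name myManagedCluster --query id -o tsv)
   ```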
-1. Associate the group you configured at the namespace level with PIM to complete the configuration.
+3. Associate the group you configured at the namespace level with PIM to complete the configuration.
### Troubleshooting
-If `kubectl get nodes` returns an error similar to the following error:
+If `kubectl get nodes` returns an error similar to the following:
```output Error from server (Forbidden): nodes is forbidden: User "aaaa11111-11aa-aa11-a1a1-111111aaaaa" cannot list resource "nodes" in API group "" at the cluster scope
Make sure the admin of the security group has given your account an *Active* ass
* Use [Azure Resource Manager (ARM) templates][aks-arm-template] to create AKS-managed Azure AD enabled clusters. <!-- LINKS - external -->
-[kubernetes-webhook]:https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication
-[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
[aks-arm-template]: /azure/templates/microsoft.containerservice/managedclusters [aad-pricing]: https://azure.microsoft.com/pricing/details/active-directory/
Make sure the admin of the security group has given your account an *Active* ass
[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials [az-group-create]: /cli/azure/group#az_group_create [open-id-connect]:../active-directory/develop/v2-protocols-oidc.md
-[az-ad-user-show]: /cli/azure/ad/user#az_ad_user_show
-[rbac-authorization]: concepts-identity.md#role-based-access-controls-rbac
-[operator-best-practices-identity]: operator-best-practices-identity.md
-[azure-ad-rbac]: azure-ad-rbac.md
-[azure-ad-cli]: azure-ad-integration-cli.md
[access-cluster]: #access-an-azure-ad-enabled-cluster
-[aad-migrate]: #upgrading-to-aks-managed-azure-ad-integration
[aad-assignments]: ../active-directory/privileged-identity-management/groups-assign-member-owner.md#assign-an-owner-or-member-of-a-group
-[az-feature-register]: /cli/azure/feature#az_feature_register
-[az-feature-list]: /cli/azure/feature#az_feature_list
-[az-provider-register]: /cli/azure/provider#az_provider_register
[az-aks-update]: /cli/azure/aks#az_aks_update
+[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create
aks Start Stop Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/start-stop-cluster.md
Title: Start and Stop an Azure Kubernetes Service (AKS)
-description: Learn how to stop or start an Azure Kubernetes Service (AKS) cluster.
+ Title: Stop and start an Azure Kubernetes Service (AKS) cluster
+description: Learn how to stop and start an Azure Kubernetes Service (AKS) cluster.
Previously updated : 08/09/2021 Last updated : 03/14/2023
-# Stop and Start an Azure Kubernetes Service (AKS) cluster
+# Stop and start an Azure Kubernetes Service (AKS) cluster
-Your AKS workloads may not need to run continuously, for example a development cluster that is used only during business hours. This leads to times where your Azure Kubernetes Service (AKS) cluster might be idle, running no more than the system components. You can reduce the cluster footprint by [scaling all the `User` node pools to 0](scale-cluster.md#scale-user-node-pools-to-0), but your [`System` pool](use-system-pools.md) is still required to run the system components while the cluster is running.
-To optimize your costs further during these periods, you can completely turn off (stop) your cluster. This action will stop your control plane and agent nodes altogether, allowing you to save on all the compute costs, while maintaining all your objects (except standalone pods) and cluster state stored for when you start it again. You can then pick up right where you left of after a weekend or to have your cluster running only while you run your batch jobs.
+You may not need to continuously run your Azure Kubernetes Service (AKS) workloads. For example, you may have a development cluster that you only use during business hours. This means there are times where your cluster might be idle, running nothing more than the system components. You can reduce the cluster footprint by [scaling all `User` node pools to 0](scale-cluster.md#scale-user-node-pools-to-0), but your [`System` pool](use-system-pools.md) is still required to run the system components while the cluster is running.
+
+To better optimize your costs during these periods, you can turn off, or stop, your cluster. This action stops your control plane and agent nodes, allowing you to save on all the compute costs, while maintaining all objects except standalone pods. The cluster state is stored for when you start it again, allowing you to pick up where you left off.
## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
+This article assumes you have an existing AKS cluster. If you need an AKS cluster, you can create one using [Azure CLI][aks-quickstart-cli], [Azure PowerShell][aks-quickstart-powershell], or the [Azure portal][aks-quickstart-portal].
-### Limitations
+### About the cluster stop/start feature
-When using the cluster start/stop feature, the following restrictions apply:
+When using the cluster stop/start feature, the following conditions apply:
-- This feature is only supported for Virtual Machine Scale Sets backed clusters.
-- The cluster state of a stopped AKS cluster is preserved for up to 12 months. If your cluster is stopped for more than 12 months, the cluster state cannot be recovered. For more information, see the [AKS Support Policies](support-policies.md).
-- You can only start or delete a stopped AKS cluster. To perform any operation like scale or upgrade, start your cluster first.
-- The customer provisioned PrivateEndpoints linked to private cluster need to be deleted and recreated again when you start a stopped AKS cluster.
+- This feature is only supported for Virtual Machine Scale Set backed clusters.
+- The cluster state of a stopped AKS cluster is preserved for up to 12 months. If your cluster is stopped for more than 12 months, you can't recover the state. For more information, see the [AKS support policies](support-policies.md).
+- You can only start or delete a stopped AKS cluster. To perform other operations, like scaling or upgrading, you need to start your cluster first.
+- If you provisioned PrivateEndpoints linked to private clusters, they need to be deleted and recreated again when starting a stopped AKS cluster.
- Because the stop process drains all nodes, any standalone pods (i.e. pods not managed by a Deployment, StatefulSet, DaemonSet, Job, etc.) will be deleted.
+- When you start your cluster back up, the following behavior is expected:
+ - The IP address of your API server may change.
+ - If you're using cluster autoscaler, when you start your cluster, your current node count may not be between the min and max range values you set. The cluster starts with the number of nodes it needs to run its workloads, which isn't impacted by your autoscaler settings. When your cluster performs scaling operations, the min and max values will impact your current node count, and your cluster will eventually enter and remain in that desired range until you stop your cluster.
-## Stop an AKS Cluster
+## Stop an AKS cluster
### [Azure CLI](#tab/azure-cli)
-You can use the `az aks stop` command to stop a running AKS cluster's nodes and control plane. The following example stops a cluster named *myAKSCluster*:
+1. Use the [`az aks stop`][az-aks-stop] command to stop a running AKS cluster, including the nodes and control plane. The following example stops a cluster named *myAKSCluster*:
+
+ ```azurecli-interactive
+ az aks stop --name myAKSCluster --resource-group myResourceGroup
+ ```
+
+2. Use the [`az aks show`][az-aks-show] command to verify your cluster has stopped, confirming the `powerState` shows as `Stopped`.
-```azurecli-interactive
-az aks stop --name myAKSCluster --resource-group myResourceGroup
-```
+ ```azurecli-interactive
+ az aks show --name myAKSCluster --resource-group myResourceGroup
+ ```
-You can verify when your cluster is stopped by using the [az aks show][az-aks-show] command and confirming the `powerState` shows as `Stopped` as on the below output:
+ Your output should look similar to the following condensed example output:
-```json
-{
-[...]
- "nodeResourceGroup": "MC_myResourceGroup_myAKSCluster_westus2",
- "powerState":{
- "code":"Stopped"
- },
- "privateFqdn": null,
- "provisioningState": "Succeeded",
- "resourceGroup": "myResourceGroup",
-[...]
-}
-```
+ ```json
+ {
+ [...]
+ "nodeResourceGroup": "MC_myResourceGroup_myAKSCluster_westus2",
+ "powerState":{
+ "code":"Stopped"
+ },
+ "privateFqdn": null,
+ "provisioningState": "Succeeded",
+ "resourceGroup": "myResourceGroup",
+ [...]
+ }
+ ```
-If the `provisioningState` shows `Stopping` that means your cluster hasn't fully stopped yet.
+ If the `provisioningState` shows `Stopping`, your cluster hasn't fully stopped yet.
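
   To poll just the power state, you can query it directly (a minimal sketch using a JMESPath `--query`):

   ```azurecli-interactive
   az aks show --name myAKSCluster --resource-group myResourceGroup --query powerState.code -o tsv
   ```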
### [Azure PowerShell](#tab/azure-powershell)
-You can use the [Stop-AzAksCluster][stop-azakscluster] cmdlet to stop a running AKS cluster's nodes and control plane. The following example stops a cluster named *myAKSCluster*:
+1. Use the [`Stop-AzAksCluster`][stop-azakscluster] cmdlet to stop a running AKS cluster, including the nodes and control plane. The following example stops a cluster named *myAKSCluster*:
+
+ ```azurepowershell-interactive
+ Stop-AzAksCluster -Name myAKSCluster -ResourceGroupName myResourceGroup
+ ```
+
+2. Use the [`Get-AzAksCluster`][get-azakscluster] cmdlet to verify your cluster has stopped, confirming the `ProvisioningState` shows as `Succeeded`.
-```azurepowershell-interactive
-Stop-AzAksCluster -Name myAKSCluster -ResourceGroupName myResourceGroup
-```
+ ```azurepowershell-interactive
+ Get-AzAKSCluster -Name myAKSCluster -ResourceGroupName myResourceGroup
+ ```
-You can verify your cluster is stopped using the [Get-AzAksCluster][get-azakscluster] cmdlet and confirming the `ProvisioningState` shows as `Succeeded` as shown in the following output:
+ Your output should look similar to the following condensed example output:
-```Output
-ProvisioningState : Succeeded
-MaxAgentPools : 100
-KubernetesVersion : 1.20.7
-...
-```
+ ```Output
+ ProvisioningState : Succeeded
+ MaxAgentPools : 100
+ KubernetesVersion : 1.20.7
+ ...
+ ```
-If the `ProvisioningState` shows `Stopping` that means your cluster hasn't fully stopped yet.
+ If the `ProvisioningState` shows `Stopping`, your cluster hasn't fully stopped yet.
> [!IMPORTANT]
-> If you are using [Pod Disruption Budgets](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/) the stop operation can take longer as the drain process will take more time to complete.
+> If you're using [pod disruption budgets](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/), the stop operation can take longer, as the drain process will take more time to complete.
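
If you want to see which pod disruption budgets exist before stopping the cluster, one quick check (a sketch; it assumes `kubectl` access to the cluster) is:

```azurecli-interactive
kubectl get poddisruptionbudgets --all-namespaces
```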
-## Start an AKS Cluster
+## Start an AKS cluster
> [!CAUTION]
-> It is important that you don't repeatedly start/stop your cluster. Repeatedly starting/stopping your cluster may result in errors. Once your cluster is stopped, you should wait 15-30 minutes before starting it up again.
+> Don't repeatedly stop and start your cluster, as doing so can result in errors. Once your cluster is stopped, wait 15-30 minutes before starting it again.
### [Azure CLI](#tab/azure-cli)
-You can use the `az aks start` command to start a stopped AKS cluster's nodes and control plane. The cluster is restarted with the previous control plane state and number of agent nodes.
-The following example starts a cluster named *myAKSCluster*:
+1. Use the [`az aks start`][az-aks-start] command to start a stopped AKS cluster. The cluster restarts with the previous control plane state and number of agent nodes. The following example starts a cluster named *myAKSCluster*:
-```azurecli-interactive
-az aks start --name myAKSCluster --resource-group myResourceGroup
-```
+ ```azurecli-interactive
+ az aks start --name myAKSCluster --resource-group myResourceGroup
+ ```
-You can verify when your cluster has started by using the [az aks show][az-aks-show] command and confirming the `powerState` shows `Running` as on the below output:
+2. Use the [`az aks show`][az-aks-show] command to verify your cluster has started, confirming the `powerState` shows `Running`.
-```json
-{
-[...]
- "nodeResourceGroup": "MC_myResourceGroup_myAKSCluster_westus2",
- "powerState":{
- "code":"Running"
- },
- "privateFqdn": null,
- "provisioningState": "Succeeded",
- "resourceGroup": "myResourceGroup",
-[...]
-}
-```
+ ```azurecli-interactive
+ az aks show --name myAKSCluster --resource-group myResourceGroup
+ ```
-If the `provisioningState` shows `Starting` that means your cluster hasn't fully started yet.
+ Your output should look similar to the following condensed example output:
+
+ ```json
+ {
+ [...]
+ "nodeResourceGroup": "MC_myResourceGroup_myAKSCluster_westus2",
+ "powerState":{
+ "code":"Running"
+ },
+ "privateFqdn": null,
+ "provisioningState": "Succeeded",
+ "resourceGroup": "myResourceGroup",
+ [...]
+ }
+ ```
+
+ If the `provisioningState` shows `Starting`, your cluster hasn't fully started yet.
### [Azure PowerShell](#tab/azure-powershell)
-You can use the [Start-AzAksCluster][start-azakscluster] cmdlet to start a stopped AKS cluster's nodes and control plane. The cluster is restarted with the previous control plane state and number of agent nodes.
-The following example starts a cluster named *myAKSCluster*:
+1. Use the [`Start-AzAksCluster`][start-azakscluster] cmdlet to start a stopped AKS cluster. The cluster restarts with the previous control plane state and number of agent nodes. The following example starts a cluster named *myAKSCluster*:
-```azurepowershell-interactive
-Start-AzAksCluster -Name myAKSCluster -ResourceGroupName myResourceGroup
-```
+ ```azurepowershell-interactive
+ Start-AzAksCluster -Name myAKSCluster -ResourceGroupName myResourceGroup
+ ```
-You can verify when your cluster has started using the [Get-AzAksCluster][get-azakscluster] cmdlet and confirming the `ProvisioningState` shows `Succeeded` as shown in the following output:
+2. Use the [`Get-AzAksCluster`][get-azakscluster] cmdlet to verify your cluster has started, confirming the `ProvisioningState` shows `Succeeded`.
-```Output
-ProvisioningState : Succeeded
-MaxAgentPools : 100
-KubernetesVersion : 1.20.7
-...
-```
+ ```azurepowershell-interactive
+ Get-AzAksCluster -Name myAKSCluster -ResourceGroupName myResourceGroup
+ ```
-If the `ProvisioningState` shows `Starting` that means your cluster hasn't fully started yet.
+ Your output should look similar to the following condensed example output:
-
+ ```Output
+ ProvisioningState : Succeeded
+ MaxAgentPools : 100
+ KubernetesVersion : 1.20.7
+ ...
+ ```
-> [!NOTE]
-> When you start your cluster back up, the following is expected behavior:
->
-> * The IP address of your API server may change.
-> * If you are using cluster autoscaler, when you start your cluster back up your current node count may not be between the min and max range values you set. The cluster starts with the number of nodes it needs to run its workloads, which isn't impacted by your autoscaler settings. When your cluster performs scaling operations, the min and max values will impact your current node count and your cluster will eventually enter and remain in that desired range until you stop your cluster.
+ If the `ProvisioningState` shows `Starting`, your cluster hasn't fully started yet.
+
## Next steps
If the `ProvisioningState` shows `Starting` that means your cluster hasn't fully
[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md [aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md [aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
-[install-azure-cli]: /cli/azure/install-azure-cli
-[az-extension-add]: /cli/azure/extension#az_extension_add
-[az-extension-update]: /cli/azure/extension#az_extension_update
-[az-feature-register]: /cli/azure/feature#az_feature_register
-[az-feature-list]: /cli/azure/feature#az_feature_list
-[az-provider-register]: /cli/azure/provider#az_provider_register
[az-aks-show]: /cli/azure/aks#az_aks_show
-[kubernetes-walkthrough-powershell]: kubernetes-walkthrough-powershell.md
[stop-azakscluster]: /powershell/module/az.aks/stop-azakscluster [get-azakscluster]: /powershell/module/az.aks/get-azakscluster [start-azakscluster]: /powershell/module/az.aks/start-azakscluster
+[az-aks-stop]: /cli/azure/aks#az_aks_stop
+[az-aks-start]: /cli/azure/aks#az_aks_start
analysis-services Analysis Services Create Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-terraform.md
# Quickstart: Create an Azure Analysis Services server using Terraform
-This article shows how to use [Terraform](/azure/terraform) to create an [Azure Analysis Services](/azure/analysis-services/analysis-services-overview) server.
+This article shows how to use [Terraform](/azure/terraform) to create an [Azure Analysis Services](./analysis-services-overview.md) server.
In this article, you learn how to:
In this article, you learn how to:
## Next steps > [!div class="nextstepaction"]
-> [Quickstart: Configure server firewall - Portal](analysis-services-qs-firewall.md)
+> [Quickstart: Configure server firewall - Portal](analysis-services-qs-firewall.md)
api-management How To Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-event-grid.md
Once the deployment has succeeded (it might take a few minutes), open a browser
You should see the sample app rendered with no event messages displayed. ## Subscribe to API Management events
api-management Quickstart Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/quickstart-terraform.md
# Quickstart: Create an Azure API Management service using Terraform
-This article shows how to use [Terraform](/azure/terraform) to create an [API Management service instance](/azure/api-management/api-management-key-concepts) on Azure. API Management helps organizations publish APIs to external, partner, and internal developers to unlock the potential of their data and services. API Management provides the core competencies to ensure a successful API program through developer engagement, business insights, analytics, security, and protection. API Management enables you to create and manage modern API gateways for existing backend services hosted anywhere. For more information, see the [Azure API Management - Overview and key concepts](api-management-key-concepts.md).
+This article shows how to use [Terraform](/azure/terraform) to create an [API Management service instance](./api-management-key-concepts.md) on Azure. API Management helps organizations publish APIs to external, partner, and internal developers to unlock the potential of their data and services. API Management provides the core competencies to ensure a successful API program through developer engagement, business insights, analytics, security, and protection. API Management enables you to create and manage modern API gateways for existing backend services hosted anywhere. For more information, see the [Azure API Management - Overview and key concepts](api-management-key-concepts.md).
[!INCLUDE [Terraform abstract](~/azure-dev-docs-pr/articles/terraform/includes/abstract.md)]
In this article, you learn how to:
## Next steps > [!div class="nextstepaction"]
-> [Tutorial: Import and publish your first API](import-and-publish.md)
+> [Tutorial: Import and publish your first API](import-and-publish.md)
api-management Soft Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/soft-delete.md
https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{reso
Use the API Management [Purge](/rest/api/apimanagement/current-ga/deleted-services/purge) operation, substituting `{subscriptionId}`, `{location}`, and `{serviceName}` with your Azure subscription, resource location, and API Management name. > [!NOTE]
-> To purge a soft-deleted instance, you must have the following RBAC permissions at the subscription scope: Microsoft.ApiManagement/locations/deletedservices/delete, Microsoft.ApiManagement/deletedservices/read.
+> To purge a soft-deleted instance, you must have the following RBAC permissions at the subscription scope in addition to Contributor access to the API Management instance: Microsoft.ApiManagement/locations/deletedservices/delete, Microsoft.ApiManagement/deletedservices/read.
```rest DELETE https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.ApiManagement/locations/{location}/deletedservices/{serviceName}?api-version=2021-08-01
app-service Language Support Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/language-support-policy.md
App Service follows community support timelines for the lifecycle of the runtime
## Notifications End-of-life dates for runtime versions are determined independently by their respective stacks and are outside the control of App Service. App Service will send reminder notifications to subscription owners for upcoming end-of-life runtime versions 12 months prior to the end-of-life date.
-Those who receive notifications include account administrators, service administrators, and co-administrators. Contributors, readers, or other roles won't directly receive notifications, unless they opt-in to receive notification emails, using [Service Health Alerts](/azure/service-health/alerts-activity-log-service-notifications-portal).
+Those who receive notifications include account administrators, service administrators, and co-administrators. Contributors, readers, or other roles won't directly receive notifications, unless they opt-in to receive notification emails, using [Service Health Alerts](../service-health/alerts-activity-log-service-notifications-portal.md).
## Language runtime version support timelines To learn more about specific language support policy timelines, visit the following resources:
To learn more about how to update your App Service application language versions
- [Node](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/node_support.md#node-on-linux-app-service)
- [Java](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/java_support.md#java-on-app-service)
- [Python](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/python_support.md#how-to-update-your-app-to-target-a-different-version-of-python)
-- [PHP](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/php_support.md#how-to-update-your-app-to-target-a-different-version-of-php)
-
+- [PHP](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/php_support.md#how-to-update-your-app-to-target-a-different-version-of-php)
app-service Migrate Wordpress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/migrate-wordpress.md
The prerequisite is that the WordPress on Linux Azure App Service must have been
> [!NOTE]
-> Azure Database for MySQL - Single Server is on the road to retirement by 16 September 2024. If your existing MySQL database is hosted on Azure Database for MySQL - Single Server, consider migrating to Azure Database for MySQL - Flexible Server using the following steps, or using [Azure Database Migration Service (DMS)](/azure/mysql/single-server/whats-happening-to-mysql-single-server#migrate-from-single-server-to-flexible-server).
+> Azure Database for MySQL - Single Server is on the road to retirement by 16 September 2024. If your existing MySQL database is hosted on Azure Database for MySQL - Single Server, consider migrating to Azure Database for MySQL - Flexible Server using the following steps, or using [Azure Database Migration Service (DMS)](../mysql/single-server/whats-happening-to-mysql-single-server.md#migrate-from-single-server-to-flexible-server).
> 6. If you migrate the database, import the SQL file downloaded from the source database into the database of your newly created WordPress site. You can do it via the PhpMyAdmin dashboard available at **\<sitename\>.azurewebsites.net/phpmyadmin**. If you're unable to import one single large SQL file, separate it into parts and try uploading again. Steps to import the database through phpMyAdmin are described [here](https://docs.phpmyadmin.net/en/latest/import_export.html#import).
When you migrate a live site and its DNS domain name to App Service, that DNS na
If your site is configured with SSL certs, then follow [Add and manage TLS/SSL certificates](configure-ssl-certificate.md?tabs=apex%2Cportal) to configure SSL. Next steps:
-[At-scale assessment of .NET web apps](/training/modules/migrate-app-service-migration-assistant/)
+[At-scale assessment of .NET web apps](/training/modules/migrate-app-service-migration-assistant/)
app-service Overview Arc Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-arc-integration.md
Title: 'App Service on Azure Arc' description: An introduction to App Service integration with Azure Arc for Azure operators. Previously updated : 05/03/2022 Last updated : 03/15/2023 # App Service, Functions, and Logic Apps on Azure Arc (Preview)
The following public preview limitations apply to App Service Kubernetes environ
| Feature: Key vault references | Not available (depends on managed identities) | | Feature: Pull images from ACR with managed identity | Not available (depends on managed identities) | | Feature: In-portal editing for Functions and Logic Apps | Not available |
-| Feature: Portal listing of Functions or keys | Not available if cluster is not publicly reachable |
+| Feature: Portal listing of Functions or keys | Not available if cluster isn't publicly reachable |
| Feature: FTP publishing | Not available | | Logs | Log Analytics must be configured with cluster extension; not per-site |
Only one Kubernetes environment resource can be created in a custom location. In
- [What logs are collected?](#what-logs-are-collected) - [What do I do if I see a provider registration error?](#what-do-i-do-if-i-see-a-provider-registration-error) - [Can I deploy the Application services extension on an ARM64 based cluster?](#can-i-deploy-the-application-services-extension-on-an-arm64-based-cluster)
+- [Which Kubernetes distributions can I deploy the extension on?](#which-kubernetes-distributions-can-i-deploy-the-extension-on)
### How much does it cost?
App Service on Azure Arc is free during the public preview.
### Are both Windows and Linux apps supported?
-Only Linux-based apps are supported, both code and custom containers. Windows apps are not supported.
+Only Linux-based apps are supported, both code and custom containers. Windows apps aren't supported.
### Which built-in application stacks are supported?
All built-in Linux stacks are supported.
### Are all app deployment types supported?
-FTP deployment is not supported. Currently `az webapp up` is also not supported. Other deployment methods are supported, including Git, ZIP, CI/CD, Visual Studio, and Visual Studio Code.
+FTP deployment isn't supported. Currently `az webapp up` is also not supported. Other deployment methods are supported, including Git, ZIP, CI/CD, Visual Studio, and Visual Studio Code.
### Which App Service features are supported?
-During the preview period, certain App Service features are being validated. When they're supported, their left navigation options in the Azure portal will be activated. Features that are not yet supported remain grayed out.
+During the preview period, certain App Service features are being validated. When they're supported, their left navigation options in the Azure portal will be activated. Features that aren't yet supported remain grayed out.
### Are all networking features supported?
-No. Networking features such as hybrid connections or Virtual Network integration, are not supported. [Access restriction](app-service-ip-restrictions.md) support was added in April 2022. Networking should be handled directly in the networking rules in the Kubernetes cluster itself.
+No. Networking features such as hybrid connections or Virtual Network integration, aren't supported. [Access restriction](app-service-ip-restrictions.md) support was added in April 2022. Networking should be handled directly in the networking rules in the Kubernetes cluster itself.
### Are managed identities supported?
All applications deployed with Azure App Service on Kubernetes with Azure Arc ar
Logs for both system components and your applications are written to standard output. Both log types can be collected for analysis using standard Kubernetes tools. You can also configure the App Service cluster extension with a [Log Analytics workspace](../azure-monitor/logs/log-analytics-overview.md), and it sends all logs to that workspace.
-By default, logs from system components are sent to the Azure team. Application logs are not sent. You can prevent these logs from being transferred by setting `logProcessor.enabled=false` as an extension configuration setting. This configuration setting will also disable forwarding of application to your Log Analytics workspace. Disabling the log processor might impact time needed for any support cases, and you will be asked to collect logs from standard output through some other means.
+By default, logs from system components are sent to the Azure team. Application logs aren't sent. You can prevent these logs from being transferred by setting `logProcessor.enabled=false` as an extension configuration setting. This configuration setting also disables forwarding of application logs to your Log Analytics workspace. Disabling the log processor might impact the time needed for any support cases, and you'll be asked to collect logs from standard output through some other means.
### What do I do if I see a provider registration error?
When creating a Kubernetes environment resource, some subscriptions might see a
### Can I deploy the Application services extension on an ARM64 based cluster?
-ARM64 based clusters are not supported at this time.
+ARM64 based clusters aren't supported at this time.
+
+### Which Kubernetes distributions can I deploy the extension on?
+
+The extension has been validated on AKS, AKS on Azure Stack HCI, Google Kubernetes Engine, Amazon Elastic Kubernetes Service, and Kubernetes Cluster API.
## Extension Release Notes
app-service Scenario Secure App Access Microsoft Graph As App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-access-microsoft-graph-as-app.md
Previously updated : 08/19/2022 Last updated : 03/14/2023 ms.devlang: csharp
app-service Scenario Secure App Access Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-access-storage.md
Previously updated : 02/16/2022 Last updated : 03/14/2023 ms.devlang: csharp, azurecli
app-service Troubleshoot Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-diagnostic-logs.md
In your application code, you use the usual logging facilities to send log messa
``` By default, ASP.NET Core uses the [Microsoft.Extensions.Logging.AzureAppServices](https://www.nuget.org/packages/Microsoft.Extensions.Logging.AzureAppServices) logging provider. For more information, see [ASP.NET Core logging in Azure](/aspnet/core/fundamentals/logging/). For information about WebJobs SDK logging, see [Get started with the Azure WebJobs SDK](./webjobs-sdk-get-started.md#enable-console-logging)-- Python applications can use the [OpenCensus package](/azure/azure-monitor/app/opencensus-python) to send logs to the application diagnostics log.
+- Python applications can use the [OpenCensus package](../azure-monitor/app/opencensus-python.md) to send logs to the application diagnostics log.
## Stream logs
If you secure your Azure Storage account by [only allowing selected networks](..
* [How to Monitor Azure App Service](web-sites-monitor.md) * [Troubleshooting Azure App Service in Visual Studio](troubleshoot-dotnet-visual-studio.md) * [Analyze app Logs in HDInsight](https://gallery.technet.microsoft.com/scriptcenter/Analyses-Windows-Azure-web-0b27d413)
-* [Tutorial: Run a load test to identify performance bottlenecks in a web app](../load-testing/tutorial-identify-bottlenecks-azure-portal.md)
+* [Tutorial: Run a load test to identify performance bottlenecks in a web app](../load-testing/tutorial-identify-bottlenecks-azure-portal.md)
app-service Tutorial Connect App Access Microsoft Graph As App Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-access-microsoft-graph-as-app-javascript.md
Previously updated : 01/21/2022 Last updated : 03/14/2023 ms.devlang: javascript
app-service Tutorial Connect App Access Storage Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-access-storage-javascript.md
Previously updated : 02/16/2022 Last updated : 03/14/2023 ms.devlang: javascript, azurecli
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md
The creation wizard generated the connectivity variables for you already as [app
**Step 2.** In the **Application settings** tab of the **Configuration** page, verify that `AZURE_POSTGRESQL_CONNECTIONSTRING` is present. That will be injected into the runtime environment as an environment variable. App settings are one way to keep connection secrets out of your code repository. When you're ready to move your secrets to a more secure location,
- here's an [article on storing in Azure Key Vault](/azure/key-vault/certificates/quick-create-python).
+ here's an [article on storing in Azure Key Vault](../key-vault/certificates/quick-create-python.md).
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-get-connection-string-2.png" alt-text="A screenshot showing how to see the autogenerated connection string." lightbox="./media/tutorial-python-postgresql-app/azure-portal-get-connection-string-2.png":::
Advance to the next tutorial to learn how to secure your app with a custom domai
Learn how App Service runs a Python app: > [!div class="nextstepaction"]
-> [Configure Python app](configure-language-python.md)
+> [Configure Python app](configure-language-python.md)
application-gateway Application Gateway Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-components.md
An application gateway can also communicate with on-premises servers when they'r
You can create different backend pools for different types of requests. For example, create one backend pool for general requests, and then another backend pool for requests to the microservices for your application.
+After you add a virtual machine scale set as a backend pool member, you need to upgrade the scale set instances. Until you upgrade the instances, the backend is considered unhealthy.
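
For example, assuming a scale set named *myScaleSet* in resource group *myResourceGroup* (both names are illustrative), you could upgrade all instances to the latest model as a minimal sketch:

```azurecli-interactive
az vmss update-instances --resource-group myResourceGroup --name myScaleSet --instance-ids "*"
```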
+ ## Health probes By default, an application gateway monitors the health of all resources in its backend pool and automatically removes unhealthy ones. It then monitors unhealthy instances and adds them back to the healthy backend pool when they become available and respond to health probes.
applied-ai-services Concept General Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-general-document.md
Previously updated : 11/14/2022 Last updated : 03/15/2023 monikerRange: 'form-recog-3.0.0' recommendations: false
You need the following resources:
:::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
-#### Form Recognizer Studio
+#### Form Recognizer Studio
> [!NOTE]
> Form Recognizer Studio and the general document model are available with the v3.0 API.
applied-ai-services Concept Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-layout.md
Previously updated : 03/03/2023 Last updated : 03/15/2023 recommendations: false
The paragraph roles are best used with unstructured documents. Paragraph roles
::: moniker range="form-recog-2.1.0"
-### Data extraction
+### Data extraction support
| **Model** | **Text** | **Tables** | Selection marks|
|---|---|---|---|
The Layout model extracts annotations in documents, such as checks and crosses.
} ```
-### Extracting barcodes from documents
+### Barcode extraction
The Layout model extracts all identified barcodes in the `barcodes` collection as a top level object under `content`. Inside the `content`, detected barcodes are represented as `:barcode:`. Each entry in this collection represents a barcode and includes the barcode type as `kind` and the embedded barcode content as `value` along with its `polygon` coordinates. Initially, barcodes appear at the end of each page.
applied-ai-services Concept Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-read.md
Previously updated : 03/02/2023 Last updated : 03/15/2023 monikerRange: 'form-recog-3.0.0' recommendations: false
The page units in the model output are computed as shown:
|PowerPoint | Each slide = 1 page unit, Each embedded image = 1 page unit | Total slides + Total images |
|HTML | Up to 3,000 characters = 1 page unit, embedded or linked images not supported | Total pages of up to 3,000 characters each |
-### Extracting barcodes from documents
+### Barcode extraction
The Read OCR model extracts all identified barcodes in the `barcodes` collection as a top level object under `content`. Inside the `content`, detected barcodes are represented as `:barcode:`. Each entry in this collection represents a barcode and includes the barcode type as `kind` and the embedded barcode content as `value` along with its `polygon` coordinates. Initially, barcodes appear at the end of each page. Here, the `confidence` is hard-coded for the public preview (`2023-02-28`) release.
The Read OCR model in Form Recognizer adds [language detection](language-support
] ```
-### Extracting pages from documents
+### Extract pages from documents
The page units in the model output are computed as shown:
applied-ai-services Form Recognizer Disconnected Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-disconnected-containers.md
After you've configured the container, use the next section to run the container
## Form Recognizer container models and configuration
-> [!IMPORTANT]
-> If you're using the Translator, Neural text-to-speech, or Speech-to-text containers, read the **Additional parameters** section for information on commands or additional parameters you will need to use.
- After you've [configured the container](#configure-the-container-to-be-run-in-a-disconnected-environment), the values for the downloaded translation models and container configuration will be generated and displayed in the container output: ```bash
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md
Previously updated : 03/03/2023 Last updated : 03/15/2023 monikerRange: '>=form-recog-2.1.0' recommendations: false
Form Recognizer service is updated on an ongoing basis. Bookmark this page to st
## March 2023 > [!IMPORTANT]
-> Document classification, Query fields, and Add-on capabilities are currently only available in the following regions:
+> [**`2023-02-28-preview`**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument) capabilities are currently only available in the following regions:
>
> * West Europe
> * West US2
> * East US
-* **Document classification** is now a new capability within Form Recognizer starting with the ```2023-02-28-preview``` API. Try out the document classification capability in the [Studio](https://formrecognizer-dogfood.appliedai.azure.com/studio/) or the REST API.
-* **Query fields** added to the General Document model uses Open AI model to extract specific fields from documents. See the [general document](concept-general-document.md) model to learn more or try the feature in the [Studio](https://formrecognizer-dogfood.appliedai.azure.com/studio/). Query fields are currently only active for resources in the East US region.
-* **Additions to the Read and Layout APIs**
- * **Barcodes** are now supported with the ```2023-02-28-preview``` API.
- * **Fonts** are now recognized with the ```2023-02-28-preview``` API.
- * **Formulas** are now recognized with the ```2023-02-28-preview``` API.
-* **Common name** normalizing key variation to a common name makes the General Document model more useful in processing forms with variations in key names. Learn more about the common name feature in the [General Document model](concept-general-document.md).
-* **Custom extraction model updates**
- * Custom neural models now support added languages for training and analysis. Train neural models for Dutch, French, German, Italian and Spanish.
- * Custom template models now have an improved signature detection capability.
-* **Service Updates**
- * Support for high resolution documents
-* **Studio updates**
+* [**Custom classifier model**](concept-custom-classifier.md) is a new capability within Form Recognizer starting with the ```2023-02-28-preview``` API. Try the document classification capability using the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/document-classifier/projects) or the [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/GetClassifyDocumentResult).
+* [**Query fields**](concept-query-fields.md) capabilities, added to the General Document model, use Azure OpenAI models to extract specific fields from documents. Try the **General documents with query fields** feature using the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio). Query fields are currently only active for resources in the `East US` region.
+* [**Read**](concept-read.md#barcode-extraction) and [**Layout**](concept-layout.md#barcode-extraction) models support **barcode** extraction with the ```2023-02-28-preview``` API.
+* [**Add-on capabilities**](concept-add-on-capabilities.md)
+ * [**Font extraction**](concept-add-on-capabilities.md#font-property-extraction) is now supported with the ```2023-02-28-preview``` API.
+ * [**Formula extraction**](concept-add-on-capabilities.md#formula-extraction) is now supported with the ```2023-02-28-preview``` API.
+ * [**High resolution extraction**](concept-add-on-capabilities.md#high-resolution-extraction) is now supported with the ```2023-02-28-preview``` API.
+* [**Common name key normalization**](concept-general-document.md#key-normalization-common-name) capabilities are added to the General Document model to improve processing forms with variations in key names.
+* [**Custom extraction model updates**](concept-custom.md)
+ * [**Custom neural model**](concept-custom-neural.md) now supports added languages for training and analysis. Train neural models for Dutch, French, German, Italian and Spanish.
+ * [**Custom template model**](concept-custom-template.md) now has an improved signature detection capability.
+* [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio) updates
* In addition to support for all the new features like classification and query fields, the Studio now enables project sharing for custom model projects.
-* **Receipt model updates**
+ * New model additions in gated preview: **Vaccination cards**, **Contracts**, **US Tax 1098**, **US Tax 1098-E**, and **US Tax 1095-T**. To request access to gated preview models, complete and submit the [**Form Recognizer private preview request form**](https://aka.ms/form-recognizer/preview/survey).
+* [**Receipt model updates**](concept-receipt.md)
* Receipt model has added support for thermal receipts.
* Receipt model now adds support for 18 languages and three language dialects (English, French, Portuguese).
* Receipt model now supports `TaxDetails` extraction.
-* **Layout model** now has improved table recognition.
-* **Read model** now has added improvement for single-digit character recognition.
+* [**Layout model**](concept-layout.md) now has improved table recognition.
+* [**Read model**](concept-read.md) now has improved single-digit character recognition.
Form Recognizer service is updated on an ongoing basis. Bookmark this page to st
* **Navigation**. You can select labels to target labeled words within a document.
- * **Auto table labeling**. After you select the table icon within a document, you can opt to auto-label the extracted table in the labeling view.
+ * **Auto table labeling**. After you select the table icon within a document, you can opt to autolabel the extracted table in the labeling view.
* **Label subtypes and second-level subtypes** The Studio now supports subtypes for table columns, table rows, and second-level subtypes for types such as dates and numbers.
Form Recognizer service is updated on an ongoing basis. Bookmark this page to st
* [**Custom neural model**](concept-custom-neural.md) or custom document model is a new custom model to extract text and selection marks from structured forms, semi-structured and **unstructured documents**.
* [**W-2 prebuilt model**](concept-w2.md) is a new prebuilt model to extract fields from W-2 forms for tax reporting and income verification scenarios.
* [**Read**](concept-read.md) API extracts printed text lines, words, text locations, detected languages, and handwritten text, if detected.
- * [**General document**](concept-general-document.md) pre-trained model is now updated to support selection marks in addition to API text, tables, structure, key-value pairs, and named entities from forms and documents.
+ * [**General document**](concept-general-document.md) pretrained model is now updated to support selection marks in addition to API text, tables, structure, key-value pairs, and named entities from forms and documents.
* [**Invoice API**](language-support.md#invoice-model) Invoice prebuilt model expands support to Spanish invoices.
* [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com) adds new demos for Read, W2, Hotel receipt samples, and support for training the new custom neural models.
* [**Language Expansion**](language-support.md) Form Recognizer Read, Layout, and Custom Form add support for 42 new languages including Arabic, Hindi, and other languages using Arabic and Devanagari scripts to expand the coverage to 164 languages. Handwritten language support expands to Japanese and Korean.
Form Recognizer service is updated on an ongoing basis. Bookmark this page to st
* **Form Recognizer v3.0 preview release version 4.0.0-beta.1 (2021-10-07) introduces several new features and capabilities:**
- * [**General document**](concept-general-document.md) model is a new API that uses a pre-trained model to extract text, tables, structure, key-value pairs, and named entities from forms and documents.
+ * [**General document**](concept-general-document.md) model is a new API that uses a pretrained model to extract text, tables, structure, key-value pairs, and named entities from forms and documents.
* [**Hotel receipt**](concept-receipt.md) model added to prebuilt receipt processing.
* [**Expanded fields for ID document**](concept-id-document.md) the ID model supports endorsements, restrictions, and vehicle classification extraction from US driver's licenses.
* [**Signature field**](concept-custom.md) is a new field type in custom forms to detect the presence of a signature in a form field.
Form Recognizer service is updated on an ongoing basis. Bookmark this page to st
* Split **FormField** type into several different interfaces. This update shouldn't cause any API compatibility issues except in certain edge cases (undefined valueType).
-* Migrated to the **2.1-preview.3** Form Recognizer service endpoint for all REST API calls.
+* Migrated to the **`2.1-preview.3`** Form Recognizer service endpoint for all REST API calls.
### [**Python**](#tab/python)
Form Recognizer service is updated on an ongoing basis. Bookmark this page to st
-* **SDK preview updates for API version 2.1-preview.3 introduces feature updates and enhancements.**
+* **SDK preview updates for API version `2.1-preview.3` introduce feature updates and enhancements.**
Form Recognizer service is updated on an ongoing basis. Bookmark this page to st
* **Form Recognizer v2.1-preview.1 has been released and includes the following features:**
- * **REST API reference is available** - View the [`v2.1-preview.1 reference`](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-1/operations/AnalyzeBusinessCardAsync)
+ * **REST API reference is available** - View the [`v2.1-preview.1 reference`](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)
* **New languages supported In addition to English**, the following [languages](language-support.md) are now supported: for `Layout` and `Train Custom Model`: English (`en`), Chinese (Simplified) (`zh-Hans`), Dutch (`nl`), French (`fr`), German (`de`), Italian (`it`), Portuguese (`pt`) and Spanish (`es`).
* **Checkbox / Selection Mark detection** - Form Recognizer supports detection and extraction of selection marks such as check boxes and radio buttons. Selection Marks are extracted in `Layout` and you can now also label and train in `Train Custom Model` - _Train with Labels_ to extract key-value pairs for selection marks.
* **Model Compose** - allows multiple models to be composed and called with a single model ID. When you submit a document to be analyzed with a composed model ID, a classification step is first performed to route it to the correct custom model. Model Compose is available for `Train Custom Model` - _Train with labels_.
automation Automation Solution Vm Management Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-solution-vm-management-logs.md
# Query logs from Start/Stop VMs during off-hours > [!NOTE]
-> Start/Stop VM during off-hours version 1 is unavailable in the marketplace now as it will retire by 30 September 2023. We recommend you start using [version 2](/azure/azure-functions/start-stop-vms/overview), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until 30 September 2023. The details of the announcement will be shared on March 31st 2023.
+> Start/Stop VM during off-hours version 1 is unavailable in the marketplace now as it will retire by 30 September 2023. We recommend you start using [version 2](../azure-functions/start-stop-vms/overview.md), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until 30 September 2023. The details of the announcement will be shared on March 31st 2023.
Azure Automation forwards two types of records to the linked Log Analytics workspace: job logs and job streams. This article reviews the data available for [query](../azure-monitor/logs/log-query-overview.md) in Azure Monitor.
automation Automation Solution Vm Management Remove https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-solution-vm-management-remove.md
# Remove Start/Stop VMs during off-hours from Automation account > [!NOTE]
-> Start/Stop VM during off-hours version 1 is unavailable in the marketplace now as it will retire by 30 September 2023. We recommend you start using [version 2](/azure/azure-functions/start-stop-vms/overview), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until 30 September 2023. The details of the announcement will be shared on March 31st 2023.
+> Start/Stop VM during off-hours version 1 is unavailable in the marketplace now as it will retire by 30 September 2023. We recommend you start using [version 2](../azure-functions/start-stop-vms/overview.md), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until 30 September 2023. The details of the announcement will be shared on March 31st 2023.
After you enable the Start/Stop VMs during off-hours feature to manage the running state of your Azure VMs, you may decide to stop using it. Removing this feature can be done using one of the following methods based on the supported deployment models:
To delete Start/Stop VMs during off-hours from your Automation account, perform
## Next steps
-To re-enable this feature, see [Enable Start/Stop during off-hours](automation-solution-vm-management-enable.md).
+To re-enable this feature, see [Enable Start/Stop during off-hours](automation-solution-vm-management-enable.md).
automation Automation Solution Vm Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-solution-vm-management.md
# Start/Stop VMs during off-hours overview > [!NOTE]
-> Start/Stop VM during off-hours version 1 is unavailable in the marketplace now as it will retire by 30 September 2023. We recommend you start using [version 2](/azure/azure-functions/start-stop-vms/overview), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until 30 September 2023. The details of the announcement will be shared on March 31st 2023.
+> Start/Stop VM during off-hours version 1 is unavailable in the marketplace now as it will retire by 30 September 2023. We recommend you start using [version 2](../azure-functions/start-stop-vms/overview.md), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until 30 September 2023. The details of the announcement will be shared on March 31st 2023.
The Start/Stop VMs during off-hours feature starts or stops enabled Azure VMs. It starts or stops machines on user-defined schedules, provides insights through Azure Monitor logs, and sends optional emails by using [action groups](../azure-monitor/alerts/action-groups.md). The feature can be enabled on both Azure Resource Manager and classic VMs for most scenarios.
You can perform further analysis of the job records by clicking the donut tile.
## Next steps
-To enable the feature on VMs in your environment, see [Enable Start/Stop VMs during off-hours](automation-solution-vm-management-enable.md).
+To enable the feature on VMs in your environment, see [Enable Start/Stop VMs during off-hours](automation-solution-vm-management-enable.md).
automation Region Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/how-to/region-mappings.md
# Supported regions for linked Log Analytics workspace > [!NOTE]
-> Start/Stop VM during off-hours version 1 is unavailable in the marketplace now as it will retire by 30 September 2023. We recommend you start using [version 2](/azure/azure-functions/start-stop-vms/overview), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until 30 September 2023. The details of the announcement will be shared on March 31st 2023.
+> Start/Stop VM during off-hours version 1 is unavailable in the marketplace now as it will retire by 30 September 2023. We recommend you start using [version 2](../../azure-functions/start-stop-vms/overview.md), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until 30 September 2023. The details of the announcement will be shared on March 31st 2023.
In Azure Automation, you can enable the Update Management, Change Tracking and Inventory, and Start/Stop VMs during off-hours features for your servers and virtual machines. These features have a dependency on a Log Analytics workspace, and therefore require linking the workspace with an Automation account. However, only certain regions are supported to link them together. In general, the mapping is *not* applicable if you plan to link an Automation account to a workspace that won't have these features enabled.
The following table shows the supported mappings:
* Learn about Update Management in [Update Management overview](../update-management/overview.md). * Learn about Change Tracking and Inventory in [Change Tracking and Inventory overview](../change-tracking/overview.md).
-* Learn about Start/Stop VMs during off-hours in [Start/Stop VMs during off-hours overview](../automation-solution-vm-management.md).
+* Learn about Start/Stop VMs during off-hours in [Start/Stop VMs during off-hours overview](../automation-solution-vm-management.md).
automation Start Stop Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/start-stop-vm.md
# Troubleshoot Start/Stop VMs during off-hours issues > [!NOTE]
-> Start/Stop VM during off-hours version 1 is unavailable in the marketplace now as it will retire by 30 September 2023. We recommend you start using [version 2](/azure/azure-functions/start-stop-vms/overview), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until 30 September 2023. The details of the announcement will be shared on March 31st 2023.
+> Start/Stop VM during off-hours version 1 is unavailable in the marketplace now as it will retire by 30 September 2023. We recommend you start using [version 2](../../azure-functions/start-stop-vms/overview.md), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until 30 September 2023. The details of the announcement will be shared on March 31st 2023.
This article provides information on troubleshooting and resolving issues that arise when you deploy the Azure Automation Start/Stop VMs during off-hours feature on your VMs.
azure-app-configuration Howto App Configuration Event https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-app-configuration-event.md
The deployment may take a few minutes to complete. After the deployment has succ
You should see the site with no messages currently displayed. ## Subscribe to your App Configuration store
azure-app-configuration Howto Backup Config Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-backup-config-store.md
az storage account create -n $storageName -g $resourceGroupName -l westus --sku
az storage queue create --name $queueName --account-name $storageName --auth-mode login ``` ## Subscribe to your App Configuration store events
If you don't see the new setting in your secondary store:
- Make sure the backup function was triggered *after* you created the setting in your primary store. - It's possible that Event Grid couldn't send the event notification to the queue in time. Check if your queue still contains the event notification from your primary store. If it does, trigger the backup function again. - Check [Azure Functions logs](../azure-functions/functions-create-scheduled-function.md#test-the-function) for any errors or warnings.-- Use the [Azure portal](../azure-functions/functions-how-to-use-azure-function-app-settings.md#get-started-in-the-azure-portal) to ensure that the Azure function app contains correct values for the application settings that Azure Functions is trying to read.
+- Use the [Azure portal](../azure-functions/functions-how-to-use-azure-function-app-settings.md#get-started-in-the-azure-portal) to ensure that the Azure function app contains correct values for the application settings that the Azure function is trying to read.
- You can also set up monitoring and alerting for Azure Functions by using [Azure Application Insights](../azure-functions/functions-monitoring.md?tabs=cmd). ## Clean up resources
azure-app-configuration Quickstart Container Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-container-apps.md
In this quickstart, you will use Azure App Configuration in an app running in Az
## Prerequisites - An application using an App Configuration store. If you don't have one, create an instance using the [Quickstart: Create an ASP.NET Core app with App Configuration](./quickstart-aspnet-core-app.md).-- An Azure Container Apps instance. If you don't have one, create an instance using the [Azure portal](/azure/container-apps/quickstart-portal) or [the CLI](/azure/container-apps/get-started).
+- An Azure Container Apps instance. If you don't have one, create an instance using the [Azure portal](../container-apps/quickstart-portal.md) or [the CLI](../container-apps/get-started.md).
- [Docker Desktop](https://www.docker.com/products/docker-desktop) - The [Azure CLI](/cli/azure/install-azure-cli)
Create an Azure Container Registry (ACR). ACR enables you to build, store, and m
#### [Portal](#tab/azure-portal)
-1. To create the container registry, follow the [Azure Container Registry quickstart](/azure/container-registry/container-registry-get-started-portal).
+1. To create the container registry, follow the [Azure Container Registry quickstart](../container-registry/container-registry-get-started-portal.md).
1. Once the deployment is complete, open your ACR instance and from the left menu, select **Settings > Access keys**. 1. Take note of the **Login server** value listed on this page. You'll use this information in a later step. 1. Switch **Admin user** to *Enabled*. This option lets you connect the ACR to Azure Container Apps using admin user credentials. Alternatively, you can leave it disabled and configure the container app to [pull images from the registry with a managed identity](../container-apps/managed-identity-image-pull.md). #### [Azure CLI](#tab/azure-cli)
-1. Create an ACR instance using the following command. It creates a basic tier registry named *myregistry* with admin user enabled that allows the container app to connect to the registry using admin user credentials. For more information, see [Azure Container Registry quickstart](/azure/container-registry/container-registry-get-started-azure-cli).
+1. Create an ACR instance using the following command. It creates a basic tier registry named *myregistry* with admin user enabled that allows the container app to connect to the registry using admin user credentials. For more information, see [Azure Container Registry quickstart](../container-registry/container-registry-get-started-azure-cli.md).
```azurecli az acr create
The managed identity enables one Azure resource to access another without you ma
To learn how to configure your ASP.NET Core web app to dynamically refresh configuration settings, continue to the next tutorial. > [!div class="nextstepaction"]
-> [Enable dynamic configuration](./enable-dynamic-configuration-aspnet-core.md)
+> [Enable dynamic configuration](./enable-dynamic-configuration-aspnet-core.md)
azure-app-configuration Quickstart Feature Flag Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-spring-boot.md
To create a new Spring Boot project:
> [!NOTE] > * If you need to support an older version of Spring Boot see our [old appconfiguration library](https://github.com/Azure/azure-sdk-for-jav).
-> * There is a non-web Feature Management Library that doesn't have a dependency on spring-web. Refer to GitHub's [documentation](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/appconfiguration/azure-spring-cloud-feature-management) for differences.
+> * There is a non-web Feature Management Library that doesn't have a dependency on spring-web. Refer to GitHub's [documentation](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/spring/spring-cloud-azure-feature-management) for differences.
## Connect to an App Configuration store
azure-arc Create Data Controller Indirect Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-indirect-cli.md
Title: Create data controller using CLI
-description: Create an Azure Arc data controller, on a typical multi-node Kubernetes cluster which you already have created, using the CLI.
+description: Create an Azure Arc data controller on a typical multi-node Kubernetes cluster that you already have created, using the CLI.
Review the topic [Plan an Azure Arc-enabled data services deployment](plan-azure
### Install tools
-To create the data controller using the CLI, you will need to install the `arcdata` extension for Azure (az) CLI.
+Before you begin, install the `arcdata` extension for Azure (az) CLI.
[Install the [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)]](install-client-tools.md)
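If the extension isn't already installed, a quick way to add it is shown below; a minimal sketch, assuming the Azure CLI is already on your PATH:

```azurecli
# Add the arcdata extension to the local Azure CLI installation
az extension add --name arcdata

# Confirm the extension is installed and check its version
az extension show --name arcdata --output table
```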
-Regardless of which target platform you choose, you will need to set the following environment variables prior to the creation for the data controller. These environment variables will become the credentials used for accessing the metrics and logs dashboards after data controller creation.
-
+Regardless of which target platform you choose, you need to set the following environment variables before you create the data controller. These environment variables become the credentials used for accessing the metrics and logs dashboards after data controller creation.
### Set environment variables
$ENV:AZDATA_METRICSUI_PASSWORD="<password for Grafana dashboard>"
### Connect to Kubernetes cluster
-You will need to connect and authenticate to a Kubernetes cluster and have an existing Kubernetes context selected prior to beginning the creation of the Azure Arc data controller. How you connect to a Kubernetes cluster or service varies. See the documentation for the Kubernetes distribution or service that you are using on how to connect to the Kubernetes API server.
+Connect and authenticate to a Kubernetes cluster and have an existing Kubernetes context selected prior to beginning the creation of the Azure Arc data controller. How you connect to a Kubernetes cluster or service varies. See the documentation for the Kubernetes distribution or service that you are using on how to connect to the Kubernetes API server.
You can check to see that you have a current Kubernetes connection and confirm your current context with the following commands.
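A minimal sketch of such a check, assuming `kubectl` is installed and a kubeconfig is already in place:

```console
# Verify connectivity to the Kubernetes API server
kubectl cluster-info

# Confirm which context (cluster) kubectl currently points at
kubectl config current-context
```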
The following sections provide instructions for specific types of Kubernetes pla
## Create on Azure Kubernetes Service (AKS)
-By default, the AKS deployment profile uses the `managed-premium` storage class. The `managed-premium` storage class will only work if you have VMs that were deployed using VM images that have premium disks.
+By default, the AKS deployment profile uses the `managed-premium` storage class. The `managed-premium` storage class only works if you have VMs that were deployed using VM images that have premium disks.
If you are going to use `managed-premium` as your storage class, then you can run the following command to create the data controller. Substitute the placeholders in the command with your resource group name, subscription ID, and Azure location.
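For example, the create command might look like the following sketch. The `azure-arc-aks-premium-storage` profile name is an assumption based on the built-in deployment profiles; run `az arcdata dc config list` to confirm the profile names available in your CLI version:

```azurecli
# Create the data controller with the AKS premium storage deployment profile (assumed profile name)
az arcdata dc create --profile-name azure-arc-aks-premium-storage --k8s-namespace <namespace> --use-k8s --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
```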
Once you have run the command, continue on to [Monitoring the creation status](#
### Determine storage class
-You will also need to determine which storage class to use by running the following command.
+To determine which storage class to use, run the following command.
```console kubectl get storageclass
az arcdata dc config replace --path ./custom/control.json --json-values "$.spec.
Now you are ready to create the data controller using the following command. > [!NOTE]
-> The `--path` parameter should point to the _directory_ containing the control.json file not to the control.json file itself.
+> The `--path` parameter should point to the _directory_ containing the control.json file, not to the control.json file itself.
> [!NOTE]
-> When deploying to OpenShift Container Platform, you will need to specify the `--infrastructure` parameter value. Options are: `aws`, `azure`, `alibaba`, `gcp`, `onpremises`.
+> When deploying to OpenShift Container Platform, specify the `--infrastructure` parameter value. Options are: `aws`, `azure`, `alibaba`, `gcp`, `onpremises`.
```azurecli az arcdata dc create --path ./custom --k8s-namespace <namespace> --use-k8s --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect --infrastructure <infrastructure>
Once you have run the command, continue on to [Monitoring the creation status](#
By default, the kubeadm deployment profile uses a storage class called `local-storage` and service type `NodePort`. If this is acceptable you can skip the instructions below that set the desired storage class and service type and immediately run the `az arcdata dc create` command below.
-If you want to customize your deployment profile to specify a specific storage class and/or service type, start by creating a new custom deployment profile file based on the kubeadm deployment profile by running the following command. This command will create a directory `custom` in your current working directory and a custom deployment profile file `control.json` in that directory.
+If you want to customize your deployment profile to specify a specific storage class and/or service type, start by creating a new custom deployment profile file based on the kubeadm deployment profile by running the following command. This command creates a directory `custom` in your current working directory and a custom deployment profile file `control.json` in that directory.
```azurecli az arcdata dc config init --source azure-arc-kubeadm --path ./custom
az arcdata dc config replace --path ./custom/control.json --json-values "spec.st
By default, the kubeadm deployment profile uses `NodePort` as the service type. If you are using a Kubernetes cluster that is integrated with a load balancer, you can change the configuration using the following command. ```azurecli
-az arcdata dc config replace --path ./custom/control.json --json-values "$.spec.services[*].serviceType=LoadBalancer" --k8s-namespace <namespace> --use-k8s
+az arcdata dc config replace --path ./custom/control.json --json-values "$.spec.services[*].serviceType=LoadBalancer"
``` Now you are ready to create the data controller using the following command. > [!NOTE]
-> When deploying to OpenShift Container Platform, you will need to specify the `--infrastructure` parameter value. Options are: `aws`, `azure`, `alibaba`, `gcp`, `onpremises`.
+> When deploying to OpenShift Container Platform, specify the `--infrastructure` parameter value. Options are: `aws`, `azure`, `alibaba`, `gcp`, `onpremises`.
```azurecli az arcdata dc create --path ./custom --k8s-namespace <namespace> --use-k8s --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect --infrastructure <infrastructure>
Once you have run the command, continue on to [Monitoring the creation status](#
## Monitor the creation status
-Creating the controller will take a few minutes to complete. You can monitor the progress in another terminal window with the following commands:
+Creating the controller takes a few minutes to complete. You can monitor the progress in another terminal window with the following commands:
> [!NOTE] > The example commands below assume that you created a data controller named `arc-dc` and Kubernetes namespace named `arc`. If you used different values update the script accordingly.
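A sketch of what that monitoring might look like, assuming the `arc` namespace from the note above and that the Arc data services custom resource definitions are already installed:

```console
# Check the state of the data controller custom resource
kubectl get datacontroller --namespace arc

# Watch the controller pods come up
kubectl get pods --namespace arc
```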
azure-arc Deploy Telemetry Router https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-telemetry-router.md
metricsui-qqgbv 2/2 Running 0 15h
## Next steps -- [Add exporters and pipelines to your telemetry router](/adding-exporters-and-pipelines.md)
+- [Add exporters and pipelines to your telemetry router](adding-exporters-and-pipelines.md)
azure-arc Storage Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/storage-configuration.md
Kubernetes provides an infrastructure abstraction layer over the underlying virtualization tech stack (optional) and hardware. The way that Kubernetes abstracts away storage is through **[Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/)**. When you provision a pod, you can specify a storage class for each volume. At the time the pod is provisioned, the storage class **[provisioner](https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/)** is called to provision the storage, and then a **[persistent volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/)** is created on that provisioned storage and then the pod is mounted to the persistent volume by a **[persistent volume claim](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)**.
-Kubernetes provides a way for storage infrastructure providers to plug in drivers (also called "Addons") that extend Kubernetes. Storage addons must comply with the **[Container Storage Interface standard](https://kubernetes.io/blog/2019/01/15/container-storage-interface-ga/)**. There are dozens of addons that can be found in this non-definitive **[list of CSI drivers](https://kubernetes-csi.github.io/docs/drivers.html)**. Which CSI driver you use will depend on factors such as whether you are running in a cloud-hosted, managed Kubernetes service or which OEM provider you are using for your hardware.
+Kubernetes provides a way for storage infrastructure providers to plug in drivers (also called "Addons") that extend Kubernetes. Storage addons must comply with the **[Container Storage Interface standard](https://kubernetes.io/blog/2019/01/15/container-storage-interface-ga/)**. There are dozens of addons that can be found in this non-definitive **[list of CSI drivers](https://kubernetes-csi.github.io/docs/drivers.html)**. The specific CSI driver you use depends on factors such as whether you're running in a cloud-hosted, managed Kubernetes service or which OEM provider you use for your hardware.
-You can view which storage classes are configured in your Kubernetes cluster by running this command:
+To view the storage classes configured in your Kubernetes cluster, run this command:
```console kubectl get storageclass
There are generally two types of storage:
Depending on the configuration of your NFS server and storage class provisioner, you may need to set the `supplementalGroups` in the pod configurations for database instances, and you may need to change the NFS server configuration to use the group IDs passed in by the client (as opposed to looking group IDs up on the server using the passed-in user ID). Consult your NFS administrator to determine if this is the case.
-The `supplementalGroups` property takes an array of values and can be set as part of the Azure Arc data controller deployment and will be used by any database instances configured by the Azure Arc data controller.
+The `supplementalGroups` property takes an array of values you can set at deployment. Azure Arc data controller applies these to any database instances it creates.
To set this property, run the following command:
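A sketch of the kind of command involved; the group ID `4446` is only an illustrative value, so substitute the group your NFS server expects:

```azurecli
# Add a supplemental group ID to the data controller configuration file
az arcdata dc config add --path ./control.json --json-values 'spec.security.supplementalGroups="4446"'
```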
Some services in Azure Arc for data services depend upon being configured to use
|**Controller SQL instance**|`<namespace>/logs-controldb`, `<namespace>/data-controldb`|
|**Controller API service**|`<namespace>/data-controller`|
-At the time the data controller is provisioned, the storage class to be used for each of these persistent volumes is specified by either passing the --storage-class | -sc parameter to the `az arcdata dc create` command or by setting the storage classes in the control.json deployment template file that is used. If you are using the Azure portal to create the data controller in the directly connected mode, the deployment template that you choose will either have the storage class predefined in the template or if you select a template which does not have a predefined storage class then you will be prompted for one. If you use a custom deployment template, then you can specify the storage class.
+At the time the data controller is provisioned, the storage class to be used for each of these persistent volumes is specified by either passing the `--storage-class` or `-sc` parameter to the `az arcdata dc create` command or by setting the storage classes in the control.json deployment template file that is used. If you're using the Azure portal to create the data controller in the directly connected mode, the deployment template that you choose either has the storage class predefined in the template or you can select a template that does not have a predefined storage class. If your template does not define a storage class, the portal prompts you for one. If you use a custom deployment template, then you can specify the storage class.
The deployment templates that are provided out of the box have a default storage class specified that is appropriate for the target environment, but it can be overridden during deployment. See the detailed steps to [create custom configuration templates](create-custom-configuration-template.md) to change the storage class configuration for the data controller pods at deployment time.
-If you set the storage class using the --storage-class | -sc parameter the storage class will be used for both log and data storage classes. If you set the storage classes in the deployment template file, you can specify different storage classes for logs and data.
+If you set the storage class using the `--storage-class` or `-sc` parameter, that storage class is used for both log and data storage classes. If you set the storage classes in the deployment template file, you can specify different storage classes for logs and data.
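For instance, a single storage class for both data and logs could be set at creation time, as in this sketch that assumes an AKS cluster where `managed-premium` is available:

```azurecli
# Use managed-premium for both the data and logs persistent volumes
az arcdata dc create --profile-name azure-arc-aks-premium-storage --storage-class managed-premium --k8s-namespace <namespace> --use-k8s --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
```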
Important factors to consider when choosing a storage class for the data controller pods:
Important factors to consider when choosing a storage class for the data control
- Changing the storage class post deployment is difficult, not documented, and not supported. Be sure to choose the storage class correctly at deployment time. > [!NOTE]
-> If no storage class is specified, the default storage class will be used. There can be only one default storage class per Kubernetes cluster. You can [change the default storage class](https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/).
+> If no storage class is specified, the default storage class is used. There can be only one default storage class per Kubernetes cluster. You can [change the default storage class](https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/).
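If you do want a different default, the standard Kubernetes approach is to move the `is-default-class` annotation; a sketch with an assumed storage class name:

```console
# Mark an existing storage class (assumed here to be managed-premium) as the default
kubectl patch storageclass managed-premium -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```

Remember to set the same annotation to `false` on the previous default, because Kubernetes expects only one default storage class per cluster.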
### Database instance storage configuration
-Each database instance has data, logs, and backup persistent volumes. The storage classes for these persistent volumes can be specified at deployment time. If no storage class is specified the default storage class will be used.
+Each database instance has data, logs, and backup persistent volumes. The storage classes for these persistent volumes can be specified at deployment time. If no storage class is specified, the default storage class is used.
-When creating an instance using either `az sql mi-arc create` or `az postgres server-arc create`, there are four parameters that can be used to set the storage classes:
+When you create an instance using either `az sql mi-arc create` or `az postgres server-arc create`, there are four parameters that you can use to set the storage classes:
|Parameter name, short name|Used for|
|---|---|
The table below lists the paths inside the PostgreSQL instance container that is
|`--storage-class-data`, `-d`|/var/opt/postgresql|Contains data and log directories for the postgres installation|
|`--storage-class-logs`, `-g`|/var/log|Contains directories that store console output (stderr, stdout), other logging information of processes inside the container|
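As an illustration, the equivalent parameters on `az sql mi-arc create` could pin each volume type to its own storage class. This is a sketch with assumed storage class names; note that the backups class must be RWX capable, as described below:

```azurecli
# Create a SQL managed instance with an explicit storage class per volume type (assumed class names)
az sql mi-arc create --name sqlmi1 --k8s-namespace arc --use-k8s --storage-class-data managed-premium --storage-class-logs managed-premium --storage-class-backups azurefile
```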
-Each database instance will have a separate persistent volume for data files, logs, and backups. This means that there will be separation of the I/O for each of these types of files subject to how the volume provisioner will provision storage. Each database instance has its own persistent volume claims and persistent volumes.
+Each database instance has a separate persistent volume for data files, logs, and backups. This means that there is separation of the I/O for each of these types of files subject to how the volume provisioner provisions storage. Each database instance has its own persistent volume claims and persistent volumes.
-If there are multiple databases on a given database instance, all of the databases will use the same persistent volume claim, persistent volume, and storage class. All backups - both differential log backups and full backups will use the same persistent volume claim and persistent volume. The persistent volume claims for the database instance pods are shown below:
+If there are multiple databases on a given database instance, all of the databases use the same persistent volume claim, persistent volume, and storage class. All backups, both differential log backups and full backups, use the same persistent volume claim and persistent volume. The persistent volume claims for the database instance pods are shown below:
|**Instance**|**Persistent Volume Claims**|
|---|---|
Important factors to consider when choosing a storage class for the database ins
- Starting with the February 2022 release of Azure Arc data services, you need to specify a **ReadWriteMany** (RWX) capable storage class for backups. Learn more about [access modes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes). If no storage class is specified for backups, the default storage class in Kubernetes is used and if this is not RWX capable, an Azure SQL managed instance deployment may not succeed.
- Database instances can be deployed in either a single pod pattern or a multiple pod pattern. An example of a single pod pattern is a General Purpose pricing tier Azure SQL managed instance. An example of a multiple pod pattern is a highly available Business Critical pricing tier Azure SQL managed instance. Database instances deployed with the single pod pattern **must** use a remote, shared storage class in order to ensure data durability, so that if a pod or node dies, the pod can reconnect to the persistent volume when it is brought back up. In contrast, a highly available Azure SQL managed instance uses Always On Availability Groups to replicate the data from one instance to another either synchronously or asynchronously. Especially in the case where the data is replicated synchronously, there are always multiple copies of the data - typically three copies. Because of this, it is possible to use local storage or remote, shared storage classes for data and log files. If utilizing local storage, the data is still preserved even in the case of a failed pod, node, or storage hardware because there are multiple copies of the data. Given this flexibility, you might choose to use local storage for better performance.
- Database performance is largely a function of the I/O throughput of a given storage device. If your database is heavy on reads or heavy on writes, then you should choose a storage class with hardware designed for that type of workload. For example, if your database is mostly used for writes, you might choose local storage with RAID 0. If your database is mostly used for reads of a small amount of "hot data", but there is a large overall storage volume of cold data, then you might choose a SAN device capable of tiered storage. Choosing the right storage class is not any different than choosing the type of storage you would use for any database.
+- If you're using a local storage volume provisioner, ensure that the local volumes that are provisioned for data, logs, and backups are each landing on different underlying storage devices to avoid contention on disk I/O. The OS should also be on a volume that is mounted to a separate disk(s). This is essentially the same guidance as would be followed for a database instance on physical hardware.
+- Because all databases on a given instance share a persistent volume claim and persistent volume, be sure not to colocate busy database instances on the same database instance. If possible, separate busy databases on to their own database instances to avoid I/O contention. Further, use node label targeting to land database instances onto separate nodes so as to distribute overall I/O traffic across multiple nodes. If you're using virtualization, be sure to consider distributing I/O traffic not just at the node level but also the combined I/O activity happening by all the node VMs on a given physical host.
## Estimating storage requirements Every pod that contains stateful data uses at least two persistent volumes - one persistent volume for data and another persistent volume for logs. The table below lists the number of persistent volumes required for a single Data Controller, Azure SQL Managed instance, Azure Database for PostgreSQL instance and Azure PostgreSQL HyperScale instance:
Every pod that contains stateful data uses at least two persistent volumes - one
|Azure SQL Managed Instance|1|2|
|Azure PostgreSQL|1|2|
-The table below shows the total number of persistent volume required for a sample deployment:
+The table below shows the total number of persistent volumes required for a sample deployment:
|Resource Type|Number of instances|Required number of persistent volumes|
|---|---|---|
This calculation can be used to plan the storage for your Kubernetes cluster bas
### On-premises and edge sites
-Microsoft and its OEM, OS, and Kubernetes partners have a validation program for Azure Arc data services. This program will provide customers comparable test results from a certification testing toolkit. The tests will evaluate feature compatibility, stress testing results, and performance and scalability. Each of these test results will indicate the OS used, Kubernetes distribution used, HW used, the CSI add-on used, and the storage classes used. This will help customers choose the best storage class, OS, Kubernetes distribution, and hardware for their requirements. More information on this program and test results can be found [here](validation-program.md).
+Microsoft and its OEM, OS, and Kubernetes partners have a validation program for Azure Arc data services. This program provides comparable test results from a certification testing toolkit. The tests evaluate feature compatibility, stress testing results, and performance and scalability. The test results indicate the OS used, Kubernetes distribution used, HW used, the CSI add-on used, and the storage classes used. This helps customers choose the best storage class, OS, Kubernetes distribution, and hardware for their requirements. More information on this program and test results can be found [here](validation-program.md).
#### Public cloud, managed Kubernetes services
For public cloud-based, managed Kubernetes services we can make the following re
|Public cloud service|Recommendation|
|---|---|
-|**Azure Kubernetes Service (AKS)**|Azure Kubernetes Service (AKS) has two types of storage - Azure Files and Azure Managed Disks. Each type of storage has two pricing/performance tiers - standard (HDD) and premium (SSD). Thus, the four storage classes provided in AKS are `azurefile` (Azure Files standard tier), `azurefile-premium` (Azure Files premium tier), `default` (Azure Disks standard tier), and `managed-premium` (Azure Disks premium tier). The default storage class is `default` (Azure Disks standard tier). There are substantial **[pricing differences](https://azure.microsoft.com/pricing/details/storage/)** between the types and tiers which should be factored into your decision. For production workloads with high-performance requirements, we recommend using `managed-premium` for all storage classes. For dev/test workloads, proofs of concept, etc. where cost is a consideration, then `azurefile` is the least expensive option. All four of the options can be used for situations requiring remote, shared storage as they are all network-attached storage devices in Azure. Read more about [AKS Storage](../../aks/concepts-storage.md).|
+|**Azure Kubernetes Service (AKS)**|Azure Kubernetes Service (AKS) has two types of storage - Azure Files and Azure Managed Disks. Each type of storage has two pricing/performance tiers - standard (HDD) and premium (SSD). Thus, the four storage classes provided in AKS are `azurefile` (Azure Files standard tier), `azurefile-premium` (Azure Files premium tier), `default` (Azure Disks standard tier), and `managed-premium` (Azure Disks premium tier). The default storage class is `default` (Azure Disks standard tier). There are substantial **[pricing differences](https://azure.microsoft.com/pricing/details/storage/)** between the types and tiers that you should consider. For production workloads with high-performance requirements, we recommend using `managed-premium` for all storage classes. For dev/test workloads, proofs of concept, etc. where cost is a consideration, then `azurefile` is the least expensive option. All four of the options can be used for situations requiring remote, shared storage as they are all network-attached storage devices in Azure. Read more about [AKS Storage](../../aks/concepts-storage.md).|
|**AWS Elastic Kubernetes Service (EKS)**| Amazon's Elastic Kubernetes Service has one primary storage class - based on the [EBS CSI storage driver](https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html). This is recommended for production workloads. There is a newer [EFS CSI storage driver](https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html) that can be added to an EKS cluster. Although AWS says this storage driver is supported for production, we don't recommend using it because it is still in beta and subject to change. The EBS storage class is the default and is called `gp2`. Read more about [EKS Storage](https://docs.aws.amazon.com/eks/latest/userguide/storage-classes.html).|
-|**Google Kubernetes Engine (GKE)**|Google Kubernetes Engine (GKE) has just one storage class called `standard`, which is used for [GCE persistent disks](https://kubernetes.io/docs/concepts/storage/volumes/#gcepersistentdisk). Being the only one, it is also the default. Although there is a [local, static volume provisioner](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/local-ssd#run-local-volume-static-provisioner) for GKE that you can use with direct-attached SSDs, we don't recommend using it as it is not maintained or supported by Google. Read more about [GKE storage](https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes).
+|**Google Kubernetes Engine (GKE)**|Google Kubernetes Engine (GKE) has just one storage class called `standard`. This class is used for [GCE persistent disks](https://kubernetes.io/docs/concepts/storage/volumes/#gcepersistentdisk). Being the only one, it is also the default. Although there is a [local, static volume provisioner](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/local-ssd#run-local-volume-static-provisioner) for GKE that you can use with direct-attached SSDs, we don't recommend using it as it is not maintained or supported by Google. Read more about [GKE storage](https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes).
azure-arc Identity Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/identity-access-overview.md
For more information, see [Cluster connect access to Azure Arc-enabled Kubernete
## Azure RBAC
-[Azure role-based access control (RBAC)](/azure/role-based-access-control/overview) is an authorization system built on Azure Resource Manager and Azure Active Directory (Azure AD) that provides fine-grained access management of Azure resources.
+[Azure role-based access control (RBAC)](../../role-based-access-control/overview.md) is an authorization system built on Azure Resource Manager and Azure Active Directory (Azure AD) that provides fine-grained access management of Azure resources.
With Azure RBAC, role definitions outline the permissions to be applied. You assign these roles to users or groups via a role assignment for a particular scope. The scope can be across the entire subscription or limited to a resource group or to an individual resource such as a Kubernetes cluster.
For more information, see [Azure RBAC on Azure Arc-enabled Kubernetes](conceptua
## Next steps -- Learn about [access and identity options for Azure Kubernetes Service (AKS) clusters](/azure/aks/concepts-identity).
+- Learn about [access and identity options for Azure Kubernetes Service (AKS) clusters](../../aks/concepts-identity.md).
- Learn about [Cluster connect access to Azure Arc-enabled Kubernetes clusters](conceptual-cluster-connect.md). - Learn about [Azure RBAC on Azure Arc-enabled Kubernetes](conceptual-azure-rbac.md)
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
To deploy applications using GitOps with Flux v2, you need the following:
az upgrade ```
-* The Kubernetes command-line client, [kubectl](https://kubernetes.io/docs/user-guide/kubectl/). `kubectl` is already installed if you use Azure Cloud Shell.
+* The Kubernetes command-line client, [kubectl](https://kubernetes.io/docs/reference/kubectl/). `kubectl` is already installed if you use Azure Cloud Shell.
Install `kubectl` locally using the [`az aks install-cli`](/cli/azure/aks#az-aks-install-cli) command:
azure-arc Organize Inventory Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/organize-inventory-servers.md
Results can be easily visualized and exported to other reporting solutions. More
* [Azure Resource Graph sample queries for Azure Arc-enabled servers](resource-graph-samples.md)
-* [Use tags to organize your Azure resources and management hierarchy](/azure/azure-resource-manager/management/tag-resources?tabs=json)
+* [Use tags to organize your Azure resources and management hierarchy](../../azure-resource-manager/management/tag-resources.md?tabs=json)
azure-cache-for-redis Cache Event Grid Quickstart Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-event-grid-quickstart-cli.md
The deployment may take a few minutes to complete. After the deployment has succ
You should see the site with no messages currently displayed. ## Subscribe to your Azure Cache for Redis instance
azure-cache-for-redis Cache Event Grid Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-event-grid-quickstart-portal.md
Before subscribing to the events for the cache instance, let's create the endpoi
1. Select **Deploy to Azure** in GitHub README to deploy the solution to your subscription.
+ :::image type="content" source="media/cache-event-grid-portal/deploy-to-azure.png" alt-text="Deploy to Azure button.":::
1. On the **Custom deployment** page, do the following steps:
- 1. For **Resource group**, select the resource group that you created when creating the cache instance. It will be easier for you to clean up after you are done with the tutorial by deleting the resource group.
+ 1. For **Resource group**, select the resource group that you created when creating the cache instance. It will be easier for you to clean up after you're done with the tutorial by deleting the resource group.
2. For **Site Name**, enter a name for the web app. 3. For **Hosting plan name**, enter a name for the App Service plan to use for hosting the web app. 4. Select the check box for **I agree to the terms and conditions stated above**.
Before subscribing to the events for the cache instance, let's create the endpoi
| Setting | Suggested value | Description | | | - | -- |
- | **Subscription** | Drop down and select your subscription. | The subscription under which to create this web app. |
+ | **Subscription** | Drop down and select your subscription. | The subscription in which you want to create this web app. |
| **Resource group** | Drop down and select a resource group, or select **Create new** and enter a new resource group name. | By putting all your app resources in one resource group, you can easily manage or delete them together. |
- | **Site Name** | Enter a name for your web app. | This value cannot be empty. |
- | **Hosting plan name** | Enter a name for the App Service plan to use for hosting the web app. | This value cannot be empty. |
+ | **Site Name** | Enter a name for your web app. | This value can't be empty. |
+ | **Hosting plan name** | Enter a name for the App Service plan to use for hosting the web app. | This value can't be empty. |
1. Select Alerts (bell icon) in the portal, and then select **Go to resource group**.
Before subscribing to the events for the cache instance, let's create the endpoi
:::image type="content" source="media/cache-event-grid-portal/blank-event-grid-viewer.png" alt-text="Empty Event Grid Viewer site."::: + ## Subscribe to the Azure Cache for Redis instance
In this step, you'll subscribe to a topic to tell Event Grid which events you wa
1. In the portal, navigate to your cache instance that you created earlier. 1. On the **Azure Cache for Redis** page, select **Events** on the left menu.
-1. Select **Web Hook**. You are sending events to your viewer app using a web hook for the endpoint.
+1. Select **Web Hook**. You're sending events to your viewer app using a web hook for the endpoint.
:::image type="content" source="media/cache-event-grid-portal/event-grid-web-hook.png" alt-text="Azure portal Events page.":::
azure-functions Functions Add Output Binding Cosmos Db Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-cosmos-db-vs-code.md
A binding is added to the `bindings` array in your *function.json*, which should
# [v2](#tab/v2)
-Binding attributes are defined directly in the *function_app.py* file. You use the `cosmos_db_output` decorator to add an [Azure Cosmos DB output binding](/azure/azure-functions/functions-bindings-triggers-python#azure-cosmos-db-output-binding):
+Binding attributes are defined directly in the *function_app.py* file. You use the `cosmos_db_output` decorator to add an [Azure Cosmos DB output binding](./functions-bindings-triggers-python.md#azure-cosmos-db-output-binding):
```python @app.cosmos_db_output(arg_name="outputDocument", database_name="my-database",
You've updated your HTTP triggered function to write JSON documents to an Azure
+ [Examples of complete Function projects in Python](/samples/browse/?products=azure-functions&languages=python). + [Azure Functions Python developer guide](functions-reference-python.md)
azure-maps How To Create Data Registries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-create-data-registries.md
When you register a file in Azure Maps using the data registry API, an MD5 hash
[Get operation]: /rest/api/maps/data-registry/get-operation [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[storage account overview]: /azure/storage/common/storage-account-overview
-[create storage account]: /azure/storage/common/storage-account-create?tabs=azure-portal
-[managed identity]: /azure/active-directory/managed-identities-azure-resources/overview
+[storage account overview]: ../storage/common/storage-account-overview.md
+[create storage account]: ../storage/common/storage-account-create.md?tabs=azure-portal
+[managed identity]: ../active-directory/managed-identities-azure-resources/overview.md
[subscription key]: quick-demo-map-app.md#get-the-primary-key-for-your-account [Azure portal]: https://portal.azure.com/ [Visual Studio]: https://visualstudio.microsoft.com/downloads/
-[geographic scope]: geographic-scope.md
+[geographic scope]: geographic-scope.md
azure-maps Release Notes Map Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-map-control.md
description: Release notes for the Azure Maps Web SDK. Previously updated : 1/31/2023 Last updated : 3/10/2023
This document contains information about new features and other changes to the M
## v3 (preview)
+### [3.0.0-preview.4] (March 10, 2023)
+
+#### Installation (3.0.0-preview.4)
+
+The preview is available on [npm][3.0.0-preview.4] and CDN.
+
+- **NPM:** Refer to the instructions at [azure-maps-control@3.0.0-preview.4][3.0.0-preview.4]
+
+- **CDN:** Reference the following CSS and JavaScript in the `<head>` element of an HTML file:
+
+ ```html
+ <link href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3.0.0-preview.4/atlas.min.css" rel="stylesheet" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3.0.0-preview.4/atlas.min.js"></script>
+ ```
+
+#### New features (3.0.0-preview.4)
+
+- Extended map coverage in China, Japan, and Korea.
+
+- Preview of refreshed map styles (Road / Night / Hybrid / Gray Scale Dark / Gray Scale Light / Terra / High Contrast Dark / High Contrast Light).
+
+- More detail on road, building footprint, and trail coverage.
+
+- Wider zoom level ranges (1~21) for the Terra style.
+
+- Greater details on public transit including ferries, metros, and bus stops.
+
+- Additional information about the altitude of mountains and the location of waterfalls.
+
+#### Changes (3.0.0-preview.4)
+
+- Traffic data now supports relative mode only.
+
+- Deprecated `showBuildingModels` in [StyleOptions].
+
+- Changed the default `minZoom` from -2 to 1.
+
+#### Bug fixes (3.0.0-preview.4)
+
+- Cleaned up various memory leaks in [Map.dispose()].
+
+- Improved style picker's tab navigation for accessibility in list layout.
+
+- Optimized style switching by avoiding deep cloning objects.
+
+- Fixed an exception that occurred in [SourceManager] when style switching with sources that weren't vector or raster.
+
+- **\[BREAKING\]** Previously, `sourceadded` events were emitted only if new sources were added to the style. Now `sourceremoved` / `sourceadded` events are emitted whenever the new source and the original source in the current style aren't equal, even if they have the same source ID.
+ ### [3.0.0-preview.3] (February 2, 2023) #### Installation (3.0.0-preview.3)
The preview is available on [npm][3.0.0-preview.3] and CDN.
#### Bug fixes (3.0.0-preview.3) -- Fixed issue in [language mapping], now `zh-Hant-TW` will no longer revert back to `en-US`.
+- Fixed issue in [language mapping], now `zh-Hant-TW` no longer reverts back to `en-US`.
- Fixed the inability to switch between [user regions (view)].
This update is the first preview of the upcoming 3.0.0 release. The underlying [
## v2 (latest)
+### [2.2.4]
+
+#### Bug fixes (2.2.4)
+
+- Cleaned up various memory leaks in [Map.dispose()].
+
+- Improved style picker's tab navigation for accessibility in list layout.
+ ### [2.2.3] #### New features (2.2.3)
This update is the first preview of the upcoming 3.0.0 release. The underlying [
#### Bug fixes (2.2.3) -- Fixed issue in [language mapping], now `zh-Hant-TW` will no longer revert back to `en-US`.
+- Fixed issue in [language mapping], now `zh-Hant-TW` no longer reverts back to `en-US`.
- Fixed the inability to switch between [user regions (view)].
Stay up to date on Azure Maps:
> [!div class="nextstepaction"] > [Azure Maps Blog]
+[3.0.0-preview.4]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.4
[3.0.0-preview.3]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.3 [3.0.0-preview.2]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.2 [3.0.0-preview.1]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.1
+[2.2.4]: https://www.npmjs.com/package/azure-maps-control/v/2.2.4
[2.2.3]: https://www.npmjs.com/package/azure-maps-control/v/2.2.3 [2.2.2]: https://www.npmjs.com/package/azure-maps-control/v/2.2.2 [Azure AD]: ../active-directory/develop/v2-overview.md
Stay up to date on Azure Maps:
[@azure/msal-browser]: https://github.com/AzureAD/microsoft-authentication-library-for-js [migration guide]: ../active-directory/develop/msal-compare-msal-js-and-adal-js.md [CameraBoundsOptions]: /javascript/api/azure-maps-control/atlas.cameraboundsoptions?view=azure-maps-typescript-latest
+[Map.dispose()]: /javascript/api/azure-maps-control/atlas.map?view=azure-maps-typescript-latest#azure-maps-control-atlas-map-dispose
[Map.setCamera(options)]: /javascript/api/azure-maps-control/atlas.map?view=azure-maps-typescript-latest#azure-maps-control-atlas-map-setcamera [language mapping]: https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/azure-maps/supported-languages.md#azure-maps-supported-languages [user regions (view)]: /javascript/api/azure-maps-control/atlas.styleoptions?view=azure-maps-typescript-latest#azure-maps-control-atlas-styleoptions-view [ImageSpriteManager.add()]: /javascript/api/azure-maps-control/atlas.imagespritemanager?view=azure-maps-typescript-latest#azure-maps-control-atlas-imagespritemanager-add [azure-maps-control]: https://www.npmjs.com/package/azure-maps-control [maplibre-gl]: https://www.npmjs.com/package/maplibre-gl
+[SourceManager]: /javascript/api/azure-maps-control/atlas.sourcemanager
[StyleOptions]: /javascript/api/azure-maps-control/atlas.styleoptions [TrafficControlOptions]: /javascript/api/azure-maps-control/atlas.trafficcontroloptions [Azure Maps Samples]: https://samples.azuremaps.com
azure-monitor Data Collection Text Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md
To complete this procedure, you need:
- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace. - A VM, Virtual Machine Scale Set, or Arc-enabled on-premises server that writes logs to a text file.
- Text file requirements:
- - Store on the local drive of the machine on which Azure Monitor Agent is running.
- - Delineate with an end of line.
- - Use ASCII or UTF-8 encoding. Other formats such as UTF-16 aren't supported.
- - Do not allow circular logging, log rotation where the file is overwritten with new entries, or renaming where a file is moved and a new file with the same name is opened.
+ Text file requirements and best practices (a minimal sketch follows this list):
+ - Do store files on the local drive of the machine on which Azure Monitor Agent is running.
+ - Do delineate the end of a record with an end of line.
+ - Do use ASCII or UTF-8 encoding. Other formats such as UTF-16 aren't supported.
+ - Do create a new log file every day so that you can remove old files easily.
+ - Do clean up old log files in the monitored directory. Azure Monitor Agent doesn't delete old log files.
+ - Don't overwrite an existing file with new data. Only append new data to the file.
+ - Don't rename a file and then open a new file with the same name for logging.
+ - Don't rename or copy large log files into the monitored directory.
+ - Don't rename files in the monitored directory to a new name that's also in the monitored directory. This can cause incorrect ingestion behavior.
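+
+ A minimal Node.js sketch of these practices, using a hypothetical app and log directory; it appends newline-delimited UTF-8 records to a new, date-stamped file each day:
+
+ ```js
+ // Hypothetical logger following the do/don't guidance above.
+ const fs = require('fs');
+ const path = require('path');
+
+ function writeLogLine(message) {
+   // A new file per day (for example, app-2023-03-16.log) keeps cleanup simple.
+   const today = new Date().toISOString().slice(0, 10);
+   const file = path.join('/var/log/myapp', `app-${today}.log`);
+   // Append only -- never overwrite, rename, or rotate the file in place.
+   fs.appendFileSync(file, `${new Date().toISOString()} ${message}\n`, 'utf8');
+ }
+
+ writeLogLine('Service started');
+ ```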
+ ## Create a custom table
azure-monitor Javascript Sdk Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk-advanced.md
+
+ Title: Microsoft Azure Monitor Application Insights JavaScript SDK advanced topics
+description: Microsoft Azure Monitor Application Insights JavaScript SDK advanced topics.
+ Last updated : 02/28/2023
+ms.devlang: javascript
++++
+# Microsoft Azure Monitor Application Insights JavaScript SDK advanced topics
+
+The Azure Application Insights JavaScript SDK provides advanced features for tracking, monitoring, and debugging your web applications.
+
+> [!div class="checklist"]
+> - [npm setup](#npm-setup)
+> - [Cookie configuration and management](#cookies)
+> - [Source map un-minify support](#source-map)
+> - [Tree shaking optimized code](#tree-shaking)
+
+## npm setup
+
+The npm setup installs the JavaScript SDK as a dependency to your project and enables IntelliSense.
+
+This option is only needed for developers who require custom events and more advanced configuration.
+
+### Getting started with npm
+
+Install via npm.
+
+```sh
+npm i --save @microsoft/applicationinsights-web
+```
+
+> [!Note]
+> *Typings are included with this package*, so you do *not* need to install a separate typings package.
+
+```js
+import { ApplicationInsights } from '@microsoft/applicationinsights-web'
+
+const appInsights = new ApplicationInsights({ config: {
+ connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE'
+ /* ...Other Configuration Options... */
+} });
+appInsights.loadAppInsights();
+appInsights.trackPageView();
+```
+Replace the placeholder 'YOUR_CONNECTION_STRING_GOES_HERE' with your actual connection string found in the Azure portal.
+
+1. Navigate to the **Overview** pane of your Application Insights resource.
+1. Locate the **Connection String**.
+1. Select the button to copy the connection string to the clipboard.
++
+### Configuration
+
+These configuration fields are optional and default to false unless otherwise stated.
+
+| Name | Type | Default | Description |
+|---|---|---|---|
+| accountId | string | null | An optional account ID, if your app groups users into accounts. No spaces, commas, semicolons, equals, or vertical bars |
+| sessionRenewalMs | numeric | 1800000 | A session is logged if the user is inactive for this amount of time in milliseconds. Default is 30 minutes |
+| sessionExpirationMs | numeric | 86400000 | A session is logged if it has continued for this amount of time in milliseconds. Default is 24 hours |
+| maxBatchSizeInBytes | numeric | 10000 | Max size of telemetry batch. If a batch exceeds this limit, it's immediately sent and a new batch is started |
+| maxBatchInterval | numeric | 15000 | How long to batch telemetry for before sending (milliseconds) |
+| disableExceptionTracking | boolean | false | If true, exceptions aren't autocollected. Default is false. |
+| disableTelemetry | boolean | false | If true, telemetry isn't collected or sent. Default is false. |
+| enableDebug | boolean | false | If true, **internal** debugging data is thrown as an exception **instead** of being logged, regardless of SDK logging settings. Default is false. <br>***Note:*** Enabling this setting results in dropped telemetry whenever an internal error occurs. It can be useful for quickly identifying issues with your configuration or usage of the SDK. If you don't want to lose telemetry while debugging, consider using `loggingLevelConsole` or `loggingLevelTelemetry` instead of `enableDebug`. |
+| loggingLevelConsole | numeric | 0 | Logs **internal** Application Insights errors to console. <br>0: off, <br>1: Critical errors only, <br>2: Everything (errors & warnings) |
+| loggingLevelTelemetry | numeric | 1 | Sends **internal** Application Insights errors as telemetry. <br>0: off, <br>1: Critical errors only, <br>2: Everything (errors & warnings) |
+| diagnosticLogInterval | numeric | 10000 | (internal) Polling interval (in ms) for internal logging queue |
+| samplingPercentage | numeric | 100 | Percentage of events that is sent. Default is 100, meaning all events are sent. Set it if you wish to preserve your data cap for large-scale applications. |
+| autoTrackPageVisitTime | boolean | false | If true, on a pageview, the _previous_ instrumented page's view time is tracked and sent as telemetry and a new timer is started for the current pageview. It's sent as a custom metric named `PageVisitTime` in `milliseconds` and is calculated via the Date [now()](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Date/now) function (if available) and falls back to (new Date()).[getTime()](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Date/getTime) if now() is unavailable (IE8 or less). Default is false. |
+| disableAjaxTracking | boolean | false | If true, Ajax calls aren't autocollected. Default is false. |
+| disableFetchTracking | boolean | false | The default setting for `disableFetchTracking` is `false`, meaning it's enabled. However, in versions prior to 2.8.10, it was disabled by default. When set to `true`, Fetch requests aren't automatically collected. The default setting changed from `true` to `false` in version 2.8.0. |
+| excludeRequestFromAutoTrackingPatterns | string[] \| RegExp[] | undefined | Provides a way to exclude specific routes from automatic tracking for XMLHttpRequest or Fetch requests. If defined, auto tracking is turned off for any Ajax or fetch request whose URL matches one of the regex patterns. Default is undefined. |
+| addRequestContext | (requestContext: IRequestContext) => {[key: string]: any} | undefined | Provides a way to enrich dependency logs with context at the beginning of an API call. Default is undefined. You need to check whether `xhr` exists if you configure `xhr`-related context, and whether `fetch request` and `fetch response` exist if you configure `fetch`-related context. Otherwise you may not get the data you need. |
+| overridePageViewDuration | boolean | false | If true, default behavior of trackPageView is changed to record end of page view duration interval when trackPageView is called. If false and no custom duration is provided to trackPageView, the page view performance is calculated using the navigation timing API. Default is false. |
+| maxAjaxCallsPerView | numeric | 500 | Default 500 - controls how many Ajax calls are monitored per page view. Set to -1 to monitor all (unlimited) Ajax calls on the page. |
+| disableDataLossAnalysis | boolean | true | If false, internal telemetry sender buffers are checked at startup for items not yet sent. |
+| disableCorrelationHeaders | boolean | false | If false, the SDK adds two headers ('Request-Id' and 'Request-Context') to all dependency requests to correlate them with corresponding requests on the server side. Default is false. |
+| correlationHeaderExcludedDomains | string[] | undefined | Disable correlation headers for specific domains |
+| correlationHeaderExcludePatterns | regex[] | undefined | Disable correlation headers using regular expressions |
+| correlationHeaderDomains | string[] | undefined | Enable correlation headers for specific domains |
+| disableFlushOnBeforeUnload | boolean | false | Default false. If true, flush method isn't called when onBeforeUnload event triggers |
+| enableSessionStorageBuffer | boolean | true | Default true. If true, the buffer with all unsent telemetry is stored in session storage. The buffer is restored on page load |
+| cookieCfg | [ICookieCfgConfig](javascript-sdk-advanced.md#cookies)<br>[Optional]<br>(Since 2.6.0) | undefined | Defaults to cookie usage enabled see [ICookieCfgConfig](javascript-sdk-advanced.md#cookies) settings for full defaults. |
+| disableCookiesUsage | alias for [`cookieCfg.enabled`](javascript-sdk-advanced.md#cookies)<br>[Optional] | false | Default false. A boolean that indicates whether to disable the use of cookies by the SDK. If true, the SDK doesn't store or read any data from cookies.<br>(Since v2.6.0) If `cookieCfg.enabled` is defined it takes precedence. Cookie usage can be re-enabled after initialization via the core.getCookieMgr().setEnabled(true). |
+| cookieDomain | alias for [`cookieCfg.domain`](javascript-sdk-advanced.md#cookies)<br>[Optional] | null | Custom cookie domain. It's helpful if you want to share Application Insights cookies across subdomains.<br>(Since v2.6.0) If `cookieCfg.domain` is defined it takes precedence over this value. |
+| cookiePath | alias for [`cookieCfg.path`](javascript-sdk-advanced.md#cookies)<br>[Optional]<br>(Since 2.6.0) | null | Custom cookie path. It's helpful if you want to share Application Insights cookies behind an application gateway.<br>If `cookieCfg.path` is defined, it takes precedence. |
+| isRetryDisabled | boolean | false | Default false. If false, retry on 206 (partial success), 408 (timeout), 429 (too many requests), 500 (internal server error), 503 (service unavailable), and 0 (offline, only if detected) |
+| isStorageUseDisabled | boolean | false | If true, the SDK doesn't store or read any data from local and session storage. Default is false. |
+| isBeaconApiDisabled | boolean | true | If false, the SDK sends all telemetry using the [Beacon API](https://www.w3.org/TR/beacon) |
+| disableXhr | boolean | false | Don't use XMLHttpRequest or XDomainRequest (for IE < 9) by default; instead, attempt to use fetch() or sendBeacon. If no other transport is available, XMLHttpRequest is used |
+| onunloadDisableBeacon | boolean | false | Default false. When the tab is closed, the SDK sends all remaining telemetry using the [Beacon API](https://www.w3.org/TR/beacon) |
+| onunloadDisableFetch | boolean | false | If fetch keepalive is supported, don't use it for sending events during unload; it may still fall back to fetch() without keepalive |
+| sdkExtension | string | null | Sets the sdk extension name. Only alphabetic characters are allowed. The extension name is added as a prefix to the 'ai.internal.sdkVersion' tag (for example, 'ext_javascript:2.0.0'). Default is null. |
+| isBrowserLinkTrackingEnabled | boolean | false | Default is false. If true, the SDK tracks all [Browser Link](/aspnet/core/client-side/using-browserlink) requests. |
+| appId | string | null | AppId is used for the correlation between AJAX dependencies happening on the client-side with the server-side requests. When Beacon API is enabled, it can't be used automatically, but can be set manually in the configuration. Default is null |
+| enableCorsCorrelation | boolean | false | If true, the SDK adds two headers ('Request-Id' and 'Request-Context') to all CORS requests to correlate outgoing AJAX dependencies with corresponding requests on the server side. Default is false |
+| namePrefix | string | undefined | An optional value that is used as name postfix for localStorage and session cookie name.
+| sessionCookiePostfix | string | undefined | An optional value that is used as name postfix for session cookie name. If undefined, namePrefix is used as name postfix for session cookie name.
+| userCookiePostfix | string | undefined | An optional value that is used as name postfix for user cookie name. If undefined, no postfix is added on user cookie name.
+| enableAutoRouteTracking | boolean | false | Automatically track route changes in Single Page Applications (SPA). If true, each route change sends a new Pageview to Application Insights. Hash route changes (`example.com/foo#bar`) are also recorded as new page views.
+| enableRequestHeaderTracking | boolean | false | If true, AJAX & Fetch request headers are tracked. Default is false. If ignoreHeaders isn't configured, Authorization and X-API-Key headers aren't logged.
+| enableResponseHeaderTracking | boolean | false | If true, AJAX & Fetch response headers are tracked. Default is false. If ignoreHeaders isn't configured, the WWW-Authenticate header isn't logged.
+| ignoreHeaders | string[] | ["Authorization", "X-API-Key", "WWW-Authenticate"] | AJAX & Fetch request and response headers to be ignored in log data. To override or discard the default, add an array with all headers to be excluded or an empty array to the configuration.
+| enableAjaxErrorStatusText | boolean | false | Default false. If true, include response error data text boolean in dependency event on failed AJAX requests. |
+| enableAjaxPerfTracking | boolean | false | Default false. Flag to enable looking up and including extra browser window.performance timings in the reported Ajax (XHR and fetch) reported metrics.
+| maxAjaxPerfLookupAttempts | numeric | 3 | Defaults to 3. The maximum number of times to look for the window.performance timings (if available). This is required because not all browsers populate window.performance before reporting the end of an XHR request; for fetch requests, timings are added after the request completes.
+| ajaxPerfLookupDelay | numeric | 25 | Defaults to 25 ms. The amount of time to wait before reattempting to find the windows.performance timings for an Ajax request, time is in milliseconds and is passed directly to setTimeout().
+| distributedTracingMode | numeric or `DistributedTracingModes` | `DistributedTracingModes.AI_AND_W3C` | Sets the distributed tracing mode. If AI_AND_W3C mode or W3C mode is set, W3C trace context headers (traceparent/tracestate) are generated and included in all outgoing requests. AI_AND_W3C is provided for back-compatibility with any legacy Application Insights instrumented services.
+| enableUnhandledPromiseRejectionTracking | boolean | false | If true, unhandled promise rejections are autocollected as a JavaScript error. When disableExceptionTracking is true (don't track exceptions), the config value is ignored and unhandled promise rejections aren't reported.
+| disableInstrumentationKeyValidation | boolean | false | If true, instrumentation key validation check is bypassed. Default value is false.
+| enablePerfMgr | boolean | false | [Optional] When enabled (true) it creates local perfEvents for code that has been instrumented to emit perfEvents (via the doPerf() helper). It can be used to identify performance issues within the SDK based on your usage or optionally within your own instrumented code.
+| perfEvtsSendAll | boolean | false | [Optional] When _enablePerfMgr_ is enabled and the [IPerfManager](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/IPerfManager.ts) fires a [INotificationManager](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/INotificationManager.ts).perfEvent() this flag determines whether an event is fired (and sent to all listeners) for all events (true) or only for 'parent' events (false &lt;default&gt;).<br />A parent [IPerfEvent](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/IPerfEvent.ts) is an event where no other IPerfEvent is still running at the point of the event being created and its _parent_ property isn't null or undefined. Since v2.5.7
+| createPerfMgr | (core: IAppInsightsCore, notification
+| idLength | numeric | 22 | [Optional] Identifies the default length used to generate new random session and user IDs. Defaults to 22; the previous default value was 5 (v2.5.8 or less). If you need to keep the previous maximum length, set the value to 5.
+| customHeaders | `[{header: string, value: string}]` | undefined | [Optional] The ability for the user to provide extra headers when using a custom endpoint. customHeaders aren't added on browser shutdown moment when beacon sender is used. And adding custom headers isn't supported on IE9 or earlier.
+| convertUndefined | `any` | undefined | [Optional] Provide user an option to convert undefined field to user defined value.
+| eventsLimitInMem | number | 10000 | [Optional] The number of events that can be kept in memory before the SDK starts to drop events when not using Session Storage (the default).
+| disableIkeyDeprecationMessage | boolean | true | [Optional] Disable instrumentation Key deprecation error message. If true, error messages are NOT sent.
+
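+As a quick illustration, the following sketch passes a few of the options from this table during initialization; the values shown are illustrative, not recommendations:
+
+```js
+import { ApplicationInsights } from '@microsoft/applicationinsights-web'
+
+// Illustrative configuration; every option below is documented in the preceding table.
+const appInsights = new ApplicationInsights({ config: {
+    connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE',
+    samplingPercentage: 50,          // send half of all events to preserve the data cap
+    enableAutoRouteTracking: true,   // record SPA route changes as page views
+    maxBatchInterval: 15000,         // batch telemetry for up to 15 seconds before sending
+    disableFetchTracking: false      // keep automatic Fetch dependency collection on
+}});
+appInsights.loadAppInsights();
+```
+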
+## Cookies
+
+The Azure Application Insights JavaScript SDK provides instance-based cookie management that allows you to control the use of cookies.
+
+You can control cookies by enabling or disabling them, setting custom domains and paths, and customizing the functions for managing cookies.
+
+### Cookie configuration
+
+ICookieMgrConfig is a cookie configuration for instance-based cookie management added in 2.6.0. The options provided allow you to enable or disable the use of cookies by the SDK. You can also set custom cookie domains and paths and customize the functions for fetching, setting, and deleting cookies.
+
+The ICookieMgrConfig options are defined in the following table.
+
+| Name | Type | Default | Description |
+|---|---|---|---|
+| enabled | boolean | true | The current instance of the SDK uses this boolean to indicate whether the use of cookies is enabled. If false, the instance of the SDK initialized by this configuration doesn't store or read any data from cookies. |
+| domain | string | null | Custom cookie domain. It's helpful if you want to share Application Insights cookies across subdomains. If not provided uses the value from root `cookieDomain` value. |
+| path | string | / | Specifies the path to use for the cookie, if not provided it uses any value from the root `cookiePath` value. |
+| ignoreCookies | string[] | undefined | Specify the cookie name(s) to be ignored, it causes any matching cookie name to never be read or written. They may still be explicitly purged or deleted. You don't need to repeat the name in the `blockedCookies` configuration. (since v2.8.8)
+| blockedCookies | string[] | undefined | Specify the cookie name(s) to never write. It prevents creating or updating any cookie name, but they can still be read unless also included in the ignoreCookies. They may still be purged or deleted explicitly. If not provided, it defaults to the same list in ignoreCookies. (Since v2.8.8)
+| getCookie | `(name: string) => string` | null | Function to fetch the named cookie value, if not provided it uses the internal cookie parsing / caching. |
+| setCookie | `(name: string, value: string) => void` | null | Function to set the named cookie with the specified value, only called when adding or updating a cookie. |
+| delCookie | `(name: string, value: string) => void` | null | Function to delete the named cookie with the specified value, separated from setCookie to avoid the need to parse the value to determine whether the cookie is being added or removed. If not provided it uses the internal cookie parsing / caching. |
+
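+The following sketch shows one way these options might be combined at initialization; the domain and cookie names are illustrative assumptions:
+
+```js
+import { ApplicationInsights } from '@microsoft/applicationinsights-web'
+
+const appInsights = new ApplicationInsights({ config: {
+    connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE',
+    cookieCfg: {
+        enabled: true,               // let the SDK read and write cookies
+        domain: 'example.com',       // share Application Insights cookies across subdomains
+        path: '/',                   // cookie path (the default)
+        blockedCookies: ['ai_user']  // never write this cookie; it can still be read
+    }
+}});
+appInsights.loadAppInsights();
+```
+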
+### Cookie management
+
+Starting from version 2.6.0, the Azure Application Insights JavaScript SDK provides instance-based cookie management that can be disabled and re-enabled after initialization.
+
+If you disabled cookies during initialization using the `disableCookiesUsage` or `cookieCfg.enabled` configurations, you can re-enable them using the `setEnabled` function of the ICookieMgr object.
+
+The instance-based cookie management replaces the previous CoreUtils global functions of `disableCookies()`, `setCookie()`, `getCookie()`, and `deleteCookie()`.
+
+To take advantage of the tree-shaking enhancements introduced in version 2.6.0, it's recommended that you no longer use the global functions.
+
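+For example, if cookies were disabled at initialization, you can turn them back on through the instance's cookie manager (a short sketch using the functions described above):
+
+```js
+// Re-enable cookie usage after initializing with disableCookiesUsage: true.
+appInsights.getCookieMgr().setEnabled(true);
+
+// Confirm the current state of cookie usage for this SDK instance.
+const cookiesEnabled = appInsights.getCookieMgr().isEnabled();
+```
+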
+## Source map
+
+Source map support helps you debug minified JavaScript code with the ability to unminify the minified callstack of your exception telemetry.
+
+> [!div class="checklist"]
+> - Compatible with all current integrations on the **Exception Details** panel
+> - Supports all current and future JavaScript SDKs, including Node.JS, without the need for an SDK upgrade
+
+To view the unminified callstack, select an Exception Telemetry item in the Azure portal, find the source maps that match the call stack, and drag and drop the source maps onto the call stack in the Azure portal. The source map must have the same name as the source file of a stack frame, but with a `map` extension.
++
+## Tree shaking
+
+Tree shaking eliminates unused code from the final JavaScript bundle.
+
+To take advantage of tree shaking, import only the necessary components of the SDK into your code. By doing so, unused code isn't included in the final bundle, reducing its size and improving performance.
+
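+For example, a sketch of importing only what you use (the function names come from the replacement table later in this section):
+
+```js
+// Only these symbols are pulled into the final bundle; unused exports are dropped.
+import { ApplicationInsights } from '@microsoft/applicationinsights-web'
+import { isString, arrForEach } from '@microsoft/applicationinsights-core-js'
+```
+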
+### Tree shaking enhancements and recommendations
+
+In version 2.6.0, we deprecated and removed the internal usage of these static helper classes to improve support for tree-shaking algorithms. It lets npm packages safely drop unused code.
+
+- `CoreUtils`
+- `EventHelper`
+- `Util`
+- `UrlHelper`
+- `DateTimeUtils`
+- `ConnectionStringParser`
+
+ The functions are now exported as top-level roots from the modules, making it easier to refactor your code for better tree-shaking.
+
+The static classes were changed to const objects that reference the new exported functions, and future changes are planned to further refactor the references.
+
+### Tree shaking deprecated functions and replacements
+
+| Existing | Replacement |
+|-|-|
+| **CoreUtils** | **@microsoft/applicationinsights-core-js** |
+| CoreUtils._canUseCookies | None. Don't use as it causes all of CoreUtils reference to be included in your final code.<br> Refactor your cookie handling to use the `appInsights.getCookieMgr().setEnabled(true/false)` to set the value and `appInsights.getCookieMgr().isEnabled()` to check the value. |
+| CoreUtils.isTypeof | isTypeof |
+| CoreUtils.isUndefined | isUndefined |
+| CoreUtils.isNullOrUndefined | isNullOrUndefined |
+| CoreUtils.hasOwnProperty | hasOwnProperty |
+| CoreUtils.isFunction | isFunction |
+| CoreUtils.isObject | isObject |
+| CoreUtils.isDate | isDate |
+| CoreUtils.isArray | isArray |
+| CoreUtils.isError | isError |
+| CoreUtils.isString | isString |
+| CoreUtils.isNumber | isNumber |
+| CoreUtils.isBoolean | isBoolean |
+| CoreUtils.toISOString | toISOString or getISOString |
+| CoreUtils.arrForEach | arrForEach |
+| CoreUtils.arrIndexOf | arrIndexOf |
+| CoreUtils.arrMap | arrMap |
+| CoreUtils.arrReduce | arrReduce |
+| CoreUtils.strTrim | strTrim |
+| CoreUtils.objCreate | objCreateFn |
+| CoreUtils.objKeys | objKeys |
+| CoreUtils.objDefineAccessors | objDefineAccessors |
+| CoreUtils.addEventHandler | addEventHandler |
+| CoreUtils.dateNow | dateNow |
+| CoreUtils.isIE | isIE |
+| CoreUtils.disableCookies | disableCookies<br>Referencing either causes CoreUtils to be referenced for backward compatibility.<br> Refactor your cookie handling to use the `appInsights.getCookieMgr().setEnabled(false)` |
+| CoreUtils.newGuid | newGuid |
+| CoreUtils.perfNow | perfNow |
+| CoreUtils.newId | newId |
+| CoreUtils.randomValue | randomValue |
+| CoreUtils.random32 | random32 |
+| CoreUtils.mwcRandomSeed | mwcRandomSeed |
+| CoreUtils.mwcRandom32 | mwcRandom32 |
+| CoreUtils.generateW3CId | generateW3CId |
+| **EventHelper** | **@microsoft/applicationinsights-core-js** |
+| EventHelper.Attach | attachEvent |
+| EventHelper.AttachEvent | attachEvent |
+| EventHelper.Detach | detachEvent |
+| EventHelper.DetachEvent | detachEvent |
+| **Util** | **@microsoft/applicationinsights-common-js** |
+| Util.NotSpecified | strNotSpecified |
+| Util.createDomEvent | createDomEvent |
+| Util.disableStorage | utlDisableStorage |
+| Util.isInternalApplicationInsightsEndpoint | isInternalApplicationInsightsEndpoint |
+| Util.canUseLocalStorage | utlCanUseLocalStorage |
+| Util.getStorage | utlGetLocalStorage |
+| Util.setStorage | utlSetLocalStorage |
+| Util.removeStorage | utlRemoveStorage |
+| Util.canUseSessionStorage | utlCanUseSessionStorage |
+| Util.getSessionStorageKeys | utlGetSessionStorageKeys |
+| Util.getSessionStorage | utlGetSessionStorage |
+| Util.setSessionStorage | utlSetSessionStorage |
+| Util.removeSessionStorage | utlRemoveSessionStorage |
+| Util.disableCookies | disableCookies<br>Referencing either causes CoreUtils to be referenced for backward compatibility.<br> Refactor your cookie handling to use the `appInsights.getCookieMgr().setEnabled(false)` |
+| Util.canUseCookies | canUseCookies<br>Referencing either causes CoreUtils to be referenced for backward compatibility.<br>Refactor your cookie handling to use the `appInsights.getCookieMgr().isEnabled()` |
+| Util.disallowsSameSiteNone | uaDisallowsSameSiteNone |
+| Util.setCookie | coreSetCookie<br>Referencing causes CoreUtils to be referenced for backward compatibility.<br>Refactor your cookie handling to use the `appInsights.getCookieMgr().set(name: string, value: string)` |
+| Util.stringToBoolOrDefault | stringToBoolOrDefault |
+| Util.getCookie | coreGetCookie<br>Referencing causes CoreUtils to be referenced for backward compatibility.<br>Refactor your cookie handling to use the `appInsights.getCookieMgr().get(name: string)` |
+| Util.deleteCookie | coreDeleteCookie<br>Referencing causes CoreUtils to be referenced for backward compatibility.<br>Refactor your cookie handling to use the `appInsights.getCookieMgr().del(name: string, path?: string)` |
+| Util.trim | strTrim |
+| Util.newId | newId |
+| Util.random32 | <br>No replacement, refactor your code to use the core random32(true) |
+| Util.generateW3CId | generateW3CId |
+| Util.isArray | isArray |
+| Util.isError | isError |
+| Util.isDate | isDate |
+| Util.toISOStringForIE8 | toISOString |
+| Util.getIEVersion | getIEVersion |
+| Util.msToTimeSpan | msToTimeSpan |
+| Util.isCrossOriginError | isCrossOriginError |
+| Util.dump | dumpObj |
+| Util.getExceptionName | getExceptionName |
+| Util.addEventHandler | attachEvent |
+| Util.IsBeaconApiSupported | isBeaconApiSupported |
+| Util.getExtension | getExtensionByName
+| **UrlHelper** | **@microsoft/applicationinsights-common-js** |
+| UrlHelper.parseUrl | urlParseUrl |
+| UrlHelper.getAbsoluteUrl | urlGetAbsoluteUrl |
+| UrlHelper.getPathName | urlGetPathName |
+| UrlHelper.getCompeteUrl | urlGetCompleteUrl |
+| UrlHelper.parseHost | urlParseHost |
+| UrlHelper.parseFullHost | urlParseFullHost
+| **DateTimeUtils** | **@microsoft/applicationinsights-common-js** |
+| DateTimeUtils.Now | dateTimeUtilsNow |
+| DateTimeUtils.GetDuration | dateTimeUtilsDuration |
+| **ConnectionStringParser** | **@microsoft/applicationinsights-common-js** |
+| ConnectionStringParser.parse | parseConnectionString |
+
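+As an example of applying this table, here's a sketch migrating deprecated `Util` cookie calls to the instance-based cookie manager (the cookie name is illustrative):
+
+```js
+// Before (deprecated): Util.getCookie / Util.setCookie pull the whole Util class into the bundle.
+// After: use the instance-based cookie manager.
+const userCookie = appInsights.getCookieMgr().get('ai_user');
+appInsights.getCookieMgr().set('ai_user', userCookie);
+```
+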
+## Troubleshooting
+
+See the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/app-insights/javascript-sdk-troubleshooting).
+
+## Next steps
+
+* [Track usage](usage-overview.md)
+* [Custom events and metrics](api-custom-events-metrics.md)
+* [Build-measure-learn](usage-overview.md)
+* [JavaScript SDK advanced topics](javascript-sdk-advanced.md)
azure-monitor Javascript Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk.md
+
+ Title: Microsoft Azure Monitor Application Insights JavaScript SDK
+description: Microsoft Azure Monitor Application Insights JavaScript SDK is a powerful tool for monitoring and analyzing web application performance.
+ Last updated : 03/07/2023
+ms.devlang: javascript
++++
+# Microsoft Azure Monitor Application Insights JavaScript SDK
+
+[Microsoft Azure Monitor Application Insights](app-insights-overview.md) JavaScript SDK allows you to monitor and analyze the performance of JavaScript web applications.
+
+## Prerequisites
+
+- Azure subscription: [Create an Azure subscription for free](https://azure.microsoft.com/free/)
+- Application Insights resource: [Create an Application Insights resource](create-workspace-resource.md#create-a-workspace-based-resource)
+- An application that uses [JavaScript](/visualstudio/javascript)
+
+## Get started
+
+The Application Insights JavaScript SDK is implemented with a runtime snippet for out-of-the-box web analytics.
+
+### Enable Application Insights SDK for JavaScript
+
+Only two steps are required to enable the Application Insights SDK for JavaScript.
+
+#### Add the code snippet
+
+Add the following code snippet to the beginning of every `<head>` tag for each HTML page you want to monitor.
+
+```html
+<script type="text/javascript">
+!function(T,l,y){var S=T.location,k="script",D="instrumentationKey",C="ingestionendpoint",I="disableExceptionTracking",E="ai.device.",b="toLowerCase",w="crossOrigin",N="POST",e="appInsightsSDK",t=y.name||"appInsights";(y.name||T[e])&&(T[e]=t);var n=T[t]||function(d){var g=!1,f=!1,m={initialize:!0,queue:[],sv:"5",version:2,config:d};function v(e,t){var n={},a="Browser";return n[E+"id"]=a[b](),n[E+"type"]=a,n["ai.operation.name"]=S&&S.pathname||"_unknown_",n["ai.internal.sdkVersion"]="javascript:snippet_"+(m.sv||m.version),{time:function(){var e=new Date;function t(e){var t=""+e;return 1===t.length&&(t="0"+t),t}return e.getUTCFullYear()+"-"+t(1+e.getUTCMonth())+"-"+t(e.getUTCDate())+"T"+t(e.getUTCHours())+":"+t(e.getUTCMinutes())+":"+t(e.getUTCSeconds())+"."+((e.getUTCMilliseconds()/1e3).toFixed(3)+"").slice(2,5)+"Z"}(),iKey:e,name:"Microsoft.ApplicationInsights."+e.replace(/-/g,"")+"."+t,sampleRate:100,tags:n,data:{baseData:{ver:2}}}}var h=d.url||y.src;if(h){function a(e){var t,n,a,i,r,o,s,c,u,p,l;g=!0,m.queue=[],f||(f=!0,t=h,s=function(){var e={},t=d.connectionString;if(t)for(var n=t.split(";"),a=0;a<n.length;a++){var i=n[a].split("=");2===i.length&&(e[i[0][b]()]=i[1])}if(!e[C]){var r=e.endpointsuffix,o=r?e.location:null;e[C]="https://"+(o?o+".":"")+"dc."+(r||"services.visualstudio.com")}return e}(),c=s[D]||d[D]||"",u=s[C],p=u?u+"/v2/track":d.endpointUrl,(l=[]).push((n="SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details)",a=t,i=p,(o=(r=v(c,"Exception")).data).baseType="ExceptionData",o.baseData.exceptions=[{typeName:"SDKLoadFailed",message:n.replace(/\./g,"-"),hasFullStack:!1,stack:n+"\nSnippet failed to load ["+a+"] -- Telemetry is disabled\nHelp Link: https://go.microsoft.com/fwlink/?linkid=2128109\nHost: "+(S&&S.pathname||"_unknown_")+"\nEndpoint: "+i,parsedStack:[]}],r)),l.push(function(e,t,n,a){var i=v(c,"Message"),r=i.data;r.baseType="MessageData";var o=r.baseData;return o.message='AI (Internal): 99 message:"'+("SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details) ("+n+")").replace(/\"/g,"")+'"',o.properties={endpoint:a},i}(0,0,t,p)),function(e,t){if(JSON){var n=T.fetch;if(n&&!y.useXhr)n(t,{method:N,body:JSON.stringify(e),mode:"cors"});else if(XMLHttpRequest){var a=new XMLHttpRequest;a.open(N,t),a.setRequestHeader("Content-type","application/json"),a.send(JSON.stringify(e))}}}(l,p))}function i(e,t){f||setTimeout(function(){!t&&m.core||a()},500)}var e=function(){var n=l.createElement(k);n.src=h;var e=y[w];return!e&&""!==e||"undefined"==n[w]||(n[w]=e),n.onload=i,n.onerror=a,n.onreadystatechange=function(e,t){"loaded"!==n.readyState&&"complete"!==n.readyState||i(0,t)},n}();y.ld<0?l.getElementsByTagName("head")[0].appendChild(e):setTimeout(function(){l.getElementsByTagName(k)[0].parentNode.appendChild(e)},y.ld||0)}try{m.cookie=l.cookie}catch(p){}function t(e){for(;e.length;)!function(t){m[t]=function(){var e=arguments;g||m.queue.push(function(){m[t].apply(m,e)})}}(e.pop())}var n="track",r="TrackPage",o="TrackEvent";t([n+"Event",n+"PageView",n+"Exception",n+"Trace",n+"DependencyData",n+"Metric",n+"PageViewPerformance","start"+r,"stop"+r,"start"+o,"stop"+o,"addTelemetryInitializer","setAuthenticatedUserContext","clearAuthenticatedUserContext","flush"]),m.SeverityLevel={Verbose:0,Information:1,Warning:2,Error:3,Critical:4};var s=(d.extensionConfig||{}).ApplicationInsightsAnalytics||{};if(!0!==d[I]&&!0!==s[I]){var c="onerror";t(["_"+c]);var u=T[c];T[c]=function(e,t,n,a,i){var 
r=u&&u(e,t,n,a,i);return!0!==r&&m["_"+c]({message:e,url:t,lineNumber:n,columnNumber:a,error:i}),r},d.autoExceptionInstrumented=!0}return m}(y.cfg);function a(){y.onInit&&y.onInit(n)}(T[t]=n).queue&&0===n.queue.length?(n.queue.push(a),n.trackPageView({})):a()}(window,document,{
+src: "https://js.monitor.azure.com/scripts/b/ai.2.min.js", // The SDK URL Source
+// name: "appInsights", // Global SDK Instance name defaults to "appInsights" when not supplied
+// ld: 0, // Defines the load delay (in ms) before attempting to load the sdk. -1 = block page load and add to head. (default) = 0ms load after timeout,
+// useXhr: 1, // Use XHR instead of fetch to report failures (if available),
+crossOrigin: "anonymous", // When supplied this will add the provided value as the cross origin attribute on the script tag
+// onInit: null, // Once the application insights instance has loaded and initialized this callback function will be called with 1 argument -- the sdk instance (DO NOT ADD anything to the sdk.queue -- As they won't get called)
+cfg: { // Application Insights Configuration
+ connectionString: "CONNECTION_STRING"
+}});
+</script>
+```
+
+#### Define the connection string
+
+An Application Insights [connection string](sdk-connection-string.md) contains information to connect to the Azure cloud and associate telemetry data with a specific Application Insights resource. The connection string includes the Instrumentation Key (a unique identifier), the endpoint suffix (to specify the Azure cloud), and optional explicit endpoints for individual services. The connection string isn't considered a security token or key.
+
+In the code snippet, replace the placeholder `"CONNECTION_STRING"` with your actual connection string found in the Azure portal.
+
+1. Navigate to the **Overview** pane of your Application Insights resource.
+1. Locate the **Connection String**.
+1. Select the button to copy the connection string to the clipboard.
++
+## Snippet configuration
+
+Other snippet configuration is optional.
+
+| Name | Type | Description
+|---|---|---|
+| src | string **[required]** | The full URL for where to load the SDK from. This value is used for the "src" attribute of a dynamically added &lt;script /&gt; tag. You can use the public CDN location or your own privately hosted one.
+| name | string *[optional]* | The global name for the initialized SDK, defaults to appInsights. So ```window.appInsights``` is a reference to the initialized instance. Note: If you assign a name value or if a previous instance has been assigned to the global name appInsightsSDK, the SDK initialization code requires it to be in the global namespace as `window.appInsightsSDK=<name value>` to ensure the correct snippet skeleton, and proxy methods are initialized and updated.
+| ld | number in ms *[optional]* | Defines the load delay to wait before attempting to load the SDK. The default value is 0 ms. If you use a negative value, the script tag is immediately added to the `<head>` region of the page and blocks the page load event until the script is loaded or fails.
+| useXhr | boolean *[optional]* | This setting is used only for reporting SDK load failures. Reporting first attempts to use fetch() if available and then falls back to XHR; setting this value to true bypasses the fetch check. Use this value only if your application is used in an environment where fetch would fail to send the failure events.
+| crossOrigin | string *[optional]* | By including this setting, the script tag added to download the SDK includes the crossOrigin attribute with this string value. When not defined (the default), no crossOrigin attribute is added. Recommended values are: not defined (the default), "", or "anonymous". (For all valid values, see the [cross origin HTML attribute](https://developer.mozilla.org/docs/Web/HTML/Attributes/crossorigin) documentation.)
+| onInit | function(aiSdk) { ... } *[optional]* | This callback function is called after the main SDK script has been successfully loaded and initialized from the CDN (based on the src value). It's passed a reference to the sdk instance that it's being called for and is also called before the first initial page view. If the SDK has already been loaded and initialized, this callback is still called. NOTE: During the processing of the sdk.queue array, this callback is called. You CANNOT add any more items to the queue because they're ignored and dropped. (Added as part of snippet version 5--the sv:"5" value within the snippet script)
+| cfg | object **[required]** | The configuration passed to the Application Insights SDK during initialization.
+
+### Example using the snippet onInit callback
+
+```html
+<script type="text/javascript">
+!function(T,l,y){<!-- Removed the Snippet code for brevity -->}(window,document,{
+src: "https://js.monitor.azure.com/scripts/b/ai.2.min.js",
+crossOrigin: "anonymous",
+onInit: function (sdk) {
+ sdk.addTelemetryInitializer(function (envelope) {
+ envelope.data = envelope.data || {};
+ envelope.data.someField = 'This item passed through my telemetry initializer';
+ });
+}, // Once the application insights instance has loaded and initialized this method will be called
+cfg: { // Application Insights Configuration
+ connectionString: "YOUR_CONNECTION_STRING"
+}});
+</script>
+```
+
+## What is collected automatically?
+
+When you enable the App Insights JavaScript SDK, the following data classes are collected automatically:
+
+- Uncaught exceptions in your app, including information on
+ - Stack trace
+ - Exception details and message accompanying the error
+ - Line & column number of error
+ - URL where error was raised
+- Network dependency requests made by your app (XHR and Fetch; fetch collection is disabled by default), including information on
+ - Url of dependency source
+ - Command & Method used to request the dependency
+ - Duration of the request
+ - Result code and success status of the request
+ - ID (if any) of user making the request
+ - Correlation context (if any) where request is made
+- User information (for example, Location, network, IP)
+- Device information (for example, Browser, OS, version, language, model)
+- Session information
+
+> [!Note]
+> For some applications, such as single-page applications (SPAs), the duration may not be recorded and will default to 0.
+
+For more information, see [data retention and privacy](data-retention-privacy.md).
+
+## Confirm data is flowing
+
+Check the data flow by going to the Azure portal and navigating to the Application Insights resource that you've enabled the SDK for. From there, you can view the data in the "Live Metrics Stream" or "Metrics" sections.
+
+Additionally, you can use the SDK's trackPageView() method to manually send a page view event and verify that it appears in the portal.
+
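+A minimal sketch, assuming `appInsights` is the instance created by the snippet or npm setup (the page name is illustrative):
+
+```js
+// Manually send a test page view to verify telemetry is flowing.
+appInsights.trackPageView({ name: 'manual-test-page' });
+```
+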
+If you can't run the application or you aren't getting data as expected, see the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/app-insights/javascript-sdk-troubleshooting).
+
+### Analytics
+
+To query your telemetry collected by the JavaScript SDK, select the **View in Logs (Analytics)** button. By adding a `where` statement of `client_Type == "Browser"`, you only see data from the JavaScript SDK. Any server-side telemetry collected by other SDKs is excluded.
+
+```kusto
+// average pageView duration by name
+let timeGrain=5m;
+let dataset=pageViews
+// additional filters can be applied here
+| where timestamp > ago(1d)
+| where client_Type == "Browser" ;
+// calculate average pageView duration for all pageViews
+dataset
+| summarize avg(duration) by bin(timestamp, timeGrain)
+| extend pageView='Overall'
+// render result in a chart
+| render timechart
+```
+
+## Advanced SDK configuration
+
+Additional information is available for the following advanced scenarios:
+
+- [JavaScript SDK npm setup](javascript-sdk-advanced.md#npm-setup)
+- [React plugin](javascript-framework-extensions.md?tabs=react)
+- [React native plugin](javascript-framework-extensions.md?tabs=reactnative)
+- [Angular plugin](javascript-framework-extensions.md?tabs=angular)
+- [Click Analytics plugin](javascript-feature-extensions.md)
+
+## Frequently asked questions
+
+#### What is the SDK performance/overhead?
+
+The Application Insights JavaScript SDK has a minimal overhead on your website. At just 36 KB gzipped, and taking only ~15 ms to initialize, the SDK adds a negligible amount of load time to your website. The minimal components of the library are quickly loaded when you use the SDK, and the full script is downloaded in the background.
+
+Additionally, while the script is downloading from the CDN, all tracking of your page is queued, so you don't lose any telemetry during the entire life cycle of your page. This setup process provides your page with a seamless analytics system that's invisible to your users.
+
+#### What browsers are supported?
+
+![Chrome](https://raw.githubusercontent.com/alrra/browser-logos/master/src/chrome/chrome_48x48.png) | ![Firefox](https://raw.githubusercontent.com/alrra/browser-logos/master/src/firefox/firefox_48x48.png) | ![IE](https://raw.githubusercontent.com/alrra/browser-logos/master/src/edge/edge_48x48.png) | ![Opera](https://raw.githubusercontent.com/alrra/browser-logos/master/src/opera/opera_48x48.png) | ![Safari](https://raw.githubusercontent.com/alrra/browser-logos/master/src/safari/safari_48x48.png)
+--- | --- | --- | --- | ---
+Chrome Latest ✔ | Firefox Latest ✔ | IE 9+ & Microsoft Edge ✔<br>IE 8- Compatible | Opera Latest ✔ | Safari Latest ✔
+
+#### Where can I find code examples?
+
+For runnable examples, see [Application Insights JavaScript SDK samples](https://github.com/microsoft/ApplicationInsights-JS/tree/master/examples).
+
+#### How can I upgrade from the old version of Application Insights?
+
+For more information, see [Upgrade from old versions of the Application Insights JavaScript SDK](javascript-sdk-upgrade.md).
+
+#### What is the ES3/Internet Explorer 8 compatibility?
+
+We need to take the necessary measures to ensure that this SDK continues to work and doesn't break JavaScript execution when it's loaded by an older browser. It would be ideal to not support older browsers, but numerous large customers can't control which browser their users choose to use.
+
+This statement doesn't mean that we only support the lowest common set of features. We need to maintain ES3 code compatibility. New features need to be added in a manner that doesn't break ES3 JavaScript parsing, and as optional features.
+
+See GitHub for full details on [Internet Explorer 8 support](https://github.com/Microsoft/ApplicationInsights-JS#es3ie8-compatibility).
+
+#### Is the Application Insights SDK open-source?
+
+Yes, the Application Insights JavaScript SDK is open source. To view the source code or to contribute to the project, see the [official GitHub repository](https://github.com/Microsoft/ApplicationInsights-JS).
+
+#### How can I update my third-party server configuration?
+
+The server side needs to be able to accept connections with those headers present. Depending on the `Access-Control-Allow-Headers` configuration on the server side, it's often necessary to extend the server-side list by manually adding `Request-Id`, `Request-Context`, and `traceparent` (W3C distributed header).
+
+Access-Control-Allow-Headers: `Request-Id`, `traceparent`, `Request-Context`, `<your header>`
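+
+As a minimal sketch, this is one way a Node.js Express server (an assumption; your server stack may differ) could extend the allowed headers:
+
+```js
+const express = require('express');
+const app = express();
+
+// Allow the Application Insights correlation headers on cross-origin requests.
+app.use((req, res, next) => {
+  res.setHeader('Access-Control-Allow-Origin', 'https://contoso.example'); // hypothetical origin
+  res.setHeader('Access-Control-Allow-Headers', 'Request-Id, Request-Context, traceparent, Content-Type');
+  next();
+});
+
+app.listen(3000);
+```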
+
+#### How can I disable distributed tracing?
+
+Distributed tracing can be disabled through the SDK configuration, for example by setting `disableCorrelationHeaders` so the SDK stops adding correlation headers to dependency requests.
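+
+A minimal sketch, assuming the npm setup:
+
+```js
+import { ApplicationInsights } from '@microsoft/applicationinsights-web';
+
+const appInsights = new ApplicationInsights({ config: {
+  connectionString: 'YOUR_CONNECTION_STRING',
+  // Don't add Request-Id/Request-Context headers to dependency requests.
+  disableCorrelationHeaders: true
+} });
+appInsights.loadAppInsights();
+```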
+
+## Troubleshooting
+
+See the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/app-insights/javascript-sdk-troubleshooting).
+
+## Release notes
+
+Detailed release notes regarding updates and bug fixes can be found on [GitHub](https://github.com/microsoft/ApplicationInsights-JS/releases).
+
+## Next steps
+
+* [Track usage](usage-overview.md)
+* [Custom events and metrics](api-custom-events-metrics.md)
+* [Build-measure-learn](usage-overview.md)
+* [JavaScript SDK advanced topics](javascript-sdk-advanced.md)
azure-monitor Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript.md
- Title: Azure Application Insights for JavaScript web apps
-description: Get page view and session counts, web client data, and single-page applications and track usage patterns. Detect exceptions and performance issues in JavaScript webpages.
- Previously updated : 11/15/2022
-# Application Insights for webpages
-
-> [!NOTE]
-> We continue to assess the viability of OpenTelemetry for browser scenarios. We recommend the Application Insights JavaScript SDK for the foreseeable future. It's fully compatible with OpenTelemetry distributed tracing.
-
-Find out about the performance and usage of your webpage or app. If you add [Application Insights](app-insights-overview.md) to your page script, you get timings of page loads and AJAX calls, counts, and details of browser exceptions and AJAX failures. You also get user and session counts. All this telemetry can be segmented by page, client OS and browser version, geo location, and other dimensions. You can set alerts on failure counts or slow page loading. By inserting trace calls in your JavaScript code, you can track how the different features of your webpage application are used.
-
-Application Insights can be used with any webpages by adding a short piece of JavaScript. Node.js has a [standalone SDK](nodejs.md). If your web service is [Java](opentelemetry-enable.md?tabs=java) or [ASP.NET](asp-net.md), you can use the server-side SDKs with the client-side JavaScript SDK to get an end-to-end understanding of your app's performance.
-
-## Add the JavaScript SDK
-
-1. First you need an Application Insights resource. If you don't already have a resource and connection string, follow the instructions to [create a new resource](create-new-resource.md).
-1. Copy the [connection string](#connection-string-setup) for the resource where you want your JavaScript telemetry to be sent (from step 1). You'll add it to the `connectionString` setting of the Application Insights JavaScript SDK.
-1. Add the Application Insights JavaScript SDK to your webpage or app via one of the following two options:
- * [Node Package Manager (npm) setup](#npm-based-setup)
- * [JavaScript snippet](#snippet-based-setup)
-
-> [!WARNING]
-> `@microsoft/applicationinsights-web-basic - AISKULight` doesn't support the use of connection strings.
-
-Only use one method to add the JavaScript SDK to your application. If you use the npm setup, don't use the snippet and vice versa.
-
-> [!NOTE]
-> The npm setup installs the JavaScript SDK as a dependency to your project and enables IntelliSense. The snippet fetches the SDK at runtime. Both support the same features. Developers who want more custom events and configuration generally opt for the npm setup. Users who are looking for quick enablement of out-of-the-box web analytics opt for the snippet.
-
-### npm-based setup
-
-Install via npm.
-
-```sh
-npm i --save @microsoft/applicationinsights-web
-```
-
-> [!Note]
-> *Typings are included with this package*, so you do *not* need to install a separate typings package.
-
-```js
-import { ApplicationInsights } from '@microsoft/applicationinsights-web'
-
-const appInsights = new ApplicationInsights({ config: {
- connectionString: 'Copy connection string from Application Insights Resource Overview'
- /* ...Other Configuration Options... */
-} });
-appInsights.loadAppInsights();
-appInsights.trackPageView(); // Manually call trackPageView to establish the current user/session/pageview
-```
-
-### Snippet-based setup
-
-If your app doesn't use npm, you can directly instrument your webpages with Application Insights by pasting this snippet at the top of each of your pages. Preferably, it should be the first script in your `<head>` section. That way it can monitor any potential issues with all your dependencies and optionally any JavaScript errors. If you're using Blazor Server App, add the snippet at the top of the file `_Host.cshtml` in the `<head>` section.
-
-Starting from version 2.5.5, the page view event will include the new tag "ai.internal.snippet" that contains the identified snippet version. This feature assists with tracking which version of the snippet your application is using.
-
-The current snippet that follows is version "5." The version is encoded in the snippet as `sv:"#"`. The [current version is also available on GitHub](https://go.microsoft.com/fwlink/?linkid=2156318).
-
-```html
-<script type="text/javascript">
-!function(T,l,y){var S=T.location,k="script",D="connectionString",C="ingestionendpoint",I="disableExceptionTracking",E="ai.device.",b="toLowerCase",w="crossOrigin",N="POST",e="appInsightsSDK",t=y.name||"appInsights";(y.name||T[e])&&(T[e]=t);var n=T[t]||function(d){var g=!1,f=!1,m={initialize:!0,queue:[],sv:"5",version:2,config:d};function v(e,t){var n={},a="Browser";return n[E+"id"]=a[b](),n[E+"type"]=a,n["ai.operation.name"]=S&&S.pathname||"_unknown_",n["ai.internal.sdkVersion"]="javascript:snippet_"+(m.sv||m.version),{time:function(){var e=new Date;function t(e){var t=""+e;return 1===t.length&&(t="0"+t),t}return e.getUTCFullYear()+"-"+t(1+e.getUTCMonth())+"-"+t(e.getUTCDate())+"T"+t(e.getUTCHours())+":"+t(e.getUTCMinutes())+":"+t(e.getUTCSeconds())+"."+((e.getUTCMilliseconds()/1e3).toFixed(3)+"").slice(2,5)+"Z"}(),name:"Microsoft.ApplicationInsights."+e.replace(/-/g,"")+"."+t,sampleRate:100,tags:n,data:{baseData:{ver:2}}}}var h=d.url||y.src;if(h){function a(e){var t,n,a,i,r,o,s,c,u,p,l;g=!0,m.queue=[],f||(f=!0,t=h,s=function(){var e={},t=d.connectionString;if(t)for(var n=t.split(";"),a=0;a<n.length;a++){var i=n[a].split("=");2===i.length&&(e[i[0][b]()]=i[1])}if(!e[C]){var r=e.endpointsuffix,o=r?e.location:null;e[C]="https://"+(o?o+".":"")+"dc."+(r||"services.visualstudio.com")}return e}(),c=s[D]||d[D]||"",u=s[C],p=u?u+"/v2/track":d.endpointUrl,(l=[]).push((n="SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details)",a=t,i=p,(o=(r=v(c,"Exception")).data).baseType="ExceptionData",o.baseData.exceptions=[{typeName:"SDKLoadFailed",message:n.replace(/\./g,"-"),hasFullStack:!1,stack:n+"\nSnippet failed to load ["+a+"] -- Telemetry is disabled\nHelp Link: https://go.microsoft.com/fwlink/?linkid=2128109\nHost: "+(S&&S.pathname||"_unknown_")+"\nEndpoint: "+i,parsedStack:[]}],r)),l.push(function(e,t,n,a){var i=v(c,"Message"),r=i.data;r.baseType="MessageData";var o=r.baseData;return o.message='AI (Internal): 99 message:"'+("SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details) ("+n+")").replace(/\"/g,"")+'"',o.properties={endpoint:a},i}(0,0,t,p)),function(e,t){if(JSON){var n=T.fetch;if(n&&!y.useXhr)n(t,{method:N,body:JSON.stringify(e),mode:"cors"});else if(XMLHttpRequest){var a=new XMLHttpRequest;a.open(N,t),a.setRequestHeader("Content-type","application/json"),a.send(JSON.stringify(e))}}}(l,p))}function i(e,t){f||setTimeout(function(){!t&&m.core||a()},500)}var e=function(){var n=l.createElement(k);n.src=h;var e=y[w];return!e&&""!==e||"undefined"==n[w]||(n[w]=e),n.onload=i,n.onerror=a,n.onreadystatechange=function(e,t){"loaded"!==n.readyState&&"complete"!==n.readyState||i(0,t)},n}();y.ld<0?l.getElementsByTagName("head")[0].appendChild(e):setTimeout(function(){l.getElementsByTagName(k)[0].parentNode.appendChild(e)},y.ld||0)}try{m.cookie=l.cookie}catch(p){}function t(e){for(;e.length;)!function(t){m[t]=function(){var e=arguments;g||m.queue.push(function(){m[t].apply(m,e)})}}(e.pop())}var n="track",r="TrackPage",o="TrackEvent";t([n+"Event",n+"PageView",n+"Exception",n+"Trace",n+"DependencyData",n+"Metric",n+"PageViewPerformance","start"+r,"stop"+r,"start"+o,"stop"+o,"addTelemetryInitializer","setAuthenticatedUserContext","clearAuthenticatedUserContext","flush"]),m.SeverityLevel={Verbose:0,Information:1,Warning:2,Error:3,Critical:4};var s=(d.extensionConfig||{}).ApplicationInsightsAnalytics||{};if(!0!==d[I]&&!0!==s[I]){var c="onerror";t(["_"+c]);var u=T[c];T[c]=function(e,t,n,a,i){var 
r=u&&u(e,t,n,a,i);return!0!==r&&m["_"+c]({message:e,url:t,lineNumber:n,columnNumber:a,error:i}),r},d.autoExceptionInstrumented=!0}return m}(y.cfg);function a(){y.onInit&&y.onInit(n)}(T[t]=n).queue&&0===n.queue.length?(n.queue.push(a),n.trackPageView({})):a()}(window,document,{
-src: "https://js.monitor.azure.com/scripts/b/ai.2.min.js", // The SDK URL Source
-// name: "appInsights", // Global SDK Instance name defaults to "appInsights" when not supplied
-// ld: 0, // Defines the load delay (in ms) before attempting to load the sdk. -1 = block page load and add to head. (default) = 0ms load after timeout,
-// useXhr: 1, // Use XHR instead of fetch to report failures (if available),
-crossOrigin: "anonymous", // When supplied this will add the provided value as the cross origin attribute on the script tag
-// onInit: null, // Once the application insights instance has loaded and initialized this callback function will be called with 1 argument -- the sdk instance (DO NOT ADD anything to the sdk.queue -- As they won't get called)
-cfg: { // Application Insights Configuration
- connectionString: "Copy connection string from Application Insights Resource Overview"
- /* ...Other Configuration Options... */
-}});
-</script>
-```
-
-> [!NOTE]
-> For readability and to reduce possible JavaScript errors, all the possible configuration options are listed on a new line in the preceding snippet code. If you don't want to change the value of a commented line, it can be removed.
-
-#### Report script load failures
-
-This version of the snippet detects and reports failures when the SDK is loaded from the CDN as an exception to the Azure Monitor portal (under the failures &gt; exceptions &gt; browser). The exception provides visibility into failures of this type so that you're aware your application isn't reporting telemetry (or other exceptions) as expected. This signal is an important measurement in understanding that you've lost telemetry because the SDK didn't load or initialize, which can lead to:
-
-- Underreporting of how users are using or trying to use your site.
-- Missing telemetry on how your users are using your site.
-- Missing JavaScript errors that could potentially be blocking your users from successfully using your site.
-
-For information on this exception, see the [SDK load failure](https://learn.microsoft.com/troubleshoot/azure/azure-monitor/app-insights/javascript-sdk-troubleshooting) troubleshooting page.
-
-Reporting of this failure as an exception to the portal doesn't use the configuration option ```disableExceptionTracking``` from the Application Insights configuration. For this reason, if this failure occurs, it will always be reported by the snippet, even when `window.onerror` support is disabled.
-
-Reporting of SDK load failures isn't supported on Internet Explorer 8 or earlier. This behavior reduces the minified size of the snippet by assuming that most environments aren't exclusively Internet Explorer 8 or less. If you have this requirement and you want to receive these exceptions, you'll need to either include a fetch polyfill or create your own snippet version that uses ```XDomainRequest``` instead of ```XMLHttpRequest```. Use the [provided snippet source code](https://github.com/microsoft/ApplicationInsights-JS/blob/master/AISKU/snippet/snippet.js) as a starting point.
-
-> [!NOTE]
-> If you're using a previous version of the snippet, update to the latest version so that you'll receive these previously unreported issues.
-
-#### Snippet configuration options
-
-All configuration options have been moved toward the end of the script. This placement avoids accidentally introducing JavaScript errors that would not only cause the SDK to fail to load but would also disable the reporting of the failure.
-
-Each configuration option is shown above on a new line. If you don't want to override the default value of an item listed as [optional], you can remove that line to minimize the resulting size of your returned page.
-
-The available configuration options are listed in this table.
-
-| Name | Type | Description
-|||-
-| src | string *[required]* | The full URL for where to load the SDK from. This value is used for the "src" attribute of a dynamically added &lt;script /&gt; tag. You can use the public CDN location or your own privately hosted one.
-| name | string *[optional]* | The global name for the initialized SDK, defaults to `appInsights`. So ```window.appInsights``` will be a reference to the initialized instance. If you provide a name value or a previous instance appears to be assigned (via the global name appInsightsSDK), this name value will also be defined in the global namespace as ```window.appInsightsSDK=<name value>```. The SDK initialization code uses this reference to ensure it's initializing and updating the correct snippet skeleton and proxy methods.
-| ld | number in ms *[optional]* | Defines the load delay to wait before attempting to load the SDK. Default value is 0ms. Any negative value will immediately add a script tag to the &lt;head&gt; region of the page. The page load event is then blocked until the script is loaded or fails.
-| useXhr | boolean *[optional]* | This setting is used only for reporting SDK load failures. Reporting will first attempt to use fetch() if available and then fall back to XHR. Setting this value to true just bypasses the fetch check. Use of this value is only required if your application is being used in an environment where fetch would fail to send the failure events.
-| crossOrigin | string *[optional]* | By including this setting, the script tag added to download the SDK will include the crossOrigin attribute with this string value. When not defined (the default), no crossOrigin attribute is added. Recommended values are undefined (the default), "", or "anonymous". For all valid values, see the [HTML attribute: `crossorigin`](https://developer.mozilla.org/en-US/docs/Web/HTML/Attributes/crossorigin) documentation.
-| cfg | object *[required]* | The configuration passed to the Application Insights SDK during initialization.
-
-### Connection string setup
--
-```js
-import { ApplicationInsights } from '@microsoft/applicationinsights-web'
-
-const appInsights = new ApplicationInsights({ config: {
- connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE'
- /* ...Other Configuration Options... */
-} });
-appInsights.loadAppInsights();
-appInsights.trackPageView();
-```
-
-### Send telemetry to the Azure portal
-
-By default, the Application Insights JavaScript SDK autocollects many telemetry items that are helpful in determining the health of your application and the underlying user experience.
-
-This telemetry includes:
-
-- **Uncaught exceptions** in your app, including information on the:
- - Stack trace.
- - Exception details and message accompanying the error.
- - Line and column number of the error.
- - URL where the error was raised.
-- **Network Dependency Requests** made by your app **XHR** and **Fetch** (fetch collection is disabled by default) requests include information on the:
- - URL of dependency source.
- - Command and method used to request the dependency.
- - Duration of the request.
- - Result code and success status of the request.
- - ID (if any) of the user making the request.
- - Correlation context (if any) where the request is made.
-- **User information** (for example, location, network, IP)
-- **Device information** (for example, browser, OS, version, language, model)
-- **Session information**
-
-### Telemetry initializers
-
-Telemetry initializers are used to modify the contents of collected telemetry before being sent from the user's browser. They can also be used to prevent certain telemetry from being sent by returning `false`. Multiple telemetry initializers can be added to your Application Insights instance. They're executed in the order of adding them.
-
-The input argument to `addTelemetryInitializer` is a callback that takes a [`ITelemetryItem`](https://github.com/microsoft/ApplicationInsights-JS/blob/master/API-reference.md#addTelemetryInitializer) as an argument and returns `boolean` or `void`. If `false` is returned, the telemetry item isn't sent, or else it proceeds to the next telemetry initializer, if any, or is sent to the telemetry collection endpoint.
-
-An example of using telemetry initializers:
-
-```ts
-var telemetryInitializer = (envelope) => {
- envelope.data.someField = 'This item passed through my telemetry initializer';
-};
-appInsights.addTelemetryInitializer(telemetryInitializer);
-appInsights.trackTrace({message: 'This message will use a telemetry initializer'});
-
-appInsights.addTelemetryInitializer(() => false); // Nothing is sent after this is executed
-appInsights.trackTrace({message: 'this message will not be sent'}); // Not sent
-```
-
-## Configuration
-
-Most configuration fields are named so that they can default to false. All fields are optional except for `connectionString`.
-
-| Name | Description | Default |
-||-||
-| connectionString | *Required*<br>Connection string that you obtained from the Azure portal. | string<br/>null |
-| accountId | An optional account ID if your app groups users into accounts. No spaces, commas, semicolons, equal signs, or vertical bars. | string<br/>null |
-| sessionRenewalMs | A session is logged if the user is inactive for this amount of time in milliseconds. | numeric<br/>1800000<br/>(30 mins) |
-| sessionExpirationMs | A session is logged if it has continued for this amount of time in milliseconds. | numeric<br/>86400000<br/>(24 hours) |
-| maxBatchSizeInBytes | Maximum size of telemetry batch. If a batch exceeds this limit, it's immediately sent and a new batch is started. | numeric<br/>10000 |
-| maxBatchInterval | How long to batch telemetry before sending (milliseconds). | numeric<br/>15000 |
-| disable&#8203;ExceptionTracking | If true, exceptions aren't autocollected. | boolean<br/> false |
-| disableTelemetry | If true, telemetry isn't collected or sent. | boolean<br/>false |
-| enableDebug | If true, *internal* debugging data is thrown as an exception *instead* of being logged, regardless of SDK logging settings. Default is false. <br>*Note:* Enabling this setting will result in dropped telemetry whenever an internal error occurs. This setting can be useful for quickly identifying issues with your configuration or usage of the SDK. If you don't want to lose telemetry while debugging, consider using `loggingLevelConsole` or `loggingLevelTelemetry` instead of `enableDebug`. | boolean<br/>false |
-| loggingLevelConsole | Logs *internal* Application Insights errors to console. <br>0: off, <br>1: Critical errors only, <br>2: Everything (errors & warnings) | numeric<br/> 0 |
-| loggingLevelTelemetry | Sends *internal* Application Insights errors as telemetry. <br>0: off, <br>1: Critical errors only, <br>2: Everything (errors & warnings) | numeric<br/> 1 |
-| diagnosticLogInterval | (internal) Polling interval (in ms) for internal logging queue. | numeric<br/> 10000 |
-| samplingPercentage | Percentage of events that will be sent. Default is 100, meaning all events are sent. Set this option if you want to preserve your data cap for large-scale applications. | numeric<br/>100 |
-| autoTrackPageVisitTime | If true, on a pageview, the _previous_ instrumented page's view time is tracked and sent as telemetry and a new timer is started for the current pageview. It's sent as a custom metric named `PageVisitTime` in `milliseconds` and is calculated via the Date [now()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/now) function (if available) and falls back to (new Date()).[getTime()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/getTime) if now() is unavailable (Internet Explorer 8 or less). Default is false. | boolean<br/>false |
-| disableAjaxTracking | If true, Ajax calls aren't autocollected. | boolean<br/> false |
-| disableFetchTracking | If true, Fetch requests aren't autocollected.|boolean<br/>false |
-| overridePageViewDuration | If true, default behavior of trackPageView is changed to record end of page view duration interval when trackPageView is called. If false and no custom duration is provided to trackPageView, the page view performance is calculated by using the navigation timing API. |boolean<br/>false |
-| maxAjaxCallsPerView | Controls how many Ajax calls are monitored per page view. Set to -1 to monitor all (unlimited) Ajax calls on the page. | numeric<br/> 500 |
-| disableDataLossAnalysis | If false, internal telemetry sender buffers will be checked at startup for items not yet sent. | boolean<br/> true |
-| disable&#8203;CorrelationHeaders | If false, the SDK will add two headers (`Request-Id` and `Request-Context`) to all dependency requests to correlate them with corresponding requests on the server side. | boolean<br/> false |
-| correlationHeader&#8203;ExcludedDomains | Disable correlation headers for specific domains. | string[]<br/>undefined |
-| correlationHeader&#8203;ExcludePatterns | Disable correlation headers by using regular expressions. | regex[]<br/>undefined |
-| correlationHeader&#8203;Domains | Enable correlation headers for specific domains. | string[]<br/>undefined |
-| disableFlush&#8203;OnBeforeUnload | If true, flush method won't be called when `onBeforeUnload` event triggers. | boolean<br/> false |
-| enableSessionStorageBuffer | If true, the buffer with all unsent telemetry is stored in session storage. The buffer is restored on page load. | boolean<br />true |
-| cookieCfg | Defaults to cookie usage enabled. For full defaults, see [ICookieCfgConfig](#icookiemgrconfig) settings. | [ICookieCfgConfig](#icookiemgrconfig)<br>(Since 2.6.0)<br/>undefined |
-| ~~isCookieUseDisabled~~<br>disableCookiesUsage | If true, the SDK won't store or read any data from cookies. Disables the User and Session cookies and renders the usage panes and experiences useless. `isCookieUseDisable` is deprecated in favor of `disableCookiesUsage`. When both are provided, `disableCookiesUsage` takes precedence.<br>(Since v2.6.0) And if `cookieCfg.enabled` is also defined, it will take precedence over these values. Cookie usage can be re-enabled after initialization via `core.getCookieMgr().setEnabled(true)`. | Alias for [`cookieCfg.enabled`](#icookiemgrconfig)<br>false |
-| cookieDomain | Custom cookie domain. This option is helpful if you want to share Application Insights cookies across subdomains.<br>(Since v2.6.0) If `cookieCfg.domain` is defined, it will take precedence over this value. | Alias for [`cookieCfg.domain`](#icookiemgrconfig)<br>null |
-| cookiePath | Custom cookie path. This option is helpful if you want to share Application Insights cookies behind an application gateway.<br>If `cookieCfg.path` is defined, it will take precedence over this value. | Alias for [`cookieCfg.path`](#icookiemgrconfig)<br>(Since 2.6.0)<br/>null |
-| isRetryDisabled | If false, retry on 206 (partial success), 408 (timeout), 429 (too many requests), 500 (internal server error), 503 (service unavailable), and 0 (offline, only if detected). | boolean<br/>false |
-| isStorageUseDisabled | If true, the SDK won't store or read any data from local and session storage. | boolean<br/> false |
-| isBeaconApiDisabled | If false, the SDK will send all telemetry by using the [Beacon API](https://www.w3.org/TR/beacon). | boolean<br/>true |
-| onunloadDisableBeacon | When tab is closed, the SDK will send all remaining telemetry by using the [Beacon API](https://www.w3.org/TR/beacon). | boolean<br/> false |
-| sdkExtension | Sets the SDK extension name. Only alphabetic characters are allowed. The extension name is added as a prefix to the `ai.internal.sdkVersion` tag (for example, `ext_javascript:2.0.0`). | string<br/> null |
-| isBrowserLink&#8203;TrackingEnabled | If true, the SDK will track all [browser link](/aspnet/core/client-side/using-browserlink) requests. | boolean<br/>false |
-| appId | AppId is used for the correlation between AJAX dependencies happening on the client side with the server-side requests. When the Beacon API is enabled, it can't be used automatically but can be set manually in the configuration. |string<br/> null |
-| enable&#8203;CorsCorrelation | If true, the SDK will add two headers (`Request-Id` and `Request-Context`) to all CORS requests to correlate outgoing AJAX dependencies with corresponding requests on the server side. | boolean<br/>false |
-| namePrefix | An optional value that will be used as name postfix for localStorage and cookie name. | string<br/>undefined |
-| enable&#8203;AutoRoute&#8203;Tracking | Automatically track route changes in single-page applications. If true, each route change will send a new page view to Application Insights. Hash route changes (`example.com/foo#bar`) are also recorded as new page views.| boolean<br/>false |
-| enableRequest&#8203;HeaderTracking | If true, AJAX and Fetch request headers are tracked. | boolean<br/> false |
-| enableResponse&#8203;HeaderTracking | If true, AJAX and Fetch request response headers are tracked. | boolean<br/> false |
-| distributedTracingMode | Sets the distributed tracing mode. If AI_AND_W3C mode or W3C mode is set, W3C trace context headers (traceparent/tracestate) will be generated and included in all outgoing requests. AI_AND_W3C is provided for backward compatibility with any legacy Application Insights instrumented services. See examples at [this website](./correlation.md#enable-w3c-distributed-tracing-support-for-web-apps).| `DistributedTracingModes`or<br/>numeric<br/>(Since v2.6.0) `DistributedTracingModes.AI_AND_W3C`<br />(v2.5.11 or earlier) `DistributedTracingModes.AI` |
-| enable&#8203;AjaxErrorStatusText | If true, include response error data text in dependency event on failed AJAX requests. | boolean<br/> false |
-| enable&#8203;AjaxPerfTracking |Flag to enable looking up and including more browser window.performance timings in the reported `ajax` (XHR and fetch) reported metrics. | boolean<br/> false |
-| maxAjaxPerf&#8203;LookupAttempts | The maximum number of times to look for the window.performance timings, if available. This option is sometimes required because not all browsers populate the window.performance before reporting the end of the XHR request. For fetch requests, this is added after it's complete.| numeric<br/> 3 |
-| ajaxPerfLookupDelay | The amount of time to wait before reattempting to find the window.performance timings for an `ajax` request. Time is in milliseconds and is passed directly to setTimeout(). | numeric<br/> 25 ms |
-| enableUnhandled&#8203;PromiseRejection&#8203;Tracking | If true, unhandled promise rejections will be autocollected and reported as a JavaScript error. When `disableExceptionTracking` is true (don't track exceptions), the config value will be ignored, and unhandled promise rejections won't be reported. | boolean<br/> false |
-| enablePerfMgr | When enabled (true), this will create local perfEvents for code that has been instrumented to emit perfEvents (via the doPerf() helper). This option can be used to identify performance issues within the SDK based on your usage or optionally within your own instrumented code. [More information is available in the basic documentation](https://github.com/microsoft/ApplicationInsights-JS/blob/master/docs/PerformanceMonitoring.md). Since v2.5.7 | boolean<br/>false |
-| perfEvtsSendAll | When _enablePerfMgr_ is enabled and the [IPerfManager](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/IPerfManager.ts) fires a [INotificationManager](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/INotificationManager.ts).perfEvent(), this flag determines whether an event is fired (and sent to all listeners) for all events (true) or only for parent events (false &lt;default&gt;).<br />A parent [IPerfEvent](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/IPerfEvent.ts) is an event where no other IPerfEvent is still running at the point of this event being created, and its _parent_ property isn't null or undefined. Since v2.5.7 | boolean<br />false |
-| idLength | The default length used to generate new random session and user ID values. Defaults to 22. The previous default value was 5 (v2.5.8 or less). If you need to keep the previous maximum length, you should set this value to 5. | numeric<br />22 |
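-
-For illustration, a sketch that combines a few of the preceding options (the values are arbitrary examples):
-
-```js
-import { ApplicationInsights } from '@microsoft/applicationinsights-web'
-
-const appInsights = new ApplicationInsights({ config: {
-  connectionString: 'YOUR_CONNECTION_STRING',
-  samplingPercentage: 50,      // send half of all events to preserve the data cap
-  disableFetchTracking: false, // autocollect Fetch requests
-  maxBatchInterval: 5000,      // send batched telemetry every 5 seconds
-  loggingLevelConsole: 1       // log critical internal errors to the console
-} });
-appInsights.loadAppInsights();
-```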
-
-## Cookie handling
-
-From version 2.6.0, cookie management is now available directly from the instance and can be disabled and re-enabled after initialization.
-
-If disabled during initialization via the `disableCookiesUsage` or `cookieCfg.enabled` configurations, you can now re-enable via the [ICookieMgr](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/ICookieMgr.ts) `setEnabled` function.
-
-The instance-based cookie management also replaces the previous CoreUtils global functions of `disableCookies()`, `setCookie(...)`, `getCookie(...)` and `deleteCookie(...)`. To benefit from the tree-shaking enhancements also introduced as part of version 2.6.0, you should no longer use the global functions.
-
-### ICookieMgrConfig
-
-Cookie configuration for instance-based cookie management added in version 2.6.0.
-
-| Name | Description | Type and default |
-||-||
-| enabled | A boolean that indicates whether the use of cookies by the SDK is enabled by the current instance. If false, the instance of the SDK initialized by this configuration won't store or read any data from cookies. | boolean<br/> true |
-| domain | Custom cookie domain, which is helpful if you want to share Application Insights cookies across subdomains. If not provided, uses the value from root `cookieDomain` value. | string<br/>null |
-| path | Specifies the path to use for the cookie. If not provided, it will use any value from the root `cookiePath` value. | string <br/> / |
-| getCookie | Function to fetch the named cookie value. If not provided, it will use the internal cookie parsing/caching. | `(name: string) => string` <br/> null |
-| setCookie | Function to set the named cookie with the specified value. Only called when adding or updating a cookie. | `(name: string, value: string) => void` <br/> null |
-| delCookie | Function to delete the named cookie with the specified value, separated from setCookie to avoid the need to parse the value to determine whether the cookie is being added or removed. If not provided, it will use the internal cookie parsing/caching. | `(name: string, value: string) => void` <br/> null |
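-
-For illustration, a minimal sketch of passing these settings during initialization (the domain value is hypothetical):
-
-```js
-import { ApplicationInsights } from '@microsoft/applicationinsights-web'
-
-const appInsights = new ApplicationInsights({ config: {
-  connectionString: 'YOUR_CONNECTION_STRING',
-  cookieCfg: {
-    enabled: true,         // allow the SDK to use cookies
-    domain: 'contoso.com', // hypothetical: share cookies across subdomains
-    path: '/'
-  }
-} });
-appInsights.loadAppInsights();
-```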
-
-### Simplified usage of new instance Cookie Manager
-
-- appInsights.[getCookieMgr()](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/ICookieMgr.ts).setEnabled(true/false);
-- appInsights.[getCookieMgr()](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/ICookieMgr.ts).set("MyCookie", "the%20encoded%20value");
-- appInsights.[getCookieMgr()](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/ICookieMgr.ts).get("MyCookie");
-- appInsights.[getCookieMgr()](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/ICookieMgr.ts).del("MyCookie");
-
-## Enable time-on-page tracking
-
-By setting `autoTrackPageVisitTime: true`, the time in milliseconds a user spends on each page is tracked. On each new page view, the duration the user spent on the *previous* page is sent as a [custom metric](../essentials/metrics-custom-overview.md) named `PageVisitTime`. This custom metric is viewable in the [Metrics Explorer](../essentials/metrics-getting-started.md) as a log-based metric.
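-
-For example, a minimal sketch of enabling this setting:
-
-```js
-import { ApplicationInsights } from '@microsoft/applicationinsights-web'
-
-const appInsights = new ApplicationInsights({ config: {
-  connectionString: 'YOUR_CONNECTION_STRING',
-  autoTrackPageVisitTime: true // send PageVisitTime for the previous page on each new page view
-} });
-appInsights.loadAppInsights();
-```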
-
-## Enable distributed tracing
-
-Correlation generates and sends data that enables distributed tracing and powers the [application map](../app/app-map.md), [end-to-end transaction view](../app/app-map.md#go-to-details), and other diagnostic tools.
-
-In JavaScript, correlation is turned off by default to minimize the telemetry we send by default. The following examples show standard configuration options for enabling correlation.
-
-The following sample code shows the configurations required to enable correlation.
-
-# [Snippet](#tab/snippet)
-
-```javascript
-// excerpt of the config section of the JavaScript SDK snippet with correlation
-// between client-side AJAX and server requests enabled.
-cfg: { // Application Insights Configuration
- instrumentationKey: "YOUR_INSTRUMENTATION_KEY_GOES_HERE",
- connectionString: "Copy connection string from Application Insights Resource Overview",
- enableCorsCorrelation: true,
- enableRequestHeaderTracking: true,
- enableResponseHeaderTracking: true,
- correlationHeaderExcludedDomains: ['*.queue.core.windows.net']
- /* ...Other Configuration Options... */
-}});
-</script>
-```
-
-# [npm](#tab/npm)
-
-```javascript
-// excerpt of the config section of the JavaScript SDK snippet with correlation
-// between client-side AJAX and server requests enabled.
-const appInsights = new ApplicationInsights({ config: { // Application Insights Configuration
- instrumentationKey: 'YOUR_INSTRUMENTATION_KEY_GOES_HERE',
- connectionString: "Copy connection string from Application Insights Resource Overview",
- enableCorsCorrelation: true,
- enableRequestHeaderTracking: true,
- enableResponseHeaderTracking: true,
- correlationHeaderExcludedDomains: ['*.queue.core.windows.net']
- /* ...Other Configuration Options... */
-} });
-```
---
-> [!NOTE]
-> There are two distributed tracing modes/protocols: AI (Classic) and [W3C TraceContext](https://www.w3.org/TR/trace-context/) (New). In version 2.6.0 and later, they are _both_ enabled by default. For older versions, users need to [explicitly opt in to W3C mode](../app/correlation.md#enable-w3c-distributed-tracing-support-for-web-apps).
-
-### Route tracking
-
-By default, this SDK will *not* handle state-based route changing that occurs in single page applications. To enable automatic route change tracking for your single page application, you can add `enableAutoRouteTracking: true` to your setup configuration.
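-
-For example, a sketch of the npm setup with route tracking enabled:
-
-```js
-import { ApplicationInsights } from '@microsoft/applicationinsights-web'
-
-const appInsights = new ApplicationInsights({ config: {
-  connectionString: 'YOUR_CONNECTION_STRING',
-  enableAutoRouteTracking: true // send a new page view on each route change, including hash routes
-} });
-appInsights.loadAppInsights();
-```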
-
-### Single-page applications
-
-For single-page applications, reference plug-in documentation for guidance specific to plug-ins.
-
-| Plug-ins |
-||
-| [React](javascript-framework-extensions.md#enable-correlation)|
-| [React Native](javascript-framework-extensions.md#enable-correlation)|
-| [Angular](javascript-framework-extensions.md#enable-correlation)|
-| [Click Analytics Auto-collection](javascript-feature-extensions.md#enable-correlation)|
-
-### Advanced correlation
-
-When a page is first loading and the SDK hasn't fully initialized, we're unable to generate the operation ID for the first request. As a result, distributed tracing is incomplete until the SDK fully initializes.
-To remedy this problem, you can include dynamic JavaScript on the returned HTML page. The SDK will use a callback function during initialization to retroactively pull the operation ID from the server side and populate the client side with it.
-
-# [Snippet](#tab/snippet)
-
-Here's a sample of how to create a dynamic JavaScript using Razor.
-
-```cshtml
-<script>
-!function(T,l,y){/* removed snippet code */}(window,document,{
- src: "https://js.monitor.azure.com/scripts/b/ai.2.min.js", // The SDK URL Source
- onInit: function(appInsights) {
- var serverId = "@this.Context.GetRequestTelemetry().Context.Operation.Id";
- appInsights.context.telemetryTrace.parentID = serverId;
- },
- cfg: { // Application Insights Configuration
- instrumentationKey: "YOUR_INSTRUMENTATION_KEY_GOES_HERE"
- }});
-</script>
-```
-
-# [npm](#tab/npm)
-
-```js
-import { ApplicationInsights } from '@microsoft/applicationinsights-web'
-const appInsights = new ApplicationInsights({ config: {
- instrumentationKey: 'YOUR_INSTRUMENTATION_KEY_GOES_HERE'
- /* ...Other Configuration Options... */
-} });
-appInsights.loadAppInsights();
-appInsights.context.telemetryTrace.parentID = serverId;
-```
-
-When you use an npm-based configuration, a location must be determined to store the operation ID so that the SDK initialization bundle can access it from `appInsights.context.telemetryTrace.parentID` and populate it before the first page view event is sent.
-
-
-
-> [!CAUTION]
-> The application UX is not yet optimized to show these "first hop" advanced distributed tracing scenarios. The data will be available in the requests table for query and diagnostics.
-
-## Extensions
-
-| Extensions |
-||
-| [React](javascript-framework-extensions.md)|
-| [React Native](javascript-framework-extensions.md)|
-| [Angular](javascript-framework-extensions.md)|
-| [Click Analytics Auto-collection](javascript-feature-extensions.md)|
-
-## Explore browser/client-side data
-
-Browser/client-side data can be viewed by going to **Metrics** and adding individual metrics you're interested in.
-
-![Screenshot that shows the Metrics page in Application Insights showing graphic displays of metrics data for a web application.](./media/javascript/page-view-load-time.png)
-
-You can also view your data from the JavaScript SDK via the browser experience in the portal.
-
-Select **Browser**, and then select **Failures** or **Performance**.
-
-![Screenshot that shows the Browser page in Application Insights showing how to add Browser Failures or Browser Performance to the metrics that you can view for your web application.](./media/javascript/browser.png)
-
-### Performance
-
-![Screenshot that shows the Performance page in Application Insights showing graphic displays of Operations metrics for a web application.](./media/javascript/performance-operations.png)
-
-### Dependencies
-
-![Screenshot that shows the Performance page in Application Insights showing graphic displays of Dependency metrics for a web application.](./media/javascript/performance-dependencies.png)
-
-### Analytics
-
-To query your telemetry collected by the JavaScript SDK, select the **View in Logs (Analytics)** button. By adding a `where` statement of `client_Type == "Browser"`, you'll only see data from the JavaScript SDK. Any server-side telemetry collected by other SDKs will be excluded.
-
-```kusto
-// average pageView duration by name
-let timeGrain=5m;
-let dataset=pageViews
-// additional filters can be applied here
-| where timestamp > ago(1d)
-| where client_Type == "Browser" ;
-// calculate average pageView duration for all pageViews
-dataset
-| summarize avg(duration) by bin(timestamp, timeGrain)
-| extend pageView='Overall'
-// render result in a chart
-| render timechart
-```
-
-### Source map support
-
-The minified callstack of your exception telemetry can be unminified in the Azure portal. All existing integrations on the Exception Details panel will work with the newly unminified callstack.
-
-#### Link to Blob Storage account
-
-You can link your Application Insights resource to your own Azure Blob Storage container to automatically unminify call stacks. To get started, see [Automatic source map support](./source-map-support.md).
-
-### Drag and drop
-
-1. Select an Exception Telemetry item in the Azure portal to view its "end-to-end transaction details."
-1. Identify which source maps correspond to this call stack. The source map must match a stack frame's source file but be suffixed with `.map`.
-1. Drag the source maps onto the call stack in the Azure portal.
-
- ![An animated image showing how to drag source map files from a build folder into the Call Stack window in the Azure portal.](https://i.imgur.com/Efue9nU.gif)
-
-### Application Insights web basic
-
-For a lightweight experience, you can instead install the basic version of Application Insights:
-
-```sh
-npm i --save @microsoft/applicationinsights-web-basic
-```
-
-This version comes with the bare minimum number of features and functionalities and relies on you to build it up as you see fit. For example, it performs no autocollection, such as uncaught exceptions and AJAX tracking. The APIs to send certain telemetry types, like `trackTrace` and `trackException`, aren't included in this version, so you'll need to provide your own wrapper. The only API that's available is `track`. A [sample](https://github.com/Azure-Samples/applicationinsights-web-sample1/blob/master/testlightsku.html) is available.
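-
-As a rough sketch of wrapping `track` yourself (the telemetry item fields shown are assumptions based on the SDK's envelope shape; verify against the linked sample):
-
-```js
-import { ApplicationInsights } from '@microsoft/applicationinsights-web-basic';
-
-// The basic SKU takes the configuration object directly and doesn't support connection strings.
-const appInsights = new ApplicationInsights({
-  instrumentationKey: 'YOUR_INSTRUMENTATION_KEY'
-});
-
-// track() is the only telemetry API; you supply the envelope yourself.
-appInsights.track({
-  name: 'MyCustomEvent',              // envelope name
-  baseType: 'EventData',              // telemetry type discriminator
-  baseData: { name: 'MyCustomEvent' } // payload for a custom event
-});
-```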
-
-## Examples
-
-For runnable examples, see [Application Insights JavaScript SDK samples](https://github.com/Azure-Samples?q=applicationinsights-js-demo).
-
-## Upgrade from the old version of Application Insights
-
-Breaking changes in the SDK V2 version:
-
-- To allow for better API signatures, some of the API calls, such as trackPageView and trackException, have been updated. Running in Internet Explorer 8 and earlier versions of the browser isn't supported.
-- The telemetry envelope has field name and structure changes due to data schema updates.
-- Moved `context.operation` to `context.telemetryTrace`. Some fields were also changed (`operation.id` --> `telemetryTrace.traceID`).
-
- To manually refresh the current pageview ID, for example, in single-page applications, use `appInsights.properties.context.telemetryTrace.traceID = Microsoft.ApplicationInsights.Telemetry.Util.generateW3CId()`.
-
- > [!NOTE]
- > To keep the trace ID unique, where you previously used `Util.newId()`, now use `Util.generateW3CId()`. Both ultimately end up being the operation ID.
-
-If you're using the current Application Insights production SDK (1.0.20) and want to see if the new SDK works in runtime, update the URL depending on your current SDK loading scenario.
-
-- Download via CDN scenario: Update the code snippet that you currently use to point to the following URL:
- ```
- "https://js.monitor.azure.com/scripts/b/ai.2.min.js"
- ```
-
-- npm scenario: Call `downloadAndSetup` to download the full ApplicationInsights script from CDN and initialize it with a connection string:
-
- ```ts
- appInsights.downloadAndSetup({
- connectionString: "Copy connection string from Application Insights Resource Overview",
- url: "https://js.monitor.azure.com/scripts/b/ai.2.min.jss"
- });
- ```
-
-Test in an internal environment to verify the monitoring telemetry is working as expected. If all works, update your API signatures appropriately to SDK v2 and deploy in your production environments.
-
-## SDK performance/overhead
-
-At just 36 KB gzipped, and taking only ~15 ms to initialize, Application Insights adds a negligible amount of load time to your website. Minimal components of the library are quickly loaded when you use this snippet. In the meantime, the full script is downloaded in the background.
-
-While the script is downloading from the CDN, all tracking of your page is queued. After the downloaded script finishes asynchronously initializing, all events that were queued are tracked. As a result, you won't lose any telemetry during the entire life cycle of your page. This setup process provides your page with a seamless analytics system that's invisible to your users.
-
-> Summary:
-> - ![npm version](https://badge.fury.io/js/%40microsoft%2Fapplicationinsights-web.svg)
-> - ![gzip compressed size](https://img.badgesize.io/https://js.monitor.azure.com/scripts/b/ai.2.min.js.svg?compression=gzip)
-> - **15 ms** overall initialization time
-> - **Zero** tracking missed during life cycle of page
-
-## Browser support
-
-![Chrome](https://raw.githubusercontent.com/alrra/browser-logos/master/src/chrome/chrome_48x48.png) | ![Firefox](https://raw.githubusercontent.com/alrra/browser-logos/master/src/firefox/firefox_48x48.png) | ![IE](https://raw.githubusercontent.com/alrra/browser-logos/master/src/edge/edge_48x48.png) | ![Opera](https://raw.githubusercontent.com/alrra/browser-logos/master/src/opera/opera_48x48.png) | ![Safari](https://raw.githubusercontent.com/alrra/browser-logos/master/src/safari/safari_48x48.png)
---- | --- | --- | --- | ---
-Chrome Latest ✔ | Firefox Latest ✔ | IE 9+ & Microsoft Edge ✔<br>IE 8- Compatible | Opera Latest ✔ | Safari Latest ✔
-
-## ES3/Internet Explorer 8 compatibility
-
-We need to ensure that this SDK continues to "work" and doesn't break the JavaScript execution when it's loaded by an older browser. It would be ideal to not support older browsers, but numerous large customers can't control which browser their users choose to use.
-
-This statement does *not* mean that we'll only support the lowest common set of features. We need to maintain ES3 code compatibility. New features will need to be added in a manner that wouldn't break ES3 JavaScript parsing and added as an optional feature.
-
-See GitHub for full details on [Internet Explorer 8 support](https://github.com/Microsoft/ApplicationInsights-JS#es3ie8-compatibility).
-
-## Open-source SDK
-
-The Application Insights JavaScript SDK is open source. To view the source code or to contribute to the project, see the [official GitHub repository](https://github.com/Microsoft/ApplicationInsights-JS).
-
-For the latest updates and bug fixes, [consult the release notes](./release-notes.md).
-
-## Troubleshooting
-
-This section helps you troubleshoot common issues.
-
-### I'm getting an error message of Failed to get Request-Context correlation header as it may be not included in the response or not accessible
-
-The `correlationHeaderExcludedDomains` configuration property is an exclude list that disables correlation headers for specific domains. This option is useful when including those headers would cause the request to fail or not be sent because of third-party server configuration. This property supports wildcards.
-An example would be `*.queue.core.windows.net`, as seen in the preceding code sample.
-Adding the application domain to this property should be avoided because it stops the SDK from including the required distributed tracing `Request-Id`, `Request-Context`, and `traceparent` headers as part of the request.
-
-### I'm not sure how to update my third-party server configuration
-
-The server side needs to be able to accept connections with those headers present. Depending on the `Access-Control-Allow-Headers` configuration on the server side, it's often necessary to extend the server-side list by manually adding `Request-Id`, `Request-Context`, and `traceparent` (W3C distributed header).
-
-Access-Control-Allow-Headers: `Request-Id`, `traceparent`, `Request-Context`, `<your header>`
-
-### I'm receiving duplicate telemetry data from the Application Insights JavaScript SDK
-
-If the SDK reports correlation recursively, enable the configuration setting of `excludeRequestFromAutoTrackingPatterns` to exclude the duplicate data. This scenario can occur when you use connection strings. The syntax for the configuration setting is `excludeRequestFromAutoTrackingPatterns: [<endpointUrl>]`.
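-
-For example, a sketch (the endpoint pattern is hypothetical):
-
-```js
-import { ApplicationInsights } from '@microsoft/applicationinsights-web'
-
-const appInsights = new ApplicationInsights({ config: {
-  connectionString: 'YOUR_CONNECTION_STRING',
-  // Exclude requests to the ingestion endpoint from dependency autocollection.
-  excludeRequestFromAutoTrackingPatterns: ['https://dc.services.visualstudio.com']
-} });
-appInsights.loadAppInsights();
-```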
--
-## Next steps
-
-* [Source map for JavaScript](source-map-support.md)
-* [Track usage](usage-overview.md)
-* [Custom events and metrics](api-custom-events-metrics.md)
-* [Build-measure-learn](usage-overview.md)
-* [Troubleshoot SDK load failure](https://learn.microsoft.com/troubleshoot/azure/azure-monitor/app-insights/javascript-sdk-troubleshooting)
azure-monitor Source Map Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/source-map-support.md
- Title: Source map support for JavaScript applications - Azure Monitor Application Insights
-description: Learn how to upload source maps to your Azure Storage account blob container by using Application Insights.
- Previously updated : 06/23/2020
-# Source map support for JavaScript applications
-
-Application Insights supports the uploading of source maps to your Azure Storage account blob container. You can use source maps to unminify call stacks found on the **End-to-end transaction details** page. You can also use source maps to unminify any exception sent by the [JavaScript SDK][ApplicationInsights-JS] or the [Node.js SDK][ApplicationInsights-Node.js].
-
-![Screenshot that shows selecting the option to unminify a call stack by linking with a storage account.](./media/source-map-support/details-unminify.gif)
-
-## Create a new storage account and blob container
-
-If you already have an existing storage account or blob container, you can skip this step.
-
-1. [Create a new storage account][create storage account].
-1. [Create a blob container][create blob container] inside your storage account. Set **Public access level** to **Private** to ensure that your source maps aren't publicly accessible.
-
- > [!div class="mx-imgBorder"]
- >![Screenshot that shows setting the container access level to Private.](./media/source-map-support/container-access-level.png)
-
-## Push your source maps to your blob container
-
-Integrate your continuous deployment pipeline with your storage account by configuring it to automatically upload your source maps to the configured blob container.
-
-You can upload source maps to your Azure Blob Storage container with the same folder structure they were compiled and deployed with. A common use case is to prefix a deployment folder with its version, for example, `1.2.3/static/js/main.js`. When you unminify via an Azure blob container called `sourcemaps`, the pipeline tries to fetch a source map located at `sourcemaps/1.2.3/static/js/main.js.map`.
-
-### Upload source maps via Azure Pipelines (recommended)
-
-If you're using Azure Pipelines to continuously build and deploy your application, add an [Azure file copy][azure file copy] task to your pipeline to automatically upload your source maps.
-
-> [!div class="mx-imgBorder"]
-> ![Screenshot that shows adding an Azure file copy task to your pipeline to upload your source maps to Azure Blob Storage.](./media/source-map-support/azure-file-copy.png)
-
-## Configure your Application Insights resource with a source map storage account
-
-You have two options for configuring your Application Insights resource with a source map storage account.
-
-### End-to-end transaction details tab
-
-From the **End-to-end transaction details** tab, select **Unminify**. Configure your resource if it's unconfigured.
-
-1. In the Azure portal, view the details of an exception that's minified.
-1. Select **Unminify**.
-1. If your resource isn't configured, configure it.
-
-### Properties tab
-
-To configure or change the storage account or blob container that's linked to your Application Insights resource:
-
-1. Go to the **Properties** tab of your Application Insights resource.
-1. Select **Change source map Blob Container**.
-1. Select a different blob container as your source map container.
-1. Select **Apply**.
-
-> [!div class="mx-imgBorder"]
-> ![Screenshot that shows reconfiguring your selected Azure blob container on the Properties pane.](./media/source-map-support/reconfigure.png)
-
-## Troubleshooting
-
-This section offers troubleshooting tips for common issues.
-
-### Required Azure role-based access control settings on your blob container
-
-Any user on the portal who uses this feature must be assigned at least as a [Storage Blob Data Reader][storage blob data reader] to your blob container. Assign this role to anyone who might use the source maps through this feature.
-
-> [!NOTE]
-> Depending on how the container was created, this role might not have been automatically assigned to you or your team.
-
-### Source map not found
-
-1. Verify that the corresponding source map is uploaded to the correct blob container.
-1. Verify that the source map file is named after the JavaScript file it maps to and uses the suffix `.map`.
-
- For example, `/static/js/main.4e2ca5fa.chunk.js` searches for the blob named `main.4e2ca5fa.chunk.js.map`.
-1. Check your browser's console to see if any errors were logged. Include this information in any support ticket.
-
-## Next steps
-
-[Azure file copy task](/azure/devops/pipelines/tasks/deploy/azure-file-copy)
-
-<!-- Remote URLs -->
-[create storage account]: ../../storage/common/storage-account-create.md?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&tabs=azure-portal
-[create blob container]: ../../storage/blobs/storage-quickstart-blobs-portal.md
-[storage blob data reader]: ../../role-based-access-control/built-in-roles.md#storage-blob-data-reader
-[ApplicationInsights-JS]: https://github.com/microsoft/applicationinsights-js
-[ApplicationInsights-Node.js]: https://github.com/microsoft/applicationinsights-node.js
-[azure file copy]: https://aka.ms/azurefilecopyreadme
azure-monitor Tutorial Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-asp-net-core.md
public class ValuesController : ControllerBase
public ActionResult<IEnumerable<string>> Get() { //Info level traces are not captured by default
- _logger.LogInfo("An example of an Info trace..")
+ _logger.LogInformation("An example of an Info trace..");
_logger.LogWarning("An example of a Warning trace.."); _logger.LogError("An example of an Error level message");
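The removed line shows the bug: `ILogger` has no `LogInfo` method, and even with `LogInformation` the Application Insights provider drops Information-level traces by default. As a hedged sketch (not part of this commit, and assuming a .NET 6+ minimal-hosting app with the `Microsoft.ApplicationInsights.AspNetCore` package), lowering the provider's filter makes those traces flow:

```csharp
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Logging.ApplicationInsights;

var builder = WebApplication.CreateBuilder(args);

// Register Application Insights; the connection string is read from configuration.
builder.Services.AddApplicationInsightsTelemetry();

// The Application Insights logger provider captures Warning and above by default.
// Lower the filter for all categories ("" matches everything) to capture Information too.
builder.Logging.AddFilter<ApplicationInsightsLoggerProvider>("", LogLevel.Information);

var app = builder.Build();
app.Run();
```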
azure-monitor Worker Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/worker-service.md
The full example is shared at this [GitHub page](https://github.com/microsoft/Ap
```csharp using Microsoft.ApplicationInsights; using Microsoft.ApplicationInsights.DataContracts;
+ using Microsoft.ApplicationInsights.WorkerService;
using Microsoft.Extensions.DependencyInjection; using Microsoft.Extensions.Logging; using System;
The full example is shared at this [GitHub page](https://github.com/microsoft/Ap
// Being a regular console app, there is no appsettings.json or configuration providers enabled by default. // Hence instrumentation key/ connection string and any changes to default logging level must be specified here. services.AddLogging(loggingBuilder => loggingBuilder.AddFilter<Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider>("Category", LogLevel.Information));
- services.AddApplicationInsightsTelemetryWorkerService("instrumentation key here");
+ services.AddApplicationInsightsTelemetryWorkerService((ApplicationInsightsServiceOptions options) => options.ConnectionString = "InstrumentationKey=<instrumentation key here>");
// To pass a connection string // - aiserviceoptions must be created
azure-monitor Autoscale Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-overview.md
For example, scale out your application by adding VMs when the average CPU usage
When the conditions in the rules are met, one or more autoscale actions are triggered, adding or removing VMs. You can also perform other actions like sending email, notifications, or webhooks to trigger processes in other systems.
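As a hedged illustration of that kind of rule (all resource names below are placeholders, not values from this article), the `az monitor autoscale` commands can create an autoscale setting and a CPU-based scale-out rule for a virtual machine scale set:

```azurecli
# Create an autoscale setting for a scale set with 2-10 instances (names are placeholders).
az monitor autoscale create --resource-group myResourceGroup \
  --resource myScaleSet --resource-type Microsoft.Compute/virtualMachineScaleSets \
  --name autoscale-demo --min-count 2 --max-count 10 --count 2

# Scale out by 2 instances when average CPU exceeds 70% over a 5-minute window.
az monitor autoscale rule create --resource-group myResourceGroup \
  --autoscale-name autoscale-demo \
  --condition "Percentage CPU > 70 avg 5m" --scale out 2
```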
-## Scale out and scale up
+## Horizontal vs. vertical scaling
-Autoscale scales in and out, which is an increase or decrease of the number of resource instances. Scaling in and out is also called horizontal scaling. For example, for a virtual machine scale set, scaling out means adding more virtual machines. Scaling in means removing virtual machines. Horizontal scaling is flexible in a cloud situation because you can use it to run a large number of VMs to handle load.
+Autoscale scales in and out, or horizontally. Scaling horizontally is an increase or decrease of the number of resource instances. For example, for a virtual machine scale set, scaling out means adding more virtual machines. Scaling in means removing virtual machines. Horizontal scaling is flexible in a cloud situation because you can use it to run a large number of VMs to handle load.
-In contrast, scaling up and down, or vertical scaling, keeps the number of resources constant but gives those resources more capacity in terms of memory, CPU speed, disk space, and network. Vertical scaling is limited by the availability of larger hardware, which eventually reaches an upper limit. Hardware size availability varies in Azure by region. Vertical scaling might also require a restart of the virtual machine during the scaling process.
+In contrast, scaling up and down, or vertical scaling, keeps the number of resource instances constant but gives those instances more capacity in terms of memory, CPU speed, disk space, and network. Vertical scaling is limited by the availability of larger hardware, which eventually reaches an upper limit. Hardware size availability varies in Azure by region. Vertical scaling might also require a restart of the VM during the scaling process. Autoscale doesn't support vertical scaling.
:::image type="content" source="./media/autoscale-overview/vertical-scaling.png" alt-text="A diagram that shows scaling up by adding CPU and memory to a virtual machine.":::
For code examples, see:
* [Tutorial: Automatically scale a virtual machine scale set with the Azure CLI](../../virtual-machine-scale-sets/tutorial-autoscale-cli.md) * [Tutorial: Automatically scale a virtual machine scale set with an Azure template](../../virtual-machine-scale-sets/tutorial-autoscale-powershell.md)
-## Horizontal vs. vertical scaling
-
-Autoscale scales horizontally, which is an increase or decrease of the number of resource instances. For example, in a virtual machine scale set, scaling out means adding more virtual machines. Scaling in means removing VMs. Horizontal scaling is flexible in a cloud situation because it allows you to run a large number of VMs to handle load.
-
-In contrast, vertical scaling keeps the same number of resources constant but gives them more capacity in terms of memory, CPU speed, disk space, and network. Adding or removing capacity in vertical scaling is known as scaling down. Vertical scaling is limited by the availability of larger hardware, which eventually reaches an upper limit. Hardware size availability varies in Azure by region. Vertical scaling might also require a restart of the VM during the scaling process.
- ## Supported services for autoscale Autoscale supports the following services.
Autoscale supports the following services.
| Azure API Management service | [Automatically scale an Azure API Management instance](../../api-management/api-management-howto-autoscale.md) | | Azure Data Explorer clusters | [Manage Azure Data Explorer clusters scaling to accommodate changing demand](/azure/data-explorer/manage-cluster-horizontal-scaling) | | Azure Stream Analytics | [Autoscale streaming units (preview)](../../stream-analytics/stream-analytics-autoscale.md) |
-| Azure SignalR Service (Premium tier) | [Automatically scale units of an Azure SignalR service](/azure/azure-signalr/signalr-howto-scale-autoscale) |
+| Azure SignalR Service (Premium tier) | [Automatically scale units of an Azure SignalR service](../../azure-signalr/signalr-howto-scale-autoscale.md) |
| Azure Machine Learning workspace | [Autoscale an online endpoint](../../machine-learning/how-to-autoscale-endpoints.md) | | Azure Spring Apps | [Set up autoscale for applications](../../spring-apps/how-to-setup-autoscale.md) | | Azure Media Services | [Autoscaling in Media Services](/azure/media-services/latest/release-notes#autoscaling) |
To learn more about autoscale, see the following resources:
* [Autoscale CLI reference](/cli/azure/monitor/autoscale) * [ARM template resource definition](/azure/templates/microsoft.insights/autoscalesettings) * [PowerShell Az.Monitor reference](/powershell/module/az.monitor/#monitor)
-* [REST API reference: Autoscale settings](/rest/api/monitor/autoscale-settings)
+* [REST API reference: Autoscale settings](/rest/api/monitor/autoscale-settings)
azure-monitor Container Insights Logging V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-logging-v2.md
Title: Configure the ContainerLogV2 schema (preview) for Container Insights
+ Title: Configure the ContainerLogV2 schema for Container Insights
description: Switch your ContainerLog table to the ContainerLogV2 schema.
Last updated 05/11/2022
-# Enable the ContainerLogV2 schema (preview)
-Azure Monitor Container insights is now in public preview of a new schema for container logs, called ContainerLogV2. As part of this schema, there are new fields to make common queries to view Azure Kubernetes Service (AKS) and Azure Arc-enabled Kubernetes data. In addition, this schema is compatible with [Basic Logs](../logs/basic-logs-configure.md), which offers a low-cost alternative to standard analytics logs.
+# Enable the ContainerLogV2 schema
+Azure Monitor Container insights offers a schema for container logs, called ContainerLogV2. As part of this schema, there are fields to make common queries to view Azure Kubernetes Service (AKS) and Azure Arc-enabled Kubernetes data. In addition, this schema is compatible with [Basic Logs](../logs/basic-logs-configure.md), which offers a low-cost alternative to standard analytics logs.
-The ContainerLogV2 schema is a preview feature. Container insights does not yet support the **View in Analytics** option, but the data is available when it's queried directly from the [Log Analytics](./container-insights-log-query.md) interface.
+>[!NOTE]
+>For Windows containers, the PodName isn't currently collected with ContainerLogV2.
The new fields are: * `ContainerName`
azure-monitor Data Collection Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-transformations.md
There are multiple methods to create transformations depending on the data colle
| Transformation in workspace DCR | [Add workspace transformation to Azure Monitor Logs by using the Azure portal](../logs/tutorial-workspace-transformations-portal.md)<br>[Add workspace transformation to Azure Monitor Logs by using Resource Manager templates](../logs/tutorial-workspace-transformations-api.md) ## Cost for transformations
-There's no direct cost for transformations, but you might incur charges for the following changes:
+While transformations themselves don't incur direct costs, the following scenarios can result in additional charges:
-- If your transformation increases the size of the incoming data, like by adding a calculated column, for example, you're charged at the normal rate for ingestion of that extra data.-- If your transformation reduces the incoming data by more than 50%, you're charged for ingestion of the amount of filtered data above 50%.
+- If a transformation increases the size of the incoming data, such as by adding a calculated column, you'll be charged the standard ingestion rate for the extra data.
+- If a transformation reduces the incoming data by more than 50%, you'll be charged for the amount of filtered data above 50%.
-The formula to determine the filter ingestion charge from transformations is `[GB filtered out by transformations] - ( [Total GB ingested] / 2 )`. For example, suppose that you ingest 100 GB on a particular day, and transformations remove 70 GB. You would be charged for 70 GB - (100 GB / 2) or 20 GB. To avoid this charge, you should use other methods to filter incoming data before the transformation is applied.
+To calculate the data processing charge resulting from transformations, use the following formula: [GB filtered out by transformations] - ([Total GB ingested] / 2). For example, if you ingest 100 GB of data and your transformations remove 70 GB, you're charged for 70 GB - (100 GB / 2), which is 20 GB. This calculation is done per data collection rule, per day. To avoid this charge, filter incoming data by using alternative methods before applying transformations. By doing so, you reduce the amount of data processed by transformations and, therefore, minimize any additional costs.
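As a quick sanity check of the formula, here's a minimal PowerShell sketch; the function name is invented for illustration, and the `Max` guard reflects that no charge applies when less than half of the ingested data is filtered out:

```powershell
# Hypothetical helper that evaluates the transformation charge formula above.
function Get-TransformationCharge {
    param (
        [double]$TotalGbIngested,
        [double]$GbFilteredOut
    )
    # You're only charged for filtered data above 50% of total ingestion.
    [Math]::Max(0, $GbFilteredOut - ($TotalGbIngested / 2))
}

Get-TransformationCharge -TotalGbIngested 100 -GbFilteredOut 70  # 20 GB charged
Get-TransformationCharge -TotalGbIngested 100 -GbFilteredOut 40  # 0 GB charged
```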
See [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor) for current charges for ingestion and retention of log data in Azure Monitor.
azure-monitor Diagnostic Settings Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings-policy.md
Title: Create diagnostic settings at scale using Azure Policy
-description: Use Azure Policy to create diagnostic settings in Azure Monitor to be created at scale as each Azure resource is created.
-
+ Title: Create diagnostic settings at scale using Azure policies and initiatives
+description: Use Azure Policy to create diagnostic settings in Azure Monitor at scale as each Azure resource is created.
+ Previously updated : 05/09/2022 Last updated : 02/25/2023
-# Create diagnostic settings at scale using Azure Policy
-Since a [diagnostic settings](diagnostic-settings.md) needs to be created for each monitored Azure resource, Azure Policy can be used to automatically create a diagnostic setting as each resource is created. Each Azure resource type has a unique set of categories that need to be listed in the diagnostic setting. Because of this fact, each resource type requires a separate policy definition. Some resource types have built-in policy definitions that you can assign without modification. For other resource types, you need to create a custom definition.
-With the addition of resource log category groups, you can now choose options that dynamically update as the log categories change. For more information, see [diagnostic settings sources](diagnostic-settings.md#sources) listed earlier in this article. All resource types have the "All" category. Some have the "Audit" category.
+# Create diagnostic settings at scale using Azure policies and initiatives
+
+In order to monitor Azure resources, it's necessary to create [diagnostic settings](./diagnostic-settings.md) for each resource. This process can be difficult to manage when you have many resources. To simplify the process of creating and applying diagnostic settings at scale, use Azure Policy to automatically generate diagnostic settings for both new and existing resources.
+
+Each Azure resource type has a unique set of categories listed in the diagnostic settings. Each resource type therefore requires a separate policy definition. Some resource types have built-in policy definitions that you can assign without modification. For other resource types, you can create a custom definition.
+
+## Log category groups
+
+Log category groups group together similar types of logs. Category groups make it easy to refer to multiple logs in a single command. An **allLogs** category group exists containing all of the logs. There's also an **audit** category group that includes all audit logs. By using a category group, you can define a policy that dynamically updates as new log categories are added to the group.
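To make the category group concrete, here's a minimal sketch of a diagnostic setting that routes the **audit** group to a workspace. The API version and surrounding layout are assumptions based on the common diagnostic settings schema rather than taken from this article; the name echoes the `setByPolicy-LogAnalytics` default mentioned later:

```json
{
  "type": "Microsoft.Insights/diagnosticSettings",
  "apiVersion": "2021-05-01-preview",
  "name": "setByPolicy-LogAnalytics",
  "properties": {
    "workspaceId": "<log analytics workspace resource ID>",
    "logs": [
      {
        "categoryGroup": "audit",
        "enabled": true
      }
    ]
  }
}
```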
## Built-in policy definitions for Azure Monitor
-There are two built-in policy definitions for each resource type: one to send to a Log Analytics workspace and another to send to an event hub. If you need only one location, assign that policy for the resource type. If you need both, assign both policy definitions for the resource.
+There are generally three built-in policy definitions for each resource type, corresponding to the three destinations to send diagnostics to:
+* Log Analytics workspaces
+* Azure Storage accounts
+* Event hubs
-For example, the following image shows the built-in diagnostic setting policy definitions for Azure Data Lake Analytics.
+Assign the policies for the resource type according to which destinations you need.
-![Partial screenshot from the Azure Policy Definitions page showing two built-in diagnostic setting policy definitions for Data Lake Analytics.](media/diagnostic-settings-policy/built-in-diagnostic-settings.png)
+A set of built-in policies and initiatives based on the audit log category group has been developed to help you apply diagnostic settings with only a few steps. For more information, see [Enable diagnostics settings by category group using built-in policies](./diagnostics-settings-policies-deployifnotexists.md).
-For a complete listof built-in policies for Azure Monitor, see [Azure Policy built-in definitions for Azure Monitor](../policy-reference.md)
+For a complete list of built-in policies for Azure Monitor, see [Azure Policy built-in definitions for Azure Monitor](../policy-reference.md).
## Custom policy definitions
-For resource types that don't have a built-in policy, you need to create a custom policy definition. You could do this manually in the Azure portal by copying an existing built-in policy and then modifying it for your resource type. It's more efficient, though, to create the policy programmatically by using a script in the PowerShell Gallery.
+For resource types that don't have a built-in policy, you need to create a custom policy definition. You can create a new policy manually in the Azure portal by copying an existing built-in policy and then modifying it for your resource type. Alternatively, create the policy programmatically by using a script in the PowerShell Gallery.
The script [Create-AzDiagPolicy](https://www.powershellgallery.com/packages/Create-AzDiagPolicy) creates policy files for a particular resource type that you can install by using PowerShell or the Azure CLI. Use the following procedure to create a custom policy definition for diagnostic settings:
By using initiative parameters, you can specify the workspace or any other detai
![Screenshot that shows initiative parameters on the Parameters tab.](media/diagnostic-settings-policy/initiative-parameters.png) ## Remediation
-The initiative will apply to each virtual machine as it's created. A [remediation task](../../governance/policy/how-to/remediate-resources.md) deploys the policy definitions in the initiative to existing resources, so you can create diagnostic settings for any resources that were already created.
+The initiative will be applied to each virtual machine as it's created. A [remediation task](../../governance/policy/how-to/remediate-resources.md) deploys the policy definitions in the initiative to existing resources, so you can create diagnostic settings for any resources that were already created.
When you create the assignment by using the Azure portal, you have the option of creating a remediation task at the same time. See [Remediate non-compliant resources with Azure Policy](../../governance/policy/how-to/remediate-resources.md) for details on the remediation.
When you create the assignment by using the Azure portal, you have the option of
When deploying a diagnostic setting, you receive an error message, similar to *Metric category 'xxxx' is not supported*. You may receive this error even though your previous deployment succeeded.
-The problem occurs when using a Resource Manager template, REST API, Azure CLI, or Azure PowerShell. Diagnostic settings created via the Azure portal are not affected as only the supported category names are presented.
+The problem occurs when using a Resource Manager template, REST API, Azure CLI, or Azure PowerShell. Diagnostic settings created via the Azure portal aren't affected as only the supported category names are presented.
-The problem is caused by a recent change in the underlying API. Metric categories other than 'AllMetrics' are not supported and never were except for a few specific Azure services. In the past, other category names were ignored when deploying a diagnostic setting. The Azure Monitor backend redirected these categories to 'AllMetrics'. As of February 2021, the backend was updated to specifically confirm the metric category provided is accurate. This change has caused some deployments to fail.
+The problem is caused by a recent change in the underlying API. Metric categories other than 'AllMetrics' aren't supported and never were except for a few specific Azure services. In the past, other category names were ignored when deploying a diagnostic setting. The Azure Monitor backend redirected these categories to 'AllMetrics'. As of February 2021, the backend was updated to specifically confirm the metric category provided is accurate. This change has caused some deployments to fail.
If you receive this error, update your deployments to replace any metric category names with 'AllMetrics' to fix the issue. If the deployment was previously adding multiple categories, only one with the 'AllMetrics' reference should be kept. If you continue to have the problem, contact Azure support through the Azure portal. ### Setting disappears due to non-ASCII characters in resourceID
-Diagnostic settings do not support resourceIDs with non-ASCII characters (for example, Preproducci├│n). Since you cannot rename resources in Azure, your only option is to create a new resource without the non-ASCII characters. If the characters are in a resource group, you can move the resources under it to a new one. Otherwise, you'll need to recreate the resource.
+Diagnostic settings don't support resourceIDs with non-ASCII characters (for example, Preproducci├│n). Since you can't rename resources in Azure, your only option is to create a new resource without the non-ASCII characters. If the characters are in a resource group, you can move the resources under it to a new one. Otherwise, you'll need to recreate the resource.
## Next steps
azure-monitor Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings.md
Information on these newer features is included in this article.
## Sources
-Here are the source options.
+There are three sources for diagnostic information:
+* Metrics
+* Resource logs
+* Activity logs
### Metrics
azure-monitor Diagnostics Settings Policies Deployifnotexists https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostics-settings-policies-deployifnotexists.md
+
+ Title: Enable diagnostics settings by category group using built-in policies
+description: Use Azure built-in policies to create diagnostic settings in Azure Monitor.
+ Last updated : 02/25/2023
+
+
+# Built-in policies for Azure Monitor
+Policies and policy initiatives provide a simple method to enable logging at scale via diagnostic settings for Azure Monitor. Using a policy initiative, you can turn on audit logging for all [supported resources](#supported-resources) in your Azure environment.
+
+Enable resource logs to track activities and events that take place on your resources and give you visibility and insights into any changes that occur.
+Assign policies to enable resource logs and to send them to destinations according to your needs. Send logs to Event Hubs for third-party SIEM systems, enabling continuous security operations. Send logs to storage accounts for longer term storage or the fulfillment of regulatory compliance.
+
+A set of built-in policies and initiatives exists to direct resource logs to Log Analytics workspaces, Event Hubs, and Storage Accounts. The policies enable audit logging, sending logs belonging to the **audit** log category group to an Event Hub, Log Analytics workspace, or Storage Account. The policies' `effect` is `DeployIfNotExists`, which deploys a diagnostic setting only when one isn't already defined.
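For orientation, a heavily abbreviated sketch of what such a policy rule looks like follows; the key vault trigger is only an example, and apart from the `roleDefinitionIds` value (which appears in the definitions shown later in this article) the shape is an assumption based on the standard `DeployIfNotExists` structure:

```json
{
  "if": {
    "field": "type",
    "equals": "Microsoft.KeyVault/vaults"
  },
  "then": {
    "effect": "DeployIfNotExists",
    "details": {
      "type": "Microsoft.Insights/diagnosticSettings",
      "roleDefinitionIds": [
        "/providers/Microsoft.Authorization/roleDefinitions/92aaf0da-9dab-42b6-94a3-d43ce8d16293"
      ],
      "deployment": {
        "properties": {}
      }
    }
  }
}
```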
++
+## Deploy policies
+Deploy the policies and initiatives using the Azure portal, CLI, PowerShell, or Azure Resource Manager templates.
+### [Azure portal](#tab/portal)
+
+The following steps show how to apply the policy that sends audit logs for key vaults to a Log Analytics workspace.
+
+1. From the Policy page, select **Definitions**.
+
+1. Select your scope. You can apply a policy to the entire subscription, a resource group, or an individual resource.
+1. From the **Definition type** dropdown, select **Policy**.
+1. Select **Monitoring** from the **Category** dropdown.
+1. Enter *keyvault* in the **Search** field.
+1. Select the **Enable logging by category group for Key vaults (microsoft.keyvault/vaults) to Log Analytics** policy.
+ :::image type="content" source="./media/diagnostics-settings-policies-deployifnotexists/policy-definitions.png" alt-text="A screenshot of the policy definitions page.":::
+1. From the policy definition page, select **Assign**.
+1. Select the **Parameters** tab.
+1. Select the Log Analytics Workspace that you want to send the audit logs to.
+1. Select the **Remediation** tab.
+ :::image type="content" source="./media/diagnostics-settings-policies-deployifnotexists/assign-policy-parameters.png" alt-text="A screenshot of the assign policy page, parameters tab.":::
+1. On the remediation tab, select the keyvault policy from the **Policy to remediate** dropdown.
+1. Select the **Create a Managed Identity** checkbox.
+1. Under **Type of Managed Identity**, select **System assigned Managed Identity**.
+1. Select **Review + create**, then select **Create**.
+ :::image type="content" source="./media/diagnostics-settings-policies-deployifnotexists/assign-policy-remediation.png" alt-text="A screenshot of the assign policy page, remediation tab.":::
+
+The policy is visible in the resource's diagnostic settings after approximately 30 minutes.
+
+### [CLI](#tab/cli)
+To apply a policy using the CLI, use the following commands:
+
+1. Create a policy assignment using [`az policy assignment create`](/cli/azure/policy/assignment#az-policy-assignment-create).
+ ```azurecli
+ az policy assignment create --name <policy assignment name> --policy "6b359d8f-f88d-4052-aa7c-32015963ecc1" --scope <scope> --params "{\"logAnalytics\": {\"value\": \"<log analytics workspace resource ID>\"}}" --mi-system-assigned --location <location>
+ ```
+ For example, to apply the policy to send audit logs to a Log Analytics workspace:
+
+ ```azurecli
+ az policy assignment create --name "policy-assignment-1" --policy "6b359d8f-f88d-4052-aa7c-32015963ecc1" --scope /subscriptions/12345678-aaaa-bbbb-cccc-1234567890ab/resourceGroups/rg-001 --params "{\"logAnalytics\": {\"value\": \"/subscriptions/12345678-aaaa-bbbb-cccc-1234567890ab/resourcegroups/rg-001/providers/microsoft.operationalinsights/workspaces/workspace-001\"}}" --mi-system-assigned --location eastus
+ ```
+
+1. Assign the required role to the identity created for the policy assignment.
+Find the role in the policy definition by searching for *roleDefinitionIds*.
+
+ ```json
+ ...},
+ "roleDefinitionIds": [
+ "/providers/Microsoft.Authorization/roleDefinitions/92aaf0da-9dab-42b6-94a3-d43ce8d16293"
+ ],
+ "deployment": {
+ "properties": {...
+ ```
+ Assign the required role using [`az policy assignment identity assign`](/cli/azure/policy/assignment/identity):
+ ```azurecli
+ az policy assignment identity assign --system-assigned --resource-group <resource group name> --role <role name or ID> --identity-scope </scope> --name <policy assignment name>
+ ```
+ For example:
+ ```azurecli
+ az policy assignment identity assign --system-assigned --resource-group rg-001 --role 92aaf0da-9dab-42b6-94a3-d43ce8d16293 --identity-scope /subscriptions/12345678-aaaa-bbbb-cccc-1234567890ab/resourceGroups/rg-001 --name policy-assignment-1
+ ```
+1. Trigger a scan to find existing resources using [`az policy state trigger-scan`](/cli/azure/policy/state#az-policy-state-trigger-scan).
+
+ ```azurecli
+ az policy state trigger-scan --resource-group rg-001
+ ```
+
+1. Create a remediation task to apply the policy to existing resources using [`az policy remediation create`](/cli/azure/policy/remediation#az-policy-remediation-create).
+
+ ```azurecli
+ az policy remediation create -g <resource group name> --policy-assignment <policy assignment name> --name <remediation name>
+ ```
+
+ For example:
+ ```azurecli
+ az policy remediation create -g rg-001 -n remediation-001 --policy-assignment policy-assignment-1
+ ```
+
+For more information on policy assignment using CLI, see [Azure CLI reference - az policy assignment](/cli/azure/policy/assignment#az-policy-assignment-create).
+
+### [PowerShell](#tab/Powershell)
+
+To apply a policy using PowerShell, use the following commands:
+
+1. Set up your environment.
+ Select your subscription and set your resource group:
+ ```azurepowershell
+ Select-AzSubscription <subscriptionID>
+ $rg = Get-AzResourceGroup -Name <resource groups name>
+ ```
+
+1. Get the policy definition and configure the parameters for the policy. In the example below, we assign the policy to send key vault logs to a Log Analytics workspace.
+ ```azurepowershell
+ $definition = Get-AzPolicyDefinition | Where-Object Name -eq '6b359d8f-f88d-4052-aa7c-32015963ecc1'
+ $params = @{"logAnalytics"="/subscriptions/<subscriptionID>/resourcegroups/<resource group>/providers/microsoft.operationalinsights/workspaces/<log analytics workspace name>"}
+ ```
+
+1. Assign the policy
+ ```azurepowershell
+ $policyAssignment=New-AzPolicyAssignment -Name <assignment name> -DisplayName "assignment display name" -Scope $rg.ResourceId -PolicyDefinition $definition -PolicyparameterObject $params -IdentityType 'SystemAssigned' -Location <location>
+
+ #To get your assignment, use:
+ $policyAssignment=Get-AzPolicyAssignment -Name '<assignment name>' -Scope '/subscriptions/<subscriptionID>/resourcegroups/<resource group name>'
+
+ ```
+
+1. Assign the required role or roles to the system-assigned managed identity.
+ ```azurepowershell
+ $principalID = $policyAssignment.Identity.PrincipalId
+ $roleDefinitionIds = $definition.Properties.policyRule.then.details.roleDefinitionIds
+ $roleDefinitionIds | ForEach-Object {
+     $roleDefId = $_.Split("/") | Select-Object -Last 1
+     New-AzRoleAssignment -Scope $rg.ResourceId -ObjectId $principalID -RoleDefinitionId $roleDefId
+ }
+ ```
+
+1. Scan for compliance, then create a remediation task to force compliance for existing resources.
+ ```azurepowershell
+ Start-AzPolicyComplianceScan -ResourceGroupName $rg.ResourceGroupName
+ Start-AzPolicyRemediation -Name $policyAssignment.Name -PolicyAssignmentId $policyAssignment.PolicyAssignmentId -ResourceGroupName $rg.ResourceGroupName
+ ```
+
+1. Check compliance
+ ```azurepowershell
+ Get-AzPolicyState -PolicyAssignmentName $policyAssignment.Name -ResourceGroupName $policyAssignment.ResourceGroupName | Select-Object IsCompliant, ResourceId
+ ```
+
+## Remediation tasks
+
+Policies are applied to new resources when they're created. To apply a policy to existing resources, create a remediation task. Remediation tasks bring resources into compliance with a policy.
+
+Remediation tasks act for specific policies. For initiatives that contain multiple policies, create a remediation task for each policy in the initiative where you have resources that you want to bring into compliance.
+
+ Define remediation tasks when you first assign the policy, or at any stage after assignment.
+
+To create a remediation task for policies during the policy assignment, select the **Remediation** tab on the **Assign policy** page and select the **Create remediation task** checkbox.
+
+To create a remediation task after the policy has been assigned, select your assigned policy from the list on the Policy Assignments page.
+
+
+Select **Remediate**.
+Track the status of your remediation task in the **Remediation tasks** tab of the Policy Remediation page.
+
+For more information on remediation tasks, see [Remediate noncompliant resources](../../governance/policy/how-to/remediate-resources.md).
+
+## Assign initiatives
+
+Initiatives are collections of policies. There are three initiatives for Azure Monitor diagnostic settings:
++ [Enable audit category group resource logging for supported resources to Event Hubs](https://portal.azure.com/?feature.customportal=false&feature.canmodifystamps=true&Microsoft_Azure_Monitoring_Logs=stage1&Microsoft_OperationsManagementSuite_Workspace=stage1#view/Microsoft_Azure_Policy/InitiativeDetailBlade/id/%2Fproviders%2FMicrosoft.Authorization%2FpolicySetDefinitions%2F1020d527-2764-4230-92cc-7035e4fcf8a7/scopes~/%5B%22%2Fsubscriptions%2F12345678-aaaa-bbbb-cccc-1234567890ab%22%5D)
++ [Enable audit category group resource logging for supported resources to Log Analytics](https://portal.azure.com/?feature.customportal=false&feature.canmodifystamps=true&Microsoft_Azure_Monitoring_Logs=stage1&Microsoft_OperationsManagementSuite_Workspace=stage1#view/Microsoft_Azure_Policy/InitiativeDetailBlade/id/%2Fproviders%2FMicrosoft.Authorization%2FpolicySetDefinitions%2Ff5b29bc4-feca-4cc6-a58a-772dd5e290a5/scopes~/%5B%22%2Fsubscriptions%2F12345678-aaaa-bbbb-cccc-1234567890ab%22%5D)
++ [Enable audit category group resource logging for supported resources to storage](https://portal.azure.com/?feature.customportal=false&feature.canmodifystamps=true&Microsoft_Azure_Monitoring_Logs=stage1&Microsoft_OperationsManagementSuite_Workspace=stage1#view/Microsoft_Azure_Policy/InitiativeDetailBlade/id/%2Fproviders%2FMicrosoft.Authorization%2FpolicySetDefinitions%2F8d723fb6-6680-45be-9d37-b1a4adb52207/scopes~/%5B%22%2Fsubscriptions%2F12345678-aaaa-bbbb-cccc-1234567890ab%22%5D)
+
+In this example, we assign an initiative for sending audit logs to a Log Analytics workspace.
+
+### [Azure portal](#tab/portal)
+
+1. From the policy **Definitions** page, select your scope.
+
+1. Select *Initiative* in the **Definition type** dropdown.
+1. Select *Monitoring* in the **Category** dropdown.
+1. Enter *audit* in the **Search** field.
+1. Select the *Enable audit category group resource logging for supported resources to Log Analytics* initiative.
+1. On the following page, select **Assign**.
+
+1. On the **Basics** tab of the **Assign initiative** page, select a **Scope** that you want the initiative to apply to.
+1. Enter a name in the **Assignment name** field.
+1. Select the **Parameters** tab.
+
+ The **Parameters** tab contains the parameters defined in the policies. In this case, we need to select the Log Analytics workspace that we want to send the logs to. For more information on the individual parameters for each policy, see [Policy-specific parameters](#policy-specific-parameters).
+
+1. Select the **Log Analytics workspace** to send your audit logs to.
+
+1. Select **Review + create**, then select **Create**.
+
+To verify that your policy or initiative assignment is working, create a resource in the subscription or resource group scope that you defined in your policy assignment.
+
+After 10 minutes, select the **Diagnostics settings** page for your resource.
+Your diagnostic setting appears in the list with the default name *setByPolicy-LogAnalytics* and the workspace name that you configured in the policy.
++
+Change the default name in the **Parameters** tab of the **Assign initiative** or policy page by unselecting the **Only show parameters that need input or review** checkbox.
++
+### [PowerShell](#tab/Powershell)
++
+1. Set up your environment variables
+ ```azurepowershell
+ # Set up your environment variables.
+ $subscriptionId = "<your subscription ID>";
+ Select-AzSubscription $subscriptionId;
+ $rg = Get-AzResourceGroup -Name "<your resource group name>";
+ $logAnalyticsWorkspaceId = "/subscriptions/$subscriptionId/resourcegroups/$($rg.ResourceGroupName)/providers/microsoft.operationalinsights/workspaces/<your log analytics workspace>";
+ ```
+
+1. Get the initiative definition. In this example, we use the initiative *Enable audit category group resource logging for supported resources to Log Analytics*, resource ID "/providers/Microsoft.Authorization/policySetDefinitions/f5b29bc4-feca-4cc6-a58a-772dd5e290a5".
+ ```azurepowershell
+ $definition = Get-AzPolicySetDefinition | Where-Object ResourceID -eq '/providers/Microsoft.Authorization/policySetDefinitions/f5b29bc4-feca-4cc6-a58a-772dd5e290a5';
+ ```
+
+1. Set an assignment name and configure parameters. For this initiative, the parameters include the Log Analytics workspace ID.
+ ```azurepowershell
+ $assignmentName = "<your assignment name>";
+ $params = @{"logAnalytics"="/subscriptions/$subscriptionId/resourcegroups/$($rg.ResourceGroupName)/providers/microsoft.operationalinsights/workspaces/<your log analytics workspace>"}
+ ```
+
+1. Assign the initiative using the parameters
+ ```azurepowershell
+ $policyAssignment=New-AzPolicyAssignment -Name $assignmentName -Scope $rg.ResourceId -PolicySetDefinition $definition -PolicyparameterObject $params -IdentityType 'SystemAssigned' -Location eastus;
+ ```
+
+1. Assign the `Contributor` role to the system-assigned managed identity. For other initiatives, check which roles are required.
+ ```azurepowershell
+ New-AzRoleAssignment -Scope $rg.ResourceId -ObjectId $policyAssignment.Identity.PrincipalId -RoleDefinitionName Contributor;
+ ```
+1. Scan for policy compliance. The `Start-AzPolicyComplianceScan` command takes a few minutes to return.
+ ```azurepowershell
+ Start-AzPolicyComplianceScan -ResourceGroupName $rg.ResourceGroupName;
+ ```
+
+1. Get a list of resources to remediate and the required parameters by calling `Get-AzPolicyState`.
+ ```azurepowershell
+ $assignmentState=Get-AzPolicyState -PolicyAssignmentName $assignmentName -ResourceGroupName $rg.ResourceGroupName;
+ $policyAssignmentId=$assignmentState.PolicyAssignmentId[0];
+ $policyDefinitionReferenceIds=$assignmentState.PolicyDefinitionReferenceId;
+ ```
+
+1. For each resource type with noncompliant resources, start a remediation task.
+ ```azurepowershell
+ $policyDefinitionReferenceIds | ForEach-Object {
+ $referenceId = $_
+ Start-AzPolicyRemediation -ResourceGroupName $rg.ResourceGroupName -PolicyAssignmentId $policyAssignmentId -PolicyDefinitionReferenceId $referenceId -Name "$($rg.ResourceGroupName) remediation $referenceId";
+ }
+ ```
+
+1. Check the compliance state when the remediation tasks have completed.
+ ```azurepowershell
+ Get-AzPolicyState -PolicyAssignmentName $assignmentName -ResourceGroupName $rg.ResourceGroupName | Select-Object IsCompliant, ResourceId
+ ```
+
+You can get your policy assignment details using the following command:
+ ```azurepowershell
+ $policyAssignment=Get-AzPolicyAssignment -Name $assignmentName -Scope "/subscriptions/$subscriptionId/resourcegroups/$($rg.ResourceGroupName)";
+ ```
+
+### [CLI](#tab/cli)
++
+1. Sign in to your Azure account using the `az login` command.
+
+1. Select the subscription where you want to apply the policy initiative using the `az account set` command.
+
+1. Assign the initiative using [`az policy assignment create`](/cli/azure/policy/assignment#az-policy-assignment-create).
+
+ ```azurecli
+ az policy assignment create --name <assignment name> --resource-group <resource group name> --policy-set-definition <initiative name> --params <parameters object> --mi-system-assigned --location <location>
+ ```
+ For example:
+
+ ```azurecli
+ az policy assignment create --name "assign-cli-example-01" --resource-group "cli-example-01" --policy-set-definition 'f5b29bc4-feca-4cc6-a58a-772dd5e290a5' --params '{"logAnalytics":{"value":"/subscriptions/12345678-aaaa-bbbb-cccc-1234567890ab/resourcegroups/cli-example-01/providers/microsoft.operationalinsights/workspaces/cli-example-01-ws"}, "diagnosticSettingName":{"value":"AssignedBy-cli-example-01"}}' --mi-system-assigned --location eastus
+ ```
+1. Assign the required role to the system-assigned managed identity.
+
+ Find the roles to assign in any of the policy definitions in the initiative by searching the definition for *roleDefinitionIds*, for example:
+
+ ```json
+ ...},
+ "roleDefinitionIds": [
+ "/providers/Microsoft.Authorization/roleDefinitions/92aaf0da-9dab-42b6-94a3-d43ce8d16293"
+ ],
+ "deployment": {
+ "properties": {...
+ ```
+ Assign the required role using [`az policy assignment identity assign`](/cli/azure/policy/assignment/identity):
+ ```azurecli
+ az policy assignment identity assign --system-assigned --resource-group <resource group name> --role <role name or ID> --identity-scope <scope> --name <policy assignment name>
+ ```
+
+ For example:
+ ```azurecli
+ az policy assignment identity assign --system-assigned --resource-group "cli-example-01" --role 92aaf0da-9dab-42b6-94a3-d43ce8d16293 --identity-scope "/subscriptions/12345678-aaaa-bbbb-cccc-1234567890ab/resourcegroups/cli-example-01" --name assign-cli-example-01
+ ```
+
+1. Create remediation tasks for the policies in the initiative.
+
+ Remediation tasks are created per-policy. Each task is for a specific `definition-reference-id`, specified in the initiative as `policyDefinitionReferenceId`. To find the `definition-reference-id` parameter, use the following command:
+ ```azurecli
+ az policy set-definition show --name f5b29bc4-feca-4cc6-a58a-772dd5e290a5 | grep policyDefinitionReferenceId
+ ```
+ Remediate the resources using [`az policy remediation create`](/cli/azure/policy/remediation#az-policy-remediation-create).
+
+ ```azurecli
+ az policy remediation create --resource-group <resource group name> --policy-assignment <assignment name> --name <remediation task name> --definition-reference-id "policy specific reference ID" --resource-discovery-mode ReEvaluateCompliance
+ ```
+ For example:
+ ```azurecli
+ az policy remediation create --resource-group "cli-example-01" --policy-assignment assign-cli-example-01 --name "rem-assign-cli-example-01" --definition-reference-id "keyvault-vaults" --resource-discovery-mode ReEvaluateCompliance
+ ```
+ To create a remediation task for all of the policies in the initiative, use the following example:
+ ```bash
+ for policyDefinitionReferenceId in $(az policy set-definition show --name f5b29bc4-feca-4cc6-a58a-772dd5e290a5 |grep policyDefinitionReferenceId |cut -d":" -f2|sed s/\"//g)
+ do
+ az policy remediation create --resource-group "cli-example-01" --policy-assignment assign-cli-example-01 --name remediate-$policyDefinitionReferenceId --definition-reference-id $policyDefinitionReferenceId;
+ done
+ ```
+
+## Common parameters
+
+The following table describes the common parameters for each set of policies.
+
+|Parameter| Description| Valid Values|Default|
+|---|---|---|---|
+|effect| Enable or disable the execution of the policy|DeployIfNotExists,<br>AuditIfNotExists,<br>Disabled|DeployIfNotExists|
+|diagnosticSettingName|Diagnostic Setting Name||setByPolicy-LogAnalytics|
+|categoryGroup|Diagnostic category group|none,<br>audit,<br>allLogs|audit|
+
+## Policy-specific parameters
+### Log Analytics policy parameters
+ This policy deploys a diagnostic setting using a category group to route logs to a Log Analytics workspace.
+
+|Parameter| Description| Valid Values|Default|
+|---|---|---|---|
+|resourceLocationList|Resource Location List to send logs to nearby Log Analytics. <br>"*" selects all locations|Supported locations|\*|
+|logAnalytics|Log Analytics Workspace|||
+
+### Event Hubs policy parameters
+
+This policy deploys a diagnostic setting using a category group to route logs to an Event Hub.
+
+|Parameter| Description| Valid Values|Default|
+|---|---|---|---|
+|resourceLocation|Resource Location must be the same location as the event hub Namespace|Supported locations||
+|eventHubAuthorizationRuleId|Event Hub Authorization Rule ID. The authorization rule is at event hub namespace level. For example, /subscriptions/{subscription ID}/resourceGroups/{resource group}/providers/Microsoft.EventHub/namespaces/{Event Hub namespace}/authorizationrules/{authorization rule}|||
+|eventHubName|Event Hub Name||Monitoring|
++
+### Storage Accounts policy parameters
+This policy deploys a diagnostic setting using a category group to route logs to a Storage Account.
+
+|Parameter| Description| Valid Values|Default|
+|---|---|---|---|
+|resourceLocation|Resource Location must be in the same location as the Storage Account|Supported locations||
+|storageAccount|Storage Account resourceId|||
+
+## Supported Resources
+
+Built-in Audit logs policies for Log Analytics workspaces, Event Hubs, and Storage Accounts exist for the following resources:
+
+* microsoft.agfoodplatform/farmbeats
+* microsoft.apimanagement/service
+* microsoft.appconfiguration/configurationstores
+* microsoft.attestation/attestationproviders
+* microsoft.automation/automationaccounts
+* microsoft.avs/privateclouds
+* microsoft.cache/redis
+* microsoft.cdn/profiles
+* microsoft.cognitiveservices/accounts
+* microsoft.containerregistry/registries
+* microsoft.devices/iothubs
+* microsoft.eventgrid/topics
+* microsoft.eventgrid/domains
+* microsoft.eventgrid/partnernamespaces
+* microsoft.eventhub/namespaces
+* microsoft.keyvault/vaults
+* microsoft.keyvault/managedhsms
+* microsoft.machinelearningservices/workspaces
+* microsoft.media/mediaservices
+* microsoft.media/videoanalyzers
+* microsoft.netapp/netappaccounts/capacitypools/volumes
+* microsoft.network/publicipaddresses
+* microsoft.network/virtualnetworkgateways
+* microsoft.network/p2svpngateways
+* microsoft.network/frontdoors
+* microsoft.network/bastionhosts
+* microsoft.operationalinsights/workspaces
+* microsoft.purview/accounts
+* microsoft.servicebus/namespaces
+* microsoft.signalrservice/signalr
+* microsoft.signalrservice/webpubsub
+* microsoft.sql/servers/databases
+* microsoft.sql/managedinstances
+
+## Next steps
+
+* [Create diagnostic settings at scale using Azure Policy](./diagnostic-settings-policy.md)
+* [Azure Policy built-in definitions for Azure Monitor](../policy-reference.md)
+* [Azure Policy Overview](../../governance/policy/overview.md)
+* [Azure Enterprise Policy as Code](https://techcommunity.microsoft.com/t5/core-infrastructure-and-security/azure-enterprise-policy-as-code-a-new-approach/ba-p/3607843)
azure-monitor Data Platform Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-platform-logs.md
The following table describes some of the ways that you can use Azure Monitor Lo
| Visualize | Pin query results rendered as tables or charts to an [Azure dashboard](../../azure-portal/azure-portal-dashboards.md).<br>Create a [workbook](../visualize/workbooks-overview.md) to combine with multiple sets of data in an interactive report. <br>Export the results of a query to [Power BI](./log-powerbi.md) to use different visualizations and share with users outside Azure.<br>Export the results of a query to [Grafana](../visualize/grafana-plugin.md) to use its dashboarding and combine with other data sources.| | Get insights | Logs support [insights](../insights/insights-overview.md) that provide a customized monitoring experience for particular applications and services. | | Retrieve | Access log query results from a:<ul><li>Command line via the [Azure CLI](/cli/azure/monitor/log-analytics) or [Azure PowerShell cmdlets](/powershell/module/az.operationalinsights).</li><li>Custom app via the [REST API](/rest/api/loganalytics/) or client library for [.NET](/dotnet/api/overview/azure/Monitor.Query-readme), [Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/monitor/azquery), [Java](/java/api/overview/azure/monitor-query-readme), [JavaScript](/javascript/api/overview/azure/monitor-query-readme), or [Python](/python/api/overview/azure/monitor-query-readme).</li></ul> |
-| Import | Upload logs from a custom app via the [REST API](/azure/azure-monitor/logs/logs-ingestion-api-overview) or client library for [.NET](/dotnet/api/overview/azure/Monitor.Ingestion-readme), [Java](/java/api/overview/azure/monitor-ingestion-readme), [JavaScript](/javascript/api/overview/azure/monitor-ingestion-readme), or [Python](/python/api/overview/azure/monitor-ingestion-readme). |
+| Import | Upload logs from a custom app via the [REST API](./logs-ingestion-api-overview.md) or client library for [.NET](/dotnet/api/overview/azure/Monitor.Ingestion-readme), [Java](/java/api/overview/azure/monitor-ingestion-readme), [JavaScript](/javascript/api/overview/azure/monitor-ingestion-readme), or [Python](/python/api/overview/azure/monitor-ingestion-readme). |
| Export | Configure [automated export of log data](./logs-data-export.md) to an Azure Storage account or Azure Event Hubs.<br>Build a workflow to retrieve log data and copy it to an external location by using [Azure Logic Apps](./logicapp-flow-connector.md). | ![Diagram that shows an overview of Azure Monitor Logs.](media/data-platform-logs/logs-overview.png)
The experience of using Log Analytics to work with Azure Monitor queries in the
- Learn about [log queries](./log-query-overview.md) to retrieve and analyze data from a Log Analytics workspace.
- Learn about [metrics in Azure Monitor](../essentials/data-platform-metrics.md).
-- Learn about the [monitoring data available](../data-sources.md) for various resources in Azure.
+- Learn about the [monitoring data available](../data-sources.md) for various resources in Azure.
azure-monitor Usage Estimated Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/usage-estimated-costs.md
description: Overview of how Azure Monitor is billed and how to estimate and ana
Previously updated : 05/05/2022 Last updated : 03/15/2023 # Azure Monitor cost and usage
To limit the view to Azure Monitor charges, [create a filter](../cost-management
- Insight and Analytics >[!NOTE]
->Usage for Azure Monitor Logs (Log Analytics) can be billed with the **Log Analytics** service (for Pay-as-you-go data ingestion and data retention), or with the **Azure Monitor** service (for Commitment Tiers, Basic Logs and Data Export) or with the **Insight and Analytics** service when using the legacy Per Node pricing tier. Except for a small set of legacy resources, Application Insights data ingestion and retention are billed as the **Log Analytics** service.
+>Usage for Azure Monitor Logs (Log Analytics) can be billed with the **Log Analytics** service (for Pay-as-you-go Data Ingestion and Data Retention), or with the **Azure Monitor** service (for Commitment Tiers, Basic Logs, Search, Search Jobs, Data Archive and Data Export) or with the **Insight and Analytics** service when using the legacy Per Node pricing tier. Except for a small set of legacy resources, classic Application Insights data ingestion and retention are billed as the **Log Analytics** service. Note that when you change your workspace from a Pay-as-you-go pricing tier to a Commitment Tier, on your bill, the costs will appear to shift from Log Analytics to Azure Monitor, reflecting the service associated with each pricing tier.
Other services such as Microsoft Defender for Cloud and Microsoft Sentinel also bill their usage against Log Analytics workspace resources, so you might want to add them to your filter.
azure-netapp-files Faq Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-security.md
NFSv3 protocol doesn't provide support for encryption, so this data-in-flight ca
## Can the storage be encrypted at rest?
-All Azure NetApp Files volumes are encrypted using the FIPS 140-2 standard. All keys are managed by the Azure NetApp Files service.
+All Azure NetApp Files volumes are encrypted using the FIPS 140-2 standard. Learn [how encryption keys are managed](#how-are-encryption-keys-managed).
## Is Azure NetApp Files cross-region replication traffic encrypted?
Azure NetApp Files cross-region replication uses TLS 1.2 AES-256 GCM encryption
## How are encryption keys managed?
-Key management for Azure NetApp Files is handled by the service. A unique XTS-AES-256 data encryption key is generated for each volume. An encryption key hierarchy is used to encrypt and protect all volume keys. These encryption keys are never displayed or reported in an unencrypted format. When you delete a volume, Azure NetApp Files immediately deletes the volume's encryption keys.
+By default key management for Azure NetApp Files is handled by the service, using [platform-managed keys](../security/fundamentals/key-management.md). A unique XTS-AES-256 data encryption key is generated for each volume. An encryption key hierarchy is used to encrypt and protect all volume keys. These encryption keys are never displayed or reported in an unencrypted format. When you delete a volume, Azure NetApp Files immediately deletes the volume's encryption keys.
-Customer-managed keys (Bring Your Own Key) using Azure Dedicated HSM is supported on a controlled basis. Support is currently available in the East US, South Central US, West US 2, and US Gov Virginia regions. You can request access at [anffeedback@microsoft.com](mailto:anffeedback@microsoft.com). As capacity becomes available, requests will be approved.
+Alternatively, [customer-managed keys for Azure NetApp Files volume encryption](configure-customer-managed-keys.md) can be used where keys are stored in [Azure Key Vault](../key-vault/general/basic-concepts.md). With customer-managed keys, you can fully manage the relationship between a key's life cycle, key usage permissions, and auditing operations on keys.
-[Customer-managed keys](configure-customer-managed-keys.md) are available with limited regional support.
+Lastly, customer-managed keys using Azure Dedicated HSM are supported on a controlled basis. Support is currently available in the East US, South Central US, West US 2, and US Gov Virginia regions. You can request access at [anffeedback@microsoft.com](mailto:anffeedback@microsoft.com). As capacity becomes available, requests will be approved.
## Can I configure the NFS export policy rules to control access to the Azure NetApp Files service mount target?
azure-netapp-files Faq Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-smb.md
https://support.microsoft.com/topic/april-12-2022-kb5012670-monthly-rollup-cae43
Yes, Azure NetApp Files supports [Alternate Data Streams (ADS)](/openspecs/windows_protocols/ms-fscc/e2b19412-a925-4360-b009-86e3b8a020c8) by default on [SMB volumes](azure-netapp-files-create-volumes-smb.md) and [dual-protocol volumes configured with NTFS security style](create-volumes-dual-protocol.md#considerations) when accessed via SMB.
+## What are SMB/CIFS `oplocks` and are they enabled on Azure NetApp Files volumes?
+
+SMB/CIFS oplocks (opportunistic locks) enable the redirector on an SMB/CIFS client in certain file-sharing scenarios to perform client-side caching of read-ahead, write-behind, and lock information. A client can then work with a file (read or write it) without regularly reminding the server that it needs access to the file. This improves performance by reducing network traffic. SMB/CIFS oplocks are enabled on Azure NetApp Files SMB and dual-protocol volumes.
+ ## Next steps - [FAQs about SMB performance for Azure NetApp Files](azure-netapp-files-smb-performance.md)
azure-netapp-files Performance Benchmarks Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-benchmarks-azure-vmware-solution.md
na Previously updated : 02/07/2023 Last updated : 03/15/2023 # Azure NetApp Files datastore performance benchmarks for Azure VMware Solution
The following `read:write` I/O ratios were tested for each scenario: `100:0, 75:
Benchmarks documented in this article were performed with sufficient volume throughput to prevent soft limits from affecting performance. Benchmarks can be achieved with Azure NetApp Files Premium and Ultra service levels, and in some cases with Standard service level. For more information on volume throughput, see [Performance considerations for Azure NetApp Files](azure-netapp-files-performance-considerations.md).
+Refer to the [Azure NetApp Files datastore for Azure VMware Solution TCO Estimator](https://aka.ms/anfavscalc) to understand the sizing and associated cost benefits of Azure NetApp Files datastores.
+ ## Environment details The results in this article were achieved using the following environment configuration:
azure-netapp-files Performance Impact Kerberos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-impact-kerberos.md
There are two areas of focus: light load and upper limit. The following lists de
* Average IOPS decreased by 53% * Average throughput decreased by 53%
-* Average latency increased by 3.2 ms
+* Average latency increased by 0.2 ms
**Performance impact of krb5i:**
azure-resource-manager Bicep Functions Lambda https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-lambda.md
description: Describes the lambda functions to use in a Bicep file.
Previously updated : 02/09/2023 Last updated : 03/15/2023 # Lambda functions for Bicep
The output from the preceding example sorts the dog objects from the youngest to
`toObject(inputArray, lambda expression, [lambda expression])`
-Converts an array to an object with a custom key function and optional custom value function.
+Converts an array to an object with a custom key function and optional custom value function. To convert an object to an array, see [items](bicep-functions-object.md#items).
Namespace: [sys](bicep-functions.md#namespaces-for-functions).
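For a minimal sketch of the shape this produces (the `dogs` array is illustrative stand-in data, echoing the article's earlier example):

```bicep
var dogs = [
  { name: 'Evie', age: 5 }
  { name: 'Casper', age: 3 }
]

// The first lambda supplies each key; the optional second lambda supplies the value.
// Result: { Evie: 5, Casper: 3 }
output agesByName object = toObject(dogs, dog => dog.name, dog => dog.age)
```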
azure-resource-manager Bicep Functions Object https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-object.md
description: Describes the functions to use in a Bicep file for working with obj
Previously updated : 12/09/2022 Last updated : 03/19/2023 # Object functions for Bicep
The output from the preceding example with the default values is:
`items(object)`
-Converts a dictionary object to an array.
+Converts a dictionary object to an array. To convert an array to an object, see [toObject](bicep-functions-lambda.md#toobject).
Namespace: [sys](bicep-functions.md#namespaces-for-functions).
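For a minimal sketch of the shape this produces (the `settings` object is illustrative stand-in data):

```bicep
var settings = {
  color: 'blue'
  size: 'large'
}

// items() returns key/value pairs sorted alphabetically by key:
// [ { key: 'color', value: 'blue' }, { key: 'size', value: 'large' } ]
output entries array = items(settings)
```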
azure-resource-manager User Defined Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/user-defined-data-types.md
The valid type expressions include:
Each property in an object consists of key and value. The key and value are separated by a colon `:`. The key may be any string (values that would not be a valid identifier must be enclosed in quotes), and the value may be any type syntax expression.
- Properties are required unless they have an optionality marker `?` between the property name and the colon. For example, the `sku` property in the following example is optional:
+ Properties are required unless they have an optionality marker `?` after the property value. For example, the `sku` property in the following example is optional:
```bicep type storageAccountConfigType = { name: string
- sku?: string
+ sku: string?
} ```
The valid type expressions include:
```bicep type myObjectType = { stringProp: string
- recursiveProp?: myObjectType
+ recursiveProp: myObjectType?
} ```
azure-resource-manager Deploy Service Catalog Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/deploy-service-catalog-quickstart.md
description: Describes how to deploy a service catalog's managed application for
Previously updated : 03/01/2023 Last updated : 03/14/2023 # Quickstart: Deploy a service catalog managed application
-In this quickstart, you use the definition you created in the quickstarts to [publish an application definition](publish-service-catalog-app.md) or [publish a definition with bring your own storage](publish-service-catalog-bring-your-own-storage.md) to deploy a service catalog managed application. The deployment creates two resource groups. One resource group contains the managed application and the other is a managed resource group for the deployed resource. The managed application definition deploys an App Service plan, App Service, and storage account.
+In this quickstart, you use the managed application definition that you created using one of the quickstart articles. The deployment creates two resource groups. One resource group contains the managed application and the other is a managed resource group for the deployed resources. The managed application definition deploys an App Service plan, App Service, and storage account.
## Prerequisites
-To complete this quickstart, you need an Azure account with an active subscription. If you completed a quickstart to publish a definition, you should already have an account. Otherwise, [create a free account](https://azure.microsoft.com/free/) before you begin.
+- A managed application definition created with [publish an application definition](publish-service-catalog-app.md) or [publish a definition with bring your own storage](publish-service-catalog-bring-your-own-storage.md).
+- An Azure account with an active subscription. If you don't have an account, [create a free account](https://azure.microsoft.com/free/) before you begin.
+- [Visual Studio Code](https://code.visualstudio.com/).
+- Install the latest version of [Azure PowerShell](/powershell/azure/install-az-ps) or [Azure CLI](/cli/azure/install-azure-cli).
## Create service catalog managed application
-In the Azure portal, use the following steps:
+The examples use the resource group names created in the _quickstart to publish an application definition_. If you used the quickstart to _publish a definition with bring your own storage_, use those resource group names.
+
+- **Publish application definition**: _packageStorageGroup_ and _appDefinitionGroup_.
+- **Publish definition with bring your own storage**: _packageStorageGroup_, _byosDefinitionStorageGroup_, and _byosAppDefinitionGroup_.
+
+### Get managed application definition
+
+# [PowerShell](#tab/azure-powershell)
+
+To get the managed application's definition with Azure PowerShell, run the following commands.
+
+In Visual Studio Code, open a new PowerShell terminal and sign in to your Azure subscription.
+
+```azurepowershell
+Connect-AzAccount
+```
+
+The command opens your default browser and prompts you to sign in to Azure. For more information, go to [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
+
+From Azure PowerShell, get your managed application's definition. In this example, use the resource group name _appDefinitionGroup_ that was created when you deployed the managed application definition.
+
+```azurepowershell
+Get-AzManagedApplicationDefinition -ResourceGroupName appDefinitionGroup
+```
+
+`Get-AzManagedApplicationDefinition` lists all the available definitions in the specified resource group, like _sampleManagedApplication_.
+
+Create a variable for the managed application definition's resource ID.
+
+```azurepowershell
+$definitionid = (Get-AzManagedApplicationDefinition -ResourceGroupName appDefinitionGroup -Name sampleManagedApplication).ManagedApplicationDefinitionId
+```
+
+You use the `$definitionid` variable's value when you deploy the managed application.
+
+# [Portal](#tab/azure-portal)
+
+To get the managed application's definition from the Azure portal, use the following steps.
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Select **Create a resource**.
In the Azure portal, use the following steps:
:::image type="content" source="./media/deploy-service-catalog-quickstart/select-service-catalog-managed-application.png" alt-text="Screenshot that shows managed application definitions that you can deploy."::: ++
+### Create resource group and parameters
+
+# [PowerShell](#tab/azure-powershell)
+
+Create a resource group for the managed application that's used during the deployment.
+
+```azurepowershell
+New-AzResourceGroup -Name applicationGroup -Location westus3
+```
+
+You also need to create a name for the managed application resource group. The resource group is created when you deploy the managed application.
+
+Run the following commands to create the managed resource group's name.
+
+```azurepowershell
+$mrgprefix = 'rg-sampleManagedApplication-'
+$mrgtimestamp = Get-Date -UFormat "%Y%m%d%H%M%S"
+$mrgname = $mrgprefix + $mrgtimestamp
+$mrgname
+```
+
+The `$mrgprefix` and `$mrgtimestamp` variables are concatenated to create a resource group name like _rg-sampleManagedApplication-20230310100148_ that's stored in the `$mrgname` variable. You use the `$mrgname` variable's value when you deploy the managed application.
+
+You need to provide several parameters to the deployment command for the managed application. You can use a JSON formatted string or create a JSON file. In this example, we use a JSON formatted string. The PowerShell escape character for the quote marks is the backtick (`` ` ``) character. The backtick is also used for line continuation so that commands can use multiple lines.
+
+The JSON formatted string's syntax is as follows:
+
+```json
+"{ `"parameterName`": {`"value`":`"parameterValue`"}, `"parameterName`": {`"value`":`"parameterValue`"} }"
+```
+
+For readability, the completed JSON string uses the backtick for line continuation. The values are stored in the `$params` variable that's used in the deployment command. The parameters in the JSON string are required to deploy the managed resources.
+
+```powershell
+$params="{ `"appServicePlanName`": {`"value`":`"demoAppServicePlan`"}, `
+`"appServiceNamePrefix`": {`"value`":`"demoApp`"}, `
+`"storageAccountNamePrefix`": {`"value`":`"demostg1234`"}, `
+`"storageAccountType`": {`"value`":`"Standard_LRS`"} }"
+```
+
+The parameters to create the managed resources:
+
+- `appServicePlanName`: Create a plan name. Maximum of 40 alphanumeric characters and hyphens. For example, _demoAppServicePlan_. App Service plan names must be unique within a resource group in your subscription.
+- `appServiceNamePrefix`: Create a prefix for the plan name. Maximum of 47 alphanumeric characters or hyphens. For example, _demoApp_. During deployment, the prefix is concatenated with a unique string to create a name that's globally unique across Azure.
+- `storageAccountNamePrefix`: Use only lowercase letters and numbers and a maximum of 11 characters. For example, _demostg1234_. During deployment, the prefix is concatenated with a unique string to create a name globally unique across Azure. Although you're creating a prefix, the control checks for existing names in Azure and might post a validation message that the name already exists. If so, choose a different prefix.
+- `storageAccountType`: The default is Standard_LRS. The other options are Premium_LRS and Standard_GRS.
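+
+If you'd rather not escape the quote marks by hand, a hashtable piped to `ConvertTo-Json` produces an equivalent string. This is a sketch of that alternative, not a required step:
+
+```azurepowershell
+# Sketch: the same parameter values as above, serialized by ConvertTo-Json.
+$params = @{
+    appServicePlanName       = @{ value = 'demoAppServicePlan' }
+    appServiceNamePrefix     = @{ value = 'demoApp' }
+    storageAccountNamePrefix = @{ value = 'demostg1234' }
+    storageAccountType       = @{ value = 'Standard_LRS' }
+} | ConvertTo-Json
+```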
+
+# [Portal](#tab/azure-portal)
+ 1. Provide values for the **Basics** tab and select **Next: Web App settings**. :::image type="content" source="./media/deploy-service-catalog-quickstart/basics-info.png" alt-text="Screenshot that highlights the required information on the basics tab.":::
In the Azure portal, use the following steps:
- **Subscription**: Select the subscription where you want to deploy the managed application. - **Resource group**: Select the resource group. For this example, create a resource group named _applicationGroup_. - **Region**: Select the location where you want to deploy the resource.
- - **Application Name**: Enter a name for your application. For this example, use _demoManagedApplication_.
+ - **Application Name**: Enter a name for your managed application. For this example, use _demoManagedApplication_.
- **Application resources Resource group name**: The name of the managed resource group that contains the resources that are deployed for the managed application. The default name is in the format `rg-{definitionName}-{dateTime}` but you can change the name. 1. Provide values for the **Web App settings** tab and select **Next: Storage settings**.
In the Azure portal, use the following steps:
:::image type="content" source="./media/deploy-service-catalog-quickstart/storage-settings.png" alt-text="Screenshot that shows the information needed to create a storage account."::: - **Storage account name prefix**: Use only lowercase letters and numbers and a maximum of 11 characters. For example, _demostg1234_. During deployment, the prefix is concatenated with a unique string to create a name globally unique across Azure. Although you're creating a prefix, the control checks for existing names in Azure and might post a validation message that the name already exists. If so, choose a different prefix.
- - **Storage account type**: Select **Change type** to choose a storage account type. The default is Standard LRS.
+ - **Storage account type**: Select **Change type** to choose a storage account type. The default is Standard_LRS. The other options are Premium_LRS and Standard_GRS.
++
-1. Review the summary of the values you selected and verify **Validation Passed** is displayed. Select **Create** to deploy the managed application.
+### Deploy the managed application
+
+# [PowerShell](#tab/azure-powershell)
+
+Run the following command to deploy the managed application from your Azure PowerShell session.
+
+```azurepowershell
+New-AzManagedApplication `
+ -Name "demoManagedApplication" `
+ -ResourceGroupName applicationGroup `
+ -Location westus3 `
+ -ManagedResourceGroupName $mrgname `
+ -ManagedApplicationDefinitionId $definitionid `
+ -Kind ServiceCatalog `
+ -Parameter $params
+```
+
+The parameters used in the deployment command:
+
+- `Name`: Specify a name for the managed application. For this example, use _demoManagedApplication_.
+- `ResourceGroupName`: Name of the resource group you created for the managed application.
+- `Location`: Specify the region to deploy the resources. For this example, use _westus3_.
+- `ManagedResourceGroupName`: Uses the `$mrgname` variable's value. The managed resource group is created when the managed application is deployed.
+- `ManagedApplicationDefinitionId`: Uses the `$definitionid` variable's value for the managed application definition's resource ID.
+- `Kind`: Specifies the type of managed application. This example uses _ServiceCatalog_.
+- `Parameter`: Uses the `$params` variable's value, the JSON formatted string.
+
+# [Portal](#tab/azure-portal)
+
+Review the summary of the values you selected and verify **Validation Passed** is displayed. Select **Create** to deploy the managed application.
:::image type="content" source="./media/deploy-service-catalog-quickstart/summary-validation.png" alt-text="Screenshot that summarizes the values you selected and shows the status of validation passed."::: ++ ## View results After the service catalog managed application is deployed, you have two new resource groups. One resource group contains the managed application. The other resource group contains the managed resources that were deployed. In this example, an App Service, App Service plan, and storage account. ### Managed application
+After the deployment is finished, you can check your managed application's status.
+
+# [PowerShell](#tab/azure-powershell)
+
+Run the following commands to check the managed application's status.
+
+```azurepowershell
+Get-AzManagedApplication -Name demoManagedApplication -ResourceGroupName applicationGroup
+```
+
+Expand the properties to make the `Properties` information easier to read.
+
+```azurepowershell
+Get-AzManagedApplication -Name demoManagedApplication -ResourceGroupName applicationGroup | Select-Object -ExpandProperty Properties
+```
+
+# [Portal](#tab/azure-portal)
+ Go to the resource group named **applicationGroup** and select **Overview**. The resource group contains your managed application named _demoManagedApplication_.
- :::image type="content" source="./media/deploy-service-catalog-quickstart/view-application-group.png" alt-text="Screenshot that shows the resource group that contains the managed application.":::
Select the managed application's name to get more information like the link to the managed resource group.
- :::image type="content" source="./media/deploy-service-catalog-quickstart/view-managed-application.png" alt-text="Screenshot that shows the managed application's details and highlights the link to the managed resource group.":::
++ ### Managed resources
+You can view the resources deployed to the managed resource group.
+
+# [PowerShell](#tab/azure-powershell)
+
+To display the managed resource group's resources, run the following command. You created the `$mrgname` variable when you created the parameters.
+
+```azurepowershell
+Get-AzResource -ResourceGroupName $mrgname
+```
+
+To display all the role assignments for the managed resource group, run the following command.
+
+```azurepowershell
+Get-AzRoleAssignment -ResourceGroupName $mrgname
+```
+
+The managed application definition you created in the quickstart articles used a group with the Owner role assignment. You can view the group with the following command.
+
+```azurepowershell
+Get-AzRoleAssignment -ResourceGroupName $mrgname -RoleDefinitionName Owner
+```
+
+You can also list the deny assignments for the managed resource group.
+
+```azurepowershell
+Get-AzDenyAssignment -ResourceGroupName $mrgname
+```
+
+# [Portal](#tab/azure-portal)
+ Go to the managed resource group with the name prefix **rg-sampleManagedApplication** and select **Overview** to display the resources that were deployed. The resource group contains an App Service, App Service plan, and storage account.
- :::image type="content" source="./media/deploy-service-catalog-quickstart/view-managed-resource-group.png" alt-text="Screenshot that shows the managed resource group that contains the resources deployed by the managed application definition.":::
The managed resource group and each resource created by the managed application has a role assignment. When you used a quickstart article to create the definition, you created an Azure Active Directory group. That group was used in the managed application definition. When you deployed the managed application, a role assignment for that group was added to the managed resources.
To see the role assignment from the Azure portal:
The role assignment gives the application's publisher access to manage the storage account. In this example, the publisher might be your IT department. The _Deny assignment_ prevents customers from making changes to a managed resource's configuration. Managed apps are designed so that customers don't need to maintain the resources. The _Deny assignment_ excludes the Azure Active Directory group that was assigned in **Role assignments**. ++ ## Clean up resources
-When your finished with the managed application, you can delete the resource groups and that removes all the resources you created. For example, in this quickstart you created the resource groups _applicationGroup_ and a managed resource group with the prefix _rg-sampleManagedApplication_.
+When you're finished with the managed application, you can delete the resource groups, which removes all the resources you created. For example, in this quickstart you created the resource groups _applicationGroup_ and a managed resource group with the prefix _rg-sampleManagedApplication_.
+
+# [PowerShell](#tab/azure-powershell)
+
+The command prompts you to confirm that you want to remove the resource group.
+
+```azurepowershell
+Remove-AzResourceGroup -Name applicationGroup
+```
+
+# [Portal](#tab/azure-portal)
1. From Azure portal **Home**, in the search field, enter _resource groups_. 1. Select **Resource groups**. 1. Select **applicationGroup** and **Delete resource group**. 1. To confirm the deletion, enter the resource group name and select **Delete**.
-When the resource group that contains the managed application is deleted, the managed resource group is also deleted. In this example, when _applicationGroup_ is deleted the _rg-sampleManagedApplication_ resource group is also deleted.
+ If you want to delete the managed application definition, delete the resource groups you created in the quickstart articles.
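+
+For example, with the resource group names from the _publish an application definition_ quickstart, a minimal PowerShell sketch looks like the following (substitute the _bring your own storage_ names if you used that quickstart):
+
+```azurepowershell
+# Sketch: removes the definition resource groups created by the publish quickstart.
+Remove-AzResourceGroup -Name packageStorageGroup
+Remove-AzResourceGroup -Name appDefinitionGroup
+```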
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md
The following table details the features and limits of the Basic, Standard, and
## Event Grid limits ## Event Hubs limits [!INCLUDE [event-hubs-limits](../../../includes/event-hubs-limits.md)]
azure-resource-manager Template Functions Lambda https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-lambda.md
description: Describes the lambda functions to use in an Azure Resource Manager
Previously updated : 02/09/2023 Last updated : 03/15/2023 # Lambda functions for ARM templates
The output from the preceding example sorts the dog objects from the youngest to
`toObject(inputArray, lambda function, [lambda function])`
-Converts an array to an object with a custom key function and optional custom value function.
+Converts an array to an object with a custom key function and optional custom value function. To convert an object to an array, see [items](template-functions-object.md#items).
In Bicep, use the [toObject](../bicep/bicep-functions-lambda.md#toobject) function.
azure-resource-manager Template Functions Object https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-object.md
The output from the preceding example with the default values is:
`items(object)`
-Converts a dictionary object to an array.
+Converts a dictionary object to an array. To convert an array to an object, see [toObject](template-functions-lambda.md#toobject).
In Bicep, use the [items](../bicep/bicep-functions-object.md#items) function.
azure-signalr Signalr Howto Event Grid Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-event-grid-integration.md
Once the SignalR Service has been created, the Azure CLI returns output similar
## Create an event endpoint
-In this section, you use a Resource Manager template located in a GitHub repository to deploy a pre-built sample web application to Azure App Service. Later, you subscribe to your registry's Event Grid events and specify this app as the endpoint to which the events are sent.
+In this section, you use a Resource Manager template located in a GitHub repository to deploy a prebuilt sample web application to Azure App Service. Later, you subscribe to your SignalR Service's Event Grid events and specify this app as the endpoint to which the events are sent.
To deploy the sample app, set `SITE_NAME` to a unique name for your web app, and execute the following commands. The site name must be unique within Azure because it forms part of the fully qualified domain name (FQDN) of the web app. In a later section, you navigate to the app's FQDN in a web browser to view your SignalR Service's events.
Once the deployment succeeds (it might take a few minutes), open your browser, a
`http://<your-site-name>.azurewebsites.net` ## Subscribe to registry events
-In Event Grid, you subscribe to a *topic* to tell it which events you want to track, and where to send them. The command [az eventgrid event-subscription create][az-eventgrid-event-subscription-create] subscribes to the Azure SignalR Service you created and specifies your web app's URL as the endpoint to which it should send events. The environment variables you populated in earlier sections are reused here, so no edits are required.
+In Event Grid, you subscribe to a *topic* to tell it which events you want to track, and where to send them. The command [`az eventgrid event-subscription create`][az-eventgrid-event-subscription-create] subscribes to the Azure SignalR Service you created and specifies your web app's URL as the endpoint to which it should send events. The environment variables you populated in earlier sections are reused here, so no edits are required.
```azurecli-interactive SIGNALR_SERVICE_ID=$(az signalr show --resource-group $RESOURCE_GROUP_NAME --name $SIGNALR_NAME --query id --output tsv)
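# A sketch of the subscription call this section describes; the subscription name
# and the /api/updates endpoint path are illustrative assumptions.
az eventgrid event-subscription create \
    --name demoSignalREvents \
    --source-resource-id $SIGNALR_SERVICE_ID \
    --endpoint https://$SITE_NAME.azurewebsites.net/api/updates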
azure-video-indexer Connect Classic Account To Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-classic-account-to-arm.md
APIs to be changed:
For a full description of [Azure Video Indexer REST API](/rest/api/videoindexer/preview/accounts) calls and documentation, follow the link.
-For code sample generating an access token through ARM see [C# code sample](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/ApiUsage/ArmBased/Program.cs).
+For a code sample that generates an access token through ARM, see the [C# code sample](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/API-Samples/C%23/ArmBased/Program.cs).
### Next steps
-Learn how to [Upload a video using C#](https://github.com/Azure-Samples/media-services-video-indexer/tree/master/ApiUsage/ArmBased).
+Learn how to [Upload a video using C#](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/API-Samples/C%23/ArmBased/).
<!-- links --> [docs-arm-overview]: ../azure-resource-manager/management/overview.md
azure-video-indexer Create Account Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/create-account-portal.md
Add new managed identities, switch the default managed identity between user-ass
## Next steps
-Learn how to [Upload a video using C#](https://github.com/Azure-Samples/media-services-video-indexer/tree/master/ApiUsage/ArmBased).
+Learn how to [Upload a video using C#](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/API-Samples/C%23/ArmBased/).
<!-- links -->
azure-video-indexer Deploy With Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/deploy-with-arm-template.md
You need an Azure Media Services account. You can create one for free through [C
### Option 2: Deploy by using a PowerShell script
-1. Open the [template file](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/ARM-Quick-Start/avam.template.json) and inspect its contents.
+1. Open the [template file](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/Deploy-Samples/ArmTemplates/avam.template.json) and inspect its contents.
2. Fill in the required parameters. 3. Run the following PowerShell commands:
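A minimal sketch of those commands, with the resource group name and location as placeholder assumptions:

```azurepowershell
Connect-AzAccount

# Placeholders; substitute your own resource group name and region.
New-AzResourceGroup -Name myVideoIndexerRg -Location eastus
New-AzResourceGroupDeployment `
    -ResourceGroupName myVideoIndexerRg `
    -TemplateFile .\avam.template.json
```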
azure-video-indexer Odrv Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/odrv-download.md
This parameter specifies the URL of the video or audio file to be indexed. If th
### Code sample > [!NOTE]
-> The following sample is intended for Classic accounts only and isn't compatible with ARM accounts. For an updated sample for ARM, see [this ARM sample repo](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/ApiUsage/ArmBased/Program.cs).
+> The following sample is intended for Classic accounts only and isn't compatible with ARM accounts. For an updated sample for ARM, see [this ARM sample repo](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/API-Samples/C%23/ArmBased/Program.cs).
The following C# code snippets demonstrate the usage of all the Azure Video Indexer APIs together.
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
Azure Video Indexer website is now supporting account management based on ARM in
### Leverage open-source code to create ARM based account
-Added new code samples including HTTP calls to use Azure Video Indexer create, read, update and delete (CRUD) ARM API for solution developers. See [this sample](https://github.com/Azure-Samples/media-services-video-indexer/tree/master/ARM-Quick-Start).
+Added new code samples including HTTP calls to use Azure Video Indexer create, read, update and delete (CRUD) ARM API for solution developers.
## January 2022
azure-video-indexer Storage Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/storage-behind-firewall.md
When you create a Video Indexer account, you must associate it with a Media Services and Storage account. Video Indexer can access Media Services and Storage using system authentication or Managed Identity authentication. Video Indexer validates that the user adding the association has access to the Media Services and Storage account with Azure Resource Manager Role Based Access Control (RBAC).
-If you want to use a firewall to secure your storage account and enable trusted storage, [Managed Identities](/azure/media-services/latest/concept-managed-identities) authentication that allows Video Indexer access through the firewall is the preferred option. It allows Video Indexer and Media Services to access the storage account that has been configured without needing public access for [trusted storage access.](/azure/storage/common/storage-network-security?tabs=azure-portal#grant-access-to-trusted-azure-services)
+If you want to use a firewall to secure your storage account and enable trusted storage, [Managed Identities](/azure/media-services/latest/concept-managed-identities) authentication that allows Video Indexer access through the firewall is the preferred option. It allows Video Indexer and Media Services to access the storage account that has been configured without needing public access, using [trusted storage access](../storage/common/storage-network-security.md?tabs=azure-portal#grant-access-to-trusted-azure-services).
Follow these steps to enable Managed Identity for Media Services and Storage and then lock your storage account. It's assumed that you already created a Video Indexer account and associated it with a Media Services and Storage account.
This concludes the tutorial. With these steps you've completed the following act
## Next steps
-[Disaster recovery](video-indexer-disaster-recovery.md)
+[Disaster recovery](video-indexer-disaster-recovery.md)
azure-video-indexer Video Indexer Use Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-use-apis.md
When you're uploading videos by using the API, you have the following options:
The following C# code snippet demonstrates the usage of all the Azure Video Indexer APIs together. > [!NOTE]
-> The following sample is intended for classic accounts only and not compatible with ARM-based accounts. For an updated sample for ARM (recommended), see [this ARM sample repo](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/ApiUsage/ArmBased/Program.cs).
+> The following sample is intended for classic accounts only and not compatible with ARM-based accounts. For an updated sample for ARM (recommended), see [this ARM sample repo](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/API-Samples/C%23/ArmBased/Program.cs).
```csharp var apiUrl = "https://api.videoindexer.ai";
backup Azure Kubernetes Service Backup Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-backup-overview.md
# Overview of Azure Kubernetes Service backup using Azure Backup (preview)
-[Azure Kubernetes Service (AKS)](/azure/aks/intro-kubernetes) backup is a simple, cloud-native process to back up and restore the containerized applications and data running in AKS clusters. You can configure scheduled backup for cluster state and application data (persistent volumes - CSI driver-based Azure Disks). The solution provides granular control to choose a specific namespace or an entire cluster to back up or restore by storing backups locally in a blob container and as disk snapshots. With AKS backup, you can unlock end-to-end scenarios - operational recovery, cloning developer/test environments, or cluster upgrade scenarios.
+[Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) backup is a simple, cloud-native process to back up and restore the containerized applications and data running in AKS clusters. You can configure scheduled backup for cluster state and application data (persistent volumes - CSI driver-based Azure Disks). The solution provides granular control to choose a specific namespace or an entire cluster to back up or restore by storing backups locally in a blob container and as disk snapshots. With AKS backup, you can unlock end-to-end scenarios - operational recovery, cloning developer/test environments, or cluster upgrade scenarios.
AKS backup integrates with Backup center (with other backup management capabilities) to provide a single pane of glass that helps you govern, monitor, operate, and analyze backups at scale. ## How does AKS backup work?
-AKS backup enables you to back up your Kubernetes workloads and persistent volumes deployed in AKS clusters. The solution requires a [**Backup Extension**](/azure/azure-arc/kubernetes/conceptual-extensions) to be installed in the AKS cluster. Backup vault communicates to the Backup Extension to perform backup and restore related operations. You can configure scheduled backups for your clusters as per your backup policy and can restore the backups to the original or an alternate cluster within the same subscription and region. The extension also allows you to enable granular controls to choose a specific namespace or an entire cluster as a backup/restore configuration while performing the specific operation.
+AKS backup enables you to back up your Kubernetes workloads and persistent volumes deployed in AKS clusters. The solution requires a [**Backup Extension**](../azure-arc/kubernetes/conceptual-extensions.md) to be installed in the AKS cluster. Backup vault communicates to the Backup Extension to perform backup and restore related operations. You can configure scheduled backups for your clusters as per your backup policy and can restore the backups to the original or an alternate cluster within the same subscription and region. The extension also allows you to enable granular controls to choose a specific namespace or an entire cluster as a backup/restore configuration while performing the specific operation.
>[!Note] >- You must install Backup Extension in the AKS cluster to enable backups and restores. With the extension installation, a User Identity is created in the AKS cluster's managed resource group (Extension Identity), which gets assigned a set of permissions to access the storage account with the backups stored in the blob container.
Once the backup configuration for an AKS cluster is complete, a backup instance
AKS backup also integrates directly with Backup center to help you manage the protection of all your storage accounts centrally along with all other backup supported workloads. The Backup center is a single pane of glass for all your backup requirements, such as monitoring jobs and state of backups and restores, ensuring compliance and governance, analyzing backup usage, and performing operations pertaining to back up and restore of data.
-AKS backup uses Managed Identity to access other Azure resources. To configure backup of an AKS cluster and to restore from past backup, Backup vault's Managed Identity requires a set of permissions on the AKS cluster and the snapshot resource group where snapshots are created and managed. Currently, the AKS cluster requires a set of permissions on the Snapshot Resource Group. Also, the Backup Extension creates a User Identity and assigns a set of permissions to access the storage account where backups are stored in a blob. You can grant permissions to the Managed Identity using Azure role-based access control (Azure RBAC). Managed Identity is a service principle of a special type that can only be used with Azure resources. Learn more about [Managed Identities](/azure/active-directory/managed-identities-azure-resources/overview.md).
+AKS backup uses Managed Identity to access other Azure resources. To configure backup of an AKS cluster and to restore from past backup, Backup vault's Managed Identity requires a set of permissions on the AKS cluster and the snapshot resource group where snapshots are created and managed. Currently, the AKS cluster requires a set of permissions on the Snapshot Resource Group. Also, the Backup Extension creates a User Identity and assigns a set of permissions to access the storage account where backups are stored in a blob. You can grant permissions to the Managed Identity using Azure role-based access control (Azure RBAC). Managed Identity is a service principal of a special type that can only be used with Azure resources. Learn more about [Managed Identities](../active-directory/managed-identities-azure-resources/overview.md).
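+
+As an illustrative sketch only (the role, principal ID, and scope below are assumptions; see the prerequisites article for the exact permissions required), such a grant with the Azure CLI looks like:
+
+```azurecli
+# Sketch: grant the Backup vault's identity a role on the snapshot resource group.
+az role assignment create \
+    --assignee-object-id <backup-vault-principal-id> \
+    --role "Disk Snapshot Contributor" \
+    --scope /subscriptions/<subscription-id>/resourceGroups/<snapshot-rg>
+```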
## Restore
Incremental snapshots are always stored on standard storage, irrespective of the
## Next steps -- [Prerequisites for Azure Kubernetes Service backup (preview)](azure-kubernetes-service-cluster-backup-concept.md)
+- [Prerequisites for Azure Kubernetes Service backup (preview)](azure-kubernetes-service-cluster-backup-concept.md)
backup Azure Kubernetes Service Backup Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-backup-troubleshoot.md
Title: Troubleshoot Azure Kubernetes Service backup description: Symptoms, causes, and resolutions of Azure Kubernetes Service backup and restore. Previously updated : 03/14/2023 Last updated : 03/15/2023
This article provides troubleshooting steps that help you resolve Azure Kubernet
**Error message**:
- ```Erroe
+ ```Error
{Helm installation from path [] for release [azure-aks-backup] failed with the following error: err [release azure-aks-backup failed, and has been uninstalled due to atomic being set: failed post-install: timed out waiting for the condition]} occurred while doing the operation: {Installing the extension} on the config"` ```
The extension pods aren't exempt, and require the Azure Active Directory (Azure
1. Run the following command: -
- ```azurepowershell-interactive
+ ```azurecli-interactive
az aks pod-identity exception add --resource-group shracrg --cluster-name shractestcluster --namespace dataprotection-microsoft --pod-labels app.kubernetes.io/name=dataprotection-microsoft-kubernetes ``` 2. To verify *Azurepodidentityexceptions* in cluster, run the following command:
- ```azurepowershell-interactive
+ ```azurecli-interactive
kubectl get Azurepodidentityexceptions --all-namespaces ``` 3. To assign the *Storage Account Contributor* role to the extension identity, run the following command:
- ```azurepowershell-interactive
+ ```azurecli-interactive
az role assignment create --assignee-object-id $(az k8s-extension show --name azure-aks-backup --cluster-name aksclustername --resource-group aksclusterresourcegroup --cluster-type managedClusters --query aksAssignedIdentity.principalId --output tsv) --role 'Storage Account Contributor' --scope /subscriptions/subscriptionid/resourceGroups/storageaccountresourcegroup/providers/Microsoft.Storage/storageAccounts/storageaccountname ```
The extension pods aren't exempt, and require the Azure Active Directory (Azure
```Error {"Message":"Error in the getting the Configurations: error {Post \https://centralus.dp.kubernetesconfiguration.azure.com/subscriptions/ subscriptionid /resourceGroups/ aksclusterresourcegroup /provider/managedclusters/clusters/ aksclustername /configurations/getPendingConfigs?api-version=2021-11-01\: dial tcp: lookup centralus.dp.kubernetesconfiguration.azure.com on 10.63.136.10:53: no such host}","LogType":"ConfigAgentTrace","LogLevel":"Error","Environment":"prod","Role":"ClusterConfigAgent","Location":"centralus","ArmId":"/subscriptions/ subscriptionid /resourceGroups/ aksclusterresourcegroup /providers/Microsoft.ContainerService/managedclusters/ aksclustername ","CorrelationId":"","AgentName":"ConfigAgent","AgentVersion":"1.8.14","AgentTimestamp":"2023/01/19 20:24:16"}` ```
-**Cause**: Specific FQDN/application rules are required to use cluster extensions in the AKS clusters. [Learn more](/azure/aks/limit-egress-traffic#cluster-extensions).
+**Cause**: Specific FQDN/application rules are required to use cluster extensions in the AKS clusters. [Learn more](../aks/limit-egress-traffic.md#cluster-extensions).
This error appears when these FQDN rules are absent, which means configuration information from the Cluster Extensions service isn't available.
This error appears due to absence of these FQDN rules because of which configura
1. To fetch the *existing CoreDNS-custom* YAML in your cluster (save it locally for later reference), run the following command:
- ```azurepowershell-interactive
+ ```azurecli-interactive
kubectl get configmap coredns-custom -n kube-system -o yaml ``` 2. To override mapping for *Central US DP* endpoint to public IP (download the YAML file attached), run the following command:
- ```azurepowershell-interactive
+ ```azurecli-interactive
kubectl apply -f corednsms.yaml ``` 3. To force reload `coredns` pods, run the following command:
- ```azurepowershell-interactive
+ ```azurecli-interactive
kubectl delete pod --namespace kube-system -l k8s-app=kube-dns ``` 4. To perform `NSlookup` from the *ExtensionAgent* pod to check if *coreDNS-custom* is working, run the following command:
- ```azurepowershell-interactive
+ ```azurecli-interactive
kubectl exec -i -t pod/extension-agent-<pod guid that's there in your cluster> -n kube-system -- nslookup centralus.dp.kubernetesconfiguration.azure.com ``` 5. To check logs of the *ExtensionAgent* pod, run the following command:
- ```azurepowershell-interactive
+ ```azurecli-interactive
kubectl logs pod/extension-agent-<pod guid that's there in your cluster> -n kube-system --tail=200 ```
backup Azure Kubernetes Service Cluster Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup.md
Title: Back up Azure Kubernetes Service (AKS) using Azure Backup
description: This article explains how to back up Azure Kubernetes Service (AKS) using Azure Backup. Previously updated : 03/03/2023 Last updated : 03/15/2023
Azure Backup now allows you to back up AKS clusters (cluster resources and persi
- Ensure that the `Microsoft.KubernetesConfiguration` and `Microsoft.DataProtection` providers are registered for your subscription before initiating backup configuration and restore operations (a sketch of the registration commands follows this list).
+- Ensure that you complete [all the prerequisites](azure-kubernetes-service-cluster-backup-concept.md) before initiating a backup or restore operation for AKS backup.
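+
+A minimal sketch of the provider registration with the Azure CLI (`az provider register` is idempotent, so rerunning it is safe):
+
+```azurecli
+az provider register --namespace Microsoft.KubernetesConfiguration
+az provider register --namespace Microsoft.DataProtection
+
+# Confirm the state reports "Registered" before configuring backup.
+az provider show --namespace Microsoft.DataProtection --query registrationState
+```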
+ For more information on the supported scenarios, limitations, and availability, see the [support matrix](azure-kubernetes-service-cluster-backup-support-matrix.md). ## Create a Backup vault
backup Azure Kubernetes Service Cluster Manage Backups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-manage-backups.md
Title: Manage Azure Kubernetes Service (AKS) backups using Azure Backup
description: This article explains how to manage Azure Kubernetes Service (AKS) backups using Azure Backup. Previously updated : 03/03/2023 Last updated : 03/15/2023
This section provides the set of Azure CLI commands to create, update, delete op
To install the Backup Extension, use the following command: ```azurecli-interactive
- az k8s-extension create --name azure-aks-backup --extension-type Microsoft.DataProtection.Kubernetes --scope cluster --cluster-type managedClusters --cluster-name aksclustername --resource-group aksclusterrg --release-train preview --configuration-settings blobContainer=containername storageAccount=storageaccountname storageAccountResourceGroup=storageaccountrg storageAccountSubscriptionId=subscriptionid
+ az k8s-extension create --name azure-aks-backup --extension-type Microsoft.DataProtection.Kubernetes --scope cluster --cluster-type managedClusters --cluster-name aksclustername --resource-group aksclusterrg --release-train stable --configuration-settings blobContainer=containername storageAccount=storageaccountname storageAccountResourceGroup=storageaccountrg storageAccountSubscriptionId=subscriptionid
``` ### Update resources in Backup Extension
To install the Backup Extension, use the following command:
To update blob container, CPU, and memory in the Backup Extension, use the following command: ```azurecli-interactive
- az k8s-extension update --name azure-aks-backup --cluster-type managedClusters --cluster-name aksclustername --resource-group aksclusterrg --release-train preview --configuration-settings [blobContainer=containername storageAccount=storageaccountname storageAccountResourceGroup=storageaccountrg storageAccountSubscriptionId=subscriptionid] [cpuLimit=1] [memoryLimit=1Gi]
+ az k8s-extension update --name azure-aks-backup --cluster-type managedClusters --cluster-name aksclustername --resource-group aksclusterrg --release-train stable --configuration-settings [blobContainer=containername storageAccount=storageaccountname storageAccountResourceGroup=storageaccountrg storageAccountSubscriptionId=subscriptionid] [cpuLimit=1] [memoryLimit=1Gi]
[]: denotes the three different subgroups of possible updates (omit the brackets when using the command)
backup Backup Azure Immutable Vault Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-immutable-vault-concept.md
Title: Concept of Immutable vault for Azure Backup (preview)
+ Title: Concept of Immutable vault for Azure Backup
description: This article explains about the concept of Immutable vault for Azure Backup, and how it helps in protecting data from malicious actors. Previously updated : 12/15/2022 Last updated : 02/17/2023
-# Immutable vault for Azure Backup (preview)
+# Immutable vault for Azure Backup
Immutable vault can help you protect your backup data by blocking any operations that could lead to loss of recovery points. Further, you can lock the Immutable vault setting to make it irreversible to prevent any malicious actors from disabling immutability and deleting backups. ## Before you start -- Immutable vault is currently in preview and is available in all Azure public regions.
+- Immutable vault is available in all Azure public regions.
- Immutable vault is supported for Recovery Services vaults and Backup vaults. - Enabling Immutable vault blocks you from performing specific operations on the vault and its protected items. See the [restricted operations](#restricted-operations). - Enabling immutability for the vault is a reversible operation. However, you can choose to make it irreversible to prevent any malicious actors from disabling it (after disabling it, they can perform destructive operations). Learn about [making Immutable vault irreversible](#making-immutability-irreversible).
Immutable vault prevents you from performing the following operations on the v
## Next steps -- Learn [how to manage operations of Azure Backup vault immutability (preview)](backup-azure-immutable-vault-how-to-manage.md).
+- Learn [how to manage operations of Azure Backup vault immutability](backup-azure-immutable-vault-how-to-manage.md).
backup Backup Azure Immutable Vault How To Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-immutable-vault-how-to-manage.md
Title: How to manage Azure Backup Immutable vault operations (preview)
+ Title: How to manage Azure Backup Immutable vault operations
description: This article explains how to manage Azure Backup Immutable vault operations. Previously updated : 09/15/2022 Last updated : 02/17/2023
-# Manage Azure Backup Immutable vault operations (preview)
+# Manage Azure Backup Immutable vault operations
[Immutable vault](backup-azure-immutable-vault-concept.md) can help you protect your backup data by blocking any operations that could lead to loss of recovery points. Further, you can lock the Immutable vault setting to make it irreversible to prevent any malicious actors from disabling immutability and deleting backups.
Follow these steps:
## Next steps -- Learn [about Immutable vault for Azure Backup (preview)](backup-azure-immutable-vault-concept.md).
+- Learn [about Immutable vault for Azure Backup](backup-azure-immutable-vault-concept.md).
backup Backup Azure Private Endpoints Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-private-endpoints-concept.md
# Overview and concepts of private endpoints (v2 experience) for Azure Backup
-Azure Backup allows you to securely perform the backup and restore operations of your data from the Recovery Services vaults using [private endpoints](/azure/private-link/private-endpoint-overview). Private endpoints use one or more private IP addresses from your Azure Virtual Network (VNet), effectively bringing the service into your VNet.
+Azure Backup allows you to securely perform the backup and restore operations of your data from the Recovery Services vaults using [private endpoints](../private-link/private-endpoint-overview.md). Private endpoints use one or more private IP addresses from your Azure Virtual Network (VNet), effectively bringing the service into your VNet.
Azure Backup now provides an enhanced experience in creation and use of private endpoints compared to the [classic experience](private-endpoints-overview.md) (v1).
If the private URL doesn't resolve, it tries the public URL `<azure_backup_svc>.
>- [All public clouds](https://download.microsoft.com/download/1/2/6/126a410b-0e06-45ed-b2df-84f353034fa1/AzureRegionCodesList.docx) >- [China](/azure/china/resources-developer-guide#check-endpoints-in-azure) >- [Germany](/azure/germany/germany-developer-guide#endpoint-mapping)
->- [US Gov](/azure/azure-government/documentation-government-developer-guide)
+>- [US Gov](../azure-government/documentation-government-developer-guide.md)
These private URLs are specific to the vault. Only extensions and agents registered to the vault can communicate with the Azure Backup service over these endpoints. If the public network access for Recovery Services vault is configured to *Deny*, this restricts the clients that aren't running in the VNet from requesting the backup and restore operations on the vault. We recommend that public network access is set to *Deny* along with private endpoint setup. As the extension and agent attempt the private URL first, the `*.privatelink.<geo>.backup.windowsazure.com` DNS resolution of the URL should return the corresponding private IP associated with the private endpoint.
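One quick way to verify the resolution from a machine in the VNet is a DNS lookup; a sketch (the vault hostname below is illustrative, not a real endpoint):

```azurepowershell
# Expect a private IP from your VNet's address space, not a public address.
Resolve-DnsName 'myvault.privatelink.eastus.backup.windowsazure.com'
```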
The following diagram shows how the name resolution works for storage accounts u
## Next steps -- Learn [how to configure and manage private endpoints for Azure Backup](backup-azure-private-endpoints-configure-manage.md).-
+- Learn [how to configure and manage private endpoints for Azure Backup](backup-azure-private-endpoints-configure-manage.md).
backup Backup Azure Private Endpoints Configure Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-private-endpoints-configure-manage.md
# Create and use private endpoints (v2 experience) for Azure Backup
-Azure Backup allows you to securely perform the backup and restore operations of your data from the Recovery Services vaults using [private endpoints](/azure/private-link/private-endpoint-overview). Private endpoints use one or more private IP addresses from your Azure Virtual Network (VNet), effectively bringing the service into your VNet.
+Azure Backup allows you to securely perform the backup and restore operations of your data from the Recovery Services vaults using [private endpoints](../private-link/private-endpoint-overview.md). Private endpoints use one or more private IP addresses from your Azure Virtual Network (VNet), effectively bringing the service into your VNet.
Azure Backup now provides an enhanced experience in creation and use of private endpoints compared to the [classic experience](private-endpoints-overview.md) (v1).
To delete private endpoints using REST API, see [this section](/rest/api/virtual
## Next steps -- Learn [about private endpoint for Azure Backup](backup-azure-private-endpoints-concept.md).
+- Learn [about private endpoint for Azure Backup](backup-azure-private-endpoints-concept.md).
backup Backup Azure Vms Enhanced Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-enhanced-policy.md
Title: Back up Azure VMs with Enhanced policy description: Learn how to configure Enhanced policy to back up VMs. Previously updated : 07/04/2022 Last updated : 03/15/2023
Azure Backup now supports _Enhanced policy_ that's needed to support new Azure o
>[!Important] >- [Default policy](./backup-during-vm-creation.md#create-a-vm-with-backup-configured) will not support protecting newer Azure offerings, such as [Trusted Launch VM](backup-support-matrix-iaas.md#tvm-backup), [Ultra SSD](backup-support-matrix-iaas.md#vm-storage-support), [Shared disk](backup-support-matrix-iaas.md#vm-storage-support), and Confidential Azure VMs.
->- Enhanced policy currently doesn't support protecting Ultra SSD.
+>- Enhanced policy currently doesn't support protecting Ultra SSD. You can use [selective disk backup (preview)](selective-disk-backup-restore.md) to exclude these disks, and then configure backup.
>- Backups for VMs having [data access authentication enabled disks](../virtual-machines/windows/download-vhd.md?tabs=azure-portal#secure-downloads-and-uploads-with-azure-ad) will fail. You must enable backup of Trusted Launch VM through enhanced policy only. Enhanced policy provides the following features:
Follow these steps:
>- Enhanced policy is only available to unprotected VMs that are new to Azure Backup. Note that Azure VMs that are protected with existing policy can't be moved to Enhanced policy. >- Back up an Azure VM with disks that has public network access disabled is not supported.
+## Enable selective disk backup and restore (preview)
+
+You can exclude non-critical disks from backup by using selective disk backup to save costs. Using this capability, you can selectively back up a subset of the data disks that are attached to your VM, and then restore a subset of the disks that are available in a recovery point, both from instant restore and vault tier. [Learn more](selective-disk-backup-restore.md).
+ ## Next steps - [Run a backup immediately](./backup-azure-vms-first-look-arm.md#run-a-backup-immediately)
backup Selective Disk Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/selective-disk-backup-restore.md
Title: Selective disk backup and restore for Azure virtual machines description: In this article, learn about selective disk backup and restore using the Azure virtual machine backup solution.- Previously updated : 11/10/2021+ Last updated : 03/15/2023
# Selective disk backup and restore for Azure virtual machines
-Azure Backup supports backing up all the disks (operating system and data) in a VM together using the virtual machine backup solution. Now, using the selective disks backup and restore functionality, you can back up a subset of the data disks in a VM. This provides an efficient and cost-effective solution for your backup and restore needs. Each recovery point contains only the disks that are included in the backup operation. This further allows you to have a subset of disks restored from the given recovery point during the restore operation. This applies to both restore from snapshots and the vault.
+Azure Backup supports backing up all the disks (operating system and data) in a VM together using the virtual machine backup solution. Now, using the selective disks backup and restore functionality, you can back up a subset of the data disks in a VM.
+
+This is supported for both Enhanced policy (preview) and Standard policy. This provides an efficient and cost-effective solution for your backup and restore needs. Each recovery point contains only the disks that are included in the backup operation. This further allows you to have a subset of disks restored from the given recovery point during the restore operation. This applies to both restore from snapshots and the vault.
+
+>[!Note]
+>- This is supported for both backup policies - [Enhanced policy](backup-azure-vms-enhanced-policy.md) and [Standard policy](backup-during-vm-creation.md#create-a-vm-with-backup-configured).
+>- The *Selective disk backup and restore in Enhanced policy (preview)* is available in public Azure regions only.
## Scenarios This solution is particularly useful in the following scenarios: 1. If you have critical data to be backed up in only one disk, or a subset of the disks, and don't want to back up the rest of the disks attached to a VM to minimize the backup storage costs.
-2. If you have other backup solutions for part of your VM or data. For example, if you back up your databases or data using a different workload backup solution and you want to use Azure VM level backup for the rest of the data or disks to build an efficient and robust system using the best capabilities available.
+2. If you have other backup solutions for part of your VM or data. For example, if you back up your databases or data using a different workload backup solution and you want to use Azure VM level backup for the rest of the data or disks to build an efficient and robust system using the best capabilities available.
-Using PowerShell or Azure CLI, you can configure selective disk backup of the Azure VM. Using a script, you can include or exclude data disks using their LUN numbers. Currently, the ability to configure selective disks backup through the Azure portal is limited to the **Backup OS Disk only** option. So you can configure backup of your Azure VM with OS disk, and exclude all the data disks attached to it.
+3. If you're using [Enhanced policy](backup-azure-vms-enhanced-policy.md), you can use this solution to exclude unsupported disks (Ultra Disks, Shared Disks) and configure a VM for backup.
+
+Using PowerShell, Azure CLI, or the Azure portal, you can configure selective disk backup of an Azure VM. Using a script, you can include or exclude data disks by using their *LUN numbers*. In the Azure portal, the ability to configure selective disk backup is limited to the *Backup OS disk only* option for Standard policy, but you can configure all data disks for Enhanced policy.
>[!NOTE] > The OS disk is by default added to the VM backup and can't be excluded.
Using PowerShell or Azure CLI, you can configure selective disk backup of the Az
Ensure you're using Az CLI version 2.0.80 or higher. You can get the CLI version with this command:
+>[!Note]
+>These CLI steps apply to selective disk backup for VMs using both policies - enhanced and standard.
+
```azurecli
az --version
```
az account set -s {subscriptionID}
### Configure backup with Azure CLI
-During the configure protection operation, you need to specify the disk list setting with an **inclusion** / **exclusion** parameter, giving the LUN numbers of the disks to be included or excluded in the backup.
+During the configure protection operation, you need to specify the disk list setting with an **inclusion**/**exclusion** parameter, giving the *LUN* numbers of the disks to be included or excluded in the backup.
>[!NOTE] >The configure protection operation overrides the previous settings; they aren't cumulative.
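For example, a minimal sketch of enabling protection with only the data disks at LUNs 0 and 1 included (the resource names below are placeholders):

```azurecli
# Minimal sketch: enable backup for a VM, including only the data disks at LUNs 0 and 1.
# The OS disk is always included. Resource names here are placeholders.
az backup protection enable-for-vm \
    --resource-group MyResourceGroup \
    --vault-name MyVault \
    --vm MyVM \
    --policy-name DefaultPolicy \
    --disk-list-setting include \
    --diskslist 0 1
```

To change the selection for an already protected VM, the same disk list parameters can be passed to `az backup protection update-for-vm`.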
Here you can view the backed-up disks during restore, when you select the recove
![View backed-up disks during restore](./media/selective-disk-backup-restore/during-restore.png)
-Configuring the selective disks backup experience for a VM through the Azure portal is limited to the **Backup OS Disk only** option. To use selective disks backup on already a backed-up VM or for advanced inclusion or exclusion of specific data disks of a VM, use PowerShell or Azure CLI.
+- If you're using Standard policy to back up the VM, configuring selective disk backup for a VM through the Azure portal is limited to the **Backup OS Disk only** option. To use selective disk backup on an already backed-up VM, or for advanced inclusion or exclusion of specific data disks of a VM, use PowerShell or Azure CLI.
+
+- If you're using Enhanced policy to back up the VM, you can select the data disks you want to back up, and optionally choose to include disks added to the VM in the future for backup.
>[!NOTE] >If data spans across disks, make sure all the dependent disks are included in the backup. If you don't back up all the dependent disks in a volume, the volume comprising some non-backed-up disks won't be created during restore.
-### Backup OS disk only in the Azure portal
+### Backup OS disk only in the Azure portal (Standard policy)
When you enable backup using the Azure portal, you can choose the **Backup OS Disk only** option. This lets you configure backup of your Azure VM with the OS disk, and exclude all data disks attached to it. ![Configure backup for the OS disk only](./media/selective-disk-backup-restore/configure-backup-operating-system-disk.png)
+### Configure selective disk backup in the Azure portal (Enhanced policy)
+
+When you enable the backup operation using the Azure portal, you can choose the data disks that you want to include in the backup (the OS disk is always included). You can also choose to automatically include disks that are added to the VM in the future by enabling the **Include future disks** option.
++ ## Using Azure REST API You can configure Azure VM backup with a few selected disks, or modify an existing VM's protection to include or exclude a few disks, as documented [here](backup-azure-arm-userestapi-backupazurevms.md#excluding-disks-in-azure-vm-backup).
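In general, the disk selection is expressed through a `diskExclusionProperties` object in the protected item's extended properties. A minimal sketch of such a request body follows (the resource IDs are placeholders; the linked REST reference defines the authoritative schema):

```json
{
  "properties": {
    "protectedItemType": "Microsoft.Compute/virtualMachines",
    "sourceResourceId": "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.Compute/virtualMachines/<vm-name>",
    "policyId": "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.RecoveryServices/vaults/<vault-name>/backupPolicies/DefaultPolicy",
    "extendedProperties": {
      "diskExclusionProperties": {
        "diskLunList": [0, 1],
        "isInclusionList": true
      }
    }
  }
}
```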
Selective disk restore is an added functionality you get when you enable the sel
## Limitations
-Selective disks backup functionality isn't supported for classic virtual machines and encrypted virtual machines. So Azure VMs that are encrypted with Azure Disk Encryption (ADE) using BitLocker for encryption of Windows VM, and the dm-crypt feature for Linux VMs are unsupported.
+Selective disk backup functionality for Standard policy isn't supported for classic virtual machines and encrypted virtual machines. So Azure VMs that are encrypted with Azure Disk Encryption (ADE), which uses BitLocker for Windows VMs and the dm-crypt feature for Linux VMs, are unsupported. However, VMs with Azure Disk Encryption enabled can use selective disk backup with Enhanced policy.
The restore options to **Create new VM** and **Replace existing** aren't supported for the VM for which selective disks backup functionality is enabled.
-Currently, Azure VM backup doesn't support VMs with ultra-disks or shared disks attached to them. Selective disk backup can't be used to in such cases, which exclude the disk and backup the VM.
+Currently, Azure VM backup doesn't support VMs with ultra disks or shared disks attached to them. With Standard policy, selective disk backup can't be used in such cases to exclude the unsupported disk and back up the VM. You can use selective disk backup with Enhanced policy to exclude these disks and configure backup.
If you use disk exclusion or selective disks while backing up an Azure VM, and you _[stop protection and retain backup data](backup-azure-manage-vms.md#stop-protection-and-retain-backup-data)_, you need to set up the disk exclusion settings again when you resume backup for this resource.
If you use disk exclusion or selective disks while backing up Azure VM, _[stop p
Azure virtual machine backup follows the existing pricing model, explained in detail [here](https://azure.microsoft.com/pricing/details/backup/).
-**Protected Instance (PI) cost** is calculated for the OS disk only if you choose to back up using the **OS Disk only** option. If you configure backup and select at least one data disk, the PI cost will be calculated for all the disks attached to the VM. **Backup storage cost** is calculated based on only the included disks and so you get to save on the storage cost. **Snapshot cost** is always calculated for all the disks in the VM (both the included and excluded disks).
+### Standard policy
-If you have chosen the Cross Region Restore (CRR) feature, then the [CRR pricing](https://azure.microsoft.com/pricing/details/backup/) applies on the backup storage cost after excluding the disk.
+If you're using Standard policy, **Protected Instance (PI) cost** is calculated for the OS disk only if you choose to back up using the **OS Disk only** option. If you configure backup and select at least one data disk, the PI cost is calculated for all the disks attached to the VM. **Backup storage cost** is calculated based on only the included disks, so you save on storage cost. **Snapshot cost** is always calculated for all the disks in the VM (both the included and excluded disks).
+
+If you've chosen the Cross Region Restore (CRR) feature, the [CRR pricing](https://azure.microsoft.com/pricing/details/backup/) applies to the backup storage cost after excluding the disk.
+
+### Enhanced policy
+
+If you're using Enhanced policy, **Protected Instance (PI)** cost, snapshot cost, and vault tier storage cost are all calculated based on the disks that you've included for backup.
+
+**Known limitations**
+
+| OS type | Limitation |
+| | |
+| Windows | - **Spanned volumes**: For spanned volumes (volumes spread across more than one physical disk), ensure that all disks are included in the backup. If not, Azure Backup might not be able to reliably restore the data or exclude it from billing. <br><br> - **Storage pool**: If you're using disks carved out of a storage pool, and if a *LUN number* included for backup is common across virtual disks and data disks, the size of the virtual disk is also included in the backup size in addition to the data disks. |
+| Linux | - **Logical volumes**: For logical volumes spread across more than one disk, ensure that all disks are included in the backup. If not, Azure Backup might not be able to reliably restore the data or exclude it from billing. <br><br> - **Distro support**: Azure Backup uses *lsscsi* and *lsblk* to determine the disks being excluded from backup. If your distro (Debian 8.11, 10.13, and so on) doesn't support *lsscsi*, install it using `sudo apt install lsscsi` to ensure that selective disk backup works. |
+
+If you've chosen the Cross Region Restore (CRR) feature, the [CRR pricing](https://azure.microsoft.com/pricing/details/backup/) applies to the backup storage cost after excluding the disk.
## Frequently asked questions
PI cost is calculated based on actual (used) size of the VM.
### I have configured only OS disk backup, why is the snapshot happening for all the disks?
-Selective disk backup features let you save on backup vault storage cost by hardening the included disks that are part of the backup. However, the snapshot is taken for all the disks that are attached to the VM. So the snapshot cost is always calculated for all the disks in the VM (both the included and excluded disks). For more information, see [billing](#billing).
+If you're using Standard policy, the selective disk backup feature lets you save on backup vault storage cost by limiting the backup to the included disks. However, the snapshot is taken for all the disks that are attached to the VM. So the snapshot cost is always calculated for all the disks in the VM (both the included and excluded disks). For more information, see [billing](#billing).
+
+If you're using Enhanced policy, the snapshot is taken only for the OS disk and the data disks that you've included.
### I can't configure backup for the Azure virtual machine by excluding ultra disk or shared disks attached to the VM
-Selective disk backup feature is a capability provided on top of the Azure virtual machine backup solution. Currently, Azure VM backup doesn't support VMs with ultra-disk or shared disk attached to them.
+If you're using Standard policy, Azure VM backup doesn't support VMs with an ultra disk or shared disk attached to them, and it isn't possible to exclude these disks with selective disk backup and then configure backup.
+
+If you're using Enhanced policy, you can exclude the unsupported disks from the backup via selective disk backup (in the Azure portal, CLI, PowerShell, and so on), and configure backup for the VM.
## Next steps
backup Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/whats-new.md
Title: What's new in Azure Backup description: Learn about new features in Azure Backup. Previously updated : 03/14/2023 Last updated : 03/15/2023
You can learn more about the new releases by bookmarking this page or by [subscr
## Updates summary - March 2023
+ - [Support for selective disk backup with enhanced policy for Azure VM (preview)](#support-for-selective-disk-backup-with-enhanced-policy-for-azure-vm-preview)
- [Azure Kubernetes Service backup (preview)](#azure-kubernetes-service-backup-preview) - [Azure Blob vaulted backups (preview)](#azure-blob-vaulted-backups-preview) - October 2022
You can learn more about the new releases by bookmarking this page or by [subscr
- February 2021 - [Backup for Azure Blobs (in preview)](#backup-for-azure-blobs-in-preview)
+## Support for selective disk backup with enhanced policy for Azure VM (preview)
+
+Azure Backup now provides *Selective Disk backup and restore* capability to Enhanced policy. Using this capability, you can selectively back up a subset of the data disks that are attached to your VM, and then restore a subset of the disks that are available in a recovery point, both from instant restore and vault tier.
+
+This is useful when you:
+
+- Manage critical data in a subset of the VM disks.
- Use database backup solutions and want to back up only the OS disk to reduce cost.
+
+For more information, see [Selective disk backup and restore](selective-disk-backup-restore.md).
+ ## Azure Kubernetes Service backup (preview) Azure Kubernetes Service (AKS) backup is a simple, cloud-native process to back up and restore the containerized applications and data running in AKS clusters. You can configure scheduled backup for both cluster state and application data (persistent volumes: CSI driver-based Azure Disks).
batch Batch Spot Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-spot-vms.md
Title: Run workloads on cost-effective Spot VMs description: Learn how to provision Spot VMs to reduce the cost of Azure Batch workloads. Previously updated : 12/14/2021 Last updated : 03/15/2023
To view these metrics in the Azure portal
- Spot VMs in Batch don't support setting a max price and don't support price-based evictions. They can only be evicted for capacity reasons. - Spot VMs are only available for Virtual Machine Configuration pools and not for Cloud Service Configuration pools, which are [deprecated](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/). - Spot VMs aren't available for some clouds, VM sizes, and subscription offer types. See more about [Spot limitations](../virtual-machines/spot-vms.md#limitations).
+- Currently, [Ephemeral OS disks](create-pool-ephemeral-os-disk.md) aren't supported with Spot VMs due to the service-managed eviction policy of Stop-Deallocate.
## Next steps
batch Create Pool Ephemeral Os Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/create-pool-ephemeral-os-disk.md
Title: Use ephemeral OS disk nodes for Azure Batch pools description: Learn how and why to create a Batch pool that uses ephemeral OS disk nodes. Previously updated : 09/03/2021 Last updated : 03/15/2023 ms.devlang: csharp
ms.devlang: csharp
Some Azure virtual machine (VM) series support the use of [ephemeral OS disks](../virtual-machines/ephemeral-os-disks.md), which create the OS disk on the node virtual machine local storage. The default Batch pool configuration uses [Azure managed disks](../virtual-machines/managed-disks-overview.md) for the node OS disk, where the managed disk is like a physical disk, but virtualized and persisted in remote Azure Storage.
-For Batch workloads, the main benefits of using ephemeral OS disks are reduced costs associated with pools, the potential for faster node start time, and improved application performance due to better OS disk performance. When choosing whether ephemeral OS disks should be used for your workload, consider the following:
+For Batch workloads, the main benefits of using ephemeral OS disks are reduced costs associated with pools, the potential for faster node start time, and improved application performance due to better OS disk performance. When choosing whether ephemeral OS disks should be used for your workload, consider the following impacts:
-- There is lower read/write latency to ephemeral OS disks, which may lead to improved application performance.-- There is no storage cost for ephemeral OS disks, whereas there is a cost for each managed OS disk.-- Reimaging the node, when supported by Batch, will be faster for ephemeral disks compared to managed disks.
+- There's lower read/write latency to ephemeral OS disks, which may lead to improved application performance.
+- There's no storage cost for ephemeral OS disks, whereas there's a cost for each managed OS disk.
+- Reimaging compute nodes, when supported by Batch, is faster with ephemeral disks than with managed disks.
- Node start time may be slightly faster when ephemeral OS disks are used.-- Ephemeral OS disks are not highly durable and available; when a VM is removed for any reason, the OS disk is lost. Since Batch workloads are inherently stateless, and don't normally rely on changes to the OS disk being persisted, ephemeral OS disks are appropriate to use for most Batch workloads.-- The use of an ephemeral OS disk is not currently supported by all Azure VM series. If a VM size doesn't support an ephemeral OS disk, a managed OS disk must be used.
+- Ephemeral OS disks aren't highly durable and available; when a VM is removed for any reason, the OS disk is lost. Since Batch workloads are inherently stateless, and don't normally rely on changes to the OS disk being persisted, ephemeral OS disks are appropriate to use for most Batch workloads.
+- Ephemeral OS disks aren't currently supported by all Azure VM series. If a VM size doesn't support an ephemeral OS disk, a managed OS disk must be used.
> [!NOTE] > Ephemeral OS disk configuration is only applicable to 'virtualMachineConfiguration' pools, and isn't supported by 'cloudServiceConfiguration' pools. We recommend using 'virtualMachineConfiguration' for your Batch pools, as 'cloudServiceConfiguration' pools don't support all features and no new capabilities are planned. You won't be able to create new 'cloudServiceConfiguration' pools or add new nodes to existing pools [after February 29, 2024](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/). For more information, see [Migrate Batch pool configuration from Cloud Services to Virtual Machine](batch-pool-cloud-service-to-virtual-machine-configuration.md).
Alternately, you can programmatically query to check the 'EphemeralOSDiskSupport
## Create a pool that uses ephemeral OS disks
-The `EphemeralOSDiskSettings` property is not set by default. You must set this property in order to configure ephemeral OS disk use on the pool nodes.
+The `EphemeralOSDiskSettings` property isn't set by default. You must set this property in order to configure ephemeral OS disk use on the pool nodes.
+
+> [!TIP]
+> Ephemeral OS disks can't be used in conjunction with Spot VMs in Batch pools due to the service-managed eviction policy.
The following example shows how to create a Batch pool where the nodes use ephemeral OS disks and not managed disks.
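A minimal C# sketch of this configuration, assuming the Batch .NET SDK and an authenticated `batchClient`; the image, node agent SKU, VM size, and pool ID below are placeholders:

```csharp
// Minimal sketch (Batch .NET SDK). Image, node agent SKU, VM size, and pool ID
// are placeholders; pick a VM size that supports ephemeral OS disks.
ImageReference imageReference = new ImageReference(
    publisher: "canonical",
    offer: "0001-com-ubuntu-server-focal",
    sku: "20_04-lts",
    version: "latest");

VirtualMachineConfiguration virtualMachineConfiguration = new VirtualMachineConfiguration(
    imageReference,
    nodeAgentSkuId: "batch.node.ubuntu 20.04");

// Place the OS disk on the node's local cache disk instead of a remote managed disk.
virtualMachineConfiguration.OSDisk = new OSDisk();
virtualMachineConfiguration.OSDisk.EphemeralOSDiskSettings = new DiffDiskSettings();
virtualMachineConfiguration.OSDisk.EphemeralOSDiskSettings.Placement = DiffDiskPlacement.CacheDisk;

CloudPool pool = batchClient.PoolOperations.CreatePool(
    poolId: "EphemeralOSDiskPool",
    virtualMachineSize: "Standard_D2ds_v4",
    virtualMachineConfiguration: virtualMachineConfiguration,
    targetDedicatedComputeNodes: 2);

pool.Commit();
```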
virtualMachineConfiguration.OSDisk.EphemeralOSDiskSettings.Placement = DiffDiskP
## Next steps
+- See the [Ephemeral OS Disks FAQ](../virtual-machines/ephemeral-os-disks-faq.md).
- Learn about the [Batch service workflow and primary resources](batch-service-workflow-features.md) such as pools, nodes, jobs, and tasks. - Learn about [costs that may be associated with Azure Batch workloads](budget.md).
chaos-studio Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/troubleshooting.md
You can upgrade your Virtual Machine Scale Sets instances with Azure CLI:
az vmss update-instances --resource-group myResourceGroup --name myScaleSet --instance-ids {instanceIds} ```
-For more information, see [How to bring VMs up-to-date with the latest scale set model](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-upgrade-scale-set#how-to-bring-vms-up-to-date-with-the-latest-scale-set-model)
+For more information, see [How to bring VMs up-to-date with the latest scale set model](../virtual-machine-scale-sets/virtual-machine-scale-sets-upgrade-scale-set.md#how-to-bring-vms-up-to-date-with-the-latest-scale-set-model)
### AKS Chaos Mesh faults fail AKS Chaos Mesh faults may fail for various reasons related to missing prerequisites:
From the **Experiments** list in the Azure portal, click on the experiment name
This may happen if you onboarded the agent using the Azure portal, which has a known issue: Enabling an agent-based target doesn't assign the user-assigned managed identity to the virtual machine or Virtual Machine Scale Set.
-To resolve this, navigate to the virtual machine or Virtual Machine Scale Set in the Azure portal, go to **Identity**, open the **User assigned** tab, and **Add** your user-assigned identity to the virtual machine. Once complete, you may need to reboot the virtual machine for the agent to connect.
+To resolve this, navigate to the virtual machine or Virtual Machine Scale Set in the Azure portal, go to **Identity**, open the **User assigned** tab, and **Add** your user-assigned identity to the virtual machine. Once complete, you may need to reboot the virtual machine for the agent to connect.
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md
na Previously updated : 3/1/2023 Last updated : 3/14/2023
# Azure Guest OS The following tables show the Microsoft Security Response Center (MSRC) updates applied to the Azure Guest OS. Search this article to determine if a particular update applies to the Guest OS you are using. Updates always carry forward for the particular [family][family-explain] they were introduced in.
+## March 2023 Guest OS
+
+>[!NOTE]
+>The March Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the March Guest OS. This list is subject to change.
+
+| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
+| | | | | |
+| Rel 23-03 | [5023697] | Latest Cumulative Update(LCU) | 5.79 | Mar 14, 2023 |
+| Rel 23-03 | [5022835] | IE Cumulative Updates | 2.135, 3.122, 4.115 | Feb 14, 2023 |
+| Rel 23-03 | [5023705] | Latest Cumulative Update(LCU) | 7.23 | Mar 14, 2023 |
+| Rel 23-03 | [5023702] | Latest Cumulative Update(LCU) | 6.55 | Mar 14, 2023 |
+| Rel 23-03 | [5022523] | .NET Framework 3.5 Security and Quality Rollup  | 2.135 | Feb 14, 2023 |
+| Rel 23-03 | [5022515] | .NET Framework 4.6.2 Security and Quality Rollup  | 2.135 | Feb 14, 2023 |
+| Rel 23-03 | [5022574] | .NET Framework 3.5 Security and Quality Rollup  | 4.115 | Feb 14, 2023 |
+| Rel 23-03 | [5022513] | .NET Framework 4.6.2 Security and Quality Rollup  | 4.115 | Feb 14, 2023 |
+| Rel 23-03 | [5022574] | .NET Framework 3.5 Security and Quality Rollup  | 3.122 | Feb 14, 2023 |
+| Rel 23-03 | [5022512] | .NET Framework 4.6.2 Security and Quality Rollup  | 3.122 | Feb 14, 2023 |
+| Rel 23-03 | [5022511] | .NET Framework 4.7.2 Cumulative Update | 6.55 | Feb 14, 2023 |
+| Rel 23-03 | [5022507] | .NET Framework 4.8 Security and Quality Rollup  | 7.23 | Feb 14, 2023 |
+| Rel 23-03 | [5023769] | Monthly Rollup  | 2.135 | Mar 14, 2023 |
+| Rel 23-03 | [5023756] | Monthly Rollup  | 3.122 | Mar 14, 2023 |
+| Rel 23-03 | [5023765] | Monthly Rollup  | 4.115 | Mar 14, 2023 |
+| Rel 23-03 | [5023791] | Servicing Stack Update  | 3.122 | Mar 14, 2023 |
+| Rel 23-03 | [5023790] | Servicing Stack Update | 4.115 | Mar 14, 2023 |
+| Rel 23-03 | [4578013] | OOB Standalone Security Update  | 4.115 | Aug 19, 2020 |
+| Rel 23-03 | [5023788] | Servicing Stack Update | 5.79 | Mar 14, 2023 |
+| Rel 23-03 | [5017397] | Servicing Stack Update LKG  | 2.135 | Sep 13, 2022 |
+| Rel 23-03 | [4494175] | Microcode  | 5.79 | Sep 1, 2020 |
+| Rel 23-03 | [4494174] | Microcode  | 6.55 | Sep 1, 2020 |
+| Rel 23-03 | [5023793] | Servicing Stack Update  | 7.23 | |
+
+[5023697]: https://support.microsoft.com/kb/5023697
+[5022835]: https://support.microsoft.com/kb/5022835
+[5023705]: https://support.microsoft.com/kb/5023705
+[5023702]: https://support.microsoft.com/kb/5023702
+[5022523]: https://support.microsoft.com/kb/5022523
+[5022515]: https://support.microsoft.com/kb/5022515
+[5022574]: https://support.microsoft.com/kb/5022574
+[5022513]: https://support.microsoft.com/kb/5022513
+[5022512]: https://support.microsoft.com/kb/5022512
+[5022511]: https://support.microsoft.com/kb/5022511
+[5022507]: https://support.microsoft.com/kb/5022507
+[5023769]: https://support.microsoft.com/kb/5023769
+[5023756]: https://support.microsoft.com/kb/5023756
+[5023765]: https://support.microsoft.com/kb/5023765
+[5023791]: https://support.microsoft.com/kb/5023791
+[5023790]: https://support.microsoft.com/kb/5023790
+[4578013]: https://support.microsoft.com/kb/4578013
+[5023788]: https://support.microsoft.com/kb/5023788
+[5017397]: https://support.microsoft.com/kb/5017397
+[4494175]: https://support.microsoft.com/kb/4494175
+[4494174]: https://support.microsoft.com/kb/4494174
+ ## February 2023 Guest OS
cognitive-services Migrate From Custom Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/migrate-from-custom-vision.md
# Migrate a Custom Vision project to Image Analysis 4.0 preview
-You can migrate an existing Azure Custom Vision project to the new Image Analysis 4.0 system. [Custom Vision](/azure/cognitive-services/custom-vision-service/overview) is a model customization service that existed before Image Analysis 4.0.
+You can migrate an existing Azure Custom Vision project to the new Image Analysis 4.0 system. [Custom Vision](../../custom-vision-service/overview.md) is a model customization service that existed before Image Analysis 4.0.
This guide uses a Python script to take all of the training data from an existing Custom Vision project (images and their label data) and convert it to a COCO file. You can then import the COCO file into Vision Studio to train a custom model. See [Create and train a custom model](model-customization.md) and go to the section on importing a COCO file&mdash;you can follow the guide from there to the end.
This guide uses a Python script to take all of the training data from an existin
* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/) * [Python 3.x](https://www.python.org/) * A Custom Vision resource where an existing project is stored.
-* An Azure Storage resource - [Create one](/azure/storage/common/storage-account-create?tabs=azure-portal)
+* An Azure Storage resource - [Create one](../../../storage/common/storage-account-create.md?tabs=azure-portal)
## Install libraries
cognitive-services Model Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/model-customization.md
This guide shows you how to create and train a custom image classification model
* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services) * Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="Create a Computer Vision resource" target="_blank">create a Computer Vision resource </a> in the Azure portal to get your key and endpoint. If you're following this guide using Vision Studio, you must create your resource in the East US region. After it deploys, select **Go to resource**. Copy the key and endpoint to a temporary location to use later on.
-* An Azure Storage resource - [Create one](/azure/storage/common/storage-account-create?tabs=azure-portal)
+* An Azure Storage resource - [Create one](../../../storage/common/storage-account-create.md?tabs=azure-portal)
* A set of images with which to train your classification model. You can use the set of [sample images on GitHub](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/CustomVision/ImageClassification/Images). Or, you can use your own images. You only need about 3-5 images per class. > [!NOTE]
The API call returns an **ImageAnalysisResult** JSON object, which contains all
In this guide, you created and trained a custom image classification model using Image Analysis. Next, learn more about the Analyze Image 4.0 API, so you can call your custom model from an application using REST or library SDKs. * [Call the Analyze Image API](./call-analyze-image-40.md#use-a-custom-model)
-* See the [Model customization concepts](../concept-model-customization.md) guide for a broad overview of this feature and a list of frequently asked questions.
+* See the [Model customization concepts](../concept-model-customization.md) guide for a broad overview of this feature and a list of frequently asked questions.
cognitive-services Get Started Intent Recognition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/get-started-intent-recognition.md
keywords: intent recognition
# Quickstart: Recognize intents with the Speech service and LUIS > [!IMPORTANT]
-> LUIS will be retired on October 1st 2025 and starting April 1st 2023 you will not be able to create new LUIS resources. We recommend [migrating your LUIS applications](/azure/cognitive-services/language-service/conversational-language-understanding/how-to/migrate-from-luis) to [conversational language understanding](/azure/cognitive-services/language-service/conversational-language-understanding/overview) to benefit from continued product support and multilingual capabilities.
+> LUIS will be retired on October 1, 2025, and starting April 1, 2023, you won't be able to create new LUIS resources. We recommend [migrating your LUIS applications](../language-service/conversational-language-understanding/how-to/migrate-from-luis.md) to [conversational language understanding](../language-service/conversational-language-understanding/overview.md) to benefit from continued product support and multilingual capabilities.
> > Conversational Language Understanding (CLU) is available for C# and C++ with the [Speech SDK](speech-sdk.md) version 1.25 or later. See the [quickstart](get-started-intent-recognition-clu.md) to recognize intents with the Speech SDK and CLU.
keywords: intent recognition
## Next steps > [!div class="nextstepaction"]
-> [See more LUIS samples on GitHub](https://github.com/Azure/pizza_luis_bot)
+> [See more LUIS samples on GitHub](https://github.com/Azure/pizza_luis_bot)
cognitive-services How To Use Custom Entity Pattern Matching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-use-custom-entity-pattern-matching.md
In this guide, you use the Speech SDK to develop a console application that deri
## When to use pattern matching Use pattern matching if:
-* You're only interested in matching strictly what the user said. These patterns match more aggressively than [conversational language understanding (CLU)](/azure/cognitive-services/language-service/conversational-language-understanding/overview).
+* You're only interested in matching strictly what the user said. These patterns match more aggressively than [conversational language understanding (CLU)](../language-service/conversational-language-understanding/overview.md).
* You don't have access to a CLU model, but still want intents. For more information, see the [pattern matching overview](./pattern-matching-overview.md).
cognitive-services How To Use Simple Language Pattern Matching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-use-simple-language-pattern-matching.md
In this guide, you use the Speech SDK to develop a C++ console application that
## When to use pattern matching Use pattern matching if:
-* You're only interested in matching strictly what the user said. These patterns match more aggressively than [conversational language understanding (CLU)](/azure/cognitive-services/language-service/conversational-language-understanding/overview).
+* You're only interested in matching strictly what the user said. These patterns match more aggressively than [conversational language understanding (CLU)](../language-service/conversational-language-understanding/overview.md).
* You don't have access to a CLU model, but still want intents. For more information, see the [pattern matching overview](./pattern-matching-overview.md).
Intents will be added using calls to the IntentRecognizer->AddIntent() API.
::: zone pivot="programming-language-java" [!INCLUDE [java](includes/how-to/intent-recognition/jav)]
cognitive-services Intent Recognition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/intent-recognition.md
In this overview, you will learn about the benefits and capabilities of intent r
The Speech SDK provides an embedded pattern matcher that you can use to recognize intents in a very strict way. This is useful for when you need a quick offline solution. This works especially well when the user is going to be trained in some way or can be expected to use specific phrases to trigger intents. For example: "Go to floor seven", or "Turn on the lamp" etc. It is recommended to start here and if it no longer meets your needs, switch to using [CLU](#conversational-language-understanding) or a combination of the two. Use pattern matching if:
-* You're only interested in matching strictly what the user said. These patterns match more aggressively than [conversational language understanding (CLU)](/azure/cognitive-services/language-service/conversational-language-understanding/overview).
+* You're only interested in matching strictly what the user said. These patterns match more aggressively than [conversational language understanding (CLU)](../language-service/conversational-language-understanding/overview.md).
* You don't have access to a CLU model, but still want intents. For more information, see the [pattern matching concepts](./pattern-matching-overview.md) and then:
Both a Speech resource and Language resource are required to use CLU with the Sp
> [!IMPORTANT] > When you use conversational language understanding with the Speech SDK, you are charged both for the Speech-to-text recognition request and the Language service request for CLU. For more information about pricing for conversational language understanding, see [Language service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/).
-For information about how to use conversational language understanding without the Speech SDK and without speech recognition, see the [Language service documentation](/azure/cognitive-services/language-service/conversational-language-understanding/overview).
+For information about how to use conversational language understanding without the Speech SDK and without speech recognition, see the [Language service documentation](../language-service/conversational-language-understanding/overview.md).
> [!IMPORTANT]
-> LUIS will be retired on October 1st 2025 and starting April 1st 2023 you will not be able to create new LUIS resources. We recommend [migrating your LUIS applications](/azure/cognitive-services/language-service/conversational-language-understanding/how-to/migrate-from-luis) to [conversational language understanding](/azure/cognitive-services/language-service/conversational-language-understanding/overview) to benefit from continued product support and multilingual capabilities.
+> LUIS will be retired on October 1, 2025, and starting April 1, 2023, you won't be able to create new LUIS resources. We recommend [migrating your LUIS applications](../language-service/conversational-language-understanding/how-to/migrate-from-luis.md) to [conversational language understanding](../language-service/conversational-language-understanding/overview.md) to benefit from continued product support and multilingual capabilities.
> > Conversational Language Understanding (CLU) is available for C# and C++ with the [Speech SDK](speech-sdk.md) version 1.25 or later. See the [quickstart](get-started-intent-recognition-clu.md) to recognize intents with the Speech SDK and CLU. ## Next steps * [Intent recognition with simple pattern matching](how-to-use-simple-language-pattern-matching.md)
-* [Intent recognition with CLU quickstart](get-started-intent-recognition-clu.md)
+* [Intent recognition with CLU quickstart](get-started-intent-recognition-clu.md)
cognitive-services Openai Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/openai-speech.md
keywords: speech to text, openai
## Next steps - [Learn more about Speech](overview.md)-- [Learn more about Azure OpenAI](/azure/cognitive-services/openai/overview)
+- [Learn more about Azure OpenAI](../openai/overview.md)
cognitive-services Power Automate Batch Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/power-automate-batch-transcription.md
Last updated 03/09/2023
This article describes how to use [Power Automate](/power-automate/getting-started) and the [Azure Cognitive Services for Batch Speech-to-text connector](/connectors/cognitiveservicesspe/) to transcribe audio files from an Azure Storage container. The connector uses the [Batch Transcription REST API](batch-transcription.md), but you don't need to write any code to use it. If the connector doesn't meet your requirements, you can still use the [REST API](rest-speech-to-text.md#transcriptions) directly.
-In addition to [Power Automate](/power-automate/getting-started), you can use the [Azure Cognitive Services for Batch Speech-to-text connector](/connectors/cognitiveservicesspe/) with [Power Apps](/power-apps) and [Logic Apps](/azure/logic-apps/).
+In addition to [Power Automate](/power-automate/getting-started), you can use the [Azure Cognitive Services for Batch Speech-to-text connector](/connectors/cognitiveservicesspe/) with [Power Apps](/power-apps) and [Logic Apps](../../logic-apps/index.yml).
> [!TIP] > Try more Speech features in [Speech Studio](https://aka.ms/speechstudio/speechtotexttool) without signing up or writing any code.
Later you'll [upload files to the container](#upload-files-to-the-container) aft
### Create SAS URI by path
-To transcribe an audio file that's in your [Azure Blob Storage container](#create-the-azure-blob-storage-container) you need a [Shared Access Signature (SAS) URI](/azure/storage/common/storage-sas-overview) for the file.
+To transcribe an audio file that's in your [Azure Blob Storage container](#create-the-azure-blob-storage-container) you need a [Shared Access Signature (SAS) URI](../../storage/common/storage-sas-overview.md) for the file.
The [Azure Blob Storage connector](/connectors/azureblob/) supports SAS URIs for individual blobs, but not for entire containers.
You can select and expand the **Create transcription** to see detailed input and
- [Azure Cognitive Services for Batch Speech-to-text connector](/connectors/cognitiveservicesspe/) - [Azure Blob Storage connector](/connectors/azureblob/)-- [Power Platform](/power-platform/)
+- [Power Platform](/power-platform/)
cognitive-services Speech Synthesis Markup Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup-structure.md
Some examples of contents that are allowed in each element are described in the
The Speech service automatically handles punctuation as appropriate, such as pausing after a period, or using the correct intonation when a sentence ends with a question mark.
+## Special characters
+
+To use the characters `&`, `<`, and `>` within an SSML element's value or text, you must use the entity format. Specifically, use `&amp;` in place of `&`, `&lt;` in place of `<`, and `&gt;` in place of `>`. Otherwise the SSML won't be parsed correctly.
+
+For example, specify `green &amp; yellow` instead of `green & yellow`. The following SSML will be parsed as expected:
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
+ <voice name="en-US-JennyNeural">
+ My favorite colors are green &amp; yellow.
+ </voice>
+</speak>
+```
+ Special characters such as quotation marks, apostrophes, and brackets, must be escaped. For more information, see [Extensible Markup Language (XML) 1.0: Appendix D](https://www.w3.org/TR/xml/#sec-entexpand).
-Attribute values must be enclosed by double quotation marks. For example, `<prosody volume="90">` is a well-formed, valid element, but `<prosody volume=90>` won't be recognized.
+Attribute values must be enclosed by double or single quotation marks. For example, `<prosody volume="90">` and `<prosody volume='90'>` are well-formed, valid elements, but `<prosody volume=90>` won't be recognized.
## Speak root element
cognitive-services Create Use Glossaries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/how-to-guides/create-use-glossaries.md
+
+ Title: Create and use a glossary with Document Translation
+description: How to create and use a glossary with Document Translation.
++++ Last updated : 03/14/2023++
+# Use glossaries with Document Translation
+
+A glossary is a list of terms with definitions that you create for the Document Translation service to use during the translation process. Currently, the glossary feature supports one-to-one source-to-target language translation. Common use cases for glossaries include:
+
+* **Context-specific terminology**. Create a glossary that designates specific meanings for your unique context.
+
+* **No translation**. For example, you can restrict Document Translation from translating product and brand names by using a glossary with the same source and target text.
+
+* **Specified translations for ambiguous words**. Choose a specific translation for polysemantic words.
+
+## Create, upload, and use a glossary file
+
+1. **Create your glossary file.** Create a file in a supported format (preferably tab-separated values) that contains all the terms and phrases you want to use in your translation.
+
+ To check if your file format is supported, *see* [Get supported glossary formats](../reference/get-supported-glossary-formats.md).
+
+ The following English-source glossary contains words that can have different meanings depending upon the context in which they're used. The glossary provides the expected translation for each word in the file to help ensure accuracy.
+
+    For instance, when the word `Bank` appears in a financial document, it should be translated to reflect its financial meaning. If the word `Bank` appears in a geographical document, it may refer to a shore, reflecting its topographical meaning. Similarly, the word `Crane` can refer to either a bird or a machine.
+
+ ***Example glossary .tsv file: English-to-French***
+
+ ```tsv
+ Bank Banque
+ Card Carte
+ Crane Grue
+ Office Office
+ Tiger Tiger
+ US United States
+ ```
+
+1. **Upload your glossary to Azure storage**. To complete this step, you need an [Azure Blob Storage account](https://ms.portal.azure.com/#create/Microsoft.StorageAccount) with [containers](/azure/storage/blobs/storage-quickstart-blobs-portal#create-a-container) to store and organize your blob data within your storage account.
+
+1. **Specify your glossary in the translation request.** Include the **`glossary URL`**, **`format`**, and **`version`** in your **`POST`** request:
+
+ :::code language="json" source="../../../../../cognitive-services-rest-samples/curl/Translator/translate-with-glossary.json" range="1-23" highlight="13-15":::
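    In general, each target in the request pairs a language with a `glossaries` array. A minimal sketch of such a request body, using placeholder storage URLs and SAS tokens:

    ```json
    {
      "inputs": [
        {
          "source": {
            "sourceUrl": "https://<storage-account>.blob.core.windows.net/source?<sas-token>"
          },
          "targets": [
            {
              "targetUrl": "https://<storage-account>.blob.core.windows.net/target?<sas-token>",
              "language": "fr",
              "glossaries": [
                {
                  "glossaryUrl": "https://<storage-account>.blob.core.windows.net/glossaries/glossary.tsv?<sas-token>",
                  "format": "tsv",
                  "version": "1.0"
                }
              ]
            }
          ]
        }
      ]
    }
    ```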
+
+### Case sensitivity
+
+By default, the Azure Cognitive Services Translator API is **case-sensitive**, meaning that it matches terms in the source text based on case.
+
+* **Partial sentence application**. When your glossary is applied to **part of a sentence**, the Document Translation API checks whether the glossary term matches the case in the source text. If the casing doesn't match, the glossary isn't applied.
+
+* **Complete sentence application**. When your glossary is applied to a **complete sentence**, the service becomes **case-insensitive**. It matches the glossary term regardless of its case in the source text. This behavior provides the correct results for use cases involving idioms and quotes.
+
+## Next steps
+
+Try the Document Translation how-to guide to asynchronously translate whole documents using a programming language of your choice:
+
+> [!div class="nextstepaction"]
+> [Use Document Translation REST APIs](use-rest-api-programmatically.md)
cognitive-services Cognitive Services Virtual Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-virtual-networks.md
Virtual networks (VNETs) are supported in [regions where Cognitive Services are
> [!div class="checklist"] > * Anomaly Detector
+> * Azure OpenAI
> * Computer Vision > * Content Moderator > * Custom Vision
Virtual networks (VNETs) are supported in [regions where Cognitive Services are
> [!NOTE]
-> If you're using LUIS, Speech Services, or Language services, the **CognitiveServicesManagement** tag only enables you use the service using the SDK or REST API. To access and use LUIS portal , Speech Studio or Language Studio from a virtual network, you will need to use the following tags:
+> If you're using Azure OpenAI, LUIS, Speech Services, or Language services, the **CognitiveServicesManagement** tag only enables you to use the service via the SDK or REST API. To access and use Azure OpenAI Studio, the LUIS portal, Speech Studio, or Language Studio from a virtual network, you will need to use the following tags:
+ > * **AzureActiveDirectory** > * **AzureFrontDoor.Frontend** > * **AzureResourceManager** > * **CognitiveServicesManagement**-
+> * **CognitiveServicesFrontEnd**
## Change the default network access rule
When creating the private endpoint, you must specify the Cognitive Services reso
### Connecting to private endpoints
+> [!NOTE]
+> Azure OpenAI Service uses a different private DNS zone and public DNS zone forwarder than other Azure Cognitive Services. Refer to the [Azure services DNS zone configuration article](../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration) for the correct zone and forwarder names.
+ Clients on a VNet using the private endpoint should use the same connection string for the Cognitive Services resource as clients connecting to the public endpoint. The exception is the Speech Services, which require a separate endpoint. See the section on [Private endpoints with the Speech Services](#private-endpoints-with-the-speech-services). We rely upon DNS resolution to automatically route the connections from the VNet to the Cognitive Services resource over a private link. We create a [private DNS zone](../dns/private-dns-overview.md) attached to the VNet with the necessary updates for the private endpoints, by default. However, if you're using your own DNS server, you may need to make additional changes to your DNS configuration. The section on [DNS changes](#dns-changes-for-private-endpoints) below describes the updates required for private endpoints.
cognitive-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md
When using our embeddings models, keep in mind their limitations and risks.
### Embeddings Models | Model ID | Supports Completions | Supports Embeddings | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) | | | | | | | | |
-| text-ada-embeddings-002 | No | Yes | East US, South Central US, West Europe | N/A |2,046 | Sep 2021 |
+| text-embedding-ada-002 | No | Yes | East US, South Central US, West Europe | N/A | 2,046 | Sep 2021 |
| text-similarity-ada-001 | No | Yes | East US, South Central US, West Europe | N/A | 2,046 | Aug 2020 | | text-similarity-babbage-001 | No | Yes | South Central US, West Europe | N/A | 2,046 | Aug 2020 | | text-similarity-curie-001 | No | Yes | East US, South Central US, West Europe | N/A | 2046 | Aug 2020 |
cognitive-services Chatgpt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/chatgpt.md
keywords: ChatGPT
The ChatGPT model (`gpt-35-turbo`) is a language model designed for conversational interfaces and the model behaves differently than previous GPT-3 models. Previous models were text-in and text-out, meaning they accepted a prompt string and returned a completion to append to the prompt. However, the ChatGPT model is conversation-in and message-out. The model expects a prompt string formatted in a specific chat-like transcript format, and returns a completion that represents a model-written message in the chat. While the prompt format was designed specifically for multi-turn conversations, you'll find it can also work well for non-chat scenarios too.
-The ChatGPT model can be used with the same [completion API](/azure/cognitive-services/openai/reference#completions) that you use for other models like text-davinci-002, but it requires a unique prompt format known as Chat Markup Language (ChatML). It's important to use the new prompt format to get the best results. Without the right prompts, the model tends to be verbose and provides less useful responses.
+The ChatGPT model can be used with the same [completion API](../reference.md#completions) that you use for other models like text-davinci-002, but it requires a unique prompt format known as Chat Markup Language (ChatML). It's important to use the new prompt format to get the best results. Without the right prompts, the model tends to be verbose and provides less useful responses.
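For illustration, a minimal ChatML-formatted prompt looks like the following sketch (the `<|im_start|>`/`<|im_end|>` tokens delimit each message; the conversation content here is only an example):

```
<|im_start|>system
You are a helpful assistant.
<|im_end|>
<|im_start|>user
What's the capital of France?
<|im_end|>
<|im_start|>assistant
```

The trailing `<|im_start|>assistant` marker prompts the model to generate the next assistant message.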
## Working with the ChatGPT model
When are my taxes due?
#### Using data for grounding
-You can also include relevant data or information in the system message to give the model extra context for the conversation. If you only need to include a small amount of information, you can hard code it in the system message. If you have a large amount of data that the model should be aware of, you can use [embeddings](/azure/cognitive-services/openai/tutorials/embeddings?tabs=command-line) or a product like [Azure Cognitive Search](https://azure.microsoft.com/services/search/) to retrieve the most relevant information at query time.
+You can also include relevant data or information in the system message to give the model extra context for the conversation. If you only need to include a small amount of information, you can hard code it in the system message. If you have a large amount of data that the model should be aware of, you can use [embeddings](../tutorials/embeddings.md?tabs=command-line) or a product like [Azure Cognitive Search](https://azure.microsoft.com/services/search/) to retrieve the most relevant information at query time.
``` <|im_start|>system
cognitive-services Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/monitoring.md
Title: Monitoring Azure OpenAI Service
-description: Start here to learn how to monitor Azure OpenAI Service
--
+ Title: Monitoring Azure OpenAI Service
+description: Start here to learn how to monitor Azure OpenAI Service
++ Previously updated : 02/13/2023 Last updated : 03/15/2023 # Monitoring Azure OpenAI Service When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation.
-This article describes the monitoring data generated by Azure OpenAI Service. Azure OpenAI is part of Cognitive Services, which uses [Azure Monitor](/azure/azure-monitor/overview). If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource).
+This article describes the monitoring data generated by Azure OpenAI Service. Azure OpenAI is part of Cognitive Services, which uses [Azure Monitor](../../../azure-monitor/overview.md). If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../../../azure-monitor/essentials/monitor-azure-resource.md).
## Monitoring data
-Azure OpenAI collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](/azure/azure-monitor/essentials/monitor-azure-resource#monitoring-data-from-Azure-resources).
+Azure OpenAI collects the same kinds of monitoring data as other Azure resources; these are described in [Monitoring data from Azure resources](/azure/azure-monitor/essentials/monitor-azure-resource#monitoring-data-from-azure-resources).
## Collection and routing
Resource Logs aren't collected and stored until you create a diagnostic setting
See [Create diagnostic setting to collect platform logs and metrics in Azure](/azure/azure-monitor/platform/diagnostic-settings) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect.
-Keep in mind that using diagnostic settings and sending data to Azure Monitor Logs has additional costs associated with it. To understand more, consult the [Azure Monitor cost calculation guide](/azure/azure-monitor/logs/cost-logs).
+Keep in mind that using diagnostic settings and sending data to Azure Monitor Logs has additional costs associated with it. To understand more, consult the [Azure Monitor cost calculation guide](/azure/azure-monitor/logs/cost-logs).
The metrics and logs you can collect are discussed in the following sections. ## Analyzing metrics
-You can analyze metrics for *Azure OpenAI* by opening **Metrics** which can be found underneath the **Monitoring** section when viewing your Azure OpenAI resource in the Azure portal. See [Getting started with Azure Metrics Explorer](/azure/azure-monitor/essentials/metrics-getting-started) for details on using this tool.
+You can analyze metrics for *Azure OpenAI* by opening **Metrics** which can be found underneath the **Monitoring** section when viewing your Azure OpenAI resource in the Azure portal. See [Getting started with Azure Metrics Explorer](../../../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool.
-Azure OpenAI is a part of Cognitive Services. For a list of all platform metrics collected for Cognitive Services and Azure OpenAI, see [Cognitive Services supported metrics](/azure/azure-monitor/essentials/metrics-supported#microsoftcognitiveservicesaccounts).
+Azure OpenAI is a part of Cognitive Services. For a list of all platform metrics collected for Cognitive Services and Azure OpenAI, see [Cognitive Services supported metrics](../../../azure-monitor/essentials/metrics-supported.md#microsoftcognitiveservicesaccounts).
For the current subset of metrics available in Azure OpenAI:
For the current subset of metrics available in Azure OpenAI:
## Analyzing logs
-Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
+Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
-All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](/azure/azure-monitor/essentials/resource-logs-schema).
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../../../azure-monitor/essentials/resource-logs-schema.md).
-The [Activity log](/azure/azure-monitor/essentials/activity-log) is a type of platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
+The [Activity log](/azure/azure-monitor/essentials/activity-log) is a type of platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
-For a list of the types of resource logs available for Azure OpenAI and other Cognitive Services, see [Resource provider operations for Cognitive Services](/azure/role-based-access-control/resource-provider-operations#microsoftcognitiveservices)
+For a list of the types of resource logs available for Azure OpenAI and other Cognitive Services, see [Resource provider operations for Cognitive Services](/azure/role-based-access-control/resource-provider-operations#microsoftcognitiveservices)
### Kusto queries > [!IMPORTANT]
-> When you select **Logs** from the Azure OpenAI menu, Log Analytics is opened with the query scope set to the current Azure OpenAI resource. This means that log queries will only include data from that resource. If you want to run a query that includes data from other resources or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](/azure/azure-monitor/logs/scope) for details.
+> When you select **Logs** from the Azure OpenAI menu, Log Analytics is opened with the query scope set to the current Azure OpenAI resource. This means that log queries will only include data from that resource. If you want to run a query that includes data from other resources or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](../../../azure-monitor/logs/scope.md) for details.
To explore and get a sense of what type of information is available for your Azure OpenAI resource, a useful query to start with, once you have deployed a model and sent some completion calls through the playground, is as follows:
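A minimal sketch of that kind of exploratory query, run programmatically with the `Azure.Monitor.Query` library (the workspace ID is a placeholder, and it assumes your diagnostic settings route resource logs to the `AzureDiagnostics` table):
```csharp
using System;
using Azure;
using Azure.Identity;
using Azure.Monitor.Query;
using Azure.Monitor.Query.Models;

// Minimal sketch: pull recent Cognitive Services (Azure OpenAI) resource log
// entries from the Log Analytics workspace your diagnostic settings target.
var logsClient = new LogsQueryClient(new DefaultAzureCredential());

Response<LogsQueryResult> result = await logsClient.QueryWorkspaceAsync(
    workspaceId: "<log-analytics-workspace-id>",
    query: @"AzureDiagnostics
             | where ResourceProvider == 'MICROSOFT.COGNITIVESERVICES'
             | take 100",
    timeRange: new QueryTimeRange(TimeSpan.FromDays(1)));

foreach (LogsTableRow row in result.Value.Table.Rows)
{
    Console.WriteLine(row["OperationName"]);
}
```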
Every organization's alerting needs are going to vary, and will also evolve over
Errors below certain thresholds can often be evaluated through regular analysis of data in Azure Monitor Logs. As you analyze your log data over time, you may also find that a certain condition not occurring for a long enough period of time might be valuable to track with alerts. Sometimes the absence of an event in a log is just as important a signal as an error.
-Depending on what type of application you're developing in conjunction with your use of Azure OpenAI, [Azure Monitor Application Insights](/azure/azure-monitor/overview#application-insights) may offer additional monitoring benefits at the application layer.
+Depending on what type of application you're developing in conjunction with your use of Azure OpenAI, [Azure Monitor Application Insights](../../../azure-monitor/overview.md) may offer additional monitoring benefits at the application layer.
+ ## Next steps -- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.-- Read [Understand log searches in Azure Monitor logs](../../../azure-monitor/logs/log-query-overview.md).
+- See [Monitoring Azure resources with Azure Monitor](../../../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
+- Read [Understand log searches in Azure Monitor logs](../../../azure-monitor/logs/log-query-overview.md).
cognitive-services Ethics Responsible Use https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/ethics-responsible-use.md
- Title: Ethics and responsible use - Personalizer-
-description: These guidelines are aimed at helping you to implement personalization in a way that helps you build trust in your company and service. Be sure to pause to research, learn and deliberate on the impact of the personalization on people's lives. When in doubt, seek guidance.
--
-ms.
--- Previously updated : 06/12/2019--
-# Guidelines for responsible implementation of Personalizer
-
-For people and society to realize the full potential of AI, implementations need to be designed in such a way that they earn the trust of those adding AI to their applications and the users of applications built with AI. These guidelines are aimed at helping you to implement Personalizer in a way that helps you build trust in your company and service. Be sure to pause to research, learn and deliberate on the impact of the personalization on people's lives. When in doubt, seek guidance.
-
-These guidelines are not intended as legal advice and you should separately ensure that your application complies with the fast-paced developments in the law in this area and in your sector.
-
-Also, in designing your application using Personalizer, you should consider a broad set of responsibilities you have when developing any data-centric AI system, including ethics, privacy, security, safety, inclusion, transparency and accountability. You can read more about these in the [Recommended reading](#recommended-reading) section.
-
-You can use the following content as a starter checklist, and customize and refine it to your scenario. This document has two main sections: The first is dedicated to highlighting responsible use considerations when choosing scenarios, features and rewards for Personalizer. The second takes a set of values Microsoft believes should be considered when building AI systems, and provides actionable suggestions and risks on how your use of Personalizer influences them.
--
-## Your responsibility
-
-All guidelines for responsible implementation build on the foundation that developers and businesses using Personalizer are responsible and accountable for the effects of using these algorithms in society. If you are developing an application that your organization will deploy, you should recognize your role and responsibility for its operation and how it affects people. If you are designing an application to be deployed by a third party, come to a shared understanding with them of who is ultimately responsible for the behavior of the application, and document that understanding.
-
-Trust is built on the notion of fulfilled commitments - consider your users, society, and the legal framework your application works in, to identify explicit and implicit commitments they may have.
-
-Microsoft is continuously putting effort into its tools and documents to help you act on these responsibilities. [Provide feedback to Microsoft](mailto:cogsvcs-RL-feedback@microsoft.com?subject%3DPersonalizer%20Responsible%20Use%20Feedback&body%3D%5BPlease%20share%20any%20question%2C%20idea%20or%20concern%5D) if you believe additional tools, product features and documents would help you implement these guidelines for using Personalizer.
--
-## Factors for responsibly implementing Personalizer
-
-Implementing Personalizer can be of great value to your users and your business. To implement Personalizer responsibly, start by considering the following guidelines when:
-
-* Choosing use cases to apply Personalization.
-* Building [reward functions](concept-rewards.md).
-* Choosing which [features](concepts-features.md) about the context and possible actions you will use for personalization.
--
-## Choosing use cases for Personalizer
-
-Using a service that learns to personalize content and user interfaces is useful. It can also be misapplied if the personalization creates negative side effects in the real world, including if users are unaware of content personalization.
-
-Examples of uses of Personalizer with heightened potential for negative side effects or a lack of transparency include scenarios where the "reward" depends on many long-term complex factors that, when over-simplified into an immediate reward, can have unfavorable results for individuals. These tend to be considered "consequential" choices, or choices that involve a risk of harm. For example:
--
-* **Finance**: Personalizing offers on loan, financial, and insurance products, where risk factors are based on data the individuals don't know about, can't obtain, or can't dispute.
-* **Education**: Personalizing ranks for school courses and education institutions where recommendations may propagate biases and reduce users' awareness of other options.
-* **Democracy and Civic Participation**: Personalizing content for users with the goal of influencing opinions is consequential and manipulative.
-* **Third-party reward evaluation**: Personalizing items where the reward is based on a later evaluation of the user by a third party, instead of a reward generated by the user's own behavior.
-* **Intolerance to Exploration**: Any situation where the exploration behavior of Personalizer may cause harm.
-
-When choosing use cases for Personalizer:
-
-* Start the design process considering how the personalization helps your users.
-* Consider the negative consequences in the real world if some items aren't ranked for users due to personalization patterns or exploration.
-* Consider whether your use case constitutes automated processing which significantly affects data subjects that is regulated under [GDPR](https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32016R0679) Article 22 or other laws.
-* Consider self-fulfilling prophecy loops. This may happen if a personalization reward trains a model so it may subsequently further exclude a demographic group from accessing relevant content. For example, most people in a low-income neighborhood don't obtain a premium insurance offer, and slowly nobody in the neighborhood tends to see the offer at all if there isn't enough exploration.
-* Save copies of models and learning policies in case it is necessary to reproduce Personalizer in the future. You can do this periodically or every model refresh period.
-* Consider the level of exploration adequate for the space and how to use it as a tool to mitigate "echo chamber" effects.
--
-## Selecting features for Personalizer
-
-Personalizing content depends on having useful information about the content and the user. Keep in mind, for some applications and industries, some user features can be directly or indirectly considered discriminatory and potentially illegal.
-
-Consider the effect of these features:
-
-* **User demographics**: Features regarding sex, gender, age, race, religion: These features may be not allowed in certain applications for regulatory reasons, and it may not be ethical to personalize around them because the personalization would propagate generalizations and bias. An example of this bias propagation is a job posting for engineering not being shown to elderly or gender-based audiences.
-* **Locale information**: In many places of the world, location information (such as a zip code, postal code, or neighborhood name) can be highly correlated with income, race and religion.
-* **User Perception of Fairness**: Even in cases where your application is making sound decisions, consider the effect of users perceiving that content displayed in your application changes in a way that appears to be correlated to features that would be discriminatory.
-* **Unintended Bias in Features**: There are types of biases that may be introduced by using features that only affect a subset of the population. This requires extra attention if features are being generated algorithmically, such as when using image analysis to extract items in a picture or text analysis to discover entities in text. Make yourself aware of the characteristics of the services you use to create these features.
-
-Apply the following practices when choosing features to send in contexts and actions to Personalizer:
-
-* Consider the legality and ethics of using certain features for some applications, and whether innocent-looking features may be proxies for others you want to or should avoid.
-* Be transparent to users that algorithms and data analysis are being used to personalize the options they see.
-* Ask yourself: Would my users care and be happy if I used this information to personalize the content for them? Would I feel comfortable showing them how the decision was made to highlight or hide certain items?
-* Use behavioral rather than classification or segmentation data based on other characteristics. Demographic information was traditionally used by retailers for historical reasons (demographic attributes seemed simple to collect and act upon before the digital era), but question how relevant demographic information is when you have actual interaction, contextual, and historical data that relates more closely to the preferences and identity of users.
-* Consider how to prevent features from being 'spoofed' by malicious users, which if exploited in large numbers can lead to training Personalizer in misleading ways to purposefully disrupt, embarrass and harass certain classes of users.
-* When appropriate and feasible, design your application to allow your users to opt in or opt out of having certain personal features used. These could be grouped, such as "Location information", "Device Information", "Past Purchase History" etc.
--
-## Computing rewards for Personalizer
-
-Personalizer strives to improve the choice of which action to reward based on the reward score provided by your application business logic.
-
-A well-built reward score will act as a short-term proxy to a business goal, that is tied to an organization's mission.
-
-For example, rewarding on clicks will make the Personalizer Service seek clicks at the expense of everything else, even if what is clicked on is distracting or not tied to a business outcome.
-
-As a contrasting example, a news site may want to set rewards tied to something more meaningful than clicks, such as "Did the user spend enough time to read the content?" "Did they click on relevant articles or references?". With Personalizer it is easy to tie metrics closely to rewards. But be careful not to confound short-term user engagement with good outcomes.
-
-### Unintended consequences from reward scores
-Reward scores may be built with the best of intentions, but can still create unexpected consequences or unintended results on how Personalizer ranks content.
-
-Consider the following examples:
-
-* Rewarding video content personalization on the percentage of the video length watched will probably tend to rank shorter videos.
-* Rewarding social media shares, without sentiment analysis of how it's shared or the content itself, may lead to ranking offensive, unmoderated, or inflammatory content, which tends to incite a lot of "engagement", but adds little value.
-* Rewarding the action on user interface elements that users don't expect to change may interfere with the usability and predictability of the user interface, where buttons are surprisingly changing location or purpose without warning, making it harder for certain groups of users to stay productive.
-
-Implement these best practices:
-
-* Run offline experiments with your system using different reward approaches to understand impact and side-effects.
-* Evaluate your reward functions and ask yourself how an extremely naïve person could bend their interpretation and reach undesirable outcomes.
--
-## Responsible design considerations
-
-The following are areas of design for responsible implementations of AI. Learn more about this framework in [The Future Computed](https://news.microsoft.com/futurecomputed/).
-
-![AI Values from Future Computed](media/ethics-and-responsible-use/ai-values-future-computed.png)
-
-### Accountability
-*People who design and deploy AI Systems must be accountable for how their systems operate*.
-
-* Create internal guidelines on how to implement Personalizer, document, and communicate them to your team, executives, and suppliers.
-* Perform periodic reviews of how reward scores are computed, perform offline evaluations to see what features are affecting Personalizer, and use the results to eliminate unnecessary features.
-* Communicate clearly to your users how Personalizer is used, to what purpose, and with what data.
-* Archive information and assets - such as models, learning policies, and other data - that Personalizer uses to function, to be able to reproduce results.
-
-### Transparency
-*AI Systems Should be Understandable*. With Personalizer:
-
-* *Give users information about how the content was personalized.* For example, you can show your users a button labeled `Why These Suggestions?` showing which top features of the user and actions played a role in the results of Personalizer.
-* Make sure your terms of use mention that you will use information about users and their behavior to personalize the experience.
-
-### Fairness
-*AI Systems should treat all people fairly*.
-
-* Don't use Personalizer for use cases where the outcomes are long-term, consequential, or involve real harm.
-* Don't use features that are not appropriate to personalize content with, or that may help propagate undesired biases. For example, anyone with similar financial circumstances should see the same personalized recommendations for financial products.
-* Understand biases that may exist in features that are sourced from editors, algorithmic tools, or users themselves.
-
-### Reliability and safety
-*AI Systems should perform reliably and safely*. For Personalizer:
-
-* *Don't provide actions to Personalizer that shouldn't be chosen*. For example, inappropriate movies should be filtered out of the actions to personalize if making a recommendation for an anonymous or under-age user.
-* *Manage your Personalizer model as a business asset*. Consider how often to save and back up the model and learning policies behind your Personalizer Loop, and otherwise treat it as an important business asset. Reproducing past results is important for self-audit and measuring improvement.
-* *Provide channels to get direct feedback from users*. In addition to coding safety checks to make sure only the right audiences see the right content, provide a feedback mechanism for users to report content that may be surprising or disturbing. Especially if your content comes from users or 3rd parties, consider using Microsoft Content Moderator or additional tools to review and validate content.
-* *Perform frequent offline Evaluations*. This will help you monitor trends and make sure effectiveness is known.
-* *Establish a process to detect and act on malicious manipulation*. There are actors that will take advantage of machine learning and AI systems' ability to learn from their environment to shift the outcome towards their goals. If your use of Personalizer is in a position to influence important choices, make sure to have appropriate means to detect and mitigate these classes of attacks, including human review in appropriate circumstances.
-
-### Security and privacy
-*AI Systems should be secure and respect privacy*. When using Personalizer:
-
-* *Inform users up front about the data that is collected and how it is used and obtain their consent beforehand*, following your local and industry regulations.
-* *Provide privacy-protecting user controls.* For applications that store personal information, consider providing an easy-to-find button for functions such as:
- * `Show me all you know about me`
- * `Forget my last interaction`
- * `Delete all you know about me`
-
-In some cases, these may be legally required. Consider the tradeoffs in retraining models periodically so they don't contain traces of deleted data.
-
-### Inclusiveness
-*Address a broad range of human needs and experiences*.
-* *Provide personalized experiences for accessibility-enabled interfaces.* The efficiency that comes from good personalization - applied to reduce the amount of effort, movement, and needless repetition in interactions - can be especially beneficial to people with disabilities.
-* *Adjust application behavior to context*. You can use Personalizer to disambiguate between intents in a chat bot, for example, as the right interpretation may be contextual and one size may not fit all.
--
-## Proactive readiness for increased data protection and governance
-
-It is hard to predict specific changes in regulatory contexts, but in general it would be wise to go beyond the minimum legal framework in ensuring respectful use of personal data, and providing transparency and choice related to algorithmic decision making.
--
-* Consider planning ahead to a situation where there may be new restrictions on data collected from individuals, and there is a need to show how it was used to make decisions.
-* Consider extra readiness where users may include marginalized vulnerable populations, children, users in economic vulnerability, or users otherwise susceptible to influence from algorithmic manipulation.
-* Consider the widespread dissatisfaction with how audience-targeting and audience-influencing data collection programs and algorithms have played out, and how to avoid proven strategic errors.
--
-## Proactive assessments during your project lifecycle
-
-Consider creating methods for team members, users and business owners to report concerns regarding responsible use, and creating a process that prioritizes their resolution and prevents retaliation.
-
-Any person thinking about side effects of use of any technology is limited by their perspective and life experience. Expand the range of opinions available by bringing in more diverse voices into your teams, users, or advisory boards; such that it is possible and encouraged for them to speak up. Consider training and learning materials to further expand the team knowledge in this domain, and to add capability to discuss complex and sensitive topics.
-
-Consider treating tasks regarding responsible use just like other crosscutting tasks in the application lifecycle, such as tasks related to user experience, security, or DevOps. These tasks and their requirements can't be an afterthought. Responsible use should be discussed and verified throughout the application lifecycle.
-
-## Questions and feedback
-
-Microsoft is continuously putting effort into tools and documents to help you act on these responsibilities. Our team invites you to [provide feedback to Microsoft](mailto:cogsvcs-RL-feedback@microsoft.com?subject%3DPersonalizer%20Responsible%20Use%20Feedback&body%3D%5BPlease%20share%20any%20question%2C%20idea%20or%20concern%5D) if you believe additional tools, product features, and documents would help you implement these guidelines for using Personalizer.
-
-## Recommended reading
-
-* See Microsoft's six principles for the responsible development of AI published in the January 2018 book, [The Future Computed](https://news.microsoft.com/futurecomputed/)
-* [Who Owns the Future?](https://www.goodreads.com/book/show/15802693-who-owns-the-future) by Jaron Lanier.
-* [Weapons of Math Destruction](https://www.goodreads.com/book/show/28186015-weapons-of-math-destruction) by Cathy O'Neil.
-* [Ethics and Data Science](https://www.oreilly.com/library/view/ethics-and-data/9781492043898/) by DJ Patil, Hilary Mason, Mike Loukides.
-* [ACM Code of Ethics](https://www.acm.org/code-of-ethics)
-* [Genetic Information Nondiscrimination Act - GINA](https://en.wikipedia.org/wiki/Genetic_Information_Nondiscrimination_Act)
-* [FATML Principles for Accountable Algorithms](https://www.fatml.org/resources/principles-for-accountable-algorithms)
--
-## Next steps
-
-[Features: action and context](concepts-features.md).
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/chat/concepts.md
Typically the thread creator and participants have the same level of access to the t
### Chat Data Azure Communication Services stores chat messages for 90 days. Chat thread participants can use `ListMessages` to view message history for a particular thread. However, the API does not return messages once the 90-day period has passed. Users that are removed from a chat thread are able to view previous message history for 90 days but cannot send or receive new messages. To learn more about the data being stored in the Azure Communication Services chat service, refer to the [data residency and privacy page](../privacy.md).
-For customers that use Virtual appointments, refer to our Teams Interoperability [user privacy](/azure/communication-services/concepts/interop/guest/privacy#chat-storage) for storage of chat messages in Teams meetings.
+For customers that use Virtual appointments, refer to our Teams Interoperability [user privacy](../interop/guest/privacy.md#chat-storage) for storage of chat messages in Teams meetings.
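As a minimal sketch, listing a thread's history with the C# Chat SDK might look like the following; the endpoint, user access token, and thread ID are placeholders:
```csharp
using System;
using Azure.Communication;
using Azure.Communication.Chat;

// Minimal sketch: page through the message history of a chat thread.
// Messages older than the 90-day retention window are not returned.
var chatClient = new ChatClient(
    new Uri("https://<acs-resource>.communication.azure.com"),
    new CommunicationTokenCredential("<user-access-token>"));

ChatThreadClient threadClient = chatClient.GetChatThreadClient("<thread-id>");

await foreach (ChatMessage message in threadClient.GetMessagesAsync())
{
    Console.WriteLine($"{message.Id} [{message.Type}]: {message.Content?.Message}");
}
```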
### Service limits - The maximum number of participants allowed in a chat thread is 250.
This way, the message history contains both original and translated messages. In
> [Get started with chat](../../quickstarts/chat/get-started.md) The following documents may be interesting to you:-- Familiarize yourself with the [Chat SDK](sdk-features.md)
+- Familiarize yourself with the [Chat SDK](sdk-features.md)
communication-services Exception Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/router/exception-policy.md
In the following example, we configure an Exception Policy that will cancel a jo
::: zone pivot="programming-language-csharp" ```csharp
-await client.SetExceptionPolicyAsync(
- id: "policy-1",
- name: "My Exception Policy",
- rules: new List<ExceptionRule>
- {
- new ExceptionRule(
- id: "rule-1",
- trigger: new QueueLengthExceptionTrigger(threshold: 100),
- actions: new List<ExceptionAction>
+await routerClient.CreateExceptionPolicyAsync(
+ new CreateExceptionPolicyOptions(
+ exceptionPolicyId: "policy-1",
+ exceptionRules: new List<ExceptionRule>
{
- new CancelExceptionAction("cancel-action")
+ new ExceptionRule(
+ id: "rule-1",
+ trigger: new QueueLengthExceptionTrigger(threshold: 100),
+ actions: new List<ExceptionAction>
+ {
+ new CancelExceptionAction("cancel-action")
+ })
})
- });
+ {
+ Name = "My exception policy"
+ }
+);
``` ::: zone-end
In the following example, we configure an Exception Policy with rules that will:
::: zone pivot="programming-language-csharp" ```csharp
-await client.SetExceptionPolicyAsync(
- id: "policy-1",
- name: "My Exception Policy",
- rules: new List<ExceptionRule>
- {
- new ExceptionRule(
- id: "rule-1",
- trigger: new WaitTimeExceptionTrigger(TimeSpan.FromMinutes(1)),
- actions: new List<ExceptionAction>
- {
- new ManualReclassifyExceptionAction(id: "action1", priority: 10)
- }),
- new ExceptionRule(
- id: "rule-2",
- trigger: new WaitTimeExceptionTrigger(TimeSpan.FromMinutes(5)),
- actions: new List<ExceptionAction>
+await routerClient.CreateExceptionPolicyAsync(
+ new CreateExceptionPolicyOptions(
+ exceptionPolicyId: "policy-1",
+ exceptionRules: new List<ExceptionRule>
{
- new ManualReclassifyExceptionAction(id: "action2", queueId: "queue-2")
+ new ExceptionRule(
+ id: "rule-1",
+ trigger: new WaitTimeExceptionTrigger(TimeSpan.FromMinutes(1)),
+ actions: new List<ExceptionAction>
+ {
+ new ManualReclassifyExceptionAction(id: "action1", priority: 10)
+ }),
+ new ExceptionRule(
+ id: "rule-2",
+ trigger: new WaitTimeExceptionTrigger(TimeSpan.FromMinutes(5)),
+ actions: new List<ExceptionAction>
+ {
+ new ManualReclassifyExceptionAction(id: "action2", queueId: "queue-2")
+ })
})
- });
+ {
+ Name = "My Exception Policy"
+ }
+);
``` ::: zone-end
communication-services Matching Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/router/matching-concepts.md
In the following example we register a worker to
::: zone pivot="programming-language-csharp" ```csharp
-var worker = await client.RegisterWorkerAsync(
- id: "worker-1",
- queueIds: new[] { "queue-1", "queue-2" },
- totalCapacity: 2,
- channelConfigurations: new List<ChannelConfiguration>
- {
- new ChannelConfiguration(channelId: "voice", capacityCostPerJob: 2),
- new ChannelConfiguration(channelId: "chat", capacityCostPerJob: 1)
- },
- labels: new LabelCollection()
+var worker = await client.CreateWorkerAsync(
+ new CreateWorkerOptions(
+ workerId: "worker-1",
+ totalCapacity: 2)
{
- ["Skill"] = 11,
- ["English"] = true,
- ["French"] = false,
- ["Vendor"] = "Acme"
+ QueueIds = new Dictionary<string, QueueAssignment>()
+ {
+ ["queue-1"] = new QueueAssignment(),
+ ["queue-2"] = new QueueAssignment()
+ },
+ ChannelConfigurations = new Dictionary<string, ChannelConfiguration>()
+ {
+ ["voice"] = new ChannelConfiguration(2),
+ ["chat"] = new ChannelConfiguration(1)
+ },
+ Labels = new Dictionary<string, LabelValue>()
+ {
+ ["Skill"] = new LabelValue(11),
+ ["English"] = new LabelValue(true),
+ ["French"] = new LabelValue(false),
+ ["Vendor"] = new LabelValue("Acme")
+ },
} ); ```
communication-services Router Rule Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/router/router-rule-concepts.md
In this example a `StaticRule`, which is a subtype of `RouterRule` can be used t
::: zone pivot="programming-language-csharp" ```csharp
-await client.SetClassificationPolicyAsync(
- id: "my-policy-id",
- prioritizationRule: new StaticRule(5)
-);
+await routerAdministration.CreateClassificationPolicyAsync(
+ new CreateClassificationPolicyOptions(classificationPolicyId: "my-policy-id")
+ {
+ PrioritizationRule = new StaticRule(new LabelValue(5))
+ });
``` ::: zone-end
In this example a `ExpressionRule`, which is a subtype of `RouterRule` can be us
::: zone pivot="programming-language-csharp" ```csharp
-await client.SetClassificationPolicyAsync(
- id: "my-policy-id",
- prioritizationRule: new ExpressionRule("If(job.Urgent = true, 10, 5)")
-);
+await routerAdministration.CreateClassificationPolicyAsync(
+ new CreateClassificationPolicyOptions(classificationPolicyId: "my-policy-id")
+ {
+ PrioritizationRule = new ExpressionRule("If(job.Escalated = true, 10, 5)") // this will check whether the job has a label "Escalated" set to "true"
+ });
``` ::: zone-end
await client.upsertClassificationPolicy({
id: "my-policy-id", prioritizationRule: { kind: "expression-rule",
- expression: "If(job.Urgent = true, 10, 5)"
+ expression: "If(job.Escalated = true, 10, 5)"
} }); ```
communication-services Emergency Calling Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/emergency-calling-concept.md
Emergency calling is automatically enabled for all users of the Azure Communicat
- If you don't provide the country/region code to the SDK, Microsoft uses the IP address to determine the country or region of the caller.
- If the IP address can't provide reliable geolocation (for example, the caller is on a virtual private network), you must set the ISO code of the calling country or region by using the API in the Azure Communication Services Calling SDK. See the example in the [quickstart for adding emergency calling](/azure/communication-services/quickstarts/telephony/emergency-calling).
+ If the IP address can't provide reliable geolocation (for example, the caller is on a virtual private network), you must set the ISO code of the calling country or region by using the API in the Azure Communication Services Calling SDK. See the example in the [quickstart for adding emergency calling](../../quickstarts/telephony/emergency-calling.md).
- If the caller is dialing from a US territory (for example, Guam, US Virgin Islands, Northern Mariana Islands, or American Samoa), you must set the ISO code to US.
communication-services Accept Decline Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/accept-decline-offer.md
This guide lays out the steps you need to take to observe a Job Router offer. It also outlines how to accept or decline job offers. - ## Prerequisites - An Azure account with an active subscription. [Create an Azure account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
communication-services Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/azure-function.md
# Azure function rule engine - As part of the customer extensibility model, Azure Communication Services Job Router supports an Azure Function based rule engine. It gives you the ability to bring your own Azure function. With Azure Functions, you can incorporate custom and complex logic into the process of routing. ## Creating an Azure function
public static class GetPriority
Inspect your deployed function in the Azure portal and locate the function Uri and authentication key. Then use the SDK to configure a policy that uses a rule engine to point to that function. ```csharp
-await client.SetClassificationPolicyAsync(
- "policy-1",
- prioritizationRule: new AzureFunctionRule("<insert function uri>", new AzureFunctionRuleCredential("<insert function key>")));
+await client.CreateClassificationPolicyAsync(
+ options: new CreateClassificationPolicyOptions("policy-1")
+ {
+ PrioritizationRule = new FunctionRule("<insert function uri>", new FunctionRuleCredential("<insert function key>"))
+ }
+);
``` When a new job is submitted or updated, this function will be called to determine the priority of the job.
communication-services Customize Worker Scoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/customize-worker-scoring.md
# How to customize how workers are ranked for the best worker distribution mode - The `best-worker` distribution mode selects the workers that are best able to handle the job first. The logic to rank Workers can be customized, with an expression or Azure function to compare two workers. The following example shows how to customize this logic with your own Azure Function. ## Scenario: Custom scoring rule in best worker distribution mode
communication-services Escalate Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/escalate-job.md
This guide shows you how to escalate a Job in a Queue by using an Exception Policy. - ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
Create an exception policy, which you will attach to the regular queue, which is
```csharp // create the exception policy
-await client.SetExceptionPolicyAsync(
- id: "Escalate_XBOX_Policy",
- name: "Add escalated label and reclassify XBOX Job requests after 5 minutes",
- rules: new List<ExceptionRule>()
- {
- new (
- id: "Escalated_Rule",
- trigger: new WaitTimeExceptionTrigger(TimeSpan.FromMinutes(5)),
- actions: new List<ExceptionAction>
+await routerClient.CreateExceptionPolicyAsync(
+ new CreateExceptionPolicyOptions(
+ exceptionPolicyId: "Escalate_XBOX_Policy",
+ exceptionRules: new Dictionary<string, ExceptionRule>()
{
- new ReclassifyExceptionAction("EscalateReclassifyExceptionAction")
{
- LabelsToUpsert = new LabelCollection(
- new Dictionary<string, object>
- {
- ["Escalated"] = true,
- })
+ ["Escalated_Rule"] = new ExceptionRule(new WaitTimeExceptionTrigger(300),
+ new Dictionary<string, ExceptionAction>()
+ {
+ "EscalateReclassifyExceptionAction" = new ReclassifyExceptionAction(
+ classificationPolicyId: "<classification policy id>",
+ labelsToUpsert: new Dictionary<string, LabelValue>()
+ {
+ ["Escalated"] = new LabelValue(true),
+ })
+ })
- } }
- )
- });
+ })
+ {
+ Name = "My exception policy"
+ }
+);
``` ## Classification policy configuration
await client.SetExceptionPolicyAsync(
Create a Classification Policy to handle the new label added to the Job. This policy will evaluate the `Escalated` label and assign the Job to either Queue. The policy will also use the [RulesEngine](../../concepts/router/router-rule-concepts.md) to increase the priority of the Job from `1` to `10`. ```csharp
-await client.SetClassificationPolicyAsync(
- id: "Classify_XBOX_Voice_Jobs",
- name: "Classify XBOX Voice Jobs",
- queueSelector: new QueueIdSelector(
- new ExpressionRule(
- "If(job.Escalated = true, \"XBOX_Queue\", \"XBOX_Escalation_Queue\")")),
- workerSelectors: null,
- prioritizationRule: new ExpressionRule("If(job.Escalated = true, 10, 1)"),
- fallbackQueueId: "Default");
+await routerAdministrationClient.CreateClassificationPolicyAsync(
+ options: new CreateClassificationPolicyOptions("Classify_XBOX_Voice_Jobs")
+ {
+ Name = "Classify XBOX Voice Jobs",
+ PrioritizationRule = new ExpressionRule("If(job.Escalated = true, 10, 1)"),
+ QueueSelectors = new List<QueueSelectorAttachment>()
+ {
+ new ConditionalQueueSelectorAttachment(
+ condition: new ExpressionRule("If(job.Escalated = true, true, false)"),
+ labelSelectors: new List<QueueSelector>()
+ {
+ new QueueSelector("Id", LabelOperator.Equal, new LabelValue("XBOX_Escalation_Queue"))
+ }),
+ new ConditionalQueueSelectorAttachment(
+ condition: new ExpressionRule("If(job.Escalated = false, true, false)"),
+ labelSelectors: new List<QueueSelector>()
+ {
+ new QueueSelector("Id", LabelOperator.Equal, new LabelValue("XBOX_Queue"))
+ })
+ },
+ FallbackQueueId = "Default"
+ });
``` ## Queue configuration
Create the necessary Queues for regular and escalated Jobs and assign the Except
```csharp // create a queue for regular Jobs and attach the exception policy
-await client.SetQueueAsync(
- id: "XBOX_Queue",
- name: "XBOX Queue",
- distributionPolicyId: "Round_Robin_Policy",
- exceptionPolicyId: "XBOX_Escalate_Policy"
-);
+await routerAdministrationClient.CreateQueueAsync(
+ options: new CreateQueueOptions("XBOX_Queue", "Round_Robin_Policy")
+ {
+ Name = "XBOX Queue",
+ ExceptionPolicyId = "XBOX_Escalate_Policy"
+ });
// create a queue for escalated Jobs
-await client.SetQueueAsync(
- id: "XBOX_Escalation_Queue",
- name: "XBOX Escalation Queue",
- distributionPolicyId: "Round_Robin_Policy"
-);
+await routerAdministrationClient.CreateQueueAsync(
+ options: new CreateQueueOptions("XBOX_Escalation_Queue", "Round_Robin_Policy")
+ {
+ Name = "XBOX Escalation Queue",
+ });
``` ## Job lifecycle
await client.SetQueueAsync(
When you submit the Job, specify the Classification Policy ID as follows. For this particular example, the requirement would be to find a worker with a label called `XBOX_Hardware`, which has a value greater than or equal to the number `7`. ```csharp
-await client.CreateJobWithClassificationPolicyAsync(
- channelId: ManagedChannels.AcsVoiceChannel,
- classificationPolicyId: "Classify_XBOX_Voice_Jobs",
- workerSelectors: new List<LabelSelector>
+await routerClient.CreateJobAsync(
+ options: new CreateJobWithClassificationPolicyOptions(
+ jobId: "<jobId>",
+ channelId: ManagedChannels.AcsVoiceChannel,
+ classificationPolicyId: "Classify_XBOX_Voice_Jobs")
{
- new (
- key: "XBOX_Hardware",
- @operator: LabelOperator.GreaterThanEqual,
- value: 7)
- }
+ RequestedWorkerSelectors = new List<WorkerSelector>
+ {
+ new WorkerSelector(key: "XBOX_Hardware", labelOperator: LabelOperator.GreaterThanEqual, value: new LabelValue(7))
+ }
+ }
); ```
communication-services Job Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/job-classification.md
zone_pivot_groups: acs-js-csharp
Learn to use a classification policy in Job Router to dynamically resolve the queue and priority while also attaching worker selectors to a Job. - ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
The following example will leverage [PowerFx Expressions](https://powerapps.micr
::: zone pivot="programming-language-csharp" ```csharp
-var policy = await client.SetClassificationPolicyAsync(
- id: "XBOX_NA_QUEUE_Priority_1_10",
- name: "Select XBOX Queue and set priority to 1 or 10",
- queueSelector: new QueueIdSelector(
- new ExpressionRule("If(job.Region = \"NA\", \"XBOX_NA_QUEUE\", \"XBOX_DEFAULT_QUEUE\")")
- ),
- prioritizationRule: new ExpressionRule("If(job.Hardware_VIP = true, 10, 1)"),
- fallbackQueueId: "DEFAULT_QUEUE"
-);
+await routerAdministrationClient.CreateClassificationPolicyAsync(
+ options: new CreateClassificationPolicyOptions("XBOX_NA_QUEUE_Priority_1_10")
+ {
+ Name = "Select XBOX Queue and set priority to 1 or 10",
+ PrioritizationRule = new ExpressionRule("If(job.Hardware_VIP = true, 10, 1)"),
+ QueueSelectors = new List<QueueSelectorAttachment>()
+ {
+ new ConditionalQueueSelectorAttachment(
+ condition: new ExpressionRule("If(job.Region = \"NA\", true, false)"),
+ labelSelectors: new List<QueueSelector>()
+ {
+ new QueueSelector("Id", LabelOperator.Equal, new LabelValue("XBOX_NA_QUEUE"))
+ }),
+ new ConditionalQueueSelectorAttachment(
+ condition: new ExpressionRule("If(job.Region != \"NA\", true, false)"),
+ labelSelectors: new List<QueueSelector>()
+ {
+ new QueueSelector("Id", LabelOperator.Equal, new LabelValue("XBOX_DEFAULT_QUEUE"))
+ })
+ }
+ });
``` ::: zone-end
The following example will cause the classification policy to evaluate the Job l
::: zone pivot="programming-language-csharp" ```csharp
-var job = await client.CreateJobAsync(
- channelId: "voice",
- classificationPolicyId: "XBOX_NA_QUEUE_Priority_1_10",
- labels: new LabelCollection()
+var job = await routerClient.CreateJobAsync(
+ options: new CreateJobWithClassificationPolicyOptions(
+ jobId: "<job id>",
+ channelId: "voice",
+ classificationPolicyId: "XBOX_NA_QUEUE_Priority_1_10")
{
- ["Region"] = "NA",
- ["Caller_Id"] = "tel:7805551212",
- ["Caller_NPA_NXX"] = "780555",
- ["XBOX_Hardware"] = 7
- }
-);
-
-// returns a new GUID such as: 4ad7f4b9-a0ff-458d-b3ec-9f84be26012b
+ Labels = new Dictionary<string, LabelValue>()
+ {
+ {"Region", new LabelValue("NA")},
+ {"Caller_Id", new LabelValue("tel:7805551212")},
+ {"Caller_NPA_NXX", new LabelValue("780555")},
+ {"XBOX_Hardware", new LabelValue(7)}
+ }
+ });
``` ::: zone-end
In this example, the Classification Policy is configured with a static attachmen
::: zone pivot="programming-language-csharp" ```csharp
-await client.SetClassificationPolicyAsync(
- id: "policy-1",
- workerSelectors: new List<LabelSelectorAttachment>
+await routerAdministrationClient.CreateClassificationPolicyAsync(
+ options: new CreateClassificationPolicyOptions("policy-1")
{
- new StaticLabelSelector(
- new LabelSelector("Foo", LabelOperator.Equal, "Bar")
- )
- }
-);
+ WorkerSelectors = new List<WorkerSelectorAttachment>()
+ {
+ new StaticWorkerSelectorAttachment(new WorkerSelector("Foo", LabelOperator.Equal, new LabelValue("Bar")))
+ }
+ });
``` ::: zone-end
In this example, the Classification Policy is configured with a conditional atta
::: zone pivot="programming-language-csharp" ```csharp
-await client.SetClassificationPolicyAsync(
- id: "policy-1",
- workerSelectors: new List<LabelSelectorAttachment>
+await routerAdministrationClient.CreateClassificationPolicyAsync(
+ options: new CreateClassificationPolicyOptions("policy-1")
{
- new ConditionalLabelSelector(
- condition: new ExpressionRule("job.Urgent = true"),
- labelSelectors: new List<LabelSelector>
- {
- new LabelSelector("Foo", LabelOperator.Equal, "Bar")
- })
- }
-);
+ WorkerSelectors = new List<WorkerSelectorAttachment>()
+ {
+ new ConditionalWorkerSelectorAttachment(
+ condition: new ExpressionRule("job.Urgent = true"),
+ labelSelectors: new List<WorkerSelector>()
+ {
+ new WorkerSelector("Foo", LabelOperator.Equal, "Bar")
+ })
+ }
+ });
``` ::: zone-end
In this example, the Classification Policy is configured to attach a worker sele
::: zone pivot="programming-language-csharp" ```csharp
-await client.SetClassificationPolicyAsync(
- id: "policy-1",
- workerSelectors: new List<LabelSelectorAttachment>
+await routerAdministrationClient.CreateClassificationPolicyAsync(
+ options: new CreateClassificationPolicyOptions("policy-1")
{
- new PassThroughLabelSelector(key: "Foo", @operator: LabelOperator.Equal)
- }
-);
+ WorkerSelectors = new List<WorkerSelectorAttachment>()
+ {
+ new PassThroughWorkerSelectorAttachment("Foo", LabelOperator.Equal)
+ }
+ });
``` ::: zone-end
In this example, the Classification Policy is configured with a weighted allocat
::: zone pivot="programming-language-csharp" ```csharp
-await client.SetClassificationPolicyAsync(
- id: "policy-1",
- workerSelectors: new List<LabelSelectorAttachment>
+await routerAdministrationClient.CreateClassificationPolicyAsync(
+ options: new CreateClassificationPolicyOptions("policy-1")
{
- new WeightedAllocationLabelSelector(new WeightedAllocation[]
+ WorkerSelectors = new List<WorkerSelectorAttachment>()
{
- new WeightedAllocation(
- weight: 0.3,
- labelSelectors: new List<LabelSelector>
+ new WeightedAllocationWorkerSelectorAttachment(
+ new List<WorkerWeightedAllocation>()
{
- new LabelSelector("Vendor", LabelOperator.Equal, "A")
- }),
- new WeightedAllocation(
- weight: 0.7,
- labelSelectors: new List<LabelSelector>
- {
- new LabelSelector("Vendor", LabelOperator.Equal, "B")
+ new WorkerWeightedAllocation(0.3,
+ new List<WorkerSelector>()
+ {
+ new WorkerSelector("Vendor", LabelOperator.Equal, "A")
+ }),
+ new WorkerWeightedAllocation(0.7,
+ new List<WorkerSelector>()
+ {
+ new WorkerSelector("Vendor", LabelOperator.Equal, "B")
+ })
})
- })
- }
-);
+ }
+ });
``` ::: zone-end
await client.upsertClassificationPolicy(
## Reclassify a job after submission
-Once the Job Router has received, and classified a Job using a policy, you have the option of reclassifying it using the SDK. The following example illustrates one way to increase the priority of the Job to `10`, simply by specifying the **Job ID**, calling the `ReclassifyJobAsync` method, and including the `Hardware_VIP` label.
+Once the Job Router has received and classified a Job using a policy, you have the option of reclassifying it using the SDK. The following example illustrates one way to increase the priority of the Job, simply by specifying the **Job ID** and calling the `ReclassifyJobAsync` method.
::: zone pivot="programming-language-csharp" ```csharp
-var reclassifiedJob = await client.ReclassifyJobAsync(
- jobId: "4ad7f4b9-a0ff-458d-b3ec-9f84be26012b",
- classificationPolicyId: null,
- labelsToUpdate: new LabelCollection()
- {
- ["Hardware_VIP"] = true
- }
-);
+var reclassifiedJob = await routerClient.ReclassifyJobAsync("<job id>");
``` ::: zone-end
var reclassifiedJob = await client.ReclassifyJobAsync(
::: zone pivot="programming-language-javascript" ```typescript
-await client.reclassifyJob("4ad7f4b9-a0ff-458d-b3ec-9f84be26012b", {
- classificationPolicyId: null,
- labelsToUpdate: {
- Hardware_VIP: true
- }
-});
+await client.reclassifyJob("<jobId>");
``` ::: zone-end
communication-services Manage Queue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/manage-queue.md
This guide outlines the steps to create and manage a Job Router queue. - ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
This guide outlines the steps to create and manage a Job Router queue.
To create a simple queue in Job Router, use the SDK to specify the **queue ID**, **name**, and a **distribution policy ID**. The distribution policy must be created in advance as the Job Router will validate its existence upon creation of the queue. In the following example, a distribution policy is created to control how Job offers are generated for Workers. ```csharp
-var distributionPolicy = await client.SetDistributionPolicyAsync(
- id: "Longest_Idle_45s_Min1Max10",
- name: "Longest Idle matching with a 45s offer expiration; min 1, max 10 offers",
- offerTTL: TimeSpan.FromSeconds(45),
- mode: new LongestIdleMode(
- minConcurrentOffers: 1,
- maxConcurrentOffers: 10)
+var distributionPolicy = await administrationClient.CreateDistributionPolicyAsync(
+ new CreateDistributionPolicyOptions(
+ distributionPolicyId: "Longest_Idle_45s_Min1Max10",
+ offerTtl: TimeSpan.FromSeconds(45),
+ mode: new LongestIdleMode(
+ minConcurrentOffers: 1,
+ maxConcurrentOffers: 10))
+ {
+ Name = "Longest Idle matching with a 45s offer expiration; min 1, max 10 offers"
+ }
);
-var queue = await client.SetQueueAsync(
- id: "XBOX_DEFAULT_QUEUE",
- name: "XBOX Default Queue",
- distributionPolicy: "Longest_Idle_45s_Min1Max10"
+var queue = await administrationClient.CreateQueueAsync(
+ options: new CreateQueueOptions("XBOX_DEFAULT_QUEUE", "Longest_Idle_45s_Min1Max10")
+ {
+ Name = "XBOX Default Queue"
+ }
); ``` ## Update a queue
-The Job Router SDK will create a new queue or update an existing queue when the `SetQueue` or `SetQueueAsync` method is called.
+The Job Router SDK will update an existing queue when the `UpdateQueue` or `UpdateQueueAsync` method is called.
```csharp
-var queue = await client.SetQueueAsync(
- id: "XBOX_DEFAULT_QUEUE",
- name: "XBOX Default Queue",
- distributionPolicy: "Longest_Idle_45s_Min1Max10"
+var queue = await administrationClient.UpdateQueueAsync(
+ options: new UpdateQueueOptions("XBOX_DEFAULT_QUEUE")
+ {
+ Name = "XBOX Default Queue",
+ DistributionPolicyId = "Longest_Idle_45s_Min1Max10",
+ Labels = new Dictionary<string, LabelValue>()
+ {
+ ["Additional-Queue-Label"] = new LabelValue("ChatQueue")
+ }
+ }
); ```
communication-services Preferred Worker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/preferred-worker.md
zone_pivot_groups: acs-js-csharp
In the context of a call center, customers might be assigned an account manager or have a relationship with a specific worker. As such, You'd want to route a specific job to a specific worker if possible. - ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
In the following example, a job is created that targets a specific worker. If th
::: zone pivot="programming-language-csharp" ```csharp
-await client.CreateJobAsync(
- channelId: "<channel id>",
- queueId: "<queue id>",
- workerSelectors: new List<LabelSelector>
- {
- new LabelSelector(
- key: "Id",
- @operator: LabelOperator.Equal,
- value: "<preferred worker id>",
- ttl: TimeSpan.FromMinutes(1))
- });
+await routerClient.CreateJobAsync(
+ options: new CreateJobOptions(
+ jobId: "<job id>",
+ channelId: "<channel id>",
+ queueId: "<queue id>")
+ {
+ RequestedWorkerSelectors = new List<WorkerSelector>()
+ {
+ new WorkerSelector("Id", LabelOperator.Equal, "<preferred worker id>", TimeSpan.FromMinutes(1))
+ }
+ });
``` ::: zone-end
communication-services Subscribe Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/subscribe-events.md
This guide outlines the steps to set up a subscription for Job Router events and
For more details on Event Grid, see the [Event Grid documentation][event-grid-overview]. - ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
communication-services Get Started Router https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/router/get-started-router.md
static async Task Main(string[] args)
} ```
-## Authenticate the Job Router client
+## Authenticate the Job Router clients
Job Router clients can be authenticated using your connection string acquired from an Azure Communication Services resource in the Azure portal. ```csharp // Get a connection string to our Azure Communication Services resource. var connectionString = "your_connection_string";
-var client = new RouterClient(connectionString);
+var routerClient = new RouterClient(connectionString);
+var routerAdministrationClient = new RouterAdministrationClient(connectionString);
``` ## Create a distribution policy
var client = new RouterClient(connectionString);
Job Router uses a distribution policy to decide how Workers will be notified of available Jobs and the time to live for the notifications, known as **Offers**. Create the policy by specifying the **ID**, a **name**, an **offerTTL**, and a distribution **mode**. ```csharp
-var distributionPolicy = await routerClient.SetDistributionPolicyAsync(
- id: "distribution-policy-1",
- name: "My Distribution Policy",
- offerTTL: TimeSpan.FromSeconds(30),
- mode: new LongestIdleMode()
+var distributionPolicy = await routerAdministrationClient.CreateDistributionPolicyAsync(
+ new CreateDistributionPolicyOptions(
+ distributionPolicyId: "distribution-policy-1",
+ offerTtl: TimeSpan.FromMinutes(1),
+ mode: new LongestIdleMode())
+ {
+ Name = "My distribution policy"
+ }
); ```
var distributionPolicy = await routerClient.SetDistributionPolicyAsync(
Create the Queue by specifying an **ID**, **name**, and provide the **Distribution Policy** object's ID you created above. ```csharp
-var queue = await routerClient.SetQueueAsync(
- id: "queue-1",
- name: "My Queue",
- distributionPolicyId: distributionPolicy.Value.Id
+var queue = await routerAdministrationClient.CreateQueueAsync(
+ options: new CreateQueueOptions("queue-1", distributionPolicy.Value.Id)
+ {
+ Name = "My job queue"
+ }
); ``` ## Submit a job
-Now, we can submit a job directly to that queue, with a worker selector the requires the worker to have the label `Some-Skill` greater than 10.
+Now, we can submit a job directly to that queue, with a worker selector that requires the worker to have the label `Some-Skill` greater than 10.
```csharp var job = await routerClient.CreateJobAsync(
- channelId: "my-channel",
- queueId: queue.Value.Id,
- priority: 1,
- workerSelectors: new List<LabelSelector>
+ options: new CreateJobOptions(
+ jobId: jobId,
+ channelId: "my-channel",
+ queueId: queue.Value.Id)
{
- new LabelSelector(
- key: "Some-Skill",
- @operator: LabelOperator.GreaterThan,
- value: 10)
- });
+ Priority = 1,
+ RequestedWorkerSelectors = new List<WorkerSelector>
+ {
+ new WorkerSelector(
+ key: "Some-Skill",
+ labelOperator: LabelOperator.GreaterThan,
+ value: 10)
+ }
+ }
+);
```
-## Register a worker
+## Create a worker
-Now, we register a worker to receive work from that queue, with a label of `Some-Skill` equal to 11 and capacity on `my-channel`.
+Now, we create a worker to receive work from that queue, with a label of `Some-Skill` equal to 11 and capacity on `my-channel`. In order for the worker to receive offers, make sure that the property **AvailableForOffers** is set to **true**.
```csharp
-var worker = await routerClient.RegisterWorkerAsync(
- id: "worker-1",
- queueIds: new[] { queue.Value.Id },
- totalCapacity: 1,
- labels: new LabelCollection()
- {
- ["Some-Skill"] = 11
- },
- channelConfigurations: new List<ChannelConfiguration>
+var worker = await routerClient.CreateWorkerAsync(
+ new CreateWorkerOptions(
+ workerId: "worker-1",
+ totalCapacity: 1)
{
- new ChannelConfiguration("my-channel", 1)
+ QueueIds = new Dictionary<string, QueueAssignment>()
+ {
+ [queue.Value.Id] = new QueueAssignment()
+ },
+ ChannelConfigurations = new Dictionary<string, ChannelConfiguration>()
+ {
+ ["my-channel"] = new ChannelConfiguration(1)
+ },
+ Labels = new Dictionary<string, LabelValue>()
+ {
+ ["Some-Skill"] = new LabelValue(11)
+ },
+ AvailableForOffers = true
} ); ``` ### Offer
-We should get a [RouterWorkerOfferIssued][offer_issued_event] from our [EventGrid subscription][subscribe_events].
+We should get a [RouterWorkerOfferIssued][offer_issued_event] from our [Event Grid subscription][subscribe_events].
However, we could also wait a few seconds and then query the worker directly against the JobRouter API to see if an offer was issued to it. ```csharp
communication-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/whats-new.md
+
+ Title: What's new in Azure Communication Services #Required; page title is displayed in search results. Include the brand.
+description: All of the latest additions to Azure Communication Services #Required; article description that is displayed in search results.
++++ Last updated : 03/12/2023 #Required; mm/dd/yyyy format.++++
+# What's new in Azure Communication Services
+
+We're adding new capabilities to Azure Communication Services all the time, so we created this page to share the latest developments in the platform. Bookmark this page and make it your go-to resource to find out all the latest capabilities of Azure Communication Services.
++
+## Updated documentation
+We heard your feedback and made it easier to find the documentation you need quickly. We're making our docs more readable, easier to understand, and more up-to-date. There's a new landing page design and an updated, better organized table of contents. We've added some of the content you've told us you need and will continue to do so, and we're editing existing documentation as well. Don't hesitate to use the feedback link at the top of each page to tell us if a page needs refreshing. Thanks!
+
+## Teams interoperability (General Availability)
+Azure Communication Services can be used to build custom applications and experiences that enable interaction with Microsoft Teams users over voice, video, chat, and screen sharing. The [Communication Services UI Library](./concepts/ui-library/ui-library-overview.md) provides customizable, production-ready UI components that can be easily added to these applications. The following video demonstrates some of the capabilities of Teams interoperability:
+
+<br>
+
+> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RWGTqQ]
+
+To learn more about Teams interoperability, visit the [Teams interop overview page](./concepts/teams-interop.md).
+
+## New calling features
+Our calling team has been working hard to expand and improve our feature set in response to your requests. Some of the new features we've enabled include:
+
+### Background blur and custom backgrounds (Public Preview)
+Background blur gives users a way to remove visual distractions behind a participant so that callers can engage in a conversation without disruptive activity or confidential information appearing in the background. This feature is especially useful in a context such as telehealth, where a provider or patient might want to obscure their surroundings to protect sensitive information. Background blur can be applied across all virtual appointment scenarios to protect user privacy, including telebanking and virtual hearings. In addition to enhanced confidentiality, the custom backgrounds capability allows for more creativity of expression, allowing users to upload custom backgrounds to host a more fun, personalized calling experience. This feature is currently available on Web Desktop and will be expanding to other platforms in the future.
+
+*Figure 1: Custom background*
+
+To learn more about custom backgrounds and background blur, visit the overview on [adding visual effects to your call](./concepts/voice-video-calling/video-effects.md).
+
+### Raw media access (Public Preview)
+The video media access API gives developers real-time access to video streams so that they can capture, analyze, and process video content during active calls. Developers can access the incoming call video stream directly on the call object and send a custom outgoing video stream during the call. This feature lays the groundwork for many kinds of video and audio manipulation. Outgoing video can be captured and combined with screen sharing, background blur, and video filters before being published to the recipient, allowing developers to build privacy into their calling experience. In more complex scenarios, video access can be paired with a virtual environment to support augmented reality, and spatial audio can be injected into remote incoming audio, for example to add music to a waiting room lobby.
+
+To learn more about raw media access, visit the [media access overview](./concepts/voice-video-calling/media-access.md).
+
+Other new calling features include:
+- Webview support for iOS and Android
+- Early media support in call flows
+- Chat composite for mobile native development
+- Added browser support for JS Calling SDK
+- Call readiness tools
+- Simulcast
+
+Take a look at our feature update blog posts from [January](https://techcommunity.microsoft.com/t5/azure-communication-services/azure-communication-services-calling-features-update/ba-p/3735073) and [February](https://techcommunity.microsoft.com/t5/azure-communication-services/azure-communication-services-february-2023-feature-updates/ba-p/3737486) for more detailed information and links to numerous quickstarts.
+
+## Rooms (Public Preview)
+Azure Communication Services provides a concept of a room for developers who are building structured conversations such as virtual appointments or virtual events. Rooms currently allow voice and video calling.
+
+To learn more about rooms, visit the [overview page](./concepts/rooms/room-concept.md).
+
+## Sample Builder Rooms integration
+
+**We are excited to announce that we have integrated Rooms into our Virtual Appointment Sample.**
+
+Azure Communication Services (ACS) provides the concept of a room. Rooms allow developers to build structured conversations such as scheduled virtual appointments or virtual events. Rooms allow control through roles and permissions and enable invite-only experiences. Rooms currently allow voice and video calling.
+
+## Enabling a faster sample building experience
+
+Data indicates that ~40% of customers abandon the Sample Builder due to the challenging nature of the configuration process, particularly during the Microsoft Bookings setup. To address this issue, we've implemented a solution that streamlines the deployment process by using Rooms for direct virtual appointment creation within the Sample Builder. This change results in a significant reduction of deployment time, as the configuration of Microsoft Bookings isn't enforced, but rather transformed into an optional feature that can be configured in the deployed Sample. Additionally, we've incorporated a feedback button into the Sample Builder and made various enhancements to its accessibility. With Sample Builder, customers can effortlessly customize and deploy their applications to Azure or their Git repository, without the need for any coding expertise.
++
+*Figure 2: Scheduling experience options.*
++
+*Figure 3:  Feedback form.*
++
+Sample Builder is already in General Availability and can be accessed in the [Azure portal](https://portal.azure.com).
++
+## Call Automation (Public Preview)
+Azure Communication Services Call Automation gives developers the ability to build server-based, intelligent call workflows and call recording for voice and PSTN channels. The SDKs, available for .NET and Java, use an action-event model to help you build personalized customer interactions. Your communication applications can listen to real-time call events and perform control plane actions (like answer, transfer, play audio, start recording, and so on) to steer and control calls based on your business logic.
+
+ACS Call Automation can be used to build calling workflows for customer service scenarios, as depicted in the following high-level architecture. You can answer inbound calls or make outbound calls, and execute actions like playing a welcome message or connecting the customer to a live agent on an ACS Calling SDK client app to answer the incoming call request. With support for ACS PSTN or Direct Routing, you can then connect this workflow back to your contact center.
+
+*Figure 4: Call Automation Architecture*
+
+To learn more, visit our [Call Automation overview article](./concepts/call-automation/call-automation.md).
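+
+As an illustrative sketch of the action-event model, assuming the Azure.Communication.CallAutomation .NET SDK (the incoming call context comes from the `IncomingCall` Event Grid event; the callback and media URIs are placeholders):
+
+```csharp
+using Azure.Communication.CallAutomation;
+
+// Minimal sketch: answer an incoming call, then issue a control plane action.
+var client = new CallAutomationClient("<acs-connection-string>");
+
+AnswerCallResult answer = await client.AnswerCallAsync(
+    "<incoming-call-context>",                         // from the IncomingCall event
+    new Uri("https://contoso.example/api/callbacks")); // your event callback endpoint
+
+// Play a welcome message to everyone on the call.
+CallMedia callMedia = answer.CallConnection.GetCallMedia();
+await callMedia.PlayToAllAsync(new FileSource(new Uri("https://contoso.example/welcome.wav")));
+```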
+
+## Phone number expansion (General Availability)
+We're excited to announce that we've moved phone numbers in Canada, the United Kingdom, Italy, Ireland, and Sweden from Public Preview into General Availability. ACS Direct Offers are now generally available in the following countries and regions: **United States, Puerto Rico, Canada, United Kingdom, Italy, Ireland** and **Sweden**.
+
+To learn more about the different ways you can acquire a phone number in these regions, visit the [article on how to get and manage phone numbers](./quickstarts/telephony/get-phone-number.md), or [reach out to the IC3 Service Desk](https://github.com/Azure/Communication/blob/master/special-order-numbers.md).
+
+Enjoy all of these new features. Be sure to check back here periodically for more news and updates on all of the new capabilities we've added to our platform! For a complete list of new features and bug fixes, visit our [releases page](https://github.com/Azure/Communication/releases) on GitHub.
communications-gateway Monitoring Azure Communications Gateway Data Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/monitoring-azure-communications-gateway-data-reference.md
Azure Communications Gateway has the following dimensions associated with its me
## See Also - See [Monitoring Azure Communications Gateway](monitor-azure-communications-gateway.md) for a description of monitoring Azure Communications Gateway.-- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
+- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
communications-gateway Plan And Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/plan-and-manage-costs.md
Azure Communications Gateway runs on Azure infrastructure that accrues costs whe
When you deploy or use Azure Communications Gateway, you'll be charged for your use of the voice features of the product. The charges are based on the number of users assigned to the platform by a series of SBC User meters. The meters include: -- A "service availability" meter that is charged hourly and includes the use of 999 users for testing and early adoption.-- A per-user meter that charges based on the number of users that are assigned to the deployment. This per-user fee is calculated from the maximum number of users during your billing cycle, excluding the initial 999 users included in the service availability fee.
+- A "service availability" meter that is charged hourly and includes the use of 1000 users for testing and early adoption.
+- A series of tiered per-user meters that charge based on the number of users that are assigned to the deployment. The per-user fee is based on the maximum number of users during your billing cycle, excluding the 1000 users included in the service availability fee.
+
+For example, if you have 28,000 users assigned to the deployment each month, you'll pay:
+* The service availability fee for each hour in the month
+* 24,000 users in the 1001-25000 tier
+* 3,000 users in the 25001-100000 tier
+
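As a rough sketch, the tier split above can be computed like this (tier boundaries are taken from this article; no real prices are shown):

```csharp
// Illustrative only: splits an assigned-user count across the SBC User meter tiers.
int assignedUsers = 28_000;

int includedUsers = Math.Min(assignedUsers, 1_000);             // covered by the service availability fee
int tier1Users = Math.Clamp(assignedUsers - 1_000, 0, 24_000);  // the 1001-25000 tier
int tier2Users = Math.Clamp(assignedUsers - 25_000, 0, 75_000); // the 25001-100000 tier

Console.WriteLine($"Included: {includedUsers}, 1001-25000 tier: {tier1Users}, 25001-100000 tier: {tier2Users}");
// Prints: Included: 1000, 1001-25000 tier: 24000, 25001-100000 tier: 3000
```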
+> [!TIP]
+> If you receive a quote through Microsoft Volume Licensing, pricing may be presented as aggregated so that the values are easily readable (for example, showing the per-user meters in groups of 10 or 100 rather than the pricing for individual users). This doesn't affect the way you're billed.
If you choose to deploy the API Bridge (for API mediation or the API Bridge Number Management Portal), you'll also be charged for your API Bridge usage. Fees for API Bridge work in the same way as the SBC User meters: a service availability meter and a per-user meter. The number of users charged for the API Bridge is always the same as the number of users charged on the SBC User meters.
You can also [export your cost data](../cost-management-billing/costs/tutorial-e
## Next steps
+- View [Azure Communications Gateway pricing](https://azure.microsoft.com/pricing/details/communications-gateway/).
- Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn about how to [prevent unexpected costs](../cost-management-billing/understand/analyze-unexpected-charges.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
connectors Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/managed.md
For more information, see these topics:
[azure-automation-doc]: /connectors/azureautomation/ "Create and manage automation jobs for your cloud and on-premises infrastructure" [azure-blob-storage-doc]: ./connectors-create-api-azureblobstorage.md "Manage files in your blob container with Azure blob storage connector" [azure-cosmos-db-doc]: ./connectors-create-api-cosmos-db.md "Connect to Azure Cosmos DB so that you can access and manage Azure Cosmos DB documents"
-[azure-event-grid-doc]: ../event-grid/monitor-virtual-machine-changes-event-grid-logic-app.md "Monitor events published by an Event Grid, for example, when Azure resources or third-party resources change"
+[azure-event-grid-doc]: ../event-grid/monitor-virtual-machine-changes-logic-app.md "Monitor events published by an Event Grid, for example, when Azure resources or third-party resources change"
[azure-event-hubs-doc]: ./connectors-create-api-azure-event-hubs.md "Connect to Azure Event Hubs so that you can receive and send events between logic app workflows and Event Hubs" [azure-file-storage-doc]: /connectors/azurefile/ "Connect to your Azure Storage account so that you can create, update, get, and delete files" [azure-key-vault-doc]: /connectors/keyvault/ "Connect to your Azure Key Vault so that you can manage your secrets and keys"
container-apps Application Lifecycle Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/application-lifecycle-management.md
Previously updated : 10/25/2022 Last updated : 03/13/2023
The Azure Container Apps application lifecycle revolves around [revisions](revis
When you deploy a container app, the first revision is automatically created. [More revisions are created](revisions.md) as [containers](containers.md) change, or any adjustments are made to the `template` section of the configuration.
-A container app flows through four phases: deployment, update, deactivation, and shutdown.
+A container app flows through four phases: deployment, update, deactivation, and shutdown.
## Deployment
As a container app is updated with a [revision scope-change](revisions.md#revisi
### Zero downtime deployment
-In single revision mode, Container Apps automatically ensures your app does not experience downtime when creating a new revision. The existing active revision is not deactivated until the new revision is ready. If ingress is enabled, the existing revision will continue to receive 100% of the traffic until the new revision is ready.
+In single revision mode, Container Apps automatically ensures your app doesn't experience downtime when creating a new revision. The existing active revision isn't deactivated until the new revision is ready. If ingress is enabled, the existing revision continues to receive 100% of the traffic until the new revision is ready.
> [!NOTE] > A new revision is considered ready when one of its replicas starts and becomes ready. A replica is ready when all of its containers start and pass their [startup and readiness probes](./health-probes.md).
-In multiple revision mode, you control when revisions are activated or deactivated and which revisions receive ingress traffic. If a [traffic splitting rule](./revisions-manage.md#traffic-splitting) is configured with `latestRevision` set to `true`, traffic does not switch to the latest revision until it is ready.
+In multiple revision mode, you control when revisions are activated or deactivated and which revisions receive ingress traffic. If a [traffic splitting rule](./revisions-manage.md#traffic-splitting) is configured with `latestRevision` set to `true`, traffic doesn't switch to the latest revision until it's ready.
## Deactivate
The containers are shut down in the following situations:
When a shutdown is initiated, the container host sends a [SIGTERM message](https://wikipedia.org/wiki/Signal_(IPC)) to your container. The code implemented in the container can respond to this operating system-level message to handle termination.
-If your application does not respond within 30 seconds to the `SIGTERM` message, then [SIGKILL](https://wikipedia.org/wiki/Signal_(IPC)) terminates your container.
+If your application doesn't respond within 30 seconds to the `SIGTERM` message, then [SIGKILL](https://wikipedia.org/wiki/Signal_(IPC)) terminates your container.
+
+Additionally, make sure your application can gracefully handle shutdowns. Containers restart regularly, so don't expect state to persist inside a container. Instead, use external caches for expensive in-memory cache requirements.
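+
+For example, here's a minimal .NET sketch of handling `SIGTERM` cooperatively, assuming .NET 6 or later (`RunWorkAsync` is a hypothetical placeholder for your app's main loop):
+
+```csharp
+using System.Runtime.InteropServices;
+
+// Minimal sketch: cancel in-flight work when SIGTERM arrives so the app can
+// exit cleanly within the 30-second grace period before SIGKILL.
+using var cts = new CancellationTokenSource();
+using var sigTerm = PosixSignalRegistration.Create(PosixSignal.SIGTERM, context =>
+{
+    context.Cancel = true; // take over default termination handling
+    cts.Cancel();          // tell the app to stop accepting work and flush state
+});
+
+try
+{
+    await RunWorkAsync(cts.Token); // hypothetical main loop
+}
+catch (OperationCanceledException)
+{
+    // Persist or flush any state here, before the grace period expires.
+}
+```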
## Next steps
container-apps Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/environment.md
Previously updated : 10/18/2022 Last updated : 03/13/2023 # Azure Container Apps environments
-Individual container apps are deployed to a single Container Apps environment, which acts as a secure boundary around groups of container apps. Container Apps in the same environment are deployed in the same virtual network and write logs to the same Log Analytics workspace. You may provide an [existing virtual network](vnet-custom.md) when you create an environment.
+A Container Apps environment is a secure boundary around groups of container apps that share the same virtual network and write logs to the same logging destination.
+
+Container Apps environments are fully managed: Azure handles OS upgrades, scale operations, failover procedures, and resource balancing.
:::image type="content" source="media/environments/azure-container-apps-environments.png" alt-text="Azure Container Apps environments.":::
Reasons to deploy container apps to different environments include situations wh
- Two applications never share the same compute resources - Two Dapr applications can't communicate via the Dapr service invocation API
+Also, you may provide an [existing virtual network](vnet-custom.md) when you create an environment.
+
## Logs

Settings relevant to the Azure Container Apps environment API resource.

| Property | Description |
|---|---|
-| `properties.appLogsConfiguration` | Used for configuring Log Analytics workspace where logs for all apps in the environment will be published |
+| `properties.appLogsConfiguration` | Used for configuring the Log Analytics workspace where logs for all apps in the environment are published. |
| `properties.containerAppsConfiguration.daprAIInstrumentationKey` | App Insights instrumentation key provided to Dapr for tracing |

## Billing
container-apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/overview.md
Previously updated : 06/23/2022 Last updated : 03/13/2023 # Azure Container Apps overview
-Azure Container Apps enables you to run microservices and containerized applications on a serverless platform. Common uses of Azure Container Apps include:
+Azure Container Apps is a fully managed environment that enables you to run microservices and containerized applications on a serverless platform. Common uses of Azure Container Apps include:
- Deploying API endpoints - Hosting background processing applications
container-instances Container Instances Tutorial Deploy Confidential Containers Cce Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-tutorial-deploy-confidential-containers-cce-arm.md
With the ARM template that you've crafted and the Azure CLI confcom extension, y
![Screenshot of Build your own template in the editor button on deployment screen, PNG.](./media/container-instances-confidential-containers-tutorials/confidential-containers-cce-build-template.png)
-1. Select **Load file** and upload **template.json**, which you've modified by generating adding a CCE policy.
+1. Select **Load file** and upload **template.json**, which you've modified by adding the CCE policy you generated in the previous steps.
![Screenshot of Load file button on template screen, PNG.](./media/container-instances-confidential-containers-tutorials/confidential-containers-cce-load-file.png)
container-registry Container Registry Event Grid Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-event-grid-quickstart.md
Once the registry has been created, the Azure CLI returns output similar to the
## Create an event endpoint
-In this section, you use a Resource Manager template located in a GitHub repository to deploy a pre-built sample web application to Azure App Service. Later, you subscribe to your registry's Event Grid events and specify this app as the endpoint to which the events are sent.
+In this section, you use a Resource Manager template located in a GitHub repository to deploy a prebuilt sample web application to Azure App Service. Later, you subscribe to your registry's Event Grid events and specify this app as the endpoint to which the events are sent.
To deploy the sample app, set `SITE_NAME` to a unique name for your web app, and execute the following commands. The site name must be unique within Azure because it forms part of the fully qualified domain name (FQDN) of the web app. In a later section, you navigate to the app's FQDN in a web browser to view your registry's events.
You should see the sample app rendered with no event messages displayed:
![Web browser showing sample web app with no events displayed][sample-app-02] ## Subscribe to registry events
-In Event Grid, you subscribe to a *topic* to tell it which events you want to track, and where to send them. The following [az eventgrid event-subscription create][az-eventgrid-event-subscription-create] command subscribes to the container registry you created, and specifies your web app's URL as the endpoint to which it should send events. The environment variables you populated in earlier sections are reused here, so no edits are required.
+In Event Grid, you subscribe to a *topic* to tell it which events you want to track, and where to send them. The following [`az eventgrid event-subscription create`][az-eventgrid-event-subscription-create] command subscribes to the container registry you created, and specifies your web app's URL as the endpoint to which it should send events. The environment variables you populated in earlier sections are reused here, so no edits are required.
```azurecli-interactive ACR_REGISTRY_ID=$(az acr show --name $ACR_NAME --query id --output tsv)
Execute the following Azure CLI command to build a container image from the cont
az acr build --registry $ACR_NAME --image myimage:v1 -f Dockerfile https://github.com/Azure-Samples/acr-build-helloworld-node.git#main ```
-You should see output similar to the following while ACR Tasks builds and then pushes your image. The following sample output has been truncated for brevity.
+You should see output similar to the following while ACR Tasks builds and then pushes your image. The following sample output has been truncated for brevity.
```output Sending build context to ACR...
Step 1/5 : FROM node:9-alpine
... ```
-To verify that the built image is in your registry, execute the following command to view the tags in the "myimage" repository:
+To verify that the built image is in your registry, execute the following command to view the tags in the `myimage` repository:
```azurecli-interactive az acr repository show-tags --name $ACR_NAME --repository myimage
cosmos-db Continuous Backup Restore Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-introduction.md
You can choose to restore any combination of provisioned throughput containers,
The following configurations aren't restored after the point-in-time recovery: * Firewall, VNET, Data plane RBAC or private endpoint settings.
+* Consistency settings. By default, the account is restored with session consistency.
* Regions. * Stored procedures, triggers, UDFs. * Role-based access control assignments. These will need to be re-assigned.
-You can add these configurations to the restored account after the restore is completed. An ability to prevent public access to restored account is described [here-to-befilled with url]().
+You can add these configurations to the restored account after the restore is completed.
## Restorable timestamp for live accounts
cosmos-db How To Container Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-container-copy.md
This article describes how to create, monitor, and manage intra-account container copy jobs using Azure PowerShell or CLI commands.
-## Pre-requisites
+## Prerequisites
-* You may use the portal [Cloud Shell](../cloud-shell/quickstart-powershell.md#start-cloud-shell) to run container copy commands. Alternately, you may run the commands locally; make sure you have [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps-msi) downloaded and installed on your machine.
+* You may use the portal [Cloud Shell](/azure/cloud-shell/quickstart?tabs=powershell) to run container copy commands. Alternately, you may run the commands locally; make sure you have [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps-msi) downloaded and installed on your machine.
* Currently, container copy is only supported in [these regions](intra-account-container-copy.md#supported-regions). Make sure your account's write region belongs to this list.
az extension add --name cosmosdb-preview
## Set shell variables
-First, set all of the variables that each individual script will use.
+First, set all of the variables that each individual script uses.
```azurepowershell-interactive $resourceGroup = "<resource-group-name>"
Create a job to copy a container within an Azure Cosmos DB API for NoSQL account
```azurepowershell-interactive az cosmosdb dts copy `
- --resource-group $resourceGroup `
+ --resource-group $resourceGroup `
--account-name $accountName ` --job-name $jobName ` --source-sql-container database=$sourceDatabase container=$sourceContainer `
az cosmosdb dts resume `
``` ## Get support for container copy issues
-For issues related to intra-account container copy, please raise a New Support Request from the Azure Portal with the Problem Type as 'Data Migration' and Problem subtype as 'Intra-account container copy'.
+For issues related to intra-account container copy, please raise a **New Support Request** from the Azure portal. Set the **Problem Type** as 'Data Migration' and **Problem subtype** as 'Intra-account container copy'.
## Next steps
cosmos-db Intra Account Container Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/intra-account-container-copy.md
Intra-account container copy jobs can be [created and managed using CLI commands
## Get started
-To get started using container copy jobs, register for "Intra-account offline container copy (Cassandra & SQL)" preview from the ['Preview Features'](access-previews.md) list in the Azure portal. Once the registration is complete, the preview will be effective for all Cassandra and API for NoSQL accounts in the subscription.
+To get started using container copy jobs, register for "Intra-account offline container copy (Cassandra & SQL)" preview from the ['Preview Features'](access-previews.md) list in the Azure portal. Once the registration is complete, the preview is effective for all Cassandra and API for NoSQL accounts in the subscription.
## Overview of steps needed to do container copy
Intra-account container copy jobs perform offline data copy using the source con
* The container copy jobs run on these instances. * A single job is executed across all instances at any time. * The instances are shared by all the container copy jobs running within the same account.
-* The platform may de-allocate the instances if they're idle for >15 mins.
+* The platform may deallocate the instances if they're idle for >15 mins.
> [!NOTE] > We currently only support offline container copy jobs. So, we strongly recommend that you stop performing any operations on the source container before beginning the container copy. Item deletions and updates done on the source container after the copy job begins may not be captured. Continuing to perform operations on the source container while the copy job is in progress may therefore result in additional or missing data on the target container.
Container copy jobs are currently supported on best-effort basis. We don't provi
### Can I create multiple container copy jobs within an account?
-Yes, you can create multiple jobs within the same account. The jobs will run consecutively. You can [list all the jobs](how-to-container-copy.md#list-all-the-container-copy-jobs-created-in-an-account) created within an account and monitor their progress.
+Yes, you can create multiple jobs within the same account. The jobs run consecutively. You can [list all the jobs](how-to-container-copy.md#list-all-the-container-copy-jobs-created-in-an-account) created within an account and monitor their progress.
### Can I copy an entire database within the Azure Cosmos DB account?
-You'll have to create a job for each container in the database.
+You must create a job for each container in the database.
### I have an Azure Cosmos DB account with multiple regions. In which region will the container copy job run?
-The container copy job will run in the write region. If there are accounts configured with multi-region writes, the job will run in one of the regions from the list.
+The container copy job runs in the write region. If there are accounts configured with multi-region writes, the job runs in one of the regions from the list.
### What happens to the container copy jobs when the account's write region changes?
The account's write region may change in the rare scenario of a region outage or
### Why is a new database *__datatransferstate* created in the account when I run container copy jobs? Am I being charged for this database? * *__datatransferstate* is a database that is created while running container copy jobs. This database is used by the platform to store the state and progress of the copy job.
-* The database uses manual provisioned throughput of 800 RUs. You'll be charged for this database.
-* Deleting this database will remove the container copy job history from the account. It can be safely deleted once all the jobs in the account have completed, if you no longer need the job history. The platform won't clean up the *__datatransferstate* database automatically.
+* The database uses manual provisioned throughput of 800 RUs. You're charged for this database.
+* Deleting this database removes the container copy job history from the account. It can be safely deleted once all the jobs in the account have completed, if you no longer need the job history. The platform doesn't clean up the *__datatransferstate* database automatically.
## Supported regions
Currently, container copy is supported in the following regions:
* Error - (Request) is blocked by your Cosmos DB account firewall settings.
- The job creation request could be blocked if the client IP isn't allowed as per the VNet and Firewall IPs configured on the account. In order to get past this issue, you need to [allow access to the IP through the Firewall setting](how-to-configure-firewall.md). Alternately, you may set **Accept connections from within public Azure datacenters** in your firewall settings and run the container copy commands through the portal [Cloud Shell](../cloud-shell/quickstart-powershell.md#start-cloud-shell).
+ The job creation request could be blocked if the client IP isn't allowed as per the VNet and Firewall IPs configured on the account. In order to get past this issue, you need to [allow access to the IP through the Firewall setting](how-to-configure-firewall.md). Alternately, you may set **Accept connections from within public Azure datacenters** in your firewall settings and run the container copy commands through the portal [Cloud Shell](/azure/cloud-shell/quickstart?tabs=powershell).
```output InternalServerError Request originated from IP xxx.xxx.xxx.xxx through public internet. This is blocked by your Cosmos DB account firewall settings. More info: https://aka.ms/cosmosdb-tsg-forbidden
cosmos-db Best Practice Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/best-practice-dotnet.md
Previously updated : 08/30/2022 Last updated : 03/14/2023
Watch the video below to learn more about using the .NET SDK from an Azure Cosmo
| <input type="checkbox"/> | Regions | Make sure to run your application in the same [Azure region](../distribute-data-globally.md) as your Azure Cosmos DB account, whenever possible to reduce latency. Enable 2-4 regions and replicate your accounts in multiple regions for [best availability](../distribute-data-globally.md). For production workloads, enable [automatic failover](../how-to-manage-database-account.md#configure-multiple-write-regions). In the absence of this configuration, the account will experience loss of write availability for all the duration of the write region outage, as manual failover won't succeed due to lack of region connectivity. To learn how to add multiple regions using the .NET SDK visit [here](tutorial-global-distribution.md) | | <input type="checkbox"/> | Availability and Failovers | Set the [ApplicationPreferredRegions](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationpreferredregions?view=azure-dotnet&preserve-view=true) or [ApplicationRegion](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationregion?view=azure-dotnet&preserve-view=true) in the v3 SDK, and the [PreferredLocations](/dotnet/api/microsoft.azure.documents.client.connectionpolicy.preferredlocations?view=azure-dotnet&preserve-view=true) in the v2 SDK using the [preferred regions list](./tutorial-global-distribution.md?tabs=dotnetv3%2capi-async#preferred-locations). During failovers, write operations are sent to the current write region and all reads are sent to the first region within your preferred regions list. For more information about regional failover mechanics, see the [availability troubleshooting guide](troubleshoot-sdk-availability.md). | | <input type="checkbox"/> | CPU | You may run into connectivity/availability issues due to lack of resources on your client machine. Monitor your CPU utilization on nodes running the Azure Cosmos DB client, and scale up/out if usage is high. |
-| <input type="checkbox"/> | Hosting | Use [Windows 64-bit host](performance-tips-query-sdk.md#use-local-query-plan-generation) processing for best performance, whenever possible. |
+| <input type="checkbox"/> | Hosting | Use [Windows 64-bit host](performance-tips-query-sdk.md#use-local-query-plan-generation) processing for best performance, whenever possible. For Direct mode latency-sensitive production workloads, we highly recommend using at least 4-cores and 8-GB memory VMs whenever possible.
| <input type="checkbox"/> | Connectivity Modes | Use [Direct mode](sdk-connection-modes.md) for the best performance. For instructions on how to do this, see the [V3 SDK documentation](performance-tips-dotnet-sdk-v3.md#networking) or the [V2 SDK documentation](performance-tips.md#networking).| |<input type="checkbox"/> | Networking | If using a virtual machine to run your application, enable [Accelerated Networking](../../virtual-network/create-vm-accelerated-networking-powershell.md) on your VM to help with bottlenecks due to high traffic and reduce latency or CPU jitter. You might also want to consider using a higher end Virtual Machine where the max CPU usage is under 70%. |
-|<input type="checkbox"/> | Ephemeral Port Exhaustion | For sparse or sporadic connections, we set the [`IdleConnectionTimeout`](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.idletcpconnectiontimeout?view=azure-dotnet&preserve-view=true) and [`PortReuseMode`](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.portreusemode?view=azure-dotnet&preserve-view=true) to `PrivatePortPool`. The `IdleConnectionTimeout` property helps control the time after which unused connections are closed. This will reduce the number of unused connections. By default, idle connections are kept open indefinitely. The value set must be greater than or equal to 10 minutes. We recommended values between 20 minutes and 24 hours. The `PortReuseMode` property allows the SDK to use a small pool of ephemeral ports for various Azure Cosmos DB destination endpoints. |
+|<input type="checkbox"/> | Ephemeral Port Exhaustion | For sparse or sporadic connections, we set the [`IdleConnectionTimeout`](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.idletcpconnectiontimeout?view=azure-dotnet&preserve-view=true) and [`PortReuseMode`](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.portreusemode?view=azure-dotnet&preserve-view=true) to `PrivatePortPool`. The `IdleConnectionTimeout` property helps control the time after which unused connections are closed. This reduces the number of unused connections. By default, idle connections are kept open indefinitely. The value set must be greater than or equal to 10 minutes. We recommended values between 20 minutes and 24 hours. The `PortReuseMode` property allows the SDK to use a small pool of ephemeral ports for various Azure Cosmos DB destination endpoints. |
|<input type="checkbox"/> | Use Async/Await | Avoid blocking calls: `Task.Result`, `Task.Wait`, and `Task.GetAwaiter().GetResult()`. The entire call stack is asynchronous in order to benefit from [async/await](/dotnet/csharp/programming-guide/concepts/async/) patterns. Many synchronous blocking calls lead to [Thread Pool starvation](/archive/blogs/vancem/diagnosing-net-core-threadpool-starvation-with-perfview-why-my-service-is-not-saturating-all-cores-or-seems-to-stall) and degraded response times. |
-|<input type="checkbox"/> | End-to-End Timeouts | To get end-to-end timeouts, you'll need to use both `RequestTimeout` and `CancellationToken` parameters. For more details on timeouts with Azure Cosmos DB [visit](troubleshoot-dotnet-sdk-request-timeout.md) |
+|<input type="checkbox"/> | End-to-End Timeouts | To get end-to-end timeouts, you need to use both `RequestTimeout` and `CancellationToken` parameters. For more details [visit our timeout troubleshooting guide](troubleshoot-dotnet-sdk-request-timeout.md). |
|<input type="checkbox"/> | Retry Logic | A transient error is an error that has an underlying cause that soon resolves itself. Applications that connect to your database should be built to expect these transient errors. To handle them, implement retry logic in your code instead of surfacing them to users as application errors. The SDK has built-in logic to handle these transient failures on retryable requests like read or query operations. For accounts configured with a single write region, the SDK won't retry on writes for transient failures as writes aren't idempotent. For accounts configured with multiple write regions, there are [some scenarios](troubleshoot-sdk-availability.md#transient-connectivity-issues-on-tcp-protocol) where the SDK will automatically retry writes on other regions. The SDK does allow users to configure retry logic for throttles. For details on which errors to retry on [visit](conceptual-resilient-sdk-applications.md#should-my-application-retry-on-errors) | |<input type="checkbox"/> | Caching database/collection names | Retrieve the names of your databases and containers from configuration or cache them on start. Calls like `ReadDatabaseAsync` or `ReadDocumentCollectionAsync` and `CreateDatabaseQuery` or `CreateDocumentCollectionQuery` will result in metadata calls to the service, which consume from the system-reserved RU limit. `CreateIfNotExist` should also only be used once for setting up the database. Overall, these operations should be performed infrequently. | |<input type="checkbox"/> | Bulk Support | In scenarios where you may not need to optimize for latency, we recommend enabling [Bulk support](https://devblogs.microsoft.com/cosmosdb/introducing-bulk-support-in-the-net-sdk/) for dumping large volumes of data. |
-| <input type="checkbox"/> | Parallel Queries | The Azure Cosmos DB SDK supports [running queries in parallel](performance-tips-query-sdk.md?pivots=programming-language-csharp) for better latency and throughput on your queries. We recommend setting the `MaxConcurrency` property within the `QueryRequestsOptions` to the number of partitions you have. If you aren't aware of the number of partitions, start by using `int.MaxValue`, which will give you the best latency. Then decrease the number until it fits the resource restrictions of the environment to avoid high CPU issues. Also, set the `MaxBufferedItemCount` to the expected number of results returned to limit the number of pre-fetched results. |
-| <input type="checkbox"/> | Performance Testing Backoffs | When performing testing on your application, you should implement backoffs at [`RetryAfter`](performance-tips-dotnet-sdk-v3.md#sdk-usage) intervals. Respecting the backoff helps ensure that you'll spend a minimal amount of time waiting between retries. |
+| <input type="checkbox"/> | Parallel Queries | The Azure Cosmos DB SDK supports [running queries in parallel](performance-tips-query-sdk.md?pivots=programming-language-csharp) for better latency and throughput on your queries. We recommend setting the `MaxConcurrency` property within the `QueryRequestsOptions` to the number of partitions you have. If you aren't aware of the number of partitions, start by using `int.MaxValue`, which will give you the best latency. Then decrease the number until it fits the resource restrictions of the environment to avoid high CPU issues. Also, set the `MaxBufferedItemCount` to the expected number of results returned to limit the number of prefetched results. |
+| <input type="checkbox"/> | Performance Testing Backoffs | When performing testing on your application, you should implement backoffs at [`RetryAfter`](performance-tips-dotnet-sdk-v3.md#sdk-usage) intervals. Respecting the backoff helps ensure that you spend a minimal amount of time waiting between retries. |
| <input type="checkbox"/> | Indexing | The Azure Cosmos DB indexing policy also allows you to specify which document paths to include or exclude from indexing by using indexing paths (IndexingPolicy.IncludedPaths and IndexingPolicy.ExcludedPaths). Ensure that you exclude unused paths from indexing for faster writes. For a sample on how to create indexes using the SDK [visit](performance-tips-dotnet-sdk-v3.md#indexing-policy) | | <input type="checkbox"/> | Document Size | The request charge of a specified operation correlates directly to the size of the document. We recommend reducing the size of your documents as operations on large documents cost more than operations on smaller documents. | | <input type="checkbox"/> | Increase the number of threads/tasks | Because calls to Azure Cosmos DB are made over the network, you might need to vary the degree of concurrency of your requests so that the client application spends minimal time waiting between requests. For example, if you're using the [.NET Task Parallel Library](/dotnet/standard/parallel-programming/task-parallel-library-tpl), create on the order of hundreds of tasks that read from or write to Azure Cosmos DB. |
Watch the video below to learn more about using the .NET SDK from an Azure Cosmo
[!INCLUDE[cosmos-db-dotnet-sdk-diagnostics](../includes/dotnet-sdk-diagnostics.md)]
+## Best practices for HTTP connections
+
+The .NET SDK uses `HttpClient` to perform HTTP requests regardless of the connectivity mode configured. In [Direct mode](sdk-connection-modes.md#direct-mode), HTTP is used for metadata operations, and in Gateway mode it's used for both data plane and metadata operations. One of the [fundamentals of HttpClient](/dotnet/fundamentals/networking/http/httpclient-guidelines#dns-behavior) is to make sure the `HttpClient` can react to DNS changes on your account by **customizing the pooled connection lifetime**. As long as pooled connections are kept open, they don't react to DNS changes. This setting forces pooled **connections to be closed** periodically, ensuring that your application reacts to DNS changes. Our recommendation is that you customize this value according to your [connectivity mode](sdk-connection-modes.md) and workload to balance the performance impact of frequently creating new connections with the need to react to DNS changes (availability). A 5-minute value is a good starting point that can be increased if it impacts performance, particularly for Gateway mode.
+
+You can inject your custom HttpClient through `CosmosClientOptions.HttpClientFactory`, for example:
+
+```csharp
+// Use a Singleton instance of the SocketsHttpHandler, which you can share across any HttpClient in your application
+SocketsHttpHandler socketsHttpHandler = new SocketsHttpHandler();
+// Customize this value based on desired DNS refresh timer
+socketsHttpHandler.PooledConnectionLifetime = TimeSpan.FromMinutes(5);
+
+CosmosClientOptions cosmosClientOptions = new CosmosClientOptions()
+{
+ // Pass your customized SocketHttpHandler to be used by the CosmosClient
+ // Make sure `disposeHandler` is `false`
+ HttpClientFactory = () => new HttpClient(socketsHttpHandler, disposeHandler: false)
+};
+
+// Use a Singleton instance of the CosmosClient
+return new CosmosClient("<connection-string>", cosmosClientOptions);
+```
+
+If you use [.NET dependency injection](/dotnet/core/extensions/dependency-injection), you can simplify the Singleton process:
+
+```csharp
+SocketsHttpHandler socketsHttpHandler = new SocketsHttpHandler();
+// Customize this value based on desired DNS refresh timer
+socketsHttpHandler.PooledConnectionLifetime = TimeSpan.FromMinutes(5);
+// Registering the Singleton SocketsHttpHandler lets you reuse it across any HttpClient in your application
+services.AddSingleton<SocketsHttpHandler>(socketsHttpHandler);
+
+// Use a Singleton instance of the CosmosClient
+services.AddSingleton<CosmosClient>(serviceProvider =>
+{
+ SocketsHttpHandler socketsHttpHandler = serviceProvider.GetRequiredService<SocketsHttpHandler>();
+ CosmosClientOptions cosmosClientOptions = new CosmosClientOptions()
+ {
+ HttpClientFactory = () => new HttpClient(socketsHttpHandler, disposeHandler: false)
+ };
+
+ return new CosmosClient("<connection-string>", cosmosClientOptions);
+});
+```
+ ## Best practices when using Gateway mode Increase `System.Net MaxConnections` per host when you use Gateway mode. Azure Cosmos DB requests are made over HTTPS/REST when you use Gateway mode. They're subject to the default connection limit per hostname or IP address. You might need to set `MaxConnections` to a higher value (from 100 through 1,000) so that the client library can use multiple simultaneous connections to Azure Cosmos DB. In .NET SDK 1.8.0 and later, the default value for `ServicePointManager.DefaultConnectionLimit` is 50. To change the value, you can set `Documents.Client.ConnectionPolicy.MaxConnectionLimit` to a higher value.
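
As a minimal sketch, assuming the v2 SDK (`Microsoft.Azure.Documents.Client`):

```csharp
using Microsoft.Azure.Documents.Client;

// Raise the per-host connection limit for Gateway mode (v2 SDK).
var connectionPolicy = new ConnectionPolicy
{
    ConnectionMode = ConnectionMode.Gateway,
    MaxConnectionLimit = 1000 // default is 50 in SDK 1.8.0 and later
};

var client = new DocumentClient(
    new Uri("<account-endpoint>"),
    "<account-key>",
    connectionPolicy);
```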
cost-management-billing Create Subscriptions Deploy Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/create-subscriptions-deploy-resources.md
The message can appear for customers with the following Azure subscription agree
Expect a delay before you can create another subscription.
-If you're new to Azure and don't have any consumption usage, read the [Get started guide for Azure developers](/azure/guides/developer/azure-developer-guide) to help you get started with Azure services.
+If you're new to Azure and don't have any consumption usage, read the [Get started guide for Azure developers](../../guides/developer/azure-developer-guide.md) to help you get started with Azure services.
## Need help? Contact us.
data-catalog Data Catalog Migration To Azure Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-migration-to-azure-purview.md
Title: Migrate from Azure Data Catalog to Microsoft Purview
description: Steps to migrate from Azure Data Catalog to Microsoft's unified data governance service--Microsoft Purview. Previously updated : 12/16/2022 Last updated : 03/15/2023 #Customer intent: As an Azure Data Catalog user, I want to know why and how to migrate to Microsoft Purview so that I can use the best tools to manage my data.
Look at [Microsoft Purview](https://azure.microsoft.com/services/purview/) and u
## Migrate to Microsoft Purview
-Manually migrate your data from Azure Data Catalog to Microsoft Purview.
- [Create a Microsoft Purview account](../purview/create-catalog-portal.md), [create collections](../purview/create-catalog-portal.md) in your data map, set up [permissions for your users](../purview/catalog-permissions.md), and onboard your data sources. We suggest you review the Microsoft Purview best practices documentation before deploying your Microsoft Purview account, so you can deploy the best environment for your data landscape.
data-factory Compute Optimized Data Flow Retire https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/compute-optimized-data-flow-retire.md
From now through 31 August 2024, your Compute Optimized data flows will continue
* [Visit the Azure Data Factory pricing page for the latest updated pricing available for General Purpose and Memory Optimized data flows](https://azure.microsoft.com/pricing/details/data-factory/data-pipeline/) * [Find more detailed information at the data flows FAQ here](./frequently-asked-questions.yml#mapping-data-flows)
-* [Post questions and find answers on data flows on Microsoft Q&A](/azure/data-factory/frequently-asked-questions)
+* [Post questions and find answers on data flows on Microsoft Q&A](./frequently-asked-questions.yml)
data-factory Concepts Integration Runtime Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-integration-runtime-performance.md
While increasing the shuffle partitions, make sure data is spread across well. A
Here are the steps to set it in a custom integration runtime. You can't set it for the autoresolve integration runtime. 1. From the ADF portal, under **Manage**, select a custom integration runtime and go to edit mode.
-2. Under dataflow run time tab, go to **Compute Cusotm Properties** section.
+2. Under the data flow runtime tab, go to the **Compute Custom Properties** section.
3. Select **Shuffle Partitions** under Property name, and then enter a value of your choice, such as 250 or 500.
-You can do same by editing JSON file of runtime by adding array with property name and value after *cleanup* property.
+You can do the same by editing the runtime's JSON file: add an array with the property name and value after an existing property, such as the *cleanup* property.
## Time to live
data-factory Connector Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-blob-storage.md
The following properties are supported for storage account key authentication in
| Property | Description | Required | |: |: |: | | type | The `type` property must be set to `AzureBlobStorage` (suggested) or `AzureStorage` (see the following notes). | Yes |
-| containerUri | Specify the Azure Blob container URI which has enabled Anonymous read access by taking this format `https://<AccountName>.blob.core.windows.net/<ContainerName>` and [Configure anonymous public read access for containers and blobs](/azure/storage/blobs/anonymous-read-access-configure#set-the-public-access-level-for-a-container) | Yes |
+| containerUri | Specify the Azure Blob container URI that has anonymous read access enabled, in the format `https://<AccountName>.blob.core.windows.net/<ContainerName>`. See [Configure anonymous public read access for containers and blobs](../storage/blobs/anonymous-read-access-configure.md#set-the-public-access-level-for-a-container). | Yes |
| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or the self-hosted integration runtime (if your data store is in a private network). If this property isn't specified, the service uses the default Azure integration runtime. | No | **Example:**
Azure Data Factory can get new or changed files only from Azure Blob Storage by
## Next steps
-For a list of data stores that the Copy activity supports as sources and sinks, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
+For a list of data stores that the Copy activity supports as sources and sinks, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Create Azure Ssis Integration Runtime Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-azure-ssis-integration-runtime-portal.md
This article shows you how to create an Azure-SQL Server Integration Services (SSIS) integration runtime (IR) in Azure Data Factory (ADF) or Synapse Pipelines via Azure portal. > [!NOTE]
-> Azure-SSIS IR in Azure Synapse Analytics is in public preview, please check [limitations](https://aka.ms/AAfq9i3) for preview.
+> There are certain features that aren't available for Azure-SSIS IR in Azure Synapse Analytics. Please check the [limitations](https://aka.ms/AAfq9i3).
## Provision an Azure-SSIS integration runtime
data-factory How Does Managed Airflow Work https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-does-managed-airflow-work.md
You'll need to upload a sample DAG onto an accessible Storage account.
### Steps to import 1. Copy-paste the content (either v2.x or v1.10 based on the Airflow environment that you have setup) into a new file called as **tutorial.py**.
- Upload the **tutorial.py** to a blob storage. ([How to upload a file into blob](/azure/storage/blobs/storage-quickstart-blobs-portal))
+ Upload the **tutorial.py** to a blob storage. ([How to upload a file into blob](../storage/blobs/storage-quickstart-blobs-portal.md))
> [!NOTE] > You will need to select a directory path from a blob storage account that contains folders named **dags** and **plugins** to import those into the Airflow environment. **Plugins** are not mandatory. You can also have a container named **dags** and upload all Airflow files within it.
If you're using Airflow version 1.x, delete DAGs that are deployed on any Airflo
* [Run an existing pipeline with Managed Airflow](tutorial-run-existing-pipeline-with-airflow.md) * [Refresh a Power BI dataset with Managed Airflow](tutorial-refresh-power-bi-dataset-with-airflow.md) * [Managed Airflow pricing](airflow-pricing.md)
-* [How to change the password for Managed Airflow environments](password-change-airflow.md)
+* [How to change the password for Managed Airflow environments](password-change-airflow.md)
data-factory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whats-new.md
This page is updated monthly, so revisit it regularly. For older months' update
Check out our [What's New video archive](https://www.youtube.com/playlist?list=PLt4mCx89QIGS1rQlNt2-7iuHHAKSomVLv) for all of our monthly update videos.
+## February 2023
+
+### Data movement
+
+- Anonymous authentication type supported for Azure Blob storage [Learn more](connector-azure-blob-storage.md?tabs=data-factory#anonymous-authentication)
+- Updated SAP template to easily move SAP data to ADLSGen2 in Delta format [Learn more](industry-sap-templates.md)
+
+### Monitoring
+
+Container monitoring view available in default ADF studio [Learn more](how-to-manage-studio-preview-exp.md#container-view)
+
+### Orchestration
+
+- Set pipeline output value (Public preview) [Learn more](tutorial-pipeline-return-value.md)
+- Managed Airflow (Public preview) [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-managed-airflow-in-azure-data-factory/ba-p/3730151)
+
+### Developer productivity
+
+Dark theme support added (Public preview) [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-dark-mode-for-adf-studio/ba-p/3757961)
+ ## January 2023 ### Data flow
data-manager-for-agri How To Set Up Sensors Partner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-set-up-sensors-partner.md
This marks the completion of the onboarding flow for partners as well. Once the
## Next steps
-* Test our APIs [here](/rest/api/data-manager-for-agri).
+* Test our APIs [here](/rest/api/data-manager-for-agri).
data-manager-for-agri Quickstart Install Data Manager For Agriculture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/quickstart-install-data-manager-for-agriculture.md
Use this document to get started with the steps to install Data Manager for Agri
## 1: Register resource provider
-Follow steps 1-5 in Resource Provider [documentation](https://docs.microsoft.com/azure/azure-resource-manager/management/resource-providers-and-types#register-resource-provider).
+Follow steps 1-5 in Resource Provider [documentation](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
In step 5 in the above documentation, search for `Microsoft.AgFoodPlatform` and register the same.
After providing the details and accepting terms and conditions, click on "review
You can access Data Manager for Agriculture resource through an app registered in Azure Active Directory. Use the Azure portal for App registration, this enables Microsoft identity platform to provide authentication and authorization services for your app accessing Data Manager for Agriculture.
-Follow the steps provided in <a href="https://docs.microsoft.com/azure/active-directory/develop/quickstart-register-app#register-an-application" target="_blank">App Registration</a> **until step 8** to generate the following information:
+Follow the steps provided in <a href="/azure/active-directory/develop/quickstart-register-app#register-an-application" target="_blank">App Registration</a> **until step 8** to generate the following information:
* **Application (client) ID** * **Directory (tenant) ID**
Write down these three values, you would need them in the next step.
The Application (client) ID created is like the User ID of the application, and now you need to create its corresponding Application password (client secret) for the application to identify itself.
-Follow the steps provided in <a href="https://docs.microsoft.com/azure/active-directory/develop/quickstart-register-app#add-a-client-secret" target="_blank">Add a client secret</a> to generate **Client Secret** and copy the client secret generated.
+Follow the steps provided in <a href="/azure/active-directory/develop/quickstart-register-app#add-a-client-secret" target="_blank">Add a client secret</a> to generate **Client Secret** and copy the client secret generated.
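To verify the app registration works end to end, you can request a token for it directly. The following is a minimal sketch of the client-credentials flow in PowerShell; the placeholder IDs and secret are the values you recorded earlier, and the token scope shown is an assumption, so substitute the scope documented for your Data Manager for Agriculture instance:

```powershell
# Client-credentials token request against the Microsoft identity platform.
# <tenant-id>, <client-id>, and <client-secret> are the values recorded above.
$tenantId = "<tenant-id>"
$body = @{
    grant_type    = "client_credentials"
    client_id     = "<client-id>"
    client_secret = "<client-secret>"
    scope         = "https://farmbeats.azure.net/.default"  # assumed scope; check your service's documentation
}
$token = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" `
    -Body $body
$token.access_token  # bearer token for calling the service APIs
```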
## 5: Role assignment
With working **API endpoint (instanceUri)** and **access_token**, you now can st
## Next steps * See the Hierarchy Model and learn how to create and organize your agriculture data [here](./concepts-hierarchy-model.md).
-* Understand our APIs [here](/rest/api/data-manager-for-agri).
+* Understand our APIs [here](/rest/api/data-manager-for-agri).
databox-online Azure Stack Edge Gpu Deploy Arc Kubernetes Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-arc-kubernetes-cluster.md
[!INCLUDE [applies-to-GPU-and-pro-r-and-mini-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-mini-r-sku.md)]
-This article shows you how to enable Azure Arc on an existing Kubernetes cluster on your Azure Stack Edge Pro device.
+This article shows you how to enable Azure Arc on an existing Kubernetes cluster on your Azure Stack Edge Pro device.
-This procedure is intended for those who have reviewed the [Kubernetes workloads on Azure Stack Edge Pro device](azure-stack-edge-gpu-kubernetes-workload-management.md) and are familiar with the concepts of [What is Azure Arc-enabled Kubernetes (Preview)?](../azure-arc/kubernetes/overview.md).
+This procedure assumes that you have read and understood the following articles:
+- [Kubernetes workloads on Azure Stack Edge Pro device](azure-stack-edge-gpu-kubernetes-workload-management.md)
+- [What is Azure Arc-enabled Kubernetes (Preview)?](../azure-arc/kubernetes/overview.md)
## Prerequisites
-Before you can enable Azure Arc on Kubernetes cluster, make sure that you have completed the following prerequisites on your Azure Stack Edge Pro device and the client that you will use to access the device:
+Make sure that you've completed the following prerequisites on your Azure Stack Edge Pro device and the client that you use to access the device:
### For device
Before you can enable Azure Arc on Kubernetes cluster, make sure that you have c
1. The device has the compute role configured via Azure portal and has a Kubernetes cluster. See [Configure compute](azure-stack-edge-gpu-deploy-configure-compute.md). 1. You have owner access to the subscription. You'll need this access during the role assignment step for your service principal.
-
+ ### For client accessing the device
-1. You have a Windows client system that will be used to access the Azure Stack Edge Pro device.
-
+1. You have a Windows client system that is used to access the Azure Stack Edge Pro device.
+ - The client is running Windows PowerShell 5.0 or later. To download the latest version of Windows PowerShell, go to [Install Windows PowerShell](/powershell/scripting/install/installing-powershell-core-on-windows).
-
- - You can have any other client with a [Supported operating system](azure-stack-edge-gpu-system-requirements.md#supported-os-for-clients-connected-to-device) as well. This article describes the procedure when using a Windows client.
-
-
-1. You have completed the procedure described in [Access the Kubernetes cluster on Azure Stack Edge Pro device](azure-stack-edge-gpu-create-kubernetes-cluster.md). You have:
-
- - Installed `kubectl` on the client.
- - Make sure that the `kubectl` client version is skewed no more than one version from the Kubernetes master version running on your Azure Stack Edge Pro device.
+
+ - You can have any other client with a [Supported operating system](azure-stack-edge-gpu-system-requirements.md#supported-os-for-clients-connected-to-device) as well. This article describes the procedure when using a Windows client.
++
+1. You've completed the procedure described in [Access the Kubernetes cluster on Azure Stack Edge Pro device](azure-stack-edge-gpu-create-kubernetes-cluster.md). You have:
+
+ - Installed `kubectl` on the client.
+ - Make sure that the `kubectl` client version is skewed no more than one version from the Kubernetes master version running on your Azure Stack Edge Pro device.
- Use `kubectl version` to check the version of kubectl running on the client. Make a note of the full version.
- - In the local UI of your Azure Stack Edge Pro device, go to **Software update** and note the Kubernetes server version number.
-
- ![Verify Kubernetes server version number](media/azure-stack-edge-gpu-connect-powershell-interface/verify-kubernetes-version-1.png)
-
- - Verify these two versions are compatible.
+ - In the local UI of your Azure Stack Edge Pro device, go to **Software update** and note the Kubernetes server version number.
+
+ ![Verify Kubernetes server version number](media/azure-stack-edge-gpu-connect-powershell-interface/verify-kubernetes-version-1.png)
+
+ - Verify these two versions are compatible.
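To make the version comparison above concrete, here's a minimal sketch you can run on the Windows client; it assumes a `kubectl` recent enough to support JSON output:

```powershell
# Print the kubectl client version as reported by kubectl itself
$client = (kubectl version --client --output json | Out-String | ConvertFrom-Json).clientVersion
"kubectl client version: $($client.major).$($client.minor)"

# Compare this against the Kubernetes server version shown in the device's
# local UI under Software update; the minor versions shouldn't differ by
# more than one.
```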
## Register Kubernetes resource providers
-Before you enable Azure Arc on the Kubernetes cluster, you will need to enable and register `Microsoft.Kubernetes` and `Microsoft.KubernetesConfiguration` against your subscription.
+Before you enable Azure Arc on the Kubernetes cluster, you need to enable and register `Microsoft.Kubernetes` and `Microsoft.KubernetesConfiguration` against your subscription.
-1. To enable a resource provider, in the Azure portal, go to the subscription that you are planning to use for the deployment. Go to **Resource Providers**.
+1. To enable a resource provider, in the Azure portal, go to the subscription that you're planning to use for the deployment. Go to **Resource Providers**.
1. In the right pane, search for the providers you want to add. In this example, `Microsoft.Kubernetes` and `Microsoft.KubernetesConfiguration`. ![Register Kubernetes resource providers](media/azure-stack-edge-gpu-connect-powershell-interface/register-k8-resource-providers-1.png)
-1. Select a resource provider and from the top of the command bar, select **Register**. Registration takes several minutes.
+1. Select a resource provider and from the top of the command bar, select **Register**. Registration takes several minutes.
![Register Kubernetes resource providers 2](media/azure-stack-edge-gpu-connect-powershell-interface/register-k8-resource-providers-2.png) 1. Refresh the UI until you see that the resource provider is registered. Repeat the process for both resource providers.
-
+ ![Register Kubernetes resource providers 3](media/azure-stack-edge-gpu-connect-powershell-interface/register-k8-resource-providers-4.png) You can also register resource providers via the `az cli`. For more information, see [Register the two providers for Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/quickstart-connect-cluster.md#register-providers-for-azure-arc-enabled-kubernetes).
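For reference, a minimal sketch of that CLI route, covering the same two providers:

```powershell
# Register both resource providers required for Azure Arc-enabled Kubernetes
az provider register --namespace Microsoft.Kubernetes
az provider register --namespace Microsoft.KubernetesConfiguration

# Registration is asynchronous; re-run until both report "Registered"
az provider show --namespace Microsoft.Kubernetes --query registrationState --output tsv
az provider show --namespace Microsoft.KubernetesConfiguration --query registrationState --output tsv
```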
You can also register resource providers via the `az cli`. For more information,
1. To create a service principal, use the following command via the `az cli`.
- `az ad sp create-for-rbac --name "<Informative name for service principal>"`
+ `az ad sp create-for-rbac --name "<Informative name for service principal>"`
+
+ For information on how to log in to the `az cli`, see [Start Cloud Shell in Azure portal](/azure/cloud-shell/quickstart). If you're using `az cli` on a local client to create the service principal, make sure that you're running version 2.25 or later.
- For information on how to log into the `az cli`, [Start Cloud Shell in Azure portal](../cloud-shell/quickstart-powershell.md#start-cloud-shell). If using `az cli` on a local client to create the service principal, make sure that you are running version 2.25 or later.
+ Here's an example.
- Here is an example.
-
```azurecli PS /home/user> az ad sp create-for-rbac --name "https://azure-arc-for-ase-k8s" {
You can also register resource providers via the `az cli`. For more information,
PS /home/user> ```
-1. Make a note of the `appID`, `name`, `password`, and `tenantID` as you will use this as input in the next command.
+1. Make a note of the `appID`, `name`, `password`, and `tenantID` as you'll use these values as input to the next command.
1. After creating the new service principal, assign the `Kubernetes Cluster - Azure Arc Onboarding` role to the newly created principal. This is a built-in Azure role (use the role ID in the command) with limited permissions. Use the following command: `az role assignment create --role 34e09817-6cbe-4d01-b1a2-e0eac5743d41 --assignee <appId-from-service-principal> --scope /subscriptions/<SubscriptionID>/resourceGroups/<Resource-group-name>`
- Here is an example.
-
+ Here's an example.
+ ```azurecli PS /home/user> az role assignment create --role 34e09817-6cbe-4d01-b1a2-e0eac5743d41 --assignee xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --scope /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myaserg1 {
Follow these steps to configure the Kubernetes cluster for Azure Arc management:
1. Type:
- `Set-HcsKubernetesAzureArcAgent -SubscriptionId "<Your Azure Subscription Id>" -ResourceGroupName "<Resource Group Name>" -ResourceName "<Azure Arc resource name (shouldn't exist already)>" -Location "<Region associated with resource group>" -TenantId "<Tenant Id of service principal>" -ClientId "<App id of service principal>"`
-
- When this command is run, there is a followup prompt to enter the `ClientSecret`. Provide the service principal password.
+ `Set-HcsKubernetesAzureArcAgent -SubscriptionId "<Your Azure Subscription Id>" -ResourceGroupName "<Resource Group Name>" -ResourceName "<Azure Arc resource name (shouldn't exist already)>" -Location "<Region associated with resource group>" -TenantId "<Tenant Id of service principal>" -ClientId "<App id of service principal>"`
+
+ When this command is run, there's a follow-up prompt to enter the `ClientSecret`. Provide the service principal password.
- Add the `CloudEnvironment` parameter if you are using a cloud other than Azure public. You can set this parameter to `AZUREPUBLICCLOUD`, `AZURECHINACLOUD`, `AZUREGERMANCLOUD`, and `AZUREUSGOVERNMENTCLOUD`.
Add the `CloudEnvironment` parameter if you're using a cloud other than Azure public. You can set this parameter to `AZUREPUBLICCLOUD`, `AZURECHINACLOUD`, `AZUREGERMANCLOUD`, or `AZUREUSGOVERNMENTCLOUD`.
> [!NOTE]
- > - To deploy Azure Arc on your device, make sure that you are using a [Supported region for Azure Arc](https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc).
+ > - To deploy Azure Arc on your device, make sure that you are using a [Supported region for Azure Arc](https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc).
> - Use the `az account list-locations` command to figure out the exact location name to pass in the `Set-HcsKubernetesAzureArcAgent` cmdlet. Location names are typically formatted without any spaces.
- > - `ClientId` and `ClientSecret` are required.
-
- Here is an example:
-
+ > - `ClientId` and `ClientSecret` are required.
+
+ Here's an example:
+ ```powershell [10.100.10.10]: PS>Set-HcsKubernetesAzureArcAgent -SubscriptionId "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -ResourceGroupName "myaserg1" -ResourceName "myasetestresarc" -Location "westeurope" -TenantId "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -ClientId "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-
+ WARNING: A script or application on the remote computer 10.126.76.0 is sending a prompt request. When you are prompted, enter sensitive information, such as credentials or passwords, only if you trust the remote computer and the application or script that is requesting the data.
Follow these steps to configure the Kubernetes cluster for Azure Arc management:
ClientSecret: ********************************** [10.100.10.10]: PS> ```
-
+ In the Azure portal, a resource should be created with the name you provided in the preceding command. ![Go to Azure Arc resource](media/azure-stack-edge-gpu-connect-powershell-interface/verify-azure-arc-enabled-1.png)
Follow these steps to configure the Kubernetes cluster for Azure Arc management:
`kubectl get deployments,pods -n azure-arc`
- Here is a sample output that shows the Azure Arc agents that were deployed on your Kubernetes cluster in the `azure-arc` namespace.
+ Here's a sample output that shows the Azure Arc agents that were deployed on your Kubernetes cluster in the `azure-arc` namespace.
```powershell [10.128.44.240]: PS>kubectl get deployments,pods -n azure-arc
To remove the Azure Arc management, follow these steps:
1. 1. [Connect to the PowerShell interface](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface) of your device. 2. Type:
- `Remove-HcsKubernetesAzureArcAgent`
+ `Remove-HcsKubernetesAzureArcAgent`
> [!NOTE]
To remove the Azure Arc management, follow these steps:
## Next steps
-To understand how to run an Azure Arc deployment, see
+To understand how to run an Azure Arc deployment, see
[Deploy a stateless PHP `Guestbook` application with Redis via GitOps on an Azure Stack Edge Pro device](azure-stack-edge-gpu-deploy-stateless-application-git-ops-guestbook.md)
ddos-protection Diagnostic Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/diagnostic-logging.md
Title: 'Tutorial: View and configure Azure DDoS Protection diagnostic logging'
-description: Learn how to configure reports and flow logs.
+ Title: 'Configure Azure DDoS Protection diagnostic logging through portal'
+description: Learn how to configure Azure DDoS Protection diagnostic logs.
--+ Previously updated : 10/12/2022 Last updated : 03/14/2023
-# Tutorial: View and configure Azure DDoS Protection diagnostic logging
+# Configure Azure DDoS Protection diagnostic logging through portal
-Azure DDoS Protection provides detailed attack insights and visualization with DDoS Attack Analytics. Customers protecting their virtual networks against DDoS attacks have detailed visibility into attack traffic and actions taken to mitigate the attack via attack mitigation reports & mitigation flow logs. Rich telemetry is exposed via Azure Monitor including detailed metrics during the duration of a DDoS attack. Alerting can be configured for any of the Azure Monitor metrics exposed by DDoS Protection. Logging can be further integrated with [Microsoft Sentinel](../sentinel/data-connectors/azure-ddos-protection.md), Splunk (Azure Event Hubs), OMS Log Analytics, and Azure Storage for advanced analysis via the Azure Monitor Diagnostics interface.
-
-The following diagnostic logs are available for Azure DDoS Protection:
--- **DDoSProtectionNotifications**: Notifications will notify you anytime a public IP resource is under attack, and when attack mitigation is over.-- **DDoSMitigationFlowLogs**: Attack mitigation flow logs allow you to review the dropped traffic, forwarded traffic and other interesting data-points during an active DDoS attack in near-real time. You can ingest the constant stream of this data into Microsoft Sentinel or to your third-party SIEM systems via event hub for near-real time monitoring, take potential actions and address the need of your defense operations.-- **DDoSMitigationReports**: Attack mitigation reports use the Netflow protocol data, which is aggregated to provide detailed information about the attack on your resource. Anytime a public IP resource is under attack, the report generation will start as soon as the mitigation starts. There will be an incremental report generated every 5 mins and a post-mitigation report for the whole mitigation period. This is to ensure that in an event the DDoS attack continues for a longer duration of time, you'll be able to view the most current snapshot of mitigation report every 5 minutes and a complete summary once the attack mitigation is over.-- **AllMetrics**: Provides all possible metrics available during the duration of a DDoS attack.-
-In this tutorial, you'll learn how to:
-
-> [!div class="checklist"]
-> * Configure Azure DDoS Protection diagnostic logs, including notifications, mitigation reports and mitigation flow logs.
-> * Enable diagnostic logging on all public IPs in a defined scope.
-> * View log data in workbooks.
+In this guide, you'll learn how to configure Azure DDoS Protection diagnostic logs, including notifications, mitigation reports and mitigation flow logs.
## Prerequisites - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- Before you can complete the steps in this tutorial, you must first create a [Azure DDoS protection plan](manage-ddos-protection.md). DDoS Network Protection must be enabled on a virtual network or DDoS IP Protection must be enabled on a public IP address. -- DDoS monitors public IP addresses assigned to resources within a virtual network. If you don't have any resources with public IP addresses in the virtual network, you must first create a resource with a public IP address. You can monitor the public IP address of all resources deployed through Resource Manager (not classic) listed in [Virtual network for Azure services](../virtual-network/virtual-network-for-azure-services.md#services-that-can-be-deployed-into-a-virtual-network) (including Azure Load Balancers where the backend virtual machines are in the virtual network), except for Azure App Service Environments. To continue with this tutorial, you can quickly create a [Windows](../virtual-machines/windows/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Linux](../virtual-machines/linux/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) virtual machine.-
-## Configure Azure DDoS Protection diagnostic logs
-
-If you want to automatically enable diagnostic logging on all public IPs within an environment, skip to [Enable diagnostic logging on all public IPs](#enable-diagnostic-logging-on-all-public-ips).
-
-1. Select **All services** on the top, left of the portal.
-1. Enter *Monitor* in the **Filter** box. When **Monitor** appears in the results, select it.
-1. Under **Settings**, select **Diagnostic Settings**.
-1. Select the **Subscription** and **Resource group** that contain the public IP address you want to log.
-1. Select **Public IP Address** for **Resource type**, then select the specific public IP address you want to enable logs for.
-1. Select **Add diagnostic setting**. Under **Category Details**, select as many of the following options you require, and then select **Save**.
-
- :::image type="content" source="./media/ddos-attack-telemetry/ddos-diagnostic-settings.png" alt-text="Screenshot of DDoS diagnostic settings." lightbox="./media/ddos-attack-telemetry/ddos-diagnostic-settings.png":::
-
-
-1. Under **Destination details**, select as many of the following options as you require:
-
- - **Archive to a storage account**: Data is written to an Azure Storage account. To learn more about this option, see [Archive resource logs](../azure-monitor/essentials/resource-logs.md?toc=%2fazure%2fvirtual-network%2ftoc.json#send-to-azure-storage).
- - **Stream to an event hub**: Allows a log receiver to pick up logs using Azure Event Hubs. Event hubs enable integration with Splunk or other SIEM systems. To learn more about this option, see [Stream resource logs to an event hub](../azure-monitor/essentials/resource-logs.md?toc=%2fazure%2fvirtual-network%2ftoc.json#send-to-azure-event-hubs).
- - **Send to Log Analytics**: Writes logs to the Azure Monitor service. To learn more about this option, see [Collect logs for use in Azure Monitor logs](../azure-monitor/essentials/resource-logs.md?toc=%2fazure%2fvirtual-network%2ftoc.json#send-to-log-analytics-workspace).
-
-### Query Azure DDOS Protection logs in log analytics workspace
-
-For more information on log schemas, see [Monitoring Azure DDoS Protection](monitor-ddos-protection-reference.md#diagnostic-logs).
-#### DDoSProtectionNotifications logs
+- Before you can complete the steps in this guide, you must first create an [Azure DDoS protection plan](manage-ddos-protection.md). DDoS Network Protection must be enabled on a virtual network or DDoS IP Protection must be enabled on a public IP address.
+- In order to use diagnostic logging, you must first create a [Log Analytics workspace with diagnostic settings enabled](ddos-configure-log-analytics-workspace.md).
+- DDoS monitors public IP addresses assigned to resources within a virtual network. If you don't have any resources with public IP addresses in the virtual network, you must first create a resource with a public IP address. You can monitor the public IP address of all resources deployed through Resource Manager (not classic) listed in [Virtual network for Azure services](../virtual-network/virtual-network-for-azure-services.md#services-that-can-be-deployed-into-a-virtual-network) (including Azure Load Balancers where the backend virtual machines are in the virtual network), except for Azure App Service Environments. To continue with this guide, you can quickly create a [Windows](../virtual-machines/windows/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Linux](../virtual-machines/linux/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) virtual machine.
-1. Under the **Log analytics workspaces** blade, select your log analytics workspace.
+## Configure diagnostic logs
-1. Under **General**, select on **Logs**
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. In the search box at the top of the portal, enter **Monitor**. Select **Monitor** in the search results.
+1. Select **Diagnostic Settings** under **Settings** in the left pane, then select the following information on the **Diagnostic settings** page. Next, select **Add diagnostic setting**.
-1. In Query explorer, type in the following Kusto Query and change the time range to Custom and change the time range to last three months. Then hit Run.
+ :::image type="content" source="./media/ddos-attack-telemetry/ddos-monitor-diagnostic-settings.png" alt-text="Screenshot of Monitor diagnostic settings.":::
- ```kusto
- AzureDiagnostics
- | where Category == "DDoSProtectionNotifications"
- ```
+ | Setting | Value |
+ |--|--|
+ |Subscription | Select the **Subscription** that contains the public IP address you want to log. |
+ | Resource group | Select the **Resource group** that contains the public IP address you want to log. |
+ |Resource type | Select **Public IP Addresses**.|
+ |Resource | Select the specific **Public IP address** you want to log metrics for. |
-1. To view **DDoSMitigationFlowLogs** change the query to the following and keep the same time range and hit Run.
+1. On the *Diagnostic setting* page, under *Destination details*, select **Send to Log Analytics workspace**, enter the following information, and then select **Save**.
- ```kusto
- AzureDiagnostics
- | where Category == "DDoSMitigationFlowLogs"
- ```
+ :::image type="content" source="./media/ddos-attack-telemetry/ddos-public-ip-diagnostic-setting.png" alt-text="Screenshot of DDoS diagnostic settings.":::
-1. To view **DDoSMitigationReports** change the query to the following and keep the same time range and hit Run.
+ | Setting | Value |
+ |--|--|
+ | Diagnostic setting name | Enter **myDiagnosticSettings**. |
+ |**Logs**| Select **allLogs**.|
+ |**Metrics**| Select **AllMetrics**. |
+ |**Destination details**| Select **Send to Log Analytics workspace**.|
+ | Subscription | Select your Azure subscription. |
+ | Log Analytics Workspace | Select **myLogAnalyticsWorkspace**. |
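If you'd rather script this configuration than click through the portal, the Az PowerShell module can create the same diagnostic setting. A sketch under the assumption that your public IP is named `myPublicIP` in resource group `myResourceGroup`; all names are placeholders, and the `New-AzDiagnosticSetting*` cmdlets require a recent Az.Monitor version:

```powershell
# Look up the public IP and the Log Analytics workspace (placeholder names)
$pip = Get-AzPublicIpAddress -Name "myPublicIP" -ResourceGroupName "myResourceGroup"
$law = Get-AzOperationalInsightsWorkspace -Name "myLogAnalyticsWorkspace" -ResourceGroupName "myResourceGroup"

# Send all logs and all metrics for the public IP to the workspace
$log    = New-AzDiagnosticSettingLogSettingsObject -CategoryGroup allLogs -Enabled $true
$metric = New-AzDiagnosticSettingMetricSettingsObject -Category AllMetrics -Enabled $true

New-AzDiagnosticSetting -Name "myDiagnosticSettings" `
    -ResourceId $pip.Id `
    -WorkspaceId $law.ResourceId `
    -Log $log -Metric $metric
```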
- ```kusto
- AzureDiagnostics
- | where Category == "DDoSMitigationReports"
- ```
-## Enable diagnostic logging on all public IPs
+## Validate
-This [built-in policy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F752154a7-1e0f-45c6-a880-ac75a7e4f648) automatically enables diagnostic logging on all public IP logs in a defined scope. See [Azure Policy built-in definitions for Azure DDoS Protection](policy-reference.md) for full list of built-in policies.
+1. In the search box at the top of the portal, enter **Monitor**. Select **Monitor** in the search results.
+1. Select **Diagnostic Settings** under **Settings** in the left pane, then select the following information on the **Diagnostic settings** page:
+ :::image type="content" source="./media/ddos-attack-telemetry/ddos-monitor-diagnostic-settings-enabled.png" alt-text="Screenshot of Monitor public ip diagnostic settings enabled.":::
-## View log data in workbooks
+ | Setting | Value |
+ |--|--|
+ |Subscription | Select the **Subscription** that contains the public IP address. |
+ | Resource group | Select the **Resource group** that contains the public IP address. |
+ |Resource type | Select **Public IP Addresses**.|
-### Microsoft Sentinel data connector
-
-You can connect logs to Microsoft Sentinel, view and analyze your data in workbooks, create custom alerts, and incorporate it into investigation processes. To connect to Microsoft Sentinel, see [Connect to Microsoft Sentinel](../sentinel/data-connectors/azure-ddos-protection.md).
---
-### Azure DDoS Protection workbook
-
-You can use [this Azure Resource Manager (ARM) template](https://aka.ms/ddosworkbook) to deploy an attack analytics workbook. This workbook allows you to visualize attack data across several filterable panels to easily understand what's at stake.
-
-[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2FAzure-Network-Security%2Fmaster%2FAzure%20DDoS%20Protection%2FWorkbook%20-%20Azure%20DDOS%20monitor%20workbook%2FAzureDDoSWorkbook_ARM.json)
---
-## Validate and test
-
-To simulate a DDoS attack to validate your logs, see [Test with simulation partners](test-through-simulations.md).
+1. Confirm your *Diagnostic status* is **Enabled**.
## Next steps
-In this tutorial, you learned how to:
--- Configure Azure DDoS Protection diagnostic logs, including notifications, mitigation reports and mitigation flow logs.-- Enable diagnostic logging on all public IPs in a defined scope.-- View log data in workbooks.
+In this guide, you learned how to configure Azure DDoS Protection diagnostic logs, including notifications, mitigation reports and mitigation flow logs.
-To learn how to configure attack alerts, continue to the next tutorial.
+To learn how to configure attack alerts, continue to the next guide.
> [!div class="nextstepaction"]
-> [View and configure DDoS protection alerts](alerts.md)
+> [Configure DDoS protection alerts](alerts.md)
defender-for-cloud Defender For Containers Vulnerability Assessment Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-vulnerability-assessment-azure.md
Some images may reuse tags from an image that was already scanned. For example,
Currently, Defender for Containers can scan images in Azure Container Registry (ACR) and AWS Elastic Container Registry (ECR) only. Docker Registry, Microsoft Artifact Registry/Microsoft Container Registry, and Microsoft Azure Red Hat OpenShift (ARO) built-in container image registry are not supported.
-Images should first be imported to ACR. Learn more about [importing container images to an Azure container registry](/azure/container-registry/container-registry-import-images?tabs=azure-cli).
+Images should first be imported to ACR. Learn more about [importing container images to an Azure container registry](../container-registry/container-registry-import-images.md?tabs=azure-cli).
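As an illustration, a single public image can be brought into a registry with `az acr import`; the registry and tag names below are placeholders:

```powershell
# Import a public image into Azure Container Registry so Defender for
# Containers can scan it (registry and tag names are placeholders)
az acr import `
    --name myregistry `
    --source docker.io/library/nginx:latest `
    --image nginx:v1
```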
## Next steps
-Learn more about the Defender for Cloud [Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads).
+Learn more about the Defender for Cloud [Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads).
defender-for-cloud Devops Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/devops-faq.md
You can [check which account is signed in](https://app.vssps.visualstudio.com/pr
The Azure DevOps service only supports `TfsGit`.
-Ensure that you've [onboarded your repositories](/azure/defender-for-cloud/quickstart-onboard-devops?branch=main) to Microsoft Defender for Cloud. If you still can't see your repository, ensure that you're signed in with the correct Azure DevOps organization user account. Your Azure subscription and Azure DevOps Organization need to be in the same tenant. If the user for the connector is wrong, you need to delete the previously created connector, sign in with the correct user account and re-create the connector.
+Ensure that you've [onboarded your repositories](./quickstart-onboard-devops.md?branch=main) to Microsoft Defender for Cloud. If you still can't see your repository, ensure that you're signed in with the correct Azure DevOps organization user account. Your Azure subscription and Azure DevOps Organization need to be in the same tenant. If the user for the connector is wrong, you need to delete the previously created connector, sign in with the correct user account and re-create the connector.
### Secret scan didn't run on my code
If no scan is performed for 14 days, the scan results revert to `N/A`.
### I don't see Recommendations for findings
-Ensure that you've onboarded the project with the connector and that your repository (that build is for), is onboarded to Microsoft Defender for Cloud. You can learn how to [onboard your DevOps repository](/azure/defender-for-cloud/quickstart-onboard-devops?branch=main) to Defender for Cloud.
+Ensure that you've onboarded the project with the connector and that your repository (that the build is for) is onboarded to Microsoft Defender for Cloud. You can learn how to [onboard your DevOps repository](./quickstart-onboard-devops.md?branch=main) to Defender for Cloud.
You must have more than a [stakeholder license](https://azure.microsoft.com/pricing/details/devops/azure-devops-services/) to the repos to onboard them, and you need to be at least a Security Reader on the subscription where the connector is created. You can confirm that you've onboarded the repositories by checking that they appear in the inventory list in Microsoft Defender for Cloud.
You can learn more about [Microsoft Security DevOps](https://marketplace.visuals
## Next steps -- [Overview of Defender for DevOps](defender-for-devops-introduction.md)
+- [Overview of Defender for DevOps](defender-for-devops-introduction.md)
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
This page provides you with information about:
- Bug fixes - Deprecated functionality
+## September 2022
+
+Updates in September include:
+
+- [Suppress alerts based on Container and Kubernetes entities](#suppress-alerts-based-on-container-and-kubernetes-entities)
+- [Defender for Servers supports File Integrity Monitoring with Azure Monitor Agent](#defender-for-servers-supports-file-integrity-monitoring-with-azure-monitor-agent)
+- [Legacy Assessments APIs deprecation](#legacy-assessments-apis-deprecation)
+- [Extra recommendations added to identity](#extra-recommendations-added-to-identity)
+- [Removed security alerts for machines reporting to cross-tenant Log Analytics workspaces](#removed-security-alerts-for-machines-reporting-to-cross-tenant-log-analytics-workspaces)
+
+### Suppress alerts based on Container and Kubernetes entities
+
+You can now create alert suppression rules that are scoped to the following Container and Kubernetes entities:
+- Kubernetes Namespace
+- Kubernetes Pod
+- Kubernetes Secret
+- Kubernetes ServiceAccount
+- Kubernetes ReplicaSet
+- Kubernetes StatefulSet
+- Kubernetes DaemonSet
+- Kubernetes Job
+- Kubernetes CronJob
+
+Learn more about [alert suppression rules](alerts-suppression-rules.md).
+
+### Defender for Servers supports File Integrity Monitoring with Azure Monitor Agent
+
+File integrity monitoring (FIM) examines operating system files and registries for changes that might indicate an attack.
+
+FIM is now available in a new version based on Azure Monitor Agent (AMA), which you can [deploy through Defender for Cloud](auto-deploy-azure-monitoring-agent.md).
+
+Learn more about [File Integrity Monitoring with the Azure Monitor Agent](file-integrity-monitoring-enable-ama.md).
+
+### Legacy Assessments APIs deprecation
+
+The following APIs are deprecated:
+
+- Security Tasks
+- Security Statuses
+- Security Summaries
+
+These three APIs exposed old formats of assessments and are replaced by the [Assessments APIs](/rest/api/defenderforcloud/assessments) and [SubAssessments APIs](/rest/api/defenderforcloud/sub-assessments). All data that is exposed by these legacy APIs is also available in the new APIs.
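If you have automation built on the deprecated endpoints, it can be repointed at the replacement API. Here's a sketch using `az rest`; the subscription ID is a placeholder and the api-version is an assumption, so check the Assessments API reference for the current value:

```powershell
# List security assessments for a subscription via the replacement API
# (<subscription-id> is a placeholder; api-version is assumed)
az rest --method GET `
    --url "https://management.azure.com/subscriptions/<subscription-id>/providers/Microsoft.Security/assessments?api-version=2020-01-01"
```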
+
+### Extra recommendations added to identity
+
+Defender for Cloud's recommendations for improving the management of users and accounts have been extended.
+
+#### New recommendations
+
+The new release contains the following capabilities:
+
+- **Extended evaluation scope** - Coverage has been improved for identity accounts without MFA and external accounts on Azure resources (instead of subscriptions only), which allows your security administrators to view role assignments per account.
+
+- **Improved freshness interval** - The identity recommendations now have a freshness interval of 12 hours.
+
+- **Account exemption capability** - Defender for Cloud has many features you can use to customize your experience and ensure that your secure score reflects your organization's security priorities. For example, you can [exempt resources and recommendations from your secure score](exempt-resource.md).
+
+ This update allows you to exempt specific accounts from evaluation with the recommendations listed in the following table.
+
+ Typically, you'd exempt emergency "break glass" accounts from MFA recommendations, because such accounts are often deliberately excluded from an organization's MFA requirements. Alternatively, you might have external accounts that you'd like to permit access to that don't have MFA enabled.
+
+ > [!TIP]
+ > When you exempt an account, it won't be shown as unhealthy and also won't cause a subscription to appear unhealthy.
+
+ | Recommendation | Assessment key |
+ |--|--|
+ |Accounts with owner permissions on Azure resources should be MFA enabled|6240402e-f77c-46fa-9060-a7ce53997754|
+ |Accounts with write permissions on Azure resources should be MFA enabled|c0cb17b2-0607-48a7-b0e0-903ed22de39b|
+ |Accounts with read permissions on Azure resources should be MFA enabled|dabc9bc4-b8a8-45bd-9a5a-43000df8aa1c|
+ |Guest accounts with owner permissions on Azure resources should be removed|20606e75-05c4-48c0-9d97-add6daa2109a|
+ |Guest accounts with write permissions on Azure resources should be removed|0354476c-a12a-4fcc-a79d-f0ab7ffffdbb|
+ |Guest accounts with read permissions on Azure resources should be removed|fde1c0c9-0fd2-4ecc-87b5-98956cbc1095|
+ |Blocked accounts with owner permissions on Azure resources should be removed|050ac097-3dda-4d24-ab6d-82568e7a50cf|
+ |Blocked accounts with read and write permissions on Azure resources should be removed| 1ff0b4c9-ed56-4de6-be9c-d7ab39645926 |
+
+The recommendations, although in preview, will appear next to the recommendations that are currently in GA.
+
+### Removed security alerts for machines reporting to cross-tenant Log Analytics workspaces
+
+In the past, Defender for Cloud let you choose the workspace that your Log Analytics agents report to. When a machine belonged to one tenant ("Tenant A") but its Log Analytics agent reported to a workspace in a different tenant ("Tenant B"), security alerts about the machine were reported to the first tenant ("Tenant A").
+
+With this change, alerts on machines connected to Log Analytics workspace in a different tenant no longer appear in Defender for Cloud.
+
+If you want to continue receiving the alerts in Defender for Cloud, connect the Log Analytics agent of the relevant machines to the workspace in the same tenant as the machine.
+
+Learn more about [security alerts](alerts-overview.md).
+ ## August 2022 Updates in August include:
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Agentless vulnerability assessment scanning for images in ECR repositories helps
Learn more about [vulnerability assessment for Amazon ECR images](defender-for-containers-vulnerability-assessment-elastic.md).
-## September 2022
-
-Updates in September include:
--- [Suppress alerts based on Container and Kubernetes entities](#suppress-alerts-based-on-container-and-kubernetes-entities)-- [Defender for Servers supports File Integrity Monitoring with Azure Monitor Agent](#defender-for-servers-supports-file-integrity-monitoring-with-azure-monitor-agent)-- [Legacy Assessments APIs deprecation](#legacy-assessments-apis-deprecation)-- [Extra recommendations added to identity](#extra-recommendations-added-to-identity)-- [Removed security alerts for machines reporting to cross-tenant Log Analytics workspaces](#removed-security-alerts-for-machines-reporting-to-cross-tenant-log-analytics-workspaces)-
-### Suppress alerts based on Container and Kubernetes entities
--- Kubernetes Namespace-- Kubernetes Pod-- Kubernetes Secret-- Kubernetes ServiceAccount-- Kubernetes ReplicaSet-- Kubernetes StatefulSet-- Kubernetes DaemonSet-- Kubernetes Job-- Kubernetes CronJob-
-Learn more about [alert suppression rules](alerts-suppression-rules.md).
-
-### Defender for Servers supports File Integrity Monitoring with Azure Monitor Agent
-
-File integrity monitoring (FIM) examines operating system files and registries for changes that might indicate an attack.
-
-FIM is now available in a new version based on Azure Monitor Agent (AMA), which you can [deploy through Defender for Cloud](auto-deploy-azure-monitoring-agent.md).
-
-Learn more about [File Integrity Monitoring with the Azure Monitor Agent](file-integrity-monitoring-enable-ama.md).
-
-### Legacy Assessments APIs deprecation
-
-The following APIs are deprecated:
--- Security Tasks-- Security Statuses-- Security Summaries-
-These three APIs exposed old formats of assessments and are replaced by the [Assessments APIs](/rest/api/defenderforcloud/assessments) and [SubAssessments APIs](/rest/api/defenderforcloud/sub-assessments). All data that is exposed by these legacy APIs are also available in the new APIs.
-
-### Extra recommendations added to identity
-
-Defender for Cloud's recommendations for improving the management of users and accounts.
-
-#### New recommendations
-
-The new release contains the following capabilities:
-- **Extended evaluation scope** - Coverage has been improved for identity accounts without MFA and external accounts on Azure resources (instead of subscriptions only) which allows your security administrators to view role assignments per account.--- **Improved freshness interval** - The identity recommendations now have a freshness interval of 12 hours.--- **Account exemption capability** - Defender for Cloud has many features you can use to customize your experience and ensure that your secure score reflects your organization's security priorities. For example, you can [exempt resources and recommendations from your secure score](exempt-resource.md).-
- This update allows you to exempt specific accounts from evaluation with the six recommendations listed in the following table.
-
- Typically, you'd exempt emergency "break glass" accounts from MFA recommendations, because such accounts are often deliberately excluded from an organization's MFA requirements. Alternatively, you might have external accounts that you'd like to permit access to, that don't have MFA enabled.
-
- > [!TIP]
- > When you exempt an account, it won't be shown as unhealthy and also won't cause a subscription to appear unhealthy.
-
- | Recommendation | Assessment key |
- |--|--|
- |Accounts with owner permissions on Azure resources should be MFA enabled|6240402e-f77c-46fa-9060-a7ce53997754|
- |Accounts with write permissions on Azure resources should be MFA enabled|c0cb17b2-0607-48a7-b0e0-903ed22de39b|
- |Accounts with read permissions on Azure resources should be MFA enabled|dabc9bc4-b8a8-45bd-9a5a-43000df8aa1c|
- |Guest accounts with owner permissions on Azure resources should be removed|20606e75-05c4-48c0-9d97-add6daa2109a|
- |Guest accounts with write permissions on Azure resources should be removed|0354476c-a12a-4fcc-a79d-f0ab7ffffdbb|
- |Guest accounts with read permissions on Azure resources should be removed|fde1c0c9-0fd2-4ecc-87b5-98956cbc1095|
- |Blocked accounts with owner permissions on Azure resources should be removed|050ac097-3dda-4d24-ab6d-82568e7a50cf|
- |Blocked accounts with read and write permissions on Azure resources should be removed| 1ff0b4c9-ed56-4de6-be9c-d7ab39645926 |
-
-The recommendations although in preview, will appear next to the recommendations that are currently in GA.
-
-### Removed security alerts for machines reporting to cross-tenant Log Analytics workspaces
-
-In the past, Defender for Cloud let you choose the workspace that your Log Analytics agents report to. When a machine belonged to one tenant ("Tenant A") but its Log Analytics agent reported to a workspace in a different tenant ("Tenant B"), security alerts about the machine were reported to the first tenant ("Tenant A").
-
-With this change, alerts on machines connected to Log Analytics workspace in a different tenant no longer appear in Defender for Cloud.
-
-If you want to continue receiving the alerts in Defender for Cloud, connect the Log Analytics agent of the relevant machines to the workspace in the same tenant as the machine.
-
-Learn more about [security alerts](alerts-overview.md).
- ## Next steps For past changes to Defender for Cloud, see [Archive for what's new in Defender for Cloud?](release-notes-archive.md).
defender-for-cloud Support Matrix Defender For Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-cloud.md
Microsoft Defender for Cloud is available in the following Azure cloud environme
## Supported operating systems
-Defender for Cloud depends on the [Azure Monitor Agent](/azure/azure-monitor/agents/agents-overview) or the [Log Analytics agent](/azure/azure-monitor/agents/log-analytics-agent). Make sure that your machines are running one of the supported operating systems as described on the following pages:
+Defender for Cloud depends on the [Azure Monitor Agent](../azure-monitor/agents/agents-overview.md) or the [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md). Make sure that your machines are running one of the supported operating systems as described on the following pages:
- Azure Monitor Agent
- - [Azure Monitor Agent for Windows supported operating systems](/azure/azure-monitor/agents/agents-overview#windows)
- - [Azure Monitor Agent for Linux supported operating systems](/azure/azure-monitor/agents/agents-overview#linux)
+ - [Azure Monitor Agent for Windows supported operating systems](../azure-monitor/agents/agents-overview.md#windows)
+ - [Azure Monitor Agent for Linux supported operating systems](../azure-monitor/agents/agents-overview.md#linux)
- Log Analytics agent
- - [Log Analytics agent for Windows supported operating systems](/azure/azure-monitor/agents/agents-overview#windows)
- - [Log Analytics agent for Linux supported operating systems](/azure/azure-monitor/agents/agents-overview#linux)
+ - [Log Analytics agent for Windows supported operating systems](../azure-monitor/agents/agents-overview.md#windows)
+ - [Log Analytics agent for Linux supported operating systems](../azure-monitor/agents/agents-overview.md#linux)
Also ensure your Log Analytics agent is [properly configured to send data to Defender for Cloud](working-with-log-analytics-agent.md#manual-agent).
To learn more about the specific Defender for Cloud features available on Window
This article explained how Microsoft Defender for Cloud is supported in the Azure, Azure Government, and Azure China 21Vianet clouds. Now that you're familiar with the Defender for Cloud capabilities supported in your cloud, learn how to: - [Manage security recommendations in Defender for Cloud](review-security-recommendations.md)-- [Manage and respond to security alerts in Defender for Cloud](managing-and-responding-alerts.md)
+- [Manage and respond to security alerts in Defender for Cloud](managing-and-responding-alerts.md)
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Defender for Cloud won't include these recommendations as built-in recommendatio
**Estimated date for change: May 2023**
-We announced previously the [availability of identity recommendations V2 (preview)](release-notes.md#extra-recommendations-added-to-identity), which included enhanced capabilities.
+We announced previously the [availability of identity recommendations V2 (preview)](release-notes-archive.md#extra-recommendations-added-to-identity), which included enhanced capabilities.
As part of these changes, the following recommendations will be released as General Availability (GA) and replace the V1 recommendations that are set to be deprecated.
defender-for-cloud Zero Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/zero-trust.md
There's a clear mapping from the goals we've described in the [infrastructure de
| Harden configuration | [Review your security recommendations](review-security-recommendations.md) and [track your secure score improvement over time](secure-score-access-and-track.md). You can also prioritize which recommendations to remediate based on potential attack paths, by leveraging the [attack path](how-to-manage-attack-path.md) feature. | |Employ hardening mechanisms | Least privilege access is one of the three principles of Zero Trust. Defender for Cloud can assist you in hardening VMs and networks using this principle by leveraging features such as:<br>[Just-in-time (JIT) virtual machine (VM) access](just-in-time-access-overview.md)<br>[Adaptive network hardening](adaptive-network-hardening.md)<br>[Adaptive application controls](adaptive-application-controls.md). |
-|Automatically block suspicious behavior | Many of the hardening recommendations in Defender for Cloud offer a *deny* option. This feature lets you prevent the creation of resources that don't satisfy defined hardening criteria. Learn more in [Prevent misconfigurations with Enforce/Deny recommendations](/azure/defender-for-cloud/prevent-misconfigurations). |
+|Automatically block suspicious behavior | Many of the hardening recommendations in Defender for Cloud offer a *deny* option. This feature lets you prevent the creation of resources that don't satisfy defined hardening criteria. Learn more in [Prevent misconfigurations with Enforce/Deny recommendations](./prevent-misconfigurations.md). |
|Automatically flag suspicious behavior | Microsoft Defender for Cloud's security alerts are triggered by advanced detections. Defender for Cloud prioritizes and lists the alerts, along with the information needed for you to quickly investigate the problem. Defender for Cloud also provides detailed steps to help you remediate attacks. For a full list of the available alerts, see [Security alerts - a reference guide](alerts-reference.md).|
There's a clear mapping from the goals we've described in the [infrastructure de
With Defender for Cloud enabled on your subscription, and Microsoft Defender for Cloud enabled for all available resource types, you'll have a layer of intelligent threat protection - powered by [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684) - protecting resources in Azure Key Vault, Azure Storage, Azure DNS, and other Azure PaaS services. For a full list, see [What resource types can Microsoft Defender for Cloud secure?](defender-for-cloud-introduction.md). ### Azure Logic Apps
-Use [Azure Logic Apps](/azure/logic-apps/) to build automated scalable workflows, business processes, and enterprise orchestrations to integrate your apps and data across cloud services and on-premises systems.
+Use [Azure Logic Apps](../logic-apps/index.yml) to build automated scalable workflows, business processes, and enterprise orchestrations to integrate your apps and data across cloud services and on-premises systems.
Defender for Cloud's [workflow automation](workflow-automation.md) feature lets you automate responses to Defender for Cloud triggers.
There are Azure-native tools for ensuring you can view your alert data in all of
#### Microsoft Sentinel
-Defender for Cloud natively integrates with [Microsoft Sentinel](/azure/sentinel/overview), Microsoft's cloud-native, security information event management (SIEM) and security orchestration automated response (SOAR) solution.
+Defender for Cloud natively integrates with [Microsoft Sentinel](../sentinel/overview.md), Microsoft's cloud-native, security information event management (SIEM) and security orchestration automated response (SOAR) solution.
There are two approaches to ensuring your Defender for Cloud data is represented in Microsoft Sentinel:
There are two approaches to ensuring your Defender for Cloud data is represented
- **Stream your audit logs** - An alternative way to investigate Defender for Cloud alerts in Microsoft Sentinel is to stream your audit logs into Microsoft Sentinel: - [Connect Windows security events](/azure/sentinel/connect-windows-security-events)
- - [Collect data from Linux-based sources using Syslog](/azure/sentinel/connect-syslog)
+ - [Collect data from Linux-based sources using Syslog](../sentinel/connect-syslog.md)
- [Connect data from Azure Activity log](/azure/sentinel/connect-azure-activity) #### Stream alerts with Microsoft Graph Security API
Microsoft Defender for Cloud protects workloads wherever they're running: in Azu
#### Integrate Defender for Cloud with on-premises machines
-To secure hybrid cloud workloads, you can extend Defender for Cloud's protections by connecting on-premises machines to [Azure Arc enabled servers](/azure/azure-arc/servers/overview).
+To secure hybrid cloud workloads, you can extend Defender for Cloud's protections by connecting on-premises machines to [Azure Arc-enabled servers](../azure-arc/servers/overview.md).
Learn about how to connect machines in [Connect your non-Azure machines to Defender for Cloud](quickstart-onboard-machines.md).
To view the security posture of **Google Cloud Platform** machines in Defender f
## Next steps
-To learn more about Microsoft Defender for Cloud and Microsoft Defender for Cloud, see the complete [Defender for Cloud documentation](index.yml).
+To learn more about Microsoft Defender for Cloud, see the complete [Defender for Cloud documentation](index.yml).
defender-for-iot Device Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/device-inventory.md
The following columns are available on OT sensors only:
- The number of **Unacknowledged Alerts** alerts associated with the device > [!NOTE]
-> The additional **Agent type** and **Agent version** columns are used for by device builders. For more information, see [Microsoft Defender for IoT for device builders documentation](/azure/defender-for-iot/device-builders/).
+> The additional **Agent type** and **Agent version** columns are used by device builders. For more information, see [Microsoft Defender for IoT for device builders documentation](../device-builders/index.yml).
## Next steps
For more information, see:
- [Manage your OT device inventory from a sensor console](how-to-investigate-sensor-detections-in-a-device-inventory.md) - [Manage your OT device inventory from an on-premises management console](how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md) - [Microsoft Defender for IoT - supported IoT, OT, ICS, and SCADA protocols](concept-supported-protocols.md)-- [Investigate devices on a device map](how-to-work-with-the-sensor-device-map.md)
+- [Investigate devices on a device map](how-to-work-with-the-sensor-device-map.md)
defender-for-iot Iot Advanced Threat Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/iot-advanced-threat-monitoring.md
After you've [configured your Defender for IoT data to trigger new incidents i
- Learn about recommended remediation steps by selecting an alert in the incident timeline and viewing the **Remediation steps** area.
- - Select an IoT device entity from the **Entities** list to open its [device entity page](/azure/sentinel/entity-pages). For more information, see [Investigate further with IoT device entities](#investigate-further-with-iot-device-entities).
+ - Select an IoT device entity from the **Entities** list to open its [device entity page](../../sentinel/entity-pages.md). For more information, see [Investigate further with IoT device entities](#investigate-further-with-iot-device-entities).
For more information, see [Investigate incidents with Microsoft Sentinel](../../sentinel/investigate-cases.md).
This playbook updates the incident severity according to the importance level of
> [!div class="nextstepaction"] > [Use playbooks with automation rules](../../sentinel/tutorial-respond-threats-playbook.md)
-For more information, see our blog: [Defending Critical Infrastructure with the Microsoft Sentinel: IT/OT Threat Monitoring Solution](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/defending-critical-infrastructure-with-the-microsoft-sentinel-it/ba-p/3061184)
+For more information, see our blog: [Defending Critical Infrastructure with the Microsoft Sentinel: IT/OT Threat Monitoring Solution](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/defending-critical-infrastructure-with-the-microsoft-sentinel-it/ba-p/3061184)
defender-for-iot Iot Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/iot-solution.md
Before you start, make sure you have the following requirements on your workspac
## Connect your data from Defender for IoT to Microsoft Sentinel
-Start by enabling the [Defender for IoT data connector](/azure/sentinel/data-connectors/microsoft-defender-for-iot.md) to stream all your Defender for IoT events into Microsoft Sentinel.
+Start by enabling the [Defender for IoT data connector](../../sentinel/data-connectors/microsoft-defender-for-iot.md) to stream all your Defender for IoT events into Microsoft Sentinel.
**To enable the Defender for IoT data connector**:
defender-for-iot Monitor Zero Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/monitor-zero-trust.md
To perform the tasks in this tutorial, you need:
- The following permissions:
- - Access to the Azure portal as a [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) user. For more information, see [Azure user roles and permissions for Defender for IoT](roles-azure.md).
+ - Access to the Azure portal as a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) user. For more information, see [Azure user roles and permissions for Defender for IoT](roles-azure.md).
- Access to your sensors as an **Admin** or **Security Analyst** user. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).
defender-for-iot Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md
Microsoft Sentinel's new [incident experience](https://techcommunity.microsoft.c
- **Review OT alert remediation steps** directly on the incident details page
-For more information, see [Tutorial: Investigate and detect threats for IoT devices](iot-advanced-threat-monitoring.md) and [Navigate and investigate incidents in Microsoft Sentinel](/azure/sentinel/investigate-incidents).
+For more information, see [Tutorial: Investigate and detect threats for IoT devices](iot-advanced-threat-monitoring.md) and [Navigate and investigate incidents in Microsoft Sentinel](../../sentinel/investigate-incidents.md).
## February 2023
For more information, see [Tutorial: Investigate and detect threats for IoT devi
### Microsoft Sentinel: Microsoft Defender for IoT solution version 2.0.2
-[Version 2.0.2](release-notes-sentinel.md#version-202) of the Microsoft Defender for IoT solution is now available in the [Microsoft Sentinel content hub](/azure/sentinel/sentinel-solutions-catalog), with improvements in analytics rules for incident creation, an enhanced incident details page, and performance improvements for analytics rule queries.
+[Version 2.0.2](release-notes-sentinel.md#version-202) of the Microsoft Defender for IoT solution is now available in the [Microsoft Sentinel content hub](../../sentinel/sentinel-solutions-catalog.md), with improvements in analytics rules for incident creation, an enhanced incident details page, and performance improvements for analytics rule queries.
For more information, see:
The following Defender for IoT options and configurations have been moved, remov
## Next steps
-[Getting started with Defender for IoT](getting-started.md)
+[Getting started with Defender for IoT](getting-started.md)
deployment-environments How To Create Access Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-create-access-environments.md
Previously updated : 01/26/2022 Last updated : 03/14/2023 # Create and access an environment by using the Azure CLI
This article shows you how to create and access an [environment](concept-environ
- [Create and configure a dev center](quickstart-create-and-configure-devcenter.md). - [Create and configure a project](quickstart-create-and-configure-projects.md).-- Install the Azure Deployment Environments Azure CLI extension:-
- 1. [Download and install the Azure CLI](/cli/azure/install-azure-cli).
- 1. Install the Azure Deployment Environments AZ CLI extension:
-
- - **Automated installation**
-
- In PowerShell, run the https://aka.ms/DevCenter/Install-DevCenterCli.ps1 script:
-
- ```powershell
- iex "& { $(irm https://aka.ms/DevCenter/Install-DevCenterCli.ps1 ) }"
- ```
-
- The script uninstalls any existing dev center extension and installs the latest version.
-
- - **Manual installation**
-
- Run the following command in the Azure CLI:
-
- ```azurecli
- az extension add --source https://fidalgos