Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Partner Deduce | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-deduce.md | To collect the user_agent from the client side, create your own **ContentDefinition**. To customize the user interface, you specify a URL in the `ContentDefinition` element with customized HTML content. In the self-asserted technical profile or orchestration step, you point to that ContentDefinition identifier. -1. Open the `TrustFrameworksExtension.xml` and define a new **ContentDefinition** to customize the [self-asserted technical profile](/azure/active-directory-b2c/self-asserted-technical-profile). +1. Open the `TrustFrameworksExtension.xml` and define a new **ContentDefinition** to customize the [self-asserted technical profile](./self-asserted-technical-profile.md). 1. Find the `BuildingBlocks` element and add the `**api.selfassertedDeduce**` ContentDefinition: The **ClaimsSchema** element defines the claim types that can be referenced as part of the policy. ### Step 6: Add Deduce ClaimsProvider -A **claims provider** is an interface to communicate with different types of parties via its [technical profiles](/azure/active-directory-b2c/technicalprofiles). +A **claims provider** is an interface to communicate with different types of parties via its [technical profiles](./technicalprofiles.md). - `SelfAsserted-UserAgent` self-asserted technical profile is used to collect the user_agent from the client side. -- `deduce_insight_api` technical profile sends data to the Deduce RESTful service in an input claims collection and receives data back in an output claims collection. For more information, see [integrate REST API claims exchanges in your Azure AD B2C custom policy](/azure/active-directory-b2c/api-connectors-overview?pivots=b2c-custom-policy)+- `deduce_insight_api` technical profile sends data to the Deduce RESTful service in an input claims collection and receives data back in an output claims collection. For more information, see [integrate REST API claims exchanges in your Azure AD B2C custom policy](./api-connectors-overview.md?pivots=b2c-custom-policy) You can define Deduce as a claims provider by adding it to the **ClaimsProvider** element in the extension file of your policy. For additional information, review the following articles: - [Custom policies in Azure AD B2C](./custom-policy-overview.md) -- [Get started with custom policies in Azure AD B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy&tabs=applications)+- [Get started with custom policies in Azure AD B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy&tabs=applications) |
active-directory-domain-services | Ad Auth No Join Linux Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/ad-auth-no-join-linux-vm.md | + + Title: Active Directory authentication for non-domain-joined Linux Virtual Machines +description: Active Directory authentication for non-domain-joined Linux Virtual Machines.+++++++ Last updated : 10/12/2022+++++# Active Directory authentication for non-domain-joined Linux Virtual Machines ++Linux distributions can work as members of Active Directory domains, which gives them access to the AD authentication system. In some cases, you can take advantage of AD authentication without joining the machine to the domain. To let users sign in to an Azure Linux VM with an Active Directory account, you have two choices: join the VM to Active Directory, or route the authentication flow through LDAP to your Active Directory without joining the VM to AD. This article shows you how to authenticate with AD credentials on a Linux system (CentOS) by using LDAP. ++## Prerequisites ++To complete the authentication flow, we assume you already have: ++* An Active Directory Domain Services instance already configured. +* A Linux VM (for the test, we use a CentOS-based machine). +* A network infrastructure that allows communication between Active Directory and the Linux VM. +* A dedicated user account for reading AD objects. +* The Linux VM needs to have these packages installed: + - sssd + - sssd-tools + - sssd-ldap + - openldap-clients +* An LDAPS certificate correctly configured on the Linux VM. +* A CA certificate correctly imported into the certificate store of the Linux VM (the path varies depending on the Linux distro). ++## Active Directory user configuration ++To read users in your Active Directory Domain Services domain, create a read-only user in AD. To create a new user, follow these steps: ++1. Connect to your *Domain Controller*. +2. Click *Start*, point to *Administrative Tools*, and then click *Active Directory Users and Computers* to start the Active Directory Users and Computers console. +3. Click the domain name that you created, and then expand the contents. +4. Right-click Users, point to *New*, and then click *User*. +5. Type the first name, last name, and user logon name of the new user, and then click Next. In the lab environment, we use a user called *ReadOnlyUser*. +6. Type a *new password*, confirm the password, and then select any of the following check boxes if needed: + - Users must change password at next logon (recommended for most users) + - User cannot change password + - Password never expires + - Account is disabled (if you disable the account, authentication fails) +7. Click *Next*. ++Review the information that you provided, and if everything is correct, click Finish. ++> [!NOTE] +> The lab environment is based on: +> - Windows Server 2016 domain and forest functional level. +> - Linux client CentOS 8.5. ++## Linux virtual machine configuration ++> [!NOTE] +> You must run these commands with sudo permissions. ++On your Linux VM, install the following packages: *sssd sssd-tools sssd-ldap openldap-clients*: ++```console +yum install -y sssd sssd-tools sssd-ldap openldap-clients +``` ++After the installation, check whether LDAP search works.
To check, try an LDAP search like the following example: ++```console +ldapsearch -H ldaps://contoso.com -x \ + -D CN=ReadOnlyUser,CN=Users,DC=contoso,DC=com -w Read0nlyuserpassword \ + -b CN=Users,DC=contoso,DC=com +``` ++If the LDAP query works, you'll get output similar to the following: ++```console +extended LDIF ++LDAPv3 +base <CN=Users,DC=contoso,DC=com> with scope subtree +filter: (objectclass=*) +requesting: ALL ++Users, contoso.com +dn: CN=Users,DC=contoso,DC=com +objectClass: top +objectClass: container +cn: Users +description: Default container for upgraded user accounts +distinguishedName: CN=Users,DC=contoso,DC=com +instanceType: 4 +whenCreated: 20220913115340.0Z +whenChanged: 20220913115340.0Z +uSNCreated: 5660 +uSNChanged: 5660 +showInAdvancedViewOnly: FALSE +name: Users +objectGUID:: i9MABLytKUurB2uTe/dOzg== +systemFlags: -1946157056 +objectCategory: CN=Container,CN=Schema,CN=Configuration,DC=contoso,DC=com +isCriticalSystemObject: TRUE +dSCorePropagationData: 20220930113600.0Z +dSCorePropagationData: 20220930113600.0Z +dSCorePropagationData: 20220930113600.0Z +dSCorePropagationData: 20220930113600.0Z +dSCorePropagationData: 16010101000000.0Z +``` ++> [!NOTE] +> If you get an error, run the following command with debug output enabled: +> +> ldapsearch -H ldaps://contoso.com -x \ +> -D CN=ReadOnlyUser,CN=Users,DC=contoso,DC=com -w Read0nlyuserpassword \ +> -b CN=Users,DC=contoso,DC=com -d 3 +> +> Troubleshoot according to the output. ++## Create the sssd.conf file ++Create */etc/sssd/sssd.conf* with content like the following. Remember to update *ldap_uri*, *ldap_search_base*, and *ldap_default_bind_dn*. ++Command for file creation: ++```console +vi /etc/sssd/sssd.conf +``` ++Example sssd.conf: ++```bash +[sssd] +config_file_version = 2 +domains = default +services = nss, pam +full_name_format = %1$s ++[nss] ++[pam] ++[domain/default] +id_provider = ldap +cache_credentials = True +ldap_uri = ldaps://contoso.com +ldap_search_base = CN=Users,DC=contoso,DC=com +ldap_schema = AD +ldap_default_bind_dn = CN=ReadOnlyUser,CN=Users,DC=contoso,DC=com +ldap_default_authtok_type = obfuscated_password +ldap_default_authtok = generated_password ++# Obtain the CA root certificate for your LDAPS connection. +ldap_tls_cacert = /etc/pki/tls/cacerts.pem ++# This setting disables cert verification. +#ldap_tls_reqcert = allow ++# Only if the LDAP directory doesn't provide uidNumber and gidNumber attributes +ldap_id_mapping = True ++# Consider setting enumerate=False for very large directories +enumerate = True ++# Only needed if LDAP doesn't provide homeDirectory and loginShell attributes +fallback_homedir = /home/%u +default_shell = /bin/bash +access_provider = permit +sudo_provider = ldap +auth_provider = ldap +autofs_provider = ldap +resolver_provider = ldap ++``` ++Save the file by pressing Esc and then entering the *:wq!* command. ++> [!NOTE] +> If you don't have a valid TLS certificate under */etc/pki/tls/* called *cacerts.pem*, the bind doesn't work. ++## Change permissions for sssd.conf and create the obfuscated password ++Set the permissions on sssd.conf to 600 with the following command: ++```console +chmod 600 /etc/sssd/sssd.conf +``` ++After that, create an obfuscated password for the bind DN account. When prompted, enter the domain password for *ReadOnlyUser*: ++```console +sss_obfuscate --domain default +``` ++The password is placed in the configuration file automatically.
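
To confirm that *sss_obfuscate* updated the file, you can inspect the relevant key (a quick sanity check; the obfuscated value itself differs on every run):

```console
grep ldap_default_authtok /etc/sssd/sssd.conf
```

The *ldap_default_authtok* entry should now contain the generated obfuscated string instead of a plaintext password.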
++## Configure the sssd service ++Start the sssd service: ++```console +service sssd start +``` ++Now configure the service with the *authconfig* tool: ++```console +authconfig --enablesssd --enablesssdauth --enablemkhomedir --updateall +``` ++At this point restart the service: ++```console +systemctl restart sssd +``` ++## Test the configuration ++The final step is to check that the flow works properly. To check this, try logging in with one of your AD users. We tried with a user called *ADUser*. If the configuration is correct, you will get the following result: ++```console +[centosuser@centos8 ~]$ su - ADUser@contoso.com +Last login: Wed Oct 12 15:13:39 UTC 2022 on pts/0 +[ADUser@Centos8 ~]$ exit ++``` +Now you are ready to use AD authentication on your Linux VM. ++<!-- INTERNAL LINKS --> +[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md +[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md +[create-azure-ad-ds-instance]: tutorial-create-instance.md |
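
As a quick sanity check before the interactive *su* login test shown above, you can verify that sssd resolves the AD user (the user and domain names are the ones assumed in this example):

```console
id ADUser@contoso.com
getent passwd ADUser@contoso.com
```

If these commands return UID/GID information for the account, the LDAP lookup path is working and the login test should succeed.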
active-directory-domain-services | Fleet Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/fleet-metrics.md | The following table describes the metrics that are available for Azure AD DS. ## Azure Monitor alert -You can configure metric alerts for Azure AD DS to be notified of possible problems. Metric alerts are one type of alert for Azure Monitor. For more information about other types of alerts, see [What are Azure Monitor Alerts?](/azure/azure-monitor/alerts/alerts-overview). +You can configure metric alerts for Azure AD DS to be notified of possible problems. Metric alerts are one type of alert for Azure Monitor. For more information about other types of alerts, see [What are Azure Monitor Alerts?](../azure-monitor/alerts/alerts-overview.md). -To view and manage Azure Monitor alerts, a user needs to be assigned [Azure Monitor roles](/azure/azure-monitor/roles-permissions-security). +To view and manage Azure Monitor alerts, a user needs to be assigned [Azure Monitor roles](../azure-monitor/roles-permissions-security.md). In Azure Monitor or Azure AD DS Metrics, click **New alert** and configure an Azure AD DS instance as the scope. Then choose the metrics you want to measure from the list of available signals: You can upvote to enable multiple resource selection to correlate data between resources. ## Next steps -- [Check the health of an Azure Active Directory Domain Services managed domain](check-health.md)+- [Check the health of an Azure Active Directory Domain Services managed domain](check-health.md) |
active-directory | Use Scim To Build Users And Groups Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/use-scim-to-build-users-and-groups-endpoints.md | Go to the [reference code](https://github.com/AzureAD/SCIMReferenceCode) from Gi 1. If not installed, add the [Azure App Service for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azureappservice) extension. -1. To deploy the Microsoft.SCIM.WebHostSample app to Azure App Services, [create a new App Service](/azure/app-service/tutorial-dotnetcore-sqldb-app#2create-the-app-service). +1. To deploy the Microsoft.SCIM.WebHostSample app to Azure App Services, [create a new App Service](../../app-service/tutorial-dotnetcore-sqldb-app.md#2create-the-app-service). 1. In the Visual Studio Code terminal, run the .NET CLI command below. This command generates a deployable publish folder for the app in the bin/debug/publish directory. |
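The digest clips the actual publish command; a minimal sketch of what such a .NET CLI command looks like (the configuration and output path here are assumptions based on the directory named in the text):

```console
dotnet publish -c Debug -o bin/debug/publish
```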
active-directory | Concept Authentication Strengths | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-strengths.md | An authentication strength Conditional Access policy works together with [MFA tr - **Using 'Require one of the selected controls' with 'require authentication strength' control** - After you select authentication strengths grant control and additional controls, all the selected controls must be satisfied in order to gain access to the resource. Using **Require one of the selected controls** isn't applicable, and will default to requiring all the controls in the policy. +- **Authentication loop** - When the user is required to use Microsoft Authenticator (Phone Sign-in) but isn't registered for this method, they're given instructions on how to set up the Microsoft Authenticator app, but those instructions don't include how to enable passwordless sign-in. As a result, the user can get into an authentication loop. To avoid this issue, make sure the user is registered for the method before the Conditional Access policy is enforced. Phone Sign-in can be registered by using the steps outlined here: [Add your work or school account to the Microsoft Authenticator app](https://support.microsoft.com/en-us/account-billing/add-your-work-or-school-account-to-the-microsoft-authenticator-app-43a73ab5-b4e8-446d-9e54-2a4cb8e4e93c) + ## Limitations - **Conditional Access policies are only evaluated after the initial authentication** - As a result, authentication strength will not restrict a user's initial authentication. Suppose you are using the built-in phishing-resistant MFA strength. A user can still type in their password, but they will be required to use a phishing-resistant method such as a FIDO2 security key before they can continue. |
active-directory | Concept Certificate Based Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication.md | The following images show how Azure AD CBA simplifies the customer environment b ||| | Great user experience |- Users who need certificate-based authentication can now directly authenticate against Azure AD and not have to invest in federated AD FS.<br>- Portal UI enables users to easily configure how to map certificate fields to a user object attribute to look up the user in the tenant ([certificate username bindings](concept-certificate-based-authentication-technical-deep-dive.md#understanding-the-username-binding-policy))<br>- Portal UI to [configure authentication policies](concept-certificate-based-authentication-technical-deep-dive.md#understanding-the-authentication-binding-policy) to help determine which certificates are single-factor versus multifactor. | | Easy to deploy and administer |- Azure AD CBA is a free feature, and you don't need any paid editions of Azure AD to use it. <br>- No need for complex on-premises deployments or network configuration.<br>- Directly authenticate against Azure AD. |-| Secure |- On-premises passwords don't need to be stored in the cloud in any form.<br>- Protects your user accounts by working seamlessly with Azure AD Conditional Access policies, including unphishable [multifactor authentication](concept-mfa-howitworks.md) (MFA which requires [licensed edition](concept-mfa-licensing.md)) and blocking legacy authentication.<br>- Strong authentication support where users can define authentication policies through the certificate fields, such as issuer or policy OID (object identifiers), to determine which certificates qualify as single-factor versus multifactor.<br>- The feature works seamlessly with [Conditional Access features](../conditional-access/overview.md) and authentication strength capability to enforce MFA to help secure your users. | +| Secure |- On-premises passwords don't need to be stored in the cloud in any form.<br>- Protects your user accounts by working seamlessly with Azure AD Conditional Access policies, including phishing-resistant [multifactor authentication](concept-mfa-howitworks.md) (MFA requires a [licensed edition](concept-mfa-licensing.md)) and blocking legacy authentication.<br>- Strong authentication support where users can define authentication policies through the certificate fields, such as issuer or policy OID (object identifiers), to determine which certificates qualify as single-factor versus multifactor.<br>- The feature works seamlessly with [Conditional Access features](../conditional-access/overview.md) and authentication strength capability to enforce MFA to help secure your users. | ## Supported scenarios |
active-directory | How To Mfa Server Migration Utility | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-server-migration-utility.md | Azure MFA Server can provide MFA functionality for third-party solutions that us For RADIUS deployments that can't be upgraded, you'll need to deploy an NPS Server and install the [Azure AD MFA NPS extension](howto-mfa-nps-extension.md). -For LDAP deployments that can't be upgraded or moved to RADIUS, [determine if Azure Active Directory Domain Services can be used](/azure/active-directory/fundamentals/auth-ldap). In most cases, LDAP was deployed to support in-line password changes for end users. Once migrated, end users can manage their passwords by using [self-service password reset in Azure AD](tutorial-enable-sspr.md). +For LDAP deployments that can't be upgraded or moved to RADIUS, [determine if Azure Active Directory Domain Services can be used](../fundamentals/auth-ldap.md). In most cases, LDAP was deployed to support in-line password changes for end users. Once migrated, end users can manage their passwords by using [self-service password reset in Azure AD](tutorial-enable-sspr.md). -If you enabled the [MFA Server Authentication provider in AD FS 2.0](/azure/active-directory/authentication/howto-mfaserver-adfs-windows-server#secure-windows-server-ad-fs-with-azure-multi-factor-authentication-server) on any relying party trusts except for the Office 365 relying party trust, you'll need to upgrade to [AD FS 3.0](/windows-server/identity/ad-fs/deployment/upgrading-to-ad-fs-in-windows-server) or federate those relying parties directly to Azure AD if they support modern authentication methods. Determine the best plan of action for each of the dependencies. +If you enabled the [MFA Server Authentication provider in AD FS 2.0](./howto-mfaserver-adfs-windows-server.md#secure-windows-server-ad-fs-with-azure-multi-factor-authentication-server) on any relying party trusts except for the Office 365 relying party trust, you'll need to upgrade to [AD FS 3.0](/windows-server/identity/ad-fs/deployment/upgrading-to-ad-fs-in-windows-server) or federate those relying parties directly to Azure AD if they support modern authentication methods. Determine the best plan of action for each of the dependencies. ### Back up the Azure AD MFA Server data file Make a backup of the MFA Server data file located at %programfiles%\Multi-Factor Authentication Server\Data\PhoneFactor.pfdata (default location) on your primary MFA Server. Make sure you have a copy of the installer for your currently installed version in case you need to roll back. If you no longer have a copy, contact Customer Support Services. |
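As a sketch of the backup step described above (the destination folder is an assumption for illustration):

```powershell
# Copy the MFA Server data file from its default location to a backup folder
Copy-Item "$env:ProgramFiles\Multi-Factor Authentication Server\Data\PhoneFactor.pfdata" `
    -Destination "C:\Backup\PhoneFactor.pfdata"
```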
active-directory | Howto Mfaserver Deploy Mobileapp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-deploy-mobileapp.md | -# Enable mobile app authentication with Azure Multi-Factor Authentication Server +# Enable mobile app authentication with Azure AD Multi-Factor Authentication Server -The Microsoft Authenticator app offers an additional out-of-band verification option. Instead of placing an automated phone call or SMS to the user during login, Azure Multi-Factor Authentication pushes a notification to the Authenticator app on the user's smartphone or tablet. The user simply taps **Verify** (or enters a PIN and taps "Authenticate") in the app to complete their sign-in. +The Microsoft Authenticator app offers an extra out-of-band verification option. Instead of placing an automated phone call or SMS to the user during login, Azure AD Multi-Factor Authentication pushes a notification to the Authenticator app on the user's smartphone or tablet. The user simply taps **Verify** (or enters a PIN and taps "Authenticate") in the app to complete their sign-in. Using a mobile app for two-step verification is preferred when phone reception is unreliable. If you use the app as an OATH token generator, it doesn't require any network or internet connection. > [!IMPORTANT]-> As of July 1, 2019, Microsoft no longer offers Azure Multi-Factor Authentication Server (MFA Server) for new deployments. New customers that want to require multifactor authentication (MFA) during sign-in events should use cloud-based Azure AD Multi-Factor Authentication. -> +> In September 2022, Microsoft announced deprecation of Azure AD Multi-Factor Authentication Server. Beginning September 30, 2024, Azure AD Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md). + > To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure AD Multi-Factor Authentication](tutorial-enable-azure-mfa.md).-> -> Existing customers that activated Azure Multi-Factor Authentication Server before July 1, 2019 can download the latest version, future updates, and generate activation credentials as usual. + > [!IMPORTANT]-> If you have installed Azure Multi-Factor Authentication Server v8.x or higher, most of the steps below are not required. Mobile app authentication can be set up by following the steps under [Configure the mobile app](#configure-the-mobile-app-settings-in-mfa-server). +> If you have installed Azure AD Multi-Factor Authentication Server v8.x or higher, most of the steps below are not required. Mobile app authentication can be set up by following the steps under [Configure the mobile app](#configure-the-mobile-app-settings-in-mfa-server).
## Requirements -To use the Authenticator app, you must be running Azure Multi-Factor Authentication Server v8.x or higher +To use the Authenticator app, you must be running Azure AD Multi-Factor Authentication Server v8.x or higher ## Configure the mobile app settings in MFA Server To use the Authenticator app, you must be running Azure Multi-Factor Authenticat ## Next steps -- [Advanced scenarios with Azure Multi-Factor Authentication Server and third-party VPNs](howto-mfaserver-nps-vpn.md).+- [Advanced scenarios with Azure AD Multi-Factor Authentication Server and third-party VPNs](howto-mfaserver-nps-vpn.md). |
active-directory | Howto Mfaserver Deploy Upgrade Pf | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-deploy-upgrade-pf.md | Title: Upgrade PhoneFactor to Azure MFA Server - Azure Active Directory -description: Get started with Azure MFA Server when you upgrade from the older PhoneFactor Agent. + Title: Upgrade PhoneFactor to Azure AD Multi-Factor Authentication Server - Azure Active Directory +description: Get started with Azure AD Multi-Factor Authentication Server when you upgrade from the older PhoneFactor Agent. Previously updated : 07/11/2018 Last updated : 10/18/2022 -+ -# Upgrade the PhoneFactor Agent to Azure Multi-Factor Authentication Server +# Upgrade the PhoneFactor Agent to Azure AD Multi-Factor Authentication Server -To upgrade the PhoneFactor Agent v5.x or older to Azure Multi-Factor Authentication Server, uninstall the PhoneFactor Agent and affiliated components first. Then the Multi-Factor Authentication Server and its affiliated components can be installed. +To upgrade the PhoneFactor Agent v5.x or older to Azure AD Multi-Factor Authentication Server, uninstall the PhoneFactor Agent and affiliated components first. Then the Multi-Factor Authentication Server and its affiliated components can be installed. > [!IMPORTANT]-> As of July 1, 2019, Microsoft no longer offers MFA Server for new deployments. New customers that want to require multi-factor authentication (MFA) during sign-in events should use cloud-based Azure AD Multi-Factor Authentication. -> +> In September 2022, Microsoft announced deprecation of Azure AD Multi-Factor Authentication Server. Beginning September 30, 2024, Azure AD Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md). + > To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure AD Multi-Factor Authentication](tutorial-enable-azure-mfa.md).-> -> Existing customers that activated MFA Server before July 1, 2019 can download the latest version, future updates, and generate activation credentials as usual. + ## Uninstall the PhoneFactor Agent 1. First, back up the PhoneFactor data file. The default installation location is C:\Program Files\PhoneFactor\Data\Phonefactor.pfdata. -2. If the User Portal is installed: +2. If the User portal is installed: 1. Navigate to the install folder and back up the web.config file. The default installation location is C:\inetpub\wwwroot\PhoneFactor. 2. If you have added custom themes to the portal, back up your custom folder below the C:\inetpub\wwwroot\PhoneFactor\App_Themes directory. - 3. Uninstall the User Portal either through the PhoneFactor Agent (only available if installed on the same server as the PhoneFactor Agent) or through Windows Programs and Features. + 3.
Uninstall the User portal either through the PhoneFactor Agent (only available if installed on the same server as the PhoneFactor Agent) or through Windows Programs and Features. 3. If the Mobile App Web Service is installed: The installation path is picked up from the registry from the previous PhoneFact 2. If the Web Service SDK was previously installed, install the new Web Service SDK through the Multi-Factor Authentication Server User Interface. - The default virtual directory name is now **MultiFactorAuthWebServiceSdk** instead of **PhoneFactorWebServiceSdk**. If you want to use the previous name, you must change the name of the virtual directory during installation. Otherwise, if you allow the install to use the new default name, you have to change the URL in any applications that reference the Web Service SDK (like the User Portal and Mobile App Web Service) to point at the correct location. + The default virtual directory name is now **MultiFactorAuthWebServiceSdk** instead of **PhoneFactorWebServiceSdk**. If you want to use the previous name, you must change the name of the virtual directory during installation. Otherwise, if you allow the install to use the new default name, you have to change the URL in any applications that reference the Web Service SDK (like the User portal and Mobile App Web Service) to point at the correct location. -3. If the User Portal was previously installed on the PhoneFactor Agent Server, install the new Multi-Factor Authentication User Portal through the Multi-Factor Authentication Server User Interface. +3. If the User portal was previously installed on the PhoneFactor Agent Server, install the new Multi-Factor Authentication User portal through the Multi-Factor Authentication Server User Interface. - The default virtual directory name is now **MultiFactorAuth** instead of **PhoneFactor**. If you want to use the previous name, you must change the name of the virtual directory during installation. Otherwise, if you allow the install to use the new default name, you should click the User Portal icon in the Multi-Factor Authentication Server and update the User Portal URL on the Settings tab. + The default virtual directory name is now **MultiFactorAuth** instead of **PhoneFactor**. If you want to use the previous name, you must change the name of the virtual directory during installation. Otherwise, if you allow the install to use the new default name, you should click the User portal icon in the Multi-Factor Authentication Server and update the User portal URL on the Settings tab. -4. If the User Portal and/or Mobile App Web Service was previously installed on a different server from the PhoneFactor Agent: +4. If the User portal and/or Mobile App Web Service was previously installed on a different server from the PhoneFactor Agent: - 1. Go to the install location (for example, C:\Program Files\PhoneFactor) and copy one or more installers to the other server. There are 32-bit and 64-bit installers for both the User Portal and Mobile App Web Service. They are called MultiFactorAuthenticationUserPortalSetupXX.msi and MultiFactorAuthenticationMobileAppWebServiceSetupXX.msi. + 1. Go to the install location (for example, C:\Program Files\PhoneFactor) and copy one or more installers to the other server. There are 32-bit and 64-bit installers for both the User portal and Mobile App Web Service. They're called MultiFactorAuthenticationUserPortalSetupXX.msi and MultiFactorAuthenticationMobileAppWebServiceSetupXX.msi. - 2. 
To install the User Portal on the web server, open a command prompt as an administrator and run MultiFactorAuthenticationUserPortalSetupXX.msi. + 2. To install the User portal on the web server, open a command prompt as an administrator and run MultiFactorAuthenticationUserPortalSetupXX.msi. - The default virtual directory name is now **MultiFactorAuth** instead of **PhoneFactor**. If you want to use the previous name, you must change the name of the virtual directory during installation. Otherwise, if you allow the install to use the new default name, you should click the User Portal icon in the Multi-Factor Authentication Server and update the User Portal URL on the Settings tab. Existing users need to be informed of the new URL. + The default virtual directory name is now **MultiFactorAuth** instead of **PhoneFactor**. If you want to use the previous name, you must change the name of the virtual directory during installation. Otherwise, if you allow the install to use the new default name, you should click the User portal icon in the Multi-Factor Authentication Server and update the User portal URL on the Settings tab. Existing users need to be informed of the new URL. - 3. Go to the User Portal install location (for example, C:\inetpub\wwwroot\MultiFactorAuth) and edit the web.config file. Copy the values in the appSettings and applicationSettings sections from your original web.config file that was backed up before the upgrade into the new web.config file. If the new default virtual directory name was kept when installing the Web Service SDK, change the URL in the applicationSettings section to point to the correct location. If any other defaults were changed in the previous web.config file, apply those same changes to the new web.config file. + 3. Go to the User portal install location (for example, C:\inetpub\wwwroot\MultiFactorAuth) and edit the web.config file. Copy the values in the appSettings and applicationSettings sections from your original web.config file that was backed up before the upgrade into the new web.config file. If the new default virtual directory name was kept when installing the Web Service SDK, change the URL in the applicationSettings section to point to the correct location. If any other defaults were changed in the previous web.config file, apply those same changes to the new web.config file. > [!NOTE] > When upgrading from a version of Azure MFA Server older than 8.0 to 8.0+, note that the mobile app web service can be uninstalled after the upgrade. ## Next steps -- [Install the user portal](howto-mfaserver-deploy-userportal.md) for the Azure Multi-Factor Authentication Server.+- [Install the user portal](howto-mfaserver-deploy-userportal.md) for the Azure AD Multi-Factor Authentication Server. - [Configure Windows Authentication](howto-mfaserver-windows.md) for your applications. |
active-directory | Howto Mfaserver Deploy Userportal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-deploy-userportal.md | -# User portal for the Azure Multi-Factor Authentication Server +# User portal for the Azure AD Multi-Factor Authentication Server -The user portal is an IIS web site that allows users to enroll in Azure Multi-Factor Authentication (MFA) and maintain their accounts. A user may change their phone number, change their PIN, or choose to bypass two-step verification during their next sign-on. +The user portal is an IIS web site that allows users to enroll in Azure AD Multi-Factor Authentication (MFA) and maintain their accounts. A user may change their phone number, change their PIN, or choose to bypass two-step verification during their next sign-on. Users sign in to the user portal with their normal username and password, then either complete a two-step verification call or answer security questions to complete their authentication. If user enrollment is allowed, users configure their phone number and PIN the first time they sign in to the user portal. User portal Administrators may be set up and granted permission to add new users and update existing users. -Depending on your environment, you may want to deploy the user portal on the same server as Azure Multi-Factor Authentication Server or on another internet-facing server. +Depending on your environment, you may want to deploy the user portal on the same server as Azure AD Multi-Factor Authentication Server or on another internet-facing server. > [!IMPORTANT]-> As of July 1, 2019, Microsoft no longer offers MFA Server for new deployments. New customers that want to require multi-factor authentication (MFA) during sign-in events should use cloud-based Azure AD Multi-Factor Authentication. -> +> In September 2022, Microsoft announced deprecation of Azure AD Multi-Factor Authentication Server. Beginning September 30, 2024, Azure AD Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md). + > To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure AD Multi-Factor Authentication](tutorial-enable-azure-mfa.md).-> -> Existing customers that activated MFA Server before July 1, 2019 can download the latest version, future updates, and generate activation credentials as usual. - ++ > [!NOTE] > The user portal is only available with Multi-Factor Authentication Server. If you use Multi-Factor Authentication in the cloud, refer your users to the [Set-up your account for two-step verification](https://support.microsoft.com/account-billing/how-to-use-the-microsoft-authenticator-app-9783c865-0308-42fb-a519-8cf666fe0acc) or [Manage your settings for two-step verification](https://support.microsoft.com/account-billing/change-your-two-step-verification-method-and-settings-c801d5ad-e0fc-4711-94d5-33ad5d4630f7).
## Install the web service SDK -In either scenario, if the Azure Multi-Factor Authentication Web Service SDK is **not** already installed on the Azure Multi-Factor Authentication (MFA) Server, complete the steps that follow. +In either scenario, if the Azure AD Multi-Factor Authentication Web Service SDK is **not** already installed on the Azure AD Multi-Factor Authentication (MFA) Server, complete the steps that follow. 1. Open the Multi-Factor Authentication Server console. 2. Go to the **Web Service SDK** and select **Install Web Service SDK**. The Web Service SDK must be secured with a TLS/SSL certificate. A self-signed ce  -## Deploy the user portal on the same server as the Azure Multi-Factor Authentication Server +## Deploy the user portal on the same server as the Azure AD Multi-Factor Authentication Server -The following prerequisites are required to install the user portal on the **same server** as the Azure Multi-Factor Authentication Server: +The following prerequisites are required to install the user portal on the **same server** as the Azure AD Multi-Factor Authentication Server: * IIS, including ASP.NET, and IIS 6 metabase compatibility (for IIS 7 or higher) * An account with admin rights for the computer and domain, if applicable. The account needs permissions to create Active Directory security groups. * Secure the user portal with a TLS/SSL certificate.-* Secure the Azure Multi-Factor Authentication Web Service SDK with a TLS/SSL certificate. +* Secure the Azure AD Multi-Factor Authentication Web Service SDK with a TLS/SSL certificate. To deploy the user portal, follow these steps: -1. Open the Azure Multi-Factor Authentication Server console, click the **User Portal** icon in the left menu, then click **Install User Portal**. +1. Open the Azure AD Multi-Factor Authentication Server console, click the **User Portal** icon in the left menu, then click **Install User Portal**. 2. Complete the install using the defaults unless you need to change them for some reason. 3. Bind a TLS/SSL Certificate to the site in IIS. If you have questions about configuring a TLS/SSL Certificate on an IIS server, ## Deploy the user portal on a separate server -If the server where Azure Multi-Factor Authentication Server is running is not internet-facing, you should install the user portal on a **separate, internet-facing server**. +If the server where Azure AD Multi-Factor Authentication Server is running isn't internet-facing, you should install the user portal on a **separate, internet-facing server**. If your organization uses the Microsoft Authenticator app as one of the verification methods, and wants to deploy the user portal on its own server, complete the following requirements: -* Use v6.0 or higher of the Azure Multi-Factor Authentication Server. +* Use v6.0 or higher of the Azure AD Multi-Factor Authentication Server. * Install the user portal on an internet-facing web server running Microsoft Internet Information Services (IIS) 6.x or higher. * When using IIS 6.x, ensure ASP.NET v2.0.50727 is installed, registered, and set to **Allowed**. * When using IIS 7.x or higher, IIS, including Basic Authentication, ASP.NET, and IIS 6 metabase compatibility. * Secure the user portal with a TLS/SSL certificate.-* Secure the Azure Multi-Factor Authentication Web Service SDK with a TLS/SSL certificate. -* Ensure that the user portal can connect to the Azure Multi-Factor Authentication Web Service SDK over TLS/SSL.
-* Ensure that the user portal can authenticate to the Azure Multi-Factor Authentication Web Service SDK using the credentials of a service account in the "PhoneFactor Admins" security group. This service account and group should exist in Active Directory if the Azure Multi-Factor Authentication Server is running on a domain-joined server. This service account and group exist locally on the Azure Multi-Factor Authentication Server if it is not joined to a domain. +* Secure the Azure AD Multi-Factor Authentication Web Service SDK with a TLS/SSL certificate. +* Ensure that the user portal can connect to the Azure AD Multi-Factor Authentication Web Service SDK over TLS/SSL. +* Ensure that the user portal can authenticate to the Azure AD Multi-Factor Authentication Web Service SDK using the credentials of a service account in the "PhoneFactor Admins" security group. This service account and group should exist in Active Directory if the Azure AD Multi-Factor Authentication Server is running on a domain-joined server. This service account and group exist locally on the Azure AD Multi-Factor Authentication Server if it isn't joined to a domain. -Installing the user portal on a server other than the Azure Multi-Factor Authentication Server requires the following steps: +Installing the user portal on a server other than the Azure AD Multi-Factor Authentication Server requires the following steps: -1. **On the MFA Server**, browse to the installation path (Example: C:\Program Files\Multi-Factor Authentication Server), and copy the file **MultiFactorAuthenticationUserPortalSetup64** to a location accessible to the internet-facing server where you will install it. +1. **On the MFA Server**, browse to the installation path (Example: C:\Program Files\Multi-Factor Authentication Server), and copy the file **MultiFactorAuthenticationUserPortalSetup64** to a location accessible to the internet-facing server where you'll install it. 2. **On the internet-facing web server**, run the MultiFactorAuthenticationUserPortalSetup64 install file as an administrator, change the Site if desired and change the Virtual directory to a short name if you would like. 3. Bind a TLS/SSL Certificate to the site in IIS. Installing the user portal on a server other than the Azure Multi-Factor Authent If you have questions about configuring a TLS/SSL Certificate on an IIS server, see the article [How to Set Up SSL on IIS](/iis/manage/configuring-security/how-to-set-up-ssl-on-iis). -## Configure user portal settings in the Azure Multi-Factor Authentication Server +## Configure user portal settings in the Azure AD Multi-Factor Authentication Server -Now that the user portal is installed, you need to configure the Azure Multi-Factor Authentication Server to work with the portal. +Now that the user portal is installed, you need to configure the Azure AD Multi-Factor Authentication Server to work with the portal. -1. In the Azure Multi-Factor Authentication Server console, click the **User Portal** icon. On the Settings tab, enter the URL to the user portal in the **User Portal URL** textbox. If email functionality has been enabled, this URL is included in the emails that are sent to users when they are imported into the Azure Multi-Factor Authentication Server. +1. In the Azure AD Multi-Factor Authentication Server console, click the **User Portal** icon. On the Settings tab, enter the URL to the user portal in the **User Portal URL** textbox. 
If email functionality has been enabled, this URL is included in the emails that are sent to users when they're imported into the Azure AD Multi-Factor Authentication Server. 2. Choose the settings that you want to use in the User Portal. For example, if users are allowed to choose their authentication methods, ensure that **Allow users to select method** is checked, along with the methods they can choose from. 3. Define who should be Administrators on the **Administrators** tab. You can create granular administrative permissions using the checkboxes and dropdowns in the Add/Edit boxes. Optional configuration:  -Azure Multi-Factor Authentication server provides several options for the user portal. The following table provides a list of these options and an explanation of what they are used for. +Azure AD Multi-Factor Authentication server provides several options for the user portal. The following table provides a list of these options and an explanation of what they're used for. | User Portal Settings | Description | |: |: | | User Portal URL | Enter the URL of where the portal is being hosted. | | Primary authentication | Specify the type of authentication to use when signing in to the portal. Either Windows, Radius, or LDAP authentication. |-| Allow users to log in | Allow users to enter a username and password on the sign-in page for the User portal. If this option is not selected, the boxes are grayed out. | +| Allow users to log in | Allow users to enter a username and password on the sign-in page for the User portal. If this option isn't selected, the boxes are grayed out. | | Allow user enrollment | Allow a user to enroll in Multi-Factor Authentication by taking them to a setup screen that prompts them for additional information such as telephone number. Prompt for backup phone allows users to specify a secondary phone number. Prompt for third-party OATH token allows users to specify a third-party OATH token. |-| Allow users to initiate One-Time Bypass | Allow users to initiate a one-time bypass. If a user sets this option up, it will take effect the next time the user signs in. Prompt for bypass seconds provides the user with a box so they can change the default of 300 seconds. Otherwise, the one-time bypass is only good for 300 seconds. | +| Allow users to initiate One-Time Bypass | Allow users to initiate a one-time bypass. If a user sets up this option, it will take effect the next time the user signs in. Prompt for bypass seconds provides the user with a box so they can change the default of 300 seconds. Otherwise, the one-time bypass is only good for 300 seconds. | | Allow users to select method | Allow users to specify their primary contact method. This method can be phone call, text message, mobile app, or OATH token. | | Allow users to select language | Allow users to change the language that is used for the phone call, text message, mobile app, or OATH token. | | Allow users to activate mobile app | Allow users to generate an activation code to complete the mobile app activation process that is used with the server. You can also set the number of devices they can activate the app on, between 1 and 10. | | Use security questions for fallback | Allow security questions in case two-step verification fails. You can specify the number of security questions that must be successfully answered. | | Allow users to associate third-party OATH token | Allow users to specify a third-party OATH token. 
|-| Use OATH token for fallback | Allow for the use of an OATH token in case two-step verification is not successful. You can also specify the session timeout in minutes. | +| Use OATH token for fallback | Allow for the use of an OATH token in case two-step verification isn't successful. You can also specify the session timeout in minutes. | | Enable logging | Enable logging on the user portal. The log files are located at: C:\Program Files\Multi-Factor Authentication Server\Logs. | > [!IMPORTANT] > Starting in March of 2019 the phone call options will not be available to MFA Server users in free/trial Azure AD tenants. SMS messages are not impacted by this change. Phone call will continue to be available to users in paid Azure AD tenants. This change only impacts free/trial Azure AD tenants. -These settings become visible to the user in the portal once they are enabled and they are signed in to the user portal. +The user can see these settings after they sign in to the user portal.  These settings become visible to the user in the portal once they are enabled an If you want your users to sign in and enroll, you must select the **Allow users to log in** and **Allow user enrollment** options under the Settings tab. Remember that the settings you select affect the user sign-in experience. -For example, when a user signs in to the user portal for the first time, they are then taken to the Azure Multi-Factor Authentication User Setup page. Depending on how you have configured Azure Multi-Factor Authentication, the user may be able to select their authentication method. +For example, when a user signs in to the user portal for the first time, they're then taken to the Azure AD Multi-Factor Authentication User Setup page. Depending on how you have configured Azure AD Multi-Factor Authentication, the user may be able to select their authentication method. If they select the Voice Call verification method or have been pre-configured to use that method, the page prompts the user to enter their primary phone number and extension if applicable. They may also be allowed to enter a backup phone number.  -If the user is required to use a PIN when they authenticate, the page prompts them to create a PIN. After entering their phone number(s) and PIN (if applicable), the user clicks the **Call Me Now to Authenticate** button. Azure Multi-Factor Authentication performs a phone call verification to the user's primary phone number. The user must answer the phone call and enter their PIN (if applicable) and press # to move on to the next step of the self-enrollment process. +If the user is required to use a PIN when they authenticate, the page prompts them to create a PIN. After entering their phone number(s) and PIN (if applicable), the user clicks the **Call Me Now to Authenticate** button. Azure AD Multi-Factor Authentication performs a phone call verification to the user's primary phone number. The user must answer the phone call and enter their PIN (if applicable) and press # to move on to the next step of the self-enrollment process. -If the user selects the Text Message verification method or has been pre-configured to use that method, the page prompts the user for their mobile phone number. If the user is required to use a PIN when they authenticate, the page also prompts them to enter a PIN. After entering their phone number and PIN (if applicable), the user clicks the **Text Me Now to Authenticate** button. Azure Multi-Factor Authentication performs an SMS verification to the user's mobile phone. 
The user receives the text message with a one-time-passcode (OTP), then replies to the message with that OTP plus their PIN (if applicable). +If the user selects the Text Message verification method or has been pre-configured to use that method, the page prompts the user for their mobile phone number. If the user is required to use a PIN when they authenticate, the page also prompts them to enter a PIN. After entering their phone number and PIN (if applicable), the user clicks the **Text Me Now to Authenticate** button. Azure AD Multi-Factor Authentication performs an SMS verification to the user's mobile phone. The user receives the text message with a one-time-passcode (OTP), then replies to the message with that OTP plus their PIN (if applicable).  If the user selects the Mobile App verification method, the page prompts the use The page then displays an activation code and a URL along with a barcode picture. If the user is required to use a PIN when they authenticate, the page additionally prompts them to enter a PIN. The user enters the activation code and URL into the Microsoft Authenticator app or uses the barcode scanner to scan the barcode picture and clicks the Activate button. -After the activation is complete, the user clicks the **Authenticate Me Now** button. Azure Multi-Factor Authentication performs a verification to the user's mobile app. The user must enter their PIN (if applicable) and press the Authenticate button in their mobile app to move on to the next step of the self-enrollment process. +After the activation is complete, the user clicks the **Authenticate Me Now** button. Azure AD Multi-Factor Authentication performs a verification to the user's mobile app. The user must enter their PIN (if applicable) and press the Authenticate button in their mobile app to move on to the next step of the self-enrollment process. -If the administrators have configured the Azure Multi-Factor Authentication Server to collect security questions and answers, the user is then taken to the Security Questions page. The user must select four security questions and provide answers to their selected questions. +If the administrators have configured the Azure AD Multi-Factor Authentication Server to collect security questions and answers, the user is then taken to the Security Questions page. The user must select four security questions and provide answers to their selected questions.  The user self-enrollment is now complete and the user is signed in to the user p ## Next steps -- [Deploy the Azure Multi-Factor Authentication Server Mobile App Web Service](howto-mfaserver-deploy-mobileapp.md)+- [Deploy the Azure AD Multi-Factor Authentication Server Mobile App Web Service](howto-mfaserver-deploy-mobileapp.md) |
active-directory | Configure Token Lifetimes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/configure-token-lifetimes.md | -You can specify the lifetime of an access, SAML, or ID token issued by Microsoft identity platform. You can set token lifetimes for all apps in your organization, for a multi-tenant (multi-organization) application, or for a specific service principal in your organization. For more info, read [configurable token lifetimes](active-directory-configurable-token-lifetimes.md). -In this section, we walk through a common policy scenario that can help you impose new rules for token lifetime. In the example, you learn how to create a policy that requires users to authenticate more frequently in your web app. +In the following steps, you'll implement a common policy scenario that imposes new rules for token lifetime. It's possible to specify the lifetime of an access, SAML, or ID token issued by the Microsoft identity platform. This can be set for all apps in your organization or for a specific service principal. They can also be set for a multi-tenant (multi-organization) application. ++For more information, see [configurable token lifetimes](active-directory-configurable-token-lifetimes.md). ## Get started To get started, download the latest [Azure AD PowerShell Module Public Preview release](https://www.powershellgallery.com/packages/AzureADPreview). -Next, run the `Connect` command to sign in to your Azure AD admin account. Run this command each time you start a new session. +Next, run the `Connect-AzureAD` command to sign in to your Azure Active Directory (Azure AD) admin account. Run this command each time you start a new session. ```powershell Connect-AzureAD -Confirm Connect-AzureAD -Confirm ## Create a policy for web sign-in -In this example, you create a policy that requires users to authenticate more frequently in your web app. This policy sets the lifetime of the access/ID tokens to the service principal of your web app. +In the following steps, you'll create a policy that requires users to authenticate more frequently in your web app. This policy sets the lifetime of the access/ID tokens to the service principal of your web app. 1. Create a token lifetime policy. To see all policies that have been created in your organization, run the [Get-Az Get-AzureADPolicy -All $true ``` -To see which apps and service principals are linked to a specific policy you identified run the following [Get-AzureADPolicyAppliedObject](/powershell/module/azuread/get-azureadpolicyappliedobject?view=azureadps-2.0-preview&preserve-view=true) cmdlet by replacing **1a37dad8-5da7-4cc8-87c7-efbc0326cf20** with any of your policy IDs. Then you can decide whether to configure Conditional Access sign-in frequency or remain with the Azure AD defaults. +To see which apps and service principals are linked to a specific policy that you identified, run the following [`Get-AzureADPolicyAppliedObject`](/powershell/module/azuread/get-azureadpolicyappliedobject?view=azureadps-2.0-preview&preserve-view=true) cmdlet by replacing `1a37dad8-5da7-4cc8-87c7-efbc0326cf20` with any of your policy IDs. Then you can decide whether to configure Conditional Access sign-in frequency or remain with the Azure AD defaults.
```powershell Get-AzureADPolicyAppliedObject -id 1a37dad8-5da7-4cc8-87c7-efbc0326cf20 Get-AzureADPolicyAppliedObject -id 1a37dad8-5da7-4cc8-87c7-efbc0326cf20 If your tenant has policies which define custom values for the refresh and session token configuration properties, Microsoft recommends you update those policies to values that reflect the defaults described above. If no changes are made, Azure AD will automatically honor the default values. ### Troubleshooting-Some users have reported a `Get-AzureADPolicy : The term 'Get-AzureADPolicy' is not recognized` error after running the `Get-AzureADPolicy` cmdlet. As a workaround, run the following to uninstall/re-install the AzureAD module and then install the AzureADPreview module: +Some users have reported a `Get-AzureADPolicy : The term 'Get-AzureADPolicy' is not recognized` error after running the `Get-AzureADPolicy` cmdlet. As a workaround, run the following to uninstall/re-install the AzureAD module, and then install the AzureADPreview module: ```powershell # Uninstall the AzureAD Module |
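To round out the token-lifetime scenario above, step 1 ("Create a token lifetime policy") is typically done with the `New-AzureADPolicy` cmdlet, and the policy is then linked to the web app's service principal. A minimal sketch, assuming the AzureADPreview module from the Get started section; the one-hour lifetime, display name, and app filter are illustrative values, not the article's exact ones:

```powershell
# Define a token lifetime policy that shortens the access/ID token lifetime to one hour
# (the lifetime value and display name are illustrative assumptions)
$policy = New-AzureADPolicy -Definition @('{"TokenLifetimePolicy":{"Version":1,"AccessTokenLifetime":"01:00:00"}}') `
    -DisplayName "WebPolicyScenario" -IsOrganizationDefault $false -Type "TokenLifetimePolicy"

# Link the policy to the service principal of your web app
# (replace the display-name filter with your own app's name)
$sp = Get-AzureADServicePrincipal -Filter "DisplayName eq 'My Web App'"
Add-AzureADServicePrincipalPolicy -Id $sp.ObjectId -RefObjectId $policy.Id
```

Running `Get-AzureADPolicy -All $true` afterward should show the new policy alongside any existing ones.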
active-directory | Howto Create Self Signed Certificate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-create-self-signed-certificate.md | -Azure Active Directory (Azure AD) supports two types of authentication for service principals: **password-based authentication** (app secret) and **certificate-based authentication**. While app secrets can easily be created in the Azure portal, it's recommended that your application uses a certificate. +Azure Active Directory (Azure AD) supports two types of authentication for service principals: **password-based authentication** (app secret) and **certificate-based authentication**. While app secrets can easily be created in the Azure portal, they're long-lived, and not as secure as certificates. It's therefore recommended that your application uses a certificate rather than a secret. -For testing, you can use a self-signed public certificate instead of a Certificate Authority (CA)-signed certificate. This article shows you how to use Windows PowerShell to create and export a self-signed certificate. +For testing, you can use a self-signed public certificate instead of a Certificate Authority (CA)-signed certificate. In this how-to, you'll use Windows PowerShell to create and export a self-signed certificate. > [!CAUTION] > Self-signed certificates are not trusted by default and they can be difficult to maintain. Also, they may use outdated hash and cipher suites that may not be strong. For better security, purchase a certificate signed by a well-known certificate authority. -You configure various parameters for the certificate. For example, the cryptographic and hash algorithms, the certificate validity period, and your domain name. Then export the certificate with or without its private key depending on your application needs. +While creating the certificate using PowerShell, you can specify parameters like cryptographic and hash algorithms, certificate validity period, and domain name. The certificate can then be exported with or without its private key depending on your application needs. -The application that initiates the authentication session requires the private key while the application that confirms the authentication requires the public key. So, if you're authenticating from your PowerShell desktop app to Azure AD, you only export the public key (`.cer` file) and upload it to the Azure portal. Your PowerShell app uses the private key from your local certificate store to initiate authentication and obtain access tokens for Microsoft Graph. +The application that initiates the authentication session requires the private key while the application that confirms the authentication requires the public key. So, if you're authenticating from your PowerShell desktop app to Azure AD, you only export the public key (*.cer* file) and upload it to the Azure portal. The PowerShell app uses the private key from your local certificate store to initiate authentication and obtain access tokens for Microsoft Graph. -Your application may also be running from another machine, such as Azure Automation. In this scenario, you export the public and private key pair from your local certificate store, upload the public key to the Azure portal, and the private key (a `.pfx` file) to Azure Automation. Your application running in Azure Automation will use the private key to initiate authentication and obtain access tokens for Microsoft Graph. 
+Your application may also be running from another machine, such as Azure Automation. In this scenario, you export the public and private key pair from your local certificate store, upload the public key to the Azure portal, and the private key (a *.pfx* file) to Azure Automation. Your application running in Azure Automation will use the private key to initiate authentication and obtain access tokens for Microsoft Graph. This article uses the `New-SelfSignedCertificate` PowerShell cmdlet to create the self-signed certificate and the `Export-Certificate` cmdlet to export it to a location that is easily accessible. These cmdlets are built-in to modern versions of Windows (Windows 8.1 and greater, and Windows Server 2012R2 and greater). The self-signed certificate will have the following configuration: This article uses the `New-SelfSignedCertificate` PowerShell cmdlet to create th + The certificate is valid for only one year. + The certificate is supported for use for both client and server authentication. -> [!NOTE] -> To customize the start and expiry date as well as other properties of the certificate, see the [`New-SelfSignedCertificate` reference](/powershell/module/pki/new-selfsignedcertificate?view=windowsserver2019-ps&preserve-view=true). +To customize the start and expiry date and other properties of the certificate, refer to [New-SelfSignedCertificate](/powershell/module/pki/new-selfsignedcertificate?view=windowsserver2019-ps&preserve-view=true). -## Option 1: Create and export your public certificate without a private key +## Create and export your public certificate Use the certificate you create using this method to authenticate from an application running from your machine. For example, authenticate from Windows PowerShell. $cert = New-SelfSignedCertificate -Subject "CN=$certname" -CertStoreLocation "Ce ``` -The **$cert** variable in the previous command stores your certificate in the current session and allows you to export it. The command below exports the certificate in `.cer` format. You can also export it in other formats supported on the Azure portal including `.pem` and `.crt`. +The `$cert` variable in the previous command stores your certificate in the current session and allows you to export it. The command below exports the certificate in *.cer* format. You can also export it in other formats supported on the Azure portal including *.pem* and *.crt*. ```powershell Export-Certificate -Cert $cert -FilePath "C:\Users\admin\Desktop\$certname.cer" Your certificate is now ready to upload to the Azure portal. Once uploaded, retrieve the certificate thumbprint for use to authenticate your application. +## (Optional): Export your public certificate with its private key -## Option 2: Create and export your public certificate with its private key +If your application will be running from another machine or cloud, such as Azure Automation, you'll also need a private key. -Use this option to create a certificate and its private key if your application will be running from another machine or cloud, such as Azure Automation. --In an elevated PowerShell prompt, run the following command and leave the PowerShell console session open. Replace `{certificateName}` with name that you wish to give your certificate. 
--```powershell -$certname = "{certificateName}" ## Replace {certificateName} -$cert = New-SelfSignedCertificate -Subject "CN=$certname" -CertStoreLocation "Cert:\CurrentUser\My" -KeyExportPolicy Exportable -KeySpec Signature -KeyLength 2048 -KeyAlgorithm RSA -HashAlgorithm SHA256 --``` --The **$cert** variable in the previous command stores your certificate in the current session and allows you to export it. The command below exports the certificate in `.cer` format. You can also export it in other formats supported on the Azure portal including `.pem` and `.crt`. ---```powershell --Export-Certificate -Cert $cert -FilePath "C:\Users\admin\Desktop\$certname.cer" ## Specify your preferred location --``` --Still in the same session, create a password for your certificate private key and save it in a variable. In the following command, replace `{myPassword}` with the password that you wish to use to protect your certificate private key. +Following on from the previous commands, create a password for your certificate private key and save it in a variable. Replace `{myPassword}` with the password that you wish to use to protect your certificate private key. ```powershell $mypwd = ConvertTo-SecureString -String "{myPassword}" -Force -AsPlainText ## R ``` -Now, using the password you stored in the `$mypwd` variable, secure, and export your private key. +Using the password you stored in the `$mypwd` variable, secure and export your private key using the following command: ```powershell Export-PfxCertificate -Cert $cert -FilePath "C:\Users\admin\Desktop\$certname.pf ``` -Your certificate (`.cer` file) is now ready to upload to the Azure portal. You also have a private key (`.pfx` file) that is encrypted and can't be read by other parties. Once uploaded, retrieve the certificate thumbprint for use to authenticate your application. +Your certificate (*.cer* file) is now ready to upload to the Azure portal. The private key (*.pfx* file) is encrypted and can't be read by other parties. Once uploaded, retrieve the certificate thumbprint, which you can use to authenticate your application. ## Optional task: Delete the certificate from the keystore. -If you created the certificate using Option 2, you can delete the key pair from your personal store. First, run the following command to retrieve the certificate thumbprint. +You can delete the key pair from your personal store by running the following command to retrieve the certificate thumbprint. ```powershell |
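The thumbprint-retrieval command at the end of the excerpt above is truncated. A minimal sketch of the usual approach, assuming the certificate lives in the `Cert:\CurrentUser\My` store used by the earlier commands:

```powershell
# List certificates in the personal store whose subject matches the name you chose,
# and show each thumbprint (sketch; assumes $certname is still set from earlier)
Get-ChildItem -Path "Cert:\CurrentUser\My" |
    Where-Object { $_.Subject -match $certname } |
    Select-Object Thumbprint, FriendlyName

# Remove the certificate and its private key by thumbprint
# (replace {thumbprint} with a value returned above)
Remove-Item -Path "Cert:\CurrentUser\My\{thumbprint}" -DeleteKey
```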
active-directory | V2 Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-overview.md | -The Microsoft identity platform helps you build applications your users and customers can sign in to using their Microsoft identities or social accounts, and provide authorized access to your own APIs or Microsoft APIs like Microsoft Graph. +The Microsoft identity platform helps you build applications your users and customers can sign in to using their Microsoft identities or social accounts. It authorizes access to your own APIs or Microsoft APIs like Microsoft Graph. There are several components that make up the Microsoft identity platform: - **OAuth 2.0 and OpenID Connect standard-compliant authentication service** enabling developers to authenticate several identity types, including: - Work or school accounts, provisioned through Azure AD- - Personal Microsoft account, like Skype, Xbox, and Outlook.com + - Personal Microsoft accounts (Skype, Xbox, Outlook.com) - Social or local accounts, by using Azure AD B2C-- **Open-source libraries**: Microsoft Authentication Libraries (MSAL) and support for other standards-compliant libraries+- **Open-source libraries**: Microsoft Authentication Library (MSAL) and support for other standards-compliant libraries. - **Application management portal**: A registration and configuration experience in the Azure portal, along with the other Azure management capabilities. - **Application configuration API and PowerShell**: Programmatic configuration of your applications through the Microsoft Graph API and PowerShell so you can automate your DevOps tasks. - **Developer content**: Technical documentation including quickstarts, tutorials, how-to guides, and code samples. -For developers, the Microsoft identity platform offers integration of modern innovations in the identity and security space like passwordless authentication, step-up authentication, and Conditional Access. You don't need to implement such functionality yourself: applications integrated with the Microsoft identity platform natively take advantage of such innovations. +> [!VIDEO https://www.youtube.com/embed/uDU1QTSw7Ps] -With the Microsoft identity platform, you can write code once and reach any user. You can build an app once and have it work across many platforms, or build an app that functions as a client as well as a resource application (API). +For developers, the Microsoft identity platform offers integration of modern innovations in the identity and security space like passwordless authentication, step-up authentication, and Conditional Access. You don't need to implement such functionality yourself. Applications integrated with the Microsoft identity platform natively take advantage of such innovations. -For a video overview of the platform and a demo of the authentication experience, see [What is the Microsoft identity platform for developers?](https://youtu.be/uDU1QTSw7Ps). +With the Microsoft identity platform, you can write code once and reach any user. You can build an app once and have it work across many platforms, or build an app that functions as both a client and a resource application (API). ## Getting started -Choose the [application scenario](authentication-flows-app-scenarios.md) you'd like to build. Each of these scenario paths starts with an overview and links to a quickstart to help you get up and running: +Choose your preferred [application scenario](authentication-flows-app-scenarios.md). 
Each of these scenario paths has an overview and links to a quickstart to help you get started: - [Single-page app (SPA)](scenario-spa-overview.md) - [Web app that signs in users](scenario-web-app-sign-user-overview.md) Learn how core authentication and Azure AD concepts apply to the Microsoft ident [Azure AD B2B](../external-identities/what-is-b2b.md) - Invite external users into your Azure AD tenant as "guest" users, and assign permissions for authorization while they use their existing credentials for authentication. -[Azure Active Directory for developers (v1.0)](../azuread-dev/v1-overview.md) - Shown here for developers with existing apps that use the older v1.0 endpoint. **Do not** use v1.0 for new projects. +[Azure Active Directory for developers (v1.0)](../azuread-dev/v1-overview.md) - Exclusively for developers with existing apps that use the older v1.0 endpoint. **Do not** use v1.0 for new projects. ## Next steps -If you have an Azure account you already have access to an Azure Active Directory tenant, but most Microsoft identity platform developers need their own Azure AD tenant for use while developing applications, a "dev tenant." +If you have an Azure account, then you have access to an Azure Active Directory tenant. However, most Microsoft identity platform developers need their own Azure AD tenant for use while developing applications, known as a *dev tenant*. Learn how to create your own tenant for use while building your applications: -[Quickstart: Set up an Azure AD tenant](quickstart-create-new-tenant.md) +> [!div class="nextstepaction"] +> [Quickstart: Set up an Azure AD tenant](quickstart-create-new-tenant.md) |
active-directory | Workload Identity Federation Considerations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-considerations.md | The following Azure Resource Manager template (ARM template) example creates thr *Applies to: applications and user-assigned managed identities (public preview)* -It is possible to use a deny [Azure Policy](/azure/governance/policy/overview), as in the following ARM template example: +It is possible to use a deny [Azure Policy](../../governance/policy/overview.md), as in the following ARM template example: ```json { The following error codes may be returned when creating, updating, getting, list | 400 | Federated Identity Credential name '{ficName}' is invalid. | Alphanumeric, dash, underscore, no more than 3-120 symbols. First symbol is alphanumeric. | | 404 | The parent user-assigned identity doesn't exist. | Check the user-assigned identity name in the federated identity credential resource path. | | 400 | Issuer and subject combination already exists for this Managed Identity. | This is a constraint. List all federated identity credentials associated with the user-assigned identity to find the existing federated identity credential. |-| 409 | Conflict | Concurrent write request to federated identity credential resources under the same user-assigned identity has been denied. +| 409 | Conflict | Concurrent write request to federated identity credential resources under the same user-assigned identity has been denied. |
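The deny-policy ARM template above is truncated at the opening brace. As a rough sketch only (not the article's exact template), a policy definition with a **deny** effect can target the federated identity credential resource type; the display name and description are assumptions:

```json
{
  "properties": {
    "displayName": "Deny federated identity credentials",
    "description": "Sketch only; blocks creation of federated identity credentials on user-assigned identities.",
    "policyRule": {
      "if": {
        "field": "type",
        "equals": "Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials"
      },
      "then": {
        "effect": "deny"
      }
    }
  }
}
```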
active-directory | Workload Identity Federation Create Trust User Assigned Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-create-trust-user-assigned-managed-identity.md | To learn more about supported regions, time to propagate federated credential up ## Prerequisites -- If you're unfamiliar with managed identities for Azure resources, check out the [overview section](/azure/active-directory/managed-identities-azure-resources/overview). Be sure to review the [difference between a system-assigned and user-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/overview#managed-identity-types).+- If you're unfamiliar with managed identities for Azure resources, check out the [overview section](../managed-identities-azure-resources/overview.md). Be sure to review the [difference between a system-assigned and user-assigned managed identity](../managed-identities-azure-resources/overview.md#managed-identity-types). - If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue. - Get the information for your external IdP and software workload, which you need in the following steps.-- To create a user-assigned managed identity and configure a federated identity credential, your account needs the [Managed Identity Contributor](/azure/role-based-access-control/built-in-roles#managed-identity-contributor) role assignment.-- [Create a user-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azp#create-a-user-assigned-managed-identity)+- To create a user-assigned managed identity and configure a federated identity credential, your account needs the [Managed Identity Contributor](../../role-based-access-control/built-in-roles.md#managed-identity-contributor) role assignment. +- [Create a user-assigned managed identity](../managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-azp#create-a-user-assigned-managed-identity) - Find the object ID of the user-assigned managed identity, which you need in the following steps. ## Configure a federated identity credential on a user-assigned managed identity To delete a specific federated identity credential, select the **Delete** icon f ## Prerequisites -- If you're unfamiliar with managed identities for Azure resources, check out the [overview section](/azure/active-directory/managed-identities-azure-resources/overview). Be sure to review the [difference between a system-assigned and user-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/overview#managed-identity-types).+- If you're unfamiliar with managed identities for Azure resources, check out the [overview section](../managed-identities-azure-resources/overview.md). Be sure to review the [difference between a system-assigned and user-assigned managed identity](../managed-identities-azure-resources/overview.md#managed-identity-types). - If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue. 
- Get the information for your external IdP and software workload, which you need in the following steps.-- To create a user-assigned managed identity and configure a federated identity credential, your account needs the [Managed Identity Contributor](/azure/role-based-access-control/built-in-roles#managed-identity-contributor) role assignment.-- [Create a user-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azcli#create-a-user-assigned-managed-identity-1)+- To create a user-assigned managed identity and configure a federated identity credential, your account needs the [Managed Identity Contributor](../../role-based-access-control/built-in-roles.md#managed-identity-contributor) role assignment. +- [Create a user-assigned managed identity](../managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-azcli#create-a-user-assigned-managed-identity-1) - Find the object ID of the user-assigned managed identity, which you need in the following steps. [!INCLUDE [azure-cli-prepare-your-environment-no-header.md](../../../includes/azure-cli-prepare-your-environment-no-header.md)] az identity federated-credential delete --name $ficId --identity-name $uaId --re ## Prerequisites -- If you're unfamiliar with managed identities for Azure resources, check out the [overview section](/azure/active-directory/managed-identities-azure-resources/overview). Be sure to review the [difference between a system-assigned and user-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/overview#managed-identity-types).+- If you're unfamiliar with managed identities for Azure resources, check out the [overview section](../managed-identities-azure-resources/overview.md). Be sure to review the [difference between a system-assigned and user-assigned managed identity](../managed-identities-azure-resources/overview.md#managed-identity-types). - If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue. - Get the information for your external IdP and software workload, which you need in the following steps.-- To create a user-assigned managed identity and configure a federated identity credential, your account needs the [Managed Identity Contributor](/azure/role-based-access-control/built-in-roles#managed-identity-contributor) role assignment.-- [Create a user-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-arm#create-a-user-assigned-managed-identity-3)+- To create a user-assigned managed identity and configure a federated identity credential, your account needs the [Managed Identity Contributor](../../role-based-access-control/built-in-roles.md#managed-identity-contributor) role assignment. +- [Create a user-assigned managed identity](../managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-arm#create-a-user-assigned-managed-identity-3) - Find the object ID of the user-assigned managed identity, which you need in the following steps. 
## Template creation and editing Resource Manager templates help you deploy new or modified resources defined by ## Configure a federated identity credential on a user-assigned managed identity -A federated identity credential and its parent user-assigned identity can be created or updated by means of the template below. You can [deploy ARM templates](/azure/azure-resource-manager/templates/quickstart-create-templates-use-the-portal) from the [Azure portal](https://portal.azure.com). +A federated identity credential and its parent user-assigned identity can be created or updated by means of the template below. You can [deploy ARM templates](../../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md) from the [Azure portal](https://portal.azure.com). All of the template parameters are mandatory. Make sure that any kind of automation creates federated identity credentials und ## Prerequisites -- If you're unfamiliar with managed identities for Azure resources, check out the [overview section](/azure/active-directory/managed-identities-azure-resources/overview). Be sure to review the [difference between a system-assigned and user-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/overview#managed-identity-types).+- If you're unfamiliar with managed identities for Azure resources, check out the [overview section](../managed-identities-azure-resources/overview.md). Be sure to review the [difference between a system-assigned and user-assigned managed identity](../managed-identities-azure-resources/overview.md#managed-identity-types). - If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue. - Get the information for your external IdP and software workload, which you need in the following steps.-- To create a user-assigned managed identity and configure a federated identity credential, your account needs the [Managed Identity Contributor](/azure/role-based-access-control/built-in-roles#managed-identity-contributor) role assignment.+- To create a user-assigned managed identity and configure a federated identity credential, your account needs the [Managed Identity Contributor](../../role-based-access-control/built-in-roles.md#managed-identity-contributor) role assignment. - You can run all the commands in this article either in the cloud or locally: - To run in the cloud, use [Azure Cloud Shell](../../cloud-shell/overview.md). - To run locally, install [curl](https://curl.haxx.se/download.html) and the [Azure CLI](/cli/azure/install-azure-cli).-- [Create a user-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-rest#create-a-user-assigned-managed-identity-4)+- [Create a user-assigned managed identity](../managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-rest#create-a-user-assigned-managed-identity-4) - Find the object ID of the user-assigned managed identity, which you need in the following steps. 
## Obtain a bearer access token https://management.azure.com/subscriptions/<SUBSCRIPTION ID>/resourceGroups/<RES ## Next steps -- For information about the required format of JWTs created by external identity providers, read about the [assertion format](active-directory-certificate-credentials.md#assertion-format).+- For information about the required format of JWTs created by external identity providers, read about the [assertion format](active-directory-certificate-credentials.md#assertion-format). |
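The management URL in the bearer-token section above is truncated in this excerpt. One common way to obtain the token, assuming the Azure CLI is installed and you're signed in with `az login` (a sketch, not necessarily the article's exact commands):

```console
az account get-access-token --resource https://management.azure.com/
```

The `accessToken` field of the JSON output is then sent as the `Authorization: Bearer` header on the subsequent management REST calls.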
active-directory | Workload Identity Federation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation.md | The following scenarios are supported for accessing Azure AD protected resources - GitHub Actions. First, [Configure a trust relationship](workload-identity-federation-create-trust-github.md) between your app in Azure AD and a GitHub repo in the Azure portal or using Microsoft Graph. Then [configure a GitHub Actions workflow](/azure/developer/github/connect-from-azure) to get an access token from Microsoft identity provider and access Azure resources. - Google Cloud. First, configure a trust relationship between your app in Azure AD and an identity in Google Cloud. Then configure your software workload running in Google Cloud to get an access token from Microsoft identity provider and access Azure AD protected resources. See [Access Azure AD protected resources from an app in Google Cloud](workload-identity-federation-create-trust-gcp.md).-- Workloads running on Kubernetes. Establish a trust relationship between your app or user-assigned managed identity in Azure AD and a Kubernetes workload (described in the [workload identity overview](/azure/aks/workload-identity-overview)).+- Workloads running on Kubernetes. Establish a trust relationship between your app or user-assigned managed identity in Azure AD and a Kubernetes workload (described in the [workload identity overview](../../aks/workload-identity-overview.md)). - Workloads running in compute platforms outside of Azure. [Configure a trust relationship](workload-identity-federation-create-trust.md) between your Azure AD application registration and the external IdP for your compute platform. You can use tokens issued by that platform to authenticate with Microsoft identity platform and call APIs in the Microsoft ecosystem. Use the [client credentials flow](v2-oauth2-client-creds-grant-flow.md#third-case-access-token-request-with-a-federated-credential) to get an access token from Microsoft identity platform, passing in the identity provider's JWT instead of creating one yourself using a stored certificate. ## How it works Learn more about how workload identity federation works: - How to create, delete, get, or update [federated identity credentials](workload-identity-federation-create-trust.md) on an app registration. - How to create, delete, get, or update [federated identity credentials](workload-identity-federation-create-trust-user-assigned-managed-identity.md) on a user-assigned managed identity. - Read the [GitHub Actions documentation](https://docs.github.com/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-azure) to learn more about configuring your GitHub Actions workflow to get an access token from Microsoft identity provider and access Azure resources.-- For information about the required format of JWTs created by external identity providers, read about the [assertion format](active-directory-certificate-credentials.md#assertion-format).+- For information about the required format of JWTs created by external identity providers, read about the [assertion format](active-directory-certificate-credentials.md#assertion-format). |
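The client credentials exchange described above, where the external IdP's JWT is passed as a federated credential, looks roughly like the following sketch; the tenant ID, app ID, and assertion are placeholders to fill in:

```console
curl -X POST "https://login.microsoftonline.com/{tenant-id}/oauth2/v2.0/token" \
  --data-urlencode "client_id={app-id}" \
  --data-urlencode "scope=https://graph.microsoft.com/.default" \
  --data-urlencode "grant_type=client_credentials" \
  --data-urlencode "client_assertion_type=urn:ietf:params:oauth:client-assertion-type:jwt-bearer" \
  --data-urlencode "client_assertion={jwt-from-external-idp}"
```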
active-directory | Invitation Email Elements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/invitation-email-elements.md | We use a LinkedIn-like pattern for the From address. This pattern should make it > [!NOTE] > For the Azure service operated by [21Vianet in China](/azure/china), the sender address is Invites@oe.21vianet.com. -> For the [Azure AD Government](/azure/azure-government), the sender address is invites@azuread.us. +> For the [Azure AD Government](../../azure-government/index.yml), the sender address is invites@azuread.us. ### Reply To See the following articles on Azure AD B2B collaboration: - [How do Azure Active Directory admins add B2B collaboration users?](add-users-administrator.md) - [How do information workers add B2B collaboration users?](add-users-information-worker.md) - [B2B collaboration invitation redemption](redemption-experience.md)-- [Add B2B collaboration users without an invitation](add-user-without-invite.md)+- [Add B2B collaboration users without an invitation](add-user-without-invite.md) |
active-directory | User Flow Add Custom Attributes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/user-flow-add-custom-attributes.md | -You can create custom attributes in the Azure portal and use them in your self-service sign-up user flows. You can also read and write these attributes by using the [Microsoft Graph API](../../active-directory-b2c/microsoft-graph-operations.md). Microsoft Graph API supports creating and updating a user with extension attributes. Extension attributes in the Graph API are named by using the convention `extension_<extensions-app-id>_attributename`. For example: +You can create custom attributes in the Azure portal and use them in your [self-service sign-up user flows](self-service-sign-up-user-flow.md). You can also read and write these attributes by using the [Microsoft Graph API](../../active-directory-b2c/microsoft-graph-operations.md). Microsoft Graph API supports creating and updating a user with extension attributes. Extension attributes in the Graph API are named by using the convention `extension_<extensions-app-id>_attributename`. For example: ```JSON "extension_831374b3bd5041bfaa54263ec9e050fc_loyaltyNumber": "212342" ``` -The `<extensions-app-id>` is specific to your tenant. To find this identifier, navigate to Azure Active Directory > App registrations > All applications. Search for the app that starts with "aad-extensions-app" and select it. On the app's Overview page, note the Application (client) ID. +The `<extensions-app-id>` is specific to your tenant. To find this identifier, navigate to **Azure Active Directory** > **App registrations** > **All applications**. Search for the app that starts with "aad-extensions-app" and select it. On the app's Overview page, note the Application (client) ID. ## Create a custom attribute The `<extensions-app-id>` is specific to your tenant. To find this identifier, n 3. In the left menu, select **External Identities**. 4. Select **Custom user attributes**. The available user attributes are listed. -  + :::image type="content" source="media/user-flow-add-custom-attributes/user-attributes.png" alt-text="Screenshot of selecting custom user attributes for sign-up." lightbox="media/user-flow-add-custom-attributes/user-attributes-large-image.png"::: + 5. To add an attribute, select **Add**. 6. In the **Add an attribute** pane, enter the following values: - - **Name** - Provide a name for the custom attribute (for example, "Shoesize"). + - **Name** - Provide a name for the custom attribute (for example, "Shoe size"). - **Data Type** - Choose a data type (**String**, **Boolean**, or **Int**).- - **Description** - Optionally, enter a description of the custom attribute for internal use. This description is not visible to the user. + - **Description** - Optionally, enter a description of the custom attribute for internal use. This description isn't visible to the user. -  + :::image type="content" source="media/user-flow-add-custom-attributes/add-an-attribute.png" alt-text="Screenshot of adding a custom attribute."::: 7. Select **Create**. -The custom attribute is now available in the list of user attributes and for use in your user flows. A custom attribute is only created the first time it is used in any user flow, and not when you add it to the list of user attributes. +The custom attribute is now available in the list of user attributes and for use in your user flows. 
A custom attribute is only created the first time it's used in any user flow, and not when you add it to the list of user attributes. -Once you've created a new user using a user flow that uses the newly created custom attribute, the object can be queried in [Microsoft Graph Explorer](https://developer.microsoft.com/graph/graph-explorer). You should now see **ShoeSize** in the list of attributes collected during the sign-up journey on the user object. You can call the Graph API from your application to get the data from this attribute after it is added to the user object. +Once you've created a new user using a user flow that uses the newly created custom attribute, the object can be queried in [Microsoft Graph Explorer](https://developer.microsoft.com/graph/graph-explorer). You should now see **ShoeSize** in the list of attributes collected during the sign-up journey on the user object. You can call the Graph API from your application to get the data from this attribute after it's added to the user object. ## Next steps -[Add a self-service sign-up user flow to an app](self-service-sign-up-user-flow.md) +[Add a self-service sign-up user flow to an app](self-service-sign-up-user-flow.md) |
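As a concrete illustration of calling the Graph API for the attribute discussed above, reading it back is a single `$select` query. This sketch reuses the example `<extensions-app-id>` shown earlier in the article; replace it and `{user-id}` with your tenant's values:

```console
GET https://graph.microsoft.com/v1.0/users/{user-id}?$select=id,displayName,extension_831374b3bd5041bfaa54263ec9e050fc_ShoeSize
```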
active-directory | Active Directory Users Assign Role Azure Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md | Title: Assign Azure AD roles to users - Azure Active Directory | Microsoft Docs -description: Instructions about how to assign administrator and non-administrator roles to users with Azure Active Directory. + Title: Manage Azure AD user roles - Azure Active Directory | Microsoft Docs +description: Instructions about how to assign and update user roles with Azure Active Directory. -+ Previously updated : 08/17/2022- Last updated : 10/17/2022+ -# Assign administrator and non-administrator roles to users with Azure Active Directory +# Assign user roles with Azure Active Directory -In Azure Active Directory (Azure AD), if one of your users needs permission to manage Azure AD resources, you must assign them to a role that provides the permissions they need. For info on which roles manage Azure resources and which roles manage Azure AD resources, see [Classic subscription administrator roles, Azure roles, and Azure AD roles](../../role-based-access-control/rbac-and-directory-admin-roles.md). +The ability to manage Azure resources is granted by assigning roles that provide the required permissions. Roles can be assigned to individual users or groups. To align with the [Zero Trust guiding principles](/azure/security/fundamentals/zero-trust), use Just-In-Time and Just-Enough-Access policies when assigning roles. -For more information about the available Azure AD roles, see [Assigning administrator roles in Azure Active Directory](../roles/permissions-reference.md). To add users, see [Add new users to Azure Active Directory](add-users-azure-active-directory.md). +Before assigning roles to users, review the following Microsoft Learn articles: ++- [Learn about Azure AD roles](../roles/concept-understand-roles.md) +- [Learn about role based access control](../../role-based-access-control/rbac-and-directory-admin-roles.md) +- [Explore the Azure built-in roles](../roles/permissions-reference.md) ## Assign roles -A common way to assign Azure AD roles to a user is on the **Assigned roles** page for a user. You can also configure the user eligibility to be elevated just-in-time into a role using Privileged Identity Management (PIM). For more information about how to use PIM, see [Privileged Identity Management](../privileged-identity-management/index.yml). +There are two main steps to the role assignment process. First you'll select the role to assign. Then you'll adjust the role settings and duration. ++### Select the role to assign ++1. Sign in to the [Azure portal](https://portal.azure.com/) using the Privileged Role Administrator role for the directory. ++1. Go to **Azure Active Directory** > **Users**. ++1. Search for and select the user getting the role assignment. ++  ++1. Select **Assigned roles** from the side menu, then select **Add assignments**. ++  ++1. Select a role to assign from the dropdown list and select the **Next** button. ++### Adjust the role settings ++You can assign roles as either _eligible_ or _active_. Eligible roles are assigned to a user but must be elevated Just-In-Time by the user through Privileged Identity Management (PIM). For more information about how to use PIM, see [Privileged Identity Management](../privileged-identity-management/index.yml). 
-> [!Note] -> If you have an Azure AD Premium P2 license plan and already use PIM, all role management tasks are performed in the [Privileged Identity Management experience](../roles/manage-roles-portal.md). This feature is currently limited to assigning only one role at a time. You can't currently select multiple roles and assign them to a user all at once. -> ->  + -## Assign a role to a user +1. From the Setting section of the **Add assignments** page, select an **Assignment type** option. -1. Go to the [Azure portal](https://portal.azure.com/) and sign in using a Global administrator account for the directory. +1. Leave the **Permanently eligible** option selected if the role should always be available to elevate for the user. -2. Search for and select **Azure Active Directory**. + If you uncheck this option, you can specify a date range for the role eligibility. -  +1. Select the **Assign** button. -3. Select **Users**. + Assigned roles appear in the associated section for the user, so eligible and active roles are listed separately. -4. Search for and select the user getting the role assignment. For example, _Alain Charon_. +  -  +## Update roles -5. On the **Alain Charon - Profile** page, select **Assigned roles**. +You can change the settings of a role assignment, for example to change an active role to eligible. - The **Alain Charon - Administrative roles** page appears. +1. Go to **Azure Active Directory** > **Users**. -6. Select **Add assignments**, select the role to assign to Alain (for example, _Application administrator_), and then choose **Select**. +1. Search for and select the user getting their role updated. -  +1. Go to the **Assigned roles** page and select the **Update** link for the role that needs to be changed. - The Application administrator role is assigned to Alain Charon and it appears on the **Alain Charon - Administrative roles** page. +1. Change the settings as needed and select the **Save** button. -## Remove a role assignment +  -If you need to remove the role assignment from a user, you can also do that from the **Alain Charon - Administrative roles** page. +## Remove roles -### To remove a role assignment from a user +You can remove role assignments from the **Administrative roles** page for a selected user. -1. Select **Azure Active Directory**, select **Users**, and then search for and select the user getting the role assignment removed. For example, _Alain Charon_. +1. Go to **Azure Active Directory** > **Users**. -2. Select **Assigned roles**, select **Application administrator**, and then select **Remove assignment**. +1. Search for and select the user getting the role assignment removed. -  +1. Go to the **Assigned roles** page and select the **Remove** link for the role that needs to be removed. Confirm the change in the pop-up message. - The Application administrator role is removed from Alain Charon and it no longer appears on the **Alain Charon - Administrative roles** page. ## Next steps If you need to remove the role assignment from a user, you can also do that from - [Add guest users from another directory](../external-identities/what-is-b2b.md) -Other user management tasks you can check out -are available in [Azure Active Directory user management documentation](../enterprise-users/index.yml). +- [Explore other user management tasks](../enterprise-users/index.yml) |
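If you'd rather script the assignment than use the portal, one approach is the Microsoft Graph PowerShell SDK. This is a sketch under the assumption that the SDK is installed and the placeholder IDs are replaced with your own:

```powershell
# Connect with permission to manage role assignments (scope shown is an assumption)
Connect-MgGraph -Scopes "RoleManagement.ReadWrite.Directory"

# Assign a directory role to a user at tenant scope
# {user-object-id} and {role-definition-id} are placeholders to replace
New-MgRoleManagementDirectoryRoleAssignment -PrincipalId "{user-object-id}" `
    -RoleDefinitionId "{role-definition-id}" -DirectoryScopeId "/"
```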
active-directory | Active Directory Users Profile Azure Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-users-profile-azure-portal.md | Title: Add or update user profile information - Azure AD -description: Instructions about how to add information to a user's profile in Azure Active Directory, including a picture and job details. +description: Instructions about how to manage a user's profile and settings in Azure Active Directory. -+ Previously updated : 08/17/2022- Last updated : 10/17/2022+ -# Add or update a user's profile information using Azure Active Directory -Add user profile information, including a profile picture, job-specific information, and some settings using Azure Active Directory (Azure AD). For more information about adding new users, see [How to add or delete users in Azure Active Directory](add-users-azure-active-directory.md). +# Add or update a user's profile information and settings +A user's profile information and settings can be managed on an individual basis and for all users in your directory. When you look at these settings together, you can see how permissions, restrictions, and other connections work together. -## Add or change profile information -As you'll see, there's more information available in a user's profile than what you're able to add during the user's creation. All this additional information is optional and can be added as needed by your organization. --## To add or change profile information +This article covers how to add user profile information, such as a profile picture and job-specific information. You can also choose to allow users to connect their LinkedIn accounts or restrict access to the Azure AD administration portal. Some settings may be managed in more than one area of Azure AD. For more information about adding new users, see [How to add or delete users in Azure Active Directory](add-users-azure-active-directory.md). ->[!Note] ->The user name and email address properties can't contain accent characters. ## Add or change profile information +When new users are created, only some details are added to their user profile. If your organization needs more details, they can be added after the user is created. 1. Sign in to the [Azure portal](https://portal.azure.com/) in the User Administrator role for the organization. -2. Select **Azure Active Directory**, select **Users**, and then select a user. For example, _Alain Charon_. -- The **Alain Charon - Profile** page appears. +1. Go to **Azure Active Directory** > **Users** and select a user. + +1. There are two ways to edit user profile details. Either select **Edit properties** from the top of the page or select **Properties**. +  +1. After making any changes, select the **Save** button. If you selected the **Edit properties** option: - The full list of properties appears in edit mode on the **All** category. - To edit properties based on the category, select a category from the top of the page. - Select the **Save** button at the bottom of the page to save any changes. +  + If you selected the **Properties** tab option: - The full list of properties appears for you to review. 
+ - To edit a property, select the pencil icon next to the category heading. + - Select the **Save** button at the bottom of the page to save any changes. + +  - - **Identity.** Add or update an additional identity value for the user, such as a married last name. You can set this name independently from the values of First name and Last name. For example, you could use it to include initials, a company name, or to change the sequence of names shown. In another example, for two users whose names are ‘Chris Green’ you could use the Identity string to set their names to 'Chris B. Green' 'Chris R. Green (Contoso).' +### Profile categories +There are six categories of profile details you may be able to edit. - - **Job info.** Add any job-related information, such as the user's job title, department, or manager. +- **Identity:** Add or update other identity values for the user, such as a married last name. You can set this name independently from the values of First name and Last name. For example, you could use it to include initials, a company name, or to change the sequence of names shown. If you have two users with the same name, such as ‘Chris Green,’ you could use the Identity string to set their names to 'Chris B. Green' and 'Chris R. Green.' +- **Job information:** Add any job-related information, such as the user's job title, department, or manager. - - **Settings.** Decide whether the user can sign in to Azure Active Directory tenant. You can also specify the user's global location. +- **Contact info:** Add any relevant contact information for the user. - - **Contact info.** Add any relevant contact information for the user, except for some user's phone or mobile contact info (only a global administrator can update for users in administrator roles). +- **Parental controls:** For organizations like K-12 school districts, the user's age group may need to be provided. *Minors* are 12 and under, *Not adult* are 13-18 years old, and *Adults* are 18 and over. The combination of age group and consent provided by parent options determines the Legal age group classification. The Legal age group classification may limit the user's access and authority. - - **Authentication contact info.** Verify this information to make sure there's an active phone number and email address for the user. This information is used by Azure Active Directory to make sure the user is really the user during sign-in. Authentication contact info can be updated only by a global administrator. +- **Settings:** Decide whether the user can sign in to the Azure Active Directory tenant. You can also specify the user's global location. -4. Select **Save**. +- **On-premises:** Accounts synced from Windows Server Active Directory include additional values not applicable to Azure AD accounts. - All your changes are saved for the user. >[!Note] >You must use Windows Server Active Directory to update the identity, contact info, or job info for users whose source of authority is Windows Server Active Directory. After you complete your update, you must wait for the next synchronization cycle to complete before you'll see the changes.- > - > If you're having issues updating a user's Profile picture, please ensure that your Office 365 Exchange Online Enterprise App is Enabled for users to sign-in. 
-## Next steps -After you've updated your users' profiles, you can perform the following basic processes: +### Add or edit the profile picture +On the user's overview page, select the camera icon in the lower-right corner of the user's thumbnail. If no image has been added, the user's initials appear here. This picture appears in Azure Active Directory and on the user's personal pages, such as the myapps.microsoft.com page. +All your changes are saved for the user. ++>[!Note] +> If you're having issues updating a user's profile picture, please ensure that your Office 365 Exchange Online Enterprise App is Enabled for users to sign in. ++## Manage settings for all users +In the **User settings** area of Azure AD, you can adjust several settings that affect all users, such as restricting access to the Azure AD administration portal, how external collaboration is managed, and providing users the option to connect their LinkedIn account. Some settings are managed in a separate area of Azure AD and linked from this page. ++Go to **Azure AD** > **User settings**. ++## Next steps - [Add or delete users](add-users-azure-active-directory.md) - [Assign roles to users](active-directory-users-assign-role-azure-portal.md) - [Create a basic group and add members](active-directory-groups-create-azure-portal.md) -Or you can perform other user management tasks, such as assigning delegates, using policies, and sharing user accounts. For more information about other available actions, see [Azure Active Directory user management documentation](../enterprise-users/index.yml). +- [View Azure AD enterprise user management documentation](../enterprise-users/index.yml). |
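Profile properties edited in the portal can also be set programmatically; a short sketch using the Microsoft Graph PowerShell SDK (the module, scope, and example values are assumptions, not the article's own steps):

```powershell
# Update job-related profile details for a user (sketch; values are illustrative)
Connect-MgGraph -Scopes "User.ReadWrite.All"
Update-MgUser -UserId "{user-id}" -JobTitle "Product Manager" `
    -Department "Marketing" -OfficeLocation "Building 2"
```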
active-directory | Add Users Azure Active Directory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/add-users-azure-active-directory.md | Title: Add or delete users - Azure Active Directory | Microsoft Docs description: Instructions about how to add new users or delete existing users using Azure Active Directory. --++ Previously updated : 08/17/2022- Last updated : 10/17/2022+ -Add new users or delete existing users from your Azure Active Directory (Azure AD) organization. To add or delete users you must be a User administrator or Global administrator. +Add new users or delete existing users from your Azure Active Directory (Azure AD) tenant. To add or delete users, you must be a User Administrator or Global Administrator. [!INCLUDE [GDPR-related guidance](../../../includes/gdpr-hybrid-note.md)] ## Add a new user -You can create a new user using the Azure Active Directory portal. +You can create a new user for your organization or invite an external user from the same starting point. ->[!Note] ->The user name and email address properties can't contain accent characters. --To add a new user, follow these steps: --1. Sign in to the [Azure portal](https://portal.azure.com/) in the User Administrator role for the organization. --1. Search for and select *Azure Active Directory* from any page. +1. Sign in to the [Azure portal](https://portal.azure.com/) in the User Administrator role. -1. Select **Users**, and then select **New user**. +1. Navigate to **Azure Active Directory** > **Users**. -  +1. Select either **Create new user** or **Invite external user** from the menu. You can change this setting on the next screen. -1. On the **User** page, enter information for this user: +  - - **Name**. Required. The first and last name of the new user. For example, *Mary Parker*. +1. On the **New User** page, provide the new user's information: - - **User name**. Required. The user name of the new user. For example, `mary@contoso.com`. + - **Identity:** Add a user name and display name for the user. **User name** and **Name** are required and can't contain accent characters. You can also add a first and last name. - The domain part of the user name must use either the initial default domain name, *\<yourdomainname>.onmicrosoft.com*, or a custom domain name, such as *contoso.com*. For more information about how to create a custom domain name, see [Add your custom domain name using the Azure Active Directory portal](add-custom-domain.md). + The domain part of the user name must use either the initial default domain name, *\<yourdomainname>.onmicrosoft.com*, or a custom domain name, such as *contoso.com*. For more information about how to create a custom domain name, see [Add your custom domain name using the Azure Active Directory portal](add-custom-domain.md). - - **Groups**. Optionally, you can add the user to one or more existing groups. You can also add the user to groups at a later time. For more information about adding users to groups, see [Create a basic group and add members using Azure Active Directory](active-directory-groups-create-azure-portal.md). + - **Groups and roles:** Optional. Add the user to one or more existing groups. Group membership can be set at any time. For more information about adding users to groups, see the [manage groups article](how-to-manage-groups.md). - - **Directory role**: If you require Azure AD administrative permissions for the user, you can add them to an Azure AD role. 
You can assign the user to be a Global administrator or one or more of the limited administrator roles in Azure AD. For more information about assigning roles, see [How to assign roles to users](active-directory-users-assign-role-azure-portal.md). + - **Settings:** Optional. Toggle the option to block sign-in for the user or set the user's default location. - - **Job info**: You can add more information about the user here, or do it later. For more information about adding user info, see [How to add or change user profile information](active-directory-users-profile-azure-portal.md). + - **Job info**: Optional. Add the user's job title, department, company name, and manager. These details can be updated at any time. For more information about adding other user info, see [How to manage user profile information](active-directory-users-profile-azure-portal.md). 1. Copy the autogenerated password provided in the **Password** box. You'll need to give this password to the user to sign in for the first time. The user is created and added to your Azure AD organization. ## Add a new guest user -You can also invite a new guest user to collaborate with your organization by selecting **Invite user** from the **New user** page. If your organization's external collaboration settings are configured such that you're allowed to invite guests, the user will be emailed an invitation they must accept in order to begin collaborating. For more information about inviting B2B collaboration users, see [Invite B2B users to Azure Active Directory](../external-identities/add-users-administrator.md) +You can also invite a new guest user to collaborate with your organization by selecting **Invite user** from the **New user** page. If your organization's external collaboration settings are configured to allow guests, the user will be emailed an invitation they must accept in order to begin collaborating. For more information about inviting B2B collaboration users, see [Invite B2B users to Azure Active Directory](../external-identities/add-users-administrator.md). The process for inviting a guest is the same as [adding a new user](add-users-azure-active-directory.md#add-a-new-user), with two exceptions. The email address won't follow the same domain rules as users from your organization. You can also include a personal message. -## Add a consumer user +## Add other users -There might be scenarios in which you want to manually create consumer accounts in your Azure Active Directory B2C (Azure AD B2C) directory. For more information about creating consumer accounts, see [Create and delete consumer users in Azure AD B2C](../../active-directory-b2c/manage-users-portal.md). -## Add a new user within a hybrid environment +There might be scenarios in which you want to manually create consumer accounts in your Azure Active Directory B2C (Azure AD B2C) directory. For more information about creating consumer accounts, see [Create and delete consumer users in Azure AD B2C](../../active-directory-b2c/manage-users-portal.md). If you have an environment with both Azure Active Directory (cloud) and Windows Server Active Directory (on-premises), you can add new users by syncing the existing user account data. For more information about hybrid environments and users, see [Integrate your on-premises directories with Azure Active Directory](../hybrid/whatis-hybrid-identity.md). If you have an environment with both Azure Active Directory (cloud) and Windows You can delete an existing user using the Azure Active Directory portal. 
->[!Note] ->You must have a Global administrator, Privileged authentication administrator or User administrator role assignment to delete users in your organization. Global admins and Privileged authentication admins can delete any users including other admins. User administrators can delete any non-admin users, Helpdesk administrators and other User administrators. For more information, see [Administrator role permissions in Azure AD](../roles/permissions-reference.md). +- You must have a Global Administrator, Privileged Authentication Administrator or User Administrator role assignment to delete users in your organization. +- Global Admins and Privileged Authentication Admins can delete any users including other admins. +- User Administrators can delete any non-admin users, Helpdesk Administrators and other User Administrators. +- For more information, see [Administrator role permissions in Azure AD](../roles/permissions-reference.md). To delete a user, follow these steps: -1. Sign in to the [Azure portal](https://portal.azure.com/) using a User administrator account for the organization. +1. Sign in to the [Azure portal](https://portal.azure.com/) using one of the appropriate roles listed above. -1. Search for and select *Azure Active Directory* from any page. +1. Go to **Azure Active Directory** > **Users**. -1. Search for and select the user you want to delete from your Azure AD tenant. For example, _Mary Parker_. +1. Search for and select the user you want to delete from your Azure AD tenant. 1. Select **Delete user**. -  +  The user is deleted and no longer appears on the **Users - All users** page. The user can be seen on the **Deleted users** page for the next 30 days and can be restored during that time. For more information about restoring a user, see [Restore or remove a recently deleted user using Azure Active Directory](active-directory-users-restore.md). After you've added your users, you can do the following basic processes: - [Work with dynamic groups and users](../enterprise-users/groups-create-rule.md) -Or you can do other user management tasks, such as [adding guest users from another directory](../external-identities/what-is-b2b.md) or [restoring a deleted user](active-directory-users-restore.md). For more information about other available actions, see [Azure Active Directory user management documentation](../enterprise-users/index.yml). +- [Add guest users from another directory](../external-identities/what-is-b2b.md) |
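The add and delete steps described in the entry above can also be scripted. The following is a minimal sketch, not part of the article's diff, assuming the Microsoft Graph PowerShell SDK is installed; `mary@contoso.com`, the display name, and the password are placeholder values only.

```powershell
# Sketch only: create and then delete a user with Microsoft Graph PowerShell.
# Assumes Install-Module Microsoft.Graph has been run; all values are placeholders.
Connect-MgGraph -Scopes "User.ReadWrite.All"

# Create the user with a temporary password the user must change at first sign-in.
$passwordProfile = @{
    Password                      = "xWwvJ]6NMw+bWH-d"   # placeholder; treat as a secret
    ForceChangePasswordNextSignIn = $true
}
New-MgUser -DisplayName "Mary Parker" `
    -UserPrincipalName "mary@contoso.com" `
    -MailNickname "mary" `
    -AccountEnabled `
    -PasswordProfile $passwordProfile

# Deleting the user moves it to Deleted users, where it remains restorable for 30 days.
Remove-MgUser -UserId "mary@contoso.com"
```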
active-directory | Customize Branding | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/customize-branding.md | To ensure that the KMSI prompt is shown only when it can benefit the user, the K ## Next steps -- [Add your organization's privacy info on Azure AD](/azure/active-directory/fundamentals/active-directory-properties-area)-- [Learn more about Conditional Access](../conditional-access/overview.md)+- [Add your organization's privacy info on Azure AD](./active-directory-properties-area.md) +- [Learn more about Conditional Access](../conditional-access/overview.md) |
active-directory | Secure With Azure Ad Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-with-azure-ad-best-practices.md | Below are some considerations when designing a governed subscription lifecycle p ## Operations -The following are additional operational considerations for Azure AD, specific to multiple isolated environments. Check the [Azure Cloud Adoption Framework](/azure/cloud-adoption-framework/manage/), [Azure Security Benchmark](/security/benchmark/azure/) and [Azure AD Operations guide](./active-directory-ops-guide-ops.md) for detailed guidance to operate individual environments. +The following are additional operational considerations for Azure AD, specific to multiple isolated environments. Check the [Azure Cloud Adoption Framework](/azure/cloud-adoption-framework/manage/), the [Microsoft cloud security benchmark](/security/benchmark/azure/) and [Azure AD Operations guide](./active-directory-ops-guide-ops.md) for detailed guidance to operate individual environments. ### Cross-environment roles and responsibilities |
active-directory | Secure With Azure Ad Single Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-with-azure-ad-single-tenant.md | Many separation scenarios can be achieved within a single tenant. If possible, w If a set of resources requires unique tenant-wide settings, or there is minimal risk tolerance for unauthorized access by tenant members, or critical impact could be caused by configuration changes, you must achieve isolation in multiple tenants. -**Configuration separation** - In some cases, resources such as applications have dependencies on tenant-wide configurations like authentication methods or [named locations](/azure/active-directory/conditional-access/location-condition#named-locations). You should consider these dependencies when isolating resources. Global administrators can configure the resource settings and tenant-wide settings that affect resources. +**Configuration separation** - In some cases, resources such as applications have dependencies on tenant-wide configurations like authentication methods or [named locations](../conditional-access/location-condition.md#named-locations). You should consider these dependencies when isolating resources. Global administrators can configure the resource settings and tenant-wide settings that affect resources. If a set of resources requires unique tenant-wide settings, or the tenant's settings must be administered by a different entity, you must achieve isolation with multiple tenants. Configuration settings such as authentication methods allowed, hybrid configuration * [Resource isolation with multiple tenants](secure-with-azure-ad-multiple-tenants.md) -* [Best practices](secure-with-azure-ad-best-practices.md) +* [Best practices](secure-with-azure-ad-best-practices.md) |
active-directory | Whats New Sovereign Clouds | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-sovereign-clouds.md | +- [Azure Government](../../azure-government/documentation-government-welcome.md) This page is updated monthly, so revisit it regularly. |
active-directory | Identity Governance Automation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-automation.md | There are two places where you can see the expiration date in the Azure portal. ## Next steps -- [Create an Automation account using the Azure portal](/azure/automation/quickstarts/create-azure-automation-account-portal)+- [Create an Automation account using the Azure portal](../../automation/quickstarts/create-azure-automation-account-portal.md) |
active-directory | Lifecycle Workflow Extensibility | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-extensibility.md | -Lifecycle Workflows allow you to create workflows that can be triggered based on joiner, mover, or leaver scenarios. While Lifecycle Workflows provide several built-in tasks to automate common scenarios throughout the lifecycle of users, eventually you may reach the limits of these built-in tasks. With the extensibility feature, you'll be able to utilize the concept of custom task extensions to call-out to external systems as part of a workflow. By calling out to the external systems, you're able to accomplish things, which can extend the purpose of your workflows. When a user joins your organization you can have a workflow with a custom task extension that assigns a Teams number, or have a separate workflow that grants access to an email account for a manager when a user leaves. With the extensibility feature, Lifecycle Workflows currently support creating custom tasks extensions to call-out to [Azure Logic Apps](/azure/logic-apps/logic-apps-overview). +Lifecycle Workflows allow you to create workflows that can be triggered based on joiner, mover, or leaver scenarios. While Lifecycle Workflows provide several built-in tasks to automate common scenarios throughout the lifecycle of users, eventually you may reach the limits of these built-in tasks. With the extensibility feature, you can use custom task extensions to call out to external systems as part of a workflow. By calling out to external systems, you can accomplish tasks that extend the purpose of your workflows. When a user joins your organization, you can have a workflow with a custom task extension that assigns a Teams number, or have a separate workflow that grants access to an email account for a manager when a user leaves. With the extensibility feature, Lifecycle Workflows currently support creating custom task extensions that call out to [Azure Logic Apps](../../logic-apps/logic-apps-overview.md). ## Prerequisite Logic App roles required for integration with the custom task extension The roles on the Azure Logic App, which allow it to be compatible with the custom task extension - **Owner** > [!NOTE]-> The **Logic App Operator** role alone will not make an Azure Logic App compatible with the custom task extension. For more information on the required **Logic App contributor** role, see: [Logic App Contributor](/azure/role-based-access-control/built-in-roles#logic-app-contributor). +> The **Logic App Operator** role alone will not make an Azure Logic App compatible with the custom task extension. For more information on the required **Logic App contributor** role, see: [Logic App Contributor](../../role-based-access-control/built-in-roles.md#logic-app-contributor). ## Custom task extension deployment scenarios For a guide on supplying this information to a custom task extension via Microsoft Graph - [customTaskExtension resource type](/graph/api/resources/identitygovernance-customtaskextension?view=graph-rest-beta) - [Trigger Logic Apps based on custom task extensions (Preview)](trigger-custom-task.md)-- [Configure a Logic App for Lifecycle Workflow use (Preview)](configure-logic-app-lifecycle-workflows.md)+- [Configure a Logic App for Lifecycle Workflow use (Preview)](configure-logic-app-lifecycle-workflows.md) |
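The prerequisite above calls for the **Logic App Contributor** (or **Owner**) role on the Logic App behind a custom task extension. As a hedged sketch using the Az PowerShell module, where the resource group, Logic App name, and object ID are placeholders to replace with your own values:

```powershell
# Sketch only: grant the Logic App Contributor role on the Logic App used by a
# custom task extension. Names and the object ID below are placeholders.
Connect-AzAccount

$logicApp = Get-AzLogicApp -ResourceGroupName "lcw-rg" -Name "lcw-extension-app"

# ObjectId is the Azure AD object ID of the user or service principal that
# needs to manage the Logic App; replace with a real ID from your tenant.
New-AzRoleAssignment -ObjectId "00000000-0000-0000-0000-000000000000" `
    -RoleDefinitionName "Logic App Contributor" `
    -Scope $logicApp.Id
```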
active-directory | Tutorial Onboard Custom Workflow Graph | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-onboard-custom-workflow-graph.md | Two accounts are required for the tutorial, one account for the new hire and another - department must be set to sales - manager attribute must be set, and the manager account should have a mailbox to receive an email. -For more comprehensive instructions on how to complete these prerequisite steps, you may refer to the [Preparing user accounts for Lifecycle workflows tutorial](tutorial-prepare-azure-ad-user-accounts.md). The [TAP policy](/azure/active-directory/authentication/howto-authentication-temporary-access-pass#enable-the-temporary-access-pass-policy) must also be enabled to run this tutorial. +For more comprehensive instructions on how to complete these prerequisite steps, you may refer to the [Preparing user accounts for Lifecycle workflows tutorial](tutorial-prepare-azure-ad-user-accounts.md). The [TAP policy](../authentication/howto-authentication-temporary-access-pass.md#enable-the-temporary-access-pass-policy) must also be enabled to run this tutorial. Detailed breakdown of the relevant attributes: Content-type: application/json ## Next steps - [Preparing user accounts for Lifecycle workflows (preview)](tutorial-prepare-azure-ad-user-accounts.md)-- [Automate employee onboarding tasks before their first day of work with Azure portal (preview)](tutorial-onboard-custom-workflow-portal.md)+- [Automate employee onboarding tasks before their first day of work with Azure portal (preview)](tutorial-onboard-custom-workflow-portal.md) |
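The tutorial entry above assumes the TAP policy is enabled. Once it is, a Temporary Access Pass for the new-hire test account can be issued from Microsoft Graph PowerShell; a hedged sketch follows, with the UPN and lifetime as placeholder values.

```powershell
# Sketch only: issue a Temporary Access Pass (TAP) for a test account after the
# TAP authentication method policy has been enabled. The UPN is a placeholder.
Connect-MgGraph -Scopes "UserAuthenticationMethod.ReadWrite.All"

$tap = New-MgUserAuthenticationTemporaryAccessPassMethod `
    -UserId "newhire@contoso.com" `
    -LifetimeInMinutes 60

# One-time value to hand to the user for first sign-in.
$tap.TemporaryAccessPass
```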
active-directory | Tutorial Onboard Custom Workflow Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-onboard-custom-workflow-portal.md | Two accounts are required for this tutorial, one account for the new hire and an - department must be set to sales - manager attribute must be set, and the manager account should have a mailbox to receive an email -For more comprehensive instructions on how to complete these prerequisite steps, you may refer to the [Preparing user accounts for Lifecycle workflows tutorial](tutorial-prepare-azure-ad-user-accounts.md). The [TAP policy](/azure/active-directory/authentication/howto-authentication-temporary-access-pass#enable-the-temporary-access-pass-policy) must also be enabled to run this tutorial. +For more comprehensive instructions on how to complete these prerequisite steps, you may refer to the [Preparing user accounts for Lifecycle workflows tutorial](tutorial-prepare-azure-ad-user-accounts.md). The [TAP policy](../authentication/howto-authentication-temporary-access-pass.md#enable-the-temporary-access-pass-policy) must also be enabled to run this tutorial. Detailed breakdown of the relevant attributes: After running your workflow on-demand and checking that everything is working fi ## Next steps - [Tutorial: Preparing user accounts for Lifecycle workflows (preview)](tutorial-prepare-azure-ad-user-accounts.md)-- [Automate employee onboarding tasks before their first day of work with Microsoft Graph (preview)](tutorial-onboard-custom-workflow-graph.md)+- [Automate employee onboarding tasks before their first day of work with Microsoft Graph (preview)](tutorial-onboard-custom-workflow-graph.md) |
active-directory | Tutorial Prepare Azure Ad User Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-prepare-azure-ad-user-accounts.md | The manager attribute is used for email notification tasks. It's used by the li :::image type="content" source="media/tutorial-lifecycle-workflows/graph-get-manager.png" alt-text="Screenshot of getting a manager in Graph explorer." lightbox="media/tutorial-lifecycle-workflows/graph-get-manager.png"::: -For more information about updating manager information for a user in Graph API, see [assign manager](/graph/api/user-post-manager?view=graph-rest-1.0&tabs=http&preserve-view=true) documentation. You can also set this attribute in the Azure Admin center. For more information, see [add or change profile information](/azure/active-directory/fundamentals/active-directory-users-profile-azure-portal?context=azure/active-directory/users-groups-roles/context/ugr-context). +For more information about updating manager information for a user in Graph API, see the [assign manager](/graph/api/user-post-manager?view=graph-rest-1.0&tabs=http&preserve-view=true) documentation. You can also set this attribute in the Azure Admin center. For more information, see [add or change profile information](../fundamentals/active-directory-users-profile-azure-portal.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context). ### Enabling the Temporary Access Pass (TAP) A Temporary Access Pass is a time-limited pass issued by an admin that satisfies strong authentication requirements. A user with groups and Teams memberships is required before you begin the tutorial. - [On-boarding users to your organization using Lifecycle workflows with Azure portal (preview)](tutorial-onboard-custom-workflow-portal.md) - [On-boarding users to your organization using Lifecycle workflows with Microsoft Graph (preview)](tutorial-onboard-custom-workflow-graph.md) - [Tutorial: Off-boarding users from your organization using Lifecycle workflows with Azure portal (preview)](tutorial-offboard-custom-workflow-portal.md)-- [Tutorial: Off-boarding users from your organization using Lifecycle workflows with Microsoft Graph (preview)](tutorial-offboard-custom-workflow-graph.md)+- [Tutorial: Off-boarding users from your organization using Lifecycle workflows with Microsoft Graph (preview)](tutorial-offboard-custom-workflow-graph.md) |
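The manager attribute discussed above can also be set without Graph Explorer. A minimal Microsoft Graph PowerShell sketch follows, mirroring the assign-manager Graph API call the entry links to; both UPNs are placeholders for accounts in your own tenant.

```powershell
# Sketch only: set the manager attribute for a user, then verify it.
Connect-MgGraph -Scopes "User.ReadWrite.All"

$managerId = (Get-MgUser -UserId "manager@contoso.com").Id

Set-MgUserManagerByRef -UserId "newhire@contoso.com" -BodyParameter @{
    "@odata.id" = "https://graph.microsoft.com/v1.0/users/$managerId"
}

# Confirm the assignment that the lifecycle workflow email tasks rely on.
Get-MgUserManager -UserId "newhire@contoso.com"
```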
active-directory | What Are Lifecycle Workflows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/what-are-lifecycle-workflows.md | -Azure AD Lifecycle Workflows is a new Azure AD Identity Governance service that enables organizations to manage Azure AD users by automating these three basic lifecycle processes: +Lifecycle Workflows is a new Identity Governance service that enables organizations to manage Azure AD users by automating these three basic lifecycle processes: - Joiner - When an individual comes into scope of needing access. An example is a new employee joining a company or organization. - Mover - When an individual moves between boundaries within an organization. This movement may require more access or authorization. An example would be a user who was in marketing and is now a member of the sales organization. |
active-directory | How To Connect Fed O365 Certs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-o365-certs.md | The token signing and token decrypting certificates are usually self-signed cert ### Renewal notification from the Microsoft 365 admin center or an email > [!NOTE]-> If you received an email or a portal notification asking you to renew your certificate for Office, see [Managing changes to token signing certificates](#managecerts) to check if you need to take any action. Microsoft is aware of a possible issue that can lead to notifications for certificate renewal being sent, even when no action is required. +> If you received an email asking you to renew your certificate for Office, see [Managing changes to token signing certificates](#managecerts) to check if you need to take any action. Microsoft is aware of a possible issue that can lead to notifications for certificate renewal being sent, even when no action is required. > > Azure AD attempts to monitor the federation metadata and update the token signing certificates as indicated by this metadata. 30 days before the expiration of the token signing certificates, Azure AD checks if new certificates are available by polling the federation metadata. -* If it can successfully poll the federation metadata and retrieve the new certificates, no email notification or warning in the Microsoft 365 admin center is issued to the user. -* If it cannot retrieve the new token signing certificates, either because the federation metadata is not reachable or automatic certificate rollover is not enabled, Azure AD issues an email notification and a warning in the Microsoft 365 admin center. +* If it can successfully poll the federation metadata and retrieve the new certificates, no email notification is issued to the user. +* If it cannot retrieve the new token signing certificates, either because the federation metadata is not reachable or automatic certificate rollover is not enabled, Azure AD sends an email notification. - > [!IMPORTANT] > If you are using AD FS, to ensure business continuity, please verify that your servers have the following updates so that authentication failures for known issues do not occur. This mitigates known AD FS proxy server issues for this renewal and future renewal periods: |
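To inspect the state the entry above describes on your own farm, a short sketch follows; it assumes it runs on the primary AD FS server, where the ADFS PowerShell module is available.

```powershell
# Sketch only: list token signing certificates with their expiry dates, and
# check whether automatic certificate rollover is enabled on this AD FS farm.
Get-AdfsCertificate -CertificateType "Token-Signing" |
    Select-Object IsPrimary, Thumbprint,
        @{ Name = "NotAfter"; Expression = { $_.Certificate.NotAfter } }

# When AutoCertificateRollover is True, Azure AD can pick up new certificates
# from the published federation metadata without manual intervention.
(Get-AdfsProperties).AutoCertificateRollover
```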
active-directory | How To Connect Group Writeback V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-group-writeback-v2.md | Microsoft provides support for this public preview release, but we might not be These limitations and known issues are specific to group writeback: -- Cloud [distribution list groups](https://docs.microsoft.com/exchange/recipients-in-exchange-online/manage-distribution-groups/manage-distribution-groups) created in Exchange Online cannot be written back to AD, only Microsoft 365 and Azure AD security groups are supported. +- Cloud [distribution list groups](/exchange/recipients-in-exchange-online/manage-distribution-groups/manage-distribution-groups) created in Exchange Online cannot be written back to AD; only Microsoft 365 and Azure AD security groups are supported. - To be backwards compatible with the current version of group writeback, when you enable group writeback, all existing Microsoft 365 groups are written back and created as distribution groups, by default. - When you disable writeback for a group, the group won't automatically be removed from your on-premises Active Directory until it's hard deleted in Azure AD. This behavior can be modified by following the steps detailed in [Modifying group writeback](how-to-connect-modify-group-writeback.md). - Group Writeback does not support writeback of nested group members that have a scope of ‘Domain local’ in AD, since Azure AD security groups are written back with scope ‘Universal’. If you have a nested group like this, you'll see an export error in Azure AD Connect with the message “A universal group cannot have a local group as a member.” The resolution is to remove the member with scope ‘Domain local’ from the Azure AD group or update the nested group member scope in AD to ‘Global’ or ‘Universal’. These limitations and known issues are specific to group writeback: - [Modify Azure AD Connect group writeback default behavior](how-to-connect-modify-group-writeback.md) - [Enable Azure AD Connect group writeback](how-to-connect-group-writeback-enable.md)-- [Disable Azure AD Connect group writeback](how-to-connect-group-writeback-disable.md)+- [Disable Azure AD Connect group writeback](how-to-connect-group-writeback-disable.md) |
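To locate the ‘Domain local’ nested members that trigger the export error described above, a hedged Active Directory PowerShell sketch follows. The group name is a placeholder, and because AD does not allow converting a Domain local group directly to Global, the sketch converts it to Universal.

```powershell
# Sketch only: find groups whose 'Domain local' scope breaks group writeback,
# then convert one of them to Universal. Requires the RSAT AD module.
Import-Module ActiveDirectory

# Candidate groups that would produce the export error described above.
Get-ADGroup -Filter 'GroupScope -eq "DomainLocal"' |
    Select-Object Name, GroupScope, DistinguishedName

# Convert a placeholder group to Universal scope so it can be nested safely.
Set-ADGroup -Identity "Legacy-File-Share-Access" -GroupScope Universal
```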
active-directory | How To Connect Sync Whatis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-whatis.md | The sync service consists of two components, the on-premises **Azure AD Connect > >To find out if you are already eligible for Cloud Sync, please verify your requirements in [this wizard](https://admin.microsoft.com/adminportal/home?Q=setupguidance#/modernonboarding/identitywizard). >->To learn more about Cloud Sync please read [this article](/azure/active-directory/cloud-sync/what-is-cloud-sync), or watch this [short video](https://www.microsoft.com/videoplayer/embed/RWJ8l5). +>To learn more about Cloud Sync please read [this article](../cloud-sync/what-is-cloud-sync.md), or watch this [short video](https://www.microsoft.com/videoplayer/embed/RWJ8l5). > The sync service consists of two components, the on-premises **Azure AD Connect | [Functions Reference](reference-connect-sync-functions-reference.md) |Lists all functions available in declarative provisioning. | ## Additional Resources-* [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md) +* [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md) |
active-directory | How To Upgrade Previous Version | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-upgrade-previous-version.md | These steps also work to move from Azure AD Sync or a solution with FIM + Azure > It's important to fully decommission old Azure AD Connect servers as these may cause synchronization issues that are difficult to troubleshoot when an old sync server is left on the network or is powered up again later by mistake. Such “rogue” servers tend to overwrite Azure AD data with their old information because they may no longer be able to access on-premises Active Directory (for example, when the computer account is expired, the connector account password has changed, etcetera), but can still connect to Azure AD and cause attribute values to continually revert in every sync cycle (for example, every 30 minutes). To fully decommission an Azure AD Connect server, make sure you completely uninstall the product and its components or permanently delete the server if it is a virtual machine. ### Move a custom configuration from the active server to the staging server-If you have made configuration changes to the active server, you need to make sure that the same changes are applied to the new staging server. To help with this move, you can use the feature for [exporting and importing synchronization settings](/azure/active-directory/hybrid/how-to-connect-import-export-config). With this feature you can deploy a new staging server in a few steps, with the exact same settings as another Azure AD Connect server in your network. +If you have made configuration changes to the active server, you need to make sure that the same changes are applied to the new staging server. To help with this move, you can use the feature for [exporting and importing synchronization settings](./how-to-connect-import-export-config.md). With this feature you can deploy a new staging server in a few steps, with the exact same settings as another Azure AD Connect server in your network. For individual custom sync rules that you have created, you can move them by using PowerShell. If you must apply other changes the same way on both systems, and you cannot migrate the changes, then you might have to manually do the following configurations on both servers: This error occurs because the current Azure AD Connect configuration is not supported If you want to install a newer version of Azure AD Connect: close the Azure AD Connect wizard, uninstall the existing Azure AD Connect, and perform a clean install of the newer Azure AD Connect. ## Next steps-Learn more about [integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md). +Learn more about [integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md). |
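For the individual custom sync rules mentioned above, a small ADSync sketch (run on the existing Azure AD Connect server) lists the non-standard rules that would need to be recreated on the staging server; from there, each rule can be exported from the Synchronization Rules Editor as a PowerShell script and replayed on the new server.

```powershell
# Sketch only: on the active Azure AD Connect server, list the custom
# (non-standard) synchronization rules to recreate on the staging server.
Import-Module ADSync

Get-ADSyncRule |
    Where-Object { -not $_.IsStandardRule } |
    Select-Object Name, Direction, Precedence
```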
active-directory | Plan Connect Userprincipalname | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-connect-userprincipalname.md | When the updates to a user object are synchronized to the Azure AD Tenant, Azure > >Whenever Azure AD recalculates the UserPrincipalName attribute, it also recalculates the MOERA. >->In case of verified domain change, Azure AD also recalculates the UserPrincipalName attribute. For more information, see [Troubleshoot: Audit data on verified domain change](https://docs.microsoft.com/azure/active-directory/reports-monitoring/troubleshoot-audit-data-verified-domain) +>In case of a verified domain change, Azure AD also recalculates the UserPrincipalName attribute. For more information, see [Troubleshoot: Audit data on verified domain change](../reports-monitoring/troubleshoot-audit-data-verified-domain.md) ## UPN scenarios The following example scenarios show how the UPN is calculated. Azure AD Tenant user object: ## Next Steps - [Integrate your on-premises directories with Azure Active Directory](whatis-hybrid-identity.md)-- [Custom installation of Azure AD Connect](how-to-connect-install-custom.md)+- [Custom installation of Azure AD Connect](how-to-connect-install-custom.md) |
active-directory | Plan Hybrid Identity Design Considerations Data Protection Strategy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-data-protection-strategy.md | Once authenticated, the user principal name (UPN) is read from the authentication Moving data from your on-premises datacenter into Azure Storage over an Internet connection may not always be feasible due to data volume, bandwidth availability, or other considerations. The [Azure Storage Import/Export Service](../../import-export/storage-import-export-service.md) provides a hardware-based option for placing/retrieving large volumes of data in blob storage. It allows you to send [BitLocker-encrypted](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn306081(v=ws.11)#BKMK_BL2012R2) hard disk drives directly to an Azure datacenter where cloud operators upload the contents to your storage account, or they can download your Azure data to your drives to return to you. Only encrypted disks are accepted for this process (using a BitLocker key generated by the service itself during the job setup). The BitLocker key is provided to Azure separately, thus providing out of band key sharing. -Since data in transit can take place in different scenarios, is also relevant to know that Microsoft Azure uses [virtual networking](/azure/virtual-network/) to isolate tenants’ traffic from one another, employing measures such as host- and guest-level firewalls, IP packet filtering, port blocking, and HTTPS endpoints. However, most of Azure’s internal communications, including infrastructure-to-infrastructure and infrastructure-to-customer (on-premises), are also encrypted. Another important scenario is the communications within Azure datacenters; Microsoft manages networks to assure that no VM can impersonate or eavesdrop on the IP address of another. TLS/SSL is used when accessing Azure Storage or SQL Databases, or when connecting to Cloud Services. In this case, the customer administrator is responsible for obtaining a TLS/SSL certificate and deploying it to their tenant infrastructure. Data traffic moving between Virtual Machines in the same deployment or between tenants in a single deployment via Microsoft Azure Virtual Network can be protected through encrypted communication protocols such as HTTPS, SSL/TLS, or others. +Since data in transit can take place in different scenarios, it's also relevant to know that Microsoft Azure uses [virtual networking](../../virtual-network/index.yml) to isolate tenants’ traffic from one another, employing measures such as host- and guest-level firewalls, IP packet filtering, port blocking, and HTTPS endpoints. However, most of Azure’s internal communications, including infrastructure-to-infrastructure and infrastructure-to-customer (on-premises), are also encrypted. Another important scenario is the communications within Azure datacenters; Microsoft manages networks to assure that no VM can impersonate or eavesdrop on the IP address of another. TLS/SSL is used when accessing Azure Storage or SQL Databases, or when connecting to Cloud Services. In this case, the customer administrator is responsible for obtaining a TLS/SSL certificate and deploying it to their tenant infrastructure.
Data traffic moving between Virtual Machines in the same deployment or between tenants in a single deployment via Microsoft Azure Virtual Network can be protected through encrypted communication protocols such as HTTPS, SSL/TLS, or others. Depending on how you answered the questions in [Determine data protection requirements](plan-hybrid-identity-design-considerations-dataprotection-requirements.md), you should be able to determine how you want to protect your data and how the hybrid identity solution can assist you with that process. The following table shows the options supported by Azure that are available for each data protection scenario. Since the options for incident response use a multilayer approach, comparison be [Determine hybrid identity management tasks](plan-hybrid-identity-design-considerations-hybrid-id-management-tasks.md) ## See Also-[Design considerations overview](plan-hybrid-identity-design-considerations-overview.md) +[Design considerations overview](plan-hybrid-identity-design-considerations-overview.md) |
active-directory | Manage App Consent Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-app-consent-policies.md | -With [Microsoft Graph PowerShell](/powershell/microsoftgraph/get-started?view=graph-powershell-1.0&preserve-view=true), you can view and manage app consent policies. +With [Microsoft Graph](/graph/overview) and [Microsoft Graph PowerShell](/powershell/microsoftgraph/get-started?view=graph-powershell-1.0&preserve-view=true), you can view and manage app consent policies. An app consent policy consists of one or more "include" condition sets and zero or more "exclude" condition sets. For an event to be considered in an app consent policy, it must match *at least* one "include" condition set, and must not match *any* "exclude" condition set. The following table provides the list of supported conditions for app consent po To learn more: +* [Manage app consent policies using Microsoft Graph](/graph/api/resources/permissiongrantpolicy) * [Configure user consent settings](configure-user-consent.md) * [Configure the admin consent workflow](configure-admin-consent-workflow.md) * [Learn how to manage consent to applications and evaluate consent requests](manage-consent-requests.md) |
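As a starting point for the management tasks the entry above describes, this hedged sketch lists a tenant's existing app consent policies with Microsoft Graph PowerShell; the scope shown is the one used for managing permission grant policies.

```powershell
# Sketch only: enumerate app consent (permission grant) policies so you can
# review their include/exclude condition sets before creating or editing one.
Connect-MgGraph -Scopes "Policy.ReadWrite.PermissionGrant"

Get-MgPolicyPermissionGrantPolicy |
    Select-Object Id, DisplayName, Description
```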
active-directory | How Manage User Assigned Managed Identities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md | For a full list of Azure CLI identity commands, see [az identity](/cli/azure/ide For information on how to assign a user-assigned managed identity to an Azure VM, see [Configure managed identities for Azure resources on an Azure VM using Azure CLI](qs-configure-cli-windows-vm.md#user-assigned-managed-identity). -Learn how to use [workload identity federation for managed identities](/azure/active-directory/develop/workload-identity-federation) to access Azure Active Directory (Azure AD) protected resources without managing secrets. +Learn how to use [workload identity federation for managed identities](../develop/workload-identity-federation.md) to access Azure Active Directory (Azure AD) protected resources without managing secrets. ::: zone-end Remove-AzUserAssignedIdentity -ResourceGroupName <RESOURCE GROUP> -Name <USER AS For a full list and more details of the Azure PowerShell managed identities for Azure resources commands, see [Az.ManagedServiceIdentity](/powershell/module/az.managedserviceidentity#managed_service_identity). -Learn how to use [workload identity federation for managed identities](/azure/active-directory/develop/workload-identity-federation) to access Azure Active Directory (Azure AD) protected resources without managing secrets. +Learn how to use [workload identity federation for managed identities](../develop/workload-identity-federation.md) to access Azure Active Directory (Azure AD) protected resources without managing secrets. ::: zone-end To create a user-assigned managed identity, use the following template. Replace To assign a user-assigned managed identity to an Azure VM using a Resource Manager template, see [Configure managed identities for Azure resources on an Azure VM using a template](qs-configure-template-windows-vm.md). -Learn how to use [workload identity federation for managed identities](/azure/active-directory/develop/workload-identity-federation) to access Azure Active Directory (Azure AD) protected resources without managing secrets. +Learn how to use [workload identity federation for managed identities](../develop/workload-identity-federation.md) to access Azure Active Directory (Azure AD) protected resources without managing secrets. ::: zone-end For information on how to assign a user-assigned managed identity to an Azure VM - [Configure managed identities for Azure resources on an Azure VM using REST API calls](qs-configure-rest-vm.md#user-assigned-managed-identity) - [Configure managed identities for Azure resources on a virtual machine scale set using REST API calls](qs-configure-rest-vmss.md#user-assigned-managed-identity) -Learn how to use [workload identity federation for managed identities](/azure/active-directory/develop/workload-identity-federation) to access Azure Active Directory (Azure AD) protected resources without managing secrets. --+Learn how to use [workload identity federation for managed identities](../develop/workload-identity-federation.md) to access Azure Active Directory (Azure AD) protected resources without managing secrets. |
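For symmetry with the `Remove-AzUserAssignedIdentity` command shown in the entry above, a minimal creation sketch follows. The resource group, identity name, and location are placeholders, and it assumes a current Az.ManagedServiceIdentity module, in which `-Location` is required.

```powershell
# Sketch only: create a user-assigned managed identity with Azure PowerShell.
# Resource group, name, and location below are placeholders.
Connect-AzAccount

New-AzUserAssignedIdentity -ResourceGroupName "myResourceGroup" `
    -Name "myUserAssignedIdentity" `
    -Location "eastus"
```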
active-directory | Overview For Developers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/overview-for-developers.md | Tokens should be treated like credentials. Don't expose them to users or other s * [How to use managed identities for App Service and Azure Functions](../../app-service/overview-managed-identity.md) * [How to use managed identities with Azure Container Instances](../../container-instances/container-instances-managed-identity.md) * [Implementing managed identities for Microsoft Azure Resources](https://www.pluralsight.com/courses/microsoft-azure-resources-managed-identities-implementing)-* Use [workload identity federation for managed identities](/azure/active-directory/develop/workload-identity-federation) to access Azure Active Directory (Azure AD) protected resources without managing secrets +* Use [workload identity federation for managed identities](../develop/workload-identity-federation.md) to access Azure Active Directory (Azure AD) protected resources without managing secrets |
active-directory | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/overview.md | Operations on managed identities can be performed by using an Azure Resource Man * [How to use managed identities for App Service and Azure Functions](../../app-service/overview-managed-identity.md) * [How to use managed identities with Azure Container Instances](../../container-instances/container-instances-managed-identity.md) * [Implementing managed identities for Microsoft Azure Resources](https://www.pluralsight.com/courses/microsoft-azure-resources-managed-identities-implementing)-* Use [workload identity federation for managed identities](/azure/active-directory/develop/workload-identity-federation) to access Azure Active Directory (Azure AD) protected resources without managing secrets +* Use [workload identity federation for managed identities](../develop/workload-identity-federation.md) to access Azure Active Directory (Azure AD) protected resources without managing secrets |
active-directory | Bambubysproutsocial Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bambubysproutsocial-tutorial.md | Title: 'Tutorial: Azure Active Directory integration with Bambu by Sprout Social | Microsoft Docs' -description: Learn how to configure single sign-on between Azure Active Directory and Bambu by Sprout Social. + Title: 'Tutorial: Azure AD SSO integration with Employee Advocacy by Sprout Social' +description: Learn how to configure single sign-on between Azure Active Directory and Employee Advocacy by Sprout Social. -# Tutorial: Azure Active Directory integration with Bambu by Sprout Social +# Tutorial: Azure AD SSO integration with Employee Advocacy by Sprout Social -In this tutorial, you'll learn how to integrate Bambu by Sprout Social with Azure Active Directory (Azure AD). When you integrate Bambu by Sprout Social with Azure AD, you can: +In this tutorial, you'll learn how to integrate Employee Advocacy by Sprout Social with Azure Active Directory (Azure AD). When you integrate Employee Advocacy by Sprout Social with Azure AD, you can: -* Control in Azure AD who has access to Bambu by Sprout Social. -* Enable your users to be automatically signed-in to Bambu by Sprout Social with their Azure AD accounts. +* Control in Azure AD who has access to Employee Advocacy by Sprout Social. +* Enable your users to be automatically signed-in to Employee Advocacy by Sprout Social with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal. ## Prerequisites In this tutorial, you'll learn how to integrate Bambu by Sprout Social with Azur To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).-* Bambu by Sprout Social single sign-on (SSO) enabled subscription. +* Employee Advocacy by Sprout Social single sign-on (SSO) enabled subscription. ## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment. -* Bambu by Sprout Social supports **IDP** initiated SSO. -* Bambu by Sprout Social supports **Just In Time** user provisioning. +* Employee Advocacy by Sprout Social supports **SP** and **IDP** initiated SSO. +* Employee Advocacy by Sprout Social supports **Just In Time** user provisioning. -## Add Bambu by Sprout Social from the gallery +## Add Employee Advocacy by Sprout Social from the gallery -To configure the integration of Bambu by Sprout Social into Azure AD, you need to add Bambu by Sprout Social from the gallery to your list of managed SaaS apps. +To configure the integration of Employee Advocacy by Sprout Social into Azure AD, you need to add Employee Advocacy by Sprout Social from the gallery to your list of managed SaaS apps. 1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account. 1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**.-1. In the **Add from the gallery** section, type **Bambu by Sprout Social** in the search box. -1. Select **Bambu by Sprout Social** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. +1. In the **Add from the gallery** section, type **Employee Advocacy by Sprout Social** in the search box. +1. 
Select **Employee Advocacy by Sprout Social** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides) -## Configure and test Azure AD SSO for Bambu by Sprout Social +## Configure and test Azure AD SSO for Employee Advocacy by Sprout Social -Configure and test Azure AD SSO with Bambu by Sprout Social using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Bambu by Sprout Social. +Configure and test Azure AD SSO with Employee Advocacy by Sprout Social using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Employee Advocacy by Sprout Social. -To configure and test Azure AD SSO with Bambu by Sprout Social, perform the following steps: +To configure and test Azure AD SSO with Employee Advocacy by Sprout Social, perform the following steps: 1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.-1. **[Configure Bambu by Sprout Social SSO](#configure-bambu-by-sprout-social-sso)** - to configure the single sign-on settings on application side. - 1. **[Create Bambu by Sprout Social test user](#create-bambu-by-sprout-social-test-user)** - to have a counterpart of B.Simon in Bambu by Sprout Social that is linked to the Azure AD representation of user. +1. **[Configure Employee Advocacy by Sprout Social SSO](#configure-employee-advocacy-by-sprout-social-sso)** - to configure the single sign-on settings on application side. + 1. **[Create Employee Advocacy by Sprout Social test user](#create-employee-advocacy-by-sprout-social-test-user)** - to have a counterpart of B.Simon in Employee Advocacy by Sprout Social that is linked to the Azure AD representation of user. 1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal. -1. In the Azure portal, on the **Bambu by Sprout Social** application integration page, find the **Manage** section and select **single sign-on**. +1. In the Azure portal, on the **Employee Advocacy by Sprout Social** application integration page, find the **Manage** section and select **single sign-on**. 1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.  -4. On the **Basic SAML Configuration** section, the user does not have to perform any step as the app is already pre-integrated with Azure. +1. On the **Basic SAML Configuration** section, the user does not have to perform any step as the app is already pre-integrated with Azure. -5. 
On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer. +1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode: ++ In the **Sign-on URL** text box, type a URL using one of the following patterns: ++ | **Sign-on URL** | + |--| + | `https://advocacy.sproutsocial.com` | + | `https://<SUBDOMAIN>.advocacy.sproutsocial.com` | ++ > [!Note] + > These values are not real. Update this value with the actual Sign-on URL. Contact [Employee Advocacy by Sprout Social Client support team](mailto:support@getbambu.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. ++1. The Employee Advocacy by Sprout Social application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. ++  ++1. In addition to the above, the Employee Advocacy by Sprout Social application expects a few more attributes to be passed back in the SAML response, which are shown below. These attributes are also pre-populated, but you can review them as per your requirements. ++ | Name | Source Attribute| + | | | + | firstName | user.givenname | + | lastName | user.surname | + | email | user.mail | ++1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.  -6. On the **Set up Employee Advocacy by Sprout Social** section, copy the appropriate URL(s) as per your requirement.  In this section, you'll create a test user in the Azure portal called B.Simon. ### Assign the Azure AD test user -In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Bambu by Sprout Social. +In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Employee Advocacy by Sprout Social. 1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. -1. In the applications list, select **Bambu by Sprout Social**. +1. In the applications list, select **Employee Advocacy by Sprout Social**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**. 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog. 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. 1. In the **Add Assignment** dialog, click the **Assign** button. -## Configure Bambu by Sprout Social SSO +## Configure Employee Advocacy by Sprout Social SSO -To configure single sign-on on **Bambu by Sprout Social** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Bambu by Sprout Social support team](mailto:support@getbambu.com).
They set this setting to have the SAML SSO connection set properly on both sides. +To configure single sign-on on the **Employee Advocacy by Sprout Social** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to the [Employee Advocacy by Sprout Social support team](mailto:support@getbambu.com). They use these values to set up the SAML SSO connection properly on both sides. -### Create Bambu by Sprout Social test user +### Create Employee Advocacy by Sprout Social test user -In this section, a user called Britta Simon is created in Bambu by Sprout Social. Bambu by Sprout Social supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Bambu by Sprout Social, a new one is created after authentication. +In this section, a user called Britta Simon is created in Employee Advocacy by Sprout Social. Employee Advocacy by Sprout Social supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Employee Advocacy by Sprout Social, a new one is created after authentication. ## Test SSO -In this section, you test your Azure AD single sign-on configuration with following options. +In this section, you test your Azure AD single sign-on configuration with the following options. ++#### SP initiated: ++* Click on **Test this application** in Azure portal. This will redirect to the Employee Advocacy by Sprout Social Sign-on URL, where you can initiate the login flow. ++* Go to the Employee Advocacy by Sprout Social Sign-on URL directly and initiate the login flow from there. ++#### IDP initiated: -* Click on Test this application in Azure portal and you should be automatically signed in to the Bambu by Sprout Social for which you set up the SSO. +* Click on **Test this application** in Azure portal and you should be automatically signed in to the Employee Advocacy by Sprout Social for which you set up the SSO. -* You can use Microsoft My Apps. When you click the Bambu by Sprout Social tile in the My Apps, you should be automatically signed in to the Bambu by Sprout Social for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510). +You can also use Microsoft My Apps to test the application in any mode. When you click the Employee Advocacy by Sprout Social tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow, and if configured in IDP mode, you should be automatically signed in to the Employee Advocacy by Sprout Social for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ## Next steps -Once you configure Bambu by Sprout Social you can enforce session control, which protects exfiltration and infiltration of your organization’s sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad). +Once you configure Employee Advocacy by Sprout Social, you can enforce session control, which protects exfiltration and infiltration of your organization’s sensitive data in real time.
Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad). |
active-directory | Plan Issuance Solution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/plan-issuance-solution.md | For security logging and monitoring, we recommend the following: * Mitigate distributed denial of service (DDOS) and Key Vault resource exhaustion risks. Every request that triggers a VC issuance request generates Key Vault signing operations that accrue towards service limits. We recommend protecting traffic by incorporating authentication or captcha before generating issuance requests. -For guidance on managing your Azure environment, we recommend you review [Azure Security Benchmark](/security/benchmark/azure/) and [Securing Azure environments with Azure Active Directory](https://aka.ms/AzureADSecuredAzure). These guides provide best practices for managing the underlying Azure resources, including Azure Key Vault, Azure Storage, websites, and other Azure-related services and capabilities. +For guidance on managing your Azure environment, we recommend you review the [Microsoft cloud security benchmark](/security/benchmark/azure/) and [Securing Azure environments with Azure Active Directory](https://aka.ms/AzureADSecuredAzure). These guides provide best practices for managing the underlying Azure resources, including Azure Key Vault, Azure Storage, websites, and other Azure-related services and capabilities. ## Additional considerations |
aks | Aks Support Help | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-support-help.md | + + Title: Azure Kubernetes Service support and help options +description: How to obtain help and support for questions or problems when you create solutions using Azure Kubernetes Service. ++ Last updated : 10/18/2022++++# Support and troubleshooting for Azure Kubernetes Service (AKS) ++Here are suggestions for where you can get help when developing your Azure Kubernetes Service (AKS) solutions. ++## Self help troubleshooting +++Various articles explain how to determine, diagnose, and fix issues that you might encounter when using Azure Kubernetes Service. Use these articles to troubleshoot deployment failures, security-related problems, connection issues and more. ++For a full list of self help troubleshooting content, see [Azure Kubernetes Service troubleshooting documentation](/troubleshoot/azure/azure-kubernetes/welcome-azure-kubernetes) ++## Post a question on Microsoft Q&A +++For quick and reliable answers on your technical product questions from Microsoft Engineers, Azure Most Valuable Professionals (MVPs), or our expert community, engage with us on [Microsoft Q&A](/answers/products/azure), Azure's preferred destination for community support. ++If you can't find an answer to your problem using search, submit a new question to Microsoft Q&A. Use one of the following tags when asking your question: ++| Area | Tag | +|-|-| +| [Azure Kubernetes Service](intro-kubernetes.md) | [azure-kubernetes-service](/answers/topics/azure-kubernetes-service.html)| +| [Azure Container Registry](../container-registry/container-registry-intro.md) | [azure-container-registry](/answers/topics/azure-container-registry.html)| +| [Azure storage accounts](../storage/common/storage-account-overview.md) | [azure-storage-accounts](/answers/topics/azure-storage-accounts.html)| +| [Azure Managed Identities](../active-directory/managed-identities-azure-resources/overview.md) | [azure-managed-identity](/answers/topics/azure-managed-identity.html) | +| [Azure RBAC](../role-based-access-control/overview.md) | [azure-rbac](/answers/topics/azure-rbac.html)| +| [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) | [azure-active-directory](/answers/topics/azure-active-directory.html)| +| [Azure Policy](../governance/policy/overview.md) | [azure-policy](/answers/topics/azure-policy.html)| +| [Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/overview.md) | [virtual-machine-scale-sets](/answers/topics/azure-virtual-machine-scale-sets.html)| +| [Azure Virtual Network](../virtual-network/network-overview.md) | [azure-virtual-network](/answers/topics/azure-virtual-network.html)| +| [Azure Application Gateway](../application-gateway/overview.md) | [azure-application-gateway](/answers/topics/azure-application-gateway.html)| +| [Azure Virtual Machines](../virtual-machines/linux/overview.md) | [azure-virtual-machines](/answers/topics/azure-virtual-machines.html) | ++## Create an Azure support request +++Explore the range of [Azure support options and choose the plan](https://azure.microsoft.com/support/plans) that best fits, whether you're a developer just starting your cloud journey or a large organization deploying business-critical, strategic applications. Azure customers can create and manage support requests in the Azure portal. 
++- If you already have an Azure Support Plan, [open a support request here](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). ++- To sign up for a new Azure Support Plan, [compare support plans](https://azure.microsoft.com/support/plans/) and select the plan that works for you. ++## Create a GitHub issue +++If you need help with the language and tools used to develop and manage Azure Kubernetes Service, open an issue in its repository on GitHub. ++| Library | GitHub issues URL| +| | | +| Azure PowerShell | https://github.com/Azure/azure-powershell/issues | +| Azure CLI | https://github.com/Azure/azure-cli/issues | +| Azure REST API | https://github.com/Azure/azure-rest-api-specs/issues | +| Azure SDK for Java | https://github.com/Azure/azure-sdk-for-java/issues | +| Azure SDK for Python | https://github.com/Azure/azure-sdk-for-python/issues | +| Azure SDK for .NET | https://github.com/Azure/azure-sdk-for-net/issues | +| Azure SDK for JavaScript | https://github.com/Azure/azure-sdk-for-js/issues | +| Terraform | https://github.com/Azure/terraform/issues | ++## Stay informed of updates and new releases +++Learn about important product updates, roadmap, and announcements in [Azure Updates](https://azure.microsoft.com/updates/?category=compute). ++News and information about Azure Virtual Machines is shared at the [Azure blog](https://azure.microsoft.com/blog/topics/virtual-machines/). ++## Next steps ++Learn more about [Azure Kubernetes Service](./index.yml) |
aks | Deployment Center Launcher | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deployment-center-launcher.md | -> Deployment Center for Azure Kubernetes Service will be retired on March 31, 2023. [Learn more](/azure/aks/deployment-center-launcher#retirement) +> Deployment Center for Azure Kubernetes Service will be retired on March 31, 2023. [Learn more](#retirement) Deployment Center in Azure DevOps simplifies setting up a robust Azure DevOps pipeline for your application. By default, Deployment Center configures an Azure DevOps pipeline to deploy your application updates to the Kubernetes cluster. You can extend the default configured Azure DevOps pipeline and also add richer capabilities: the ability to gain approval before deploying, provision additional Azure resources, run scripts, upgrade your application, and even run more validation tests. You can modify these build and release pipelines to meet the needs of your team. ## Retirement -Deployment Center for Azure Kubernetes will be retired on March 31, 2023 in favor of [Automated deployments](/azure/aks/automated-deployments). We encourage you to switch for enjoy similar capabilities. +Deployment Center for Azure Kubernetes will be retired on March 31, 2023 in favor of [Automated deployments](./automated-deployments.md). We encourage you to switch to it to enjoy similar capabilities. #### Migration Steps -There is no migration required as AKS Deployment center experience does not store any information itself, it just helps users with their Day 0 getting started experience on Azure. Moving forward, the recommended way for users to get started on CI/CD for AKS will be using [Automated deployments](/azure/aks/automated-deployments) feature. +No migration is required, because the AKS Deployment Center experience doesn't store any information itself; it just helps users with their Day 0 getting-started experience on Azure. Moving forward, the recommended way for users to get started with CI/CD for AKS is the [Automated deployments](./automated-deployments.md) feature. For existing pipelines, users will still be able to perform all operations from GitHub Actions or Azure DevOps after the retirement of this experience. Only the ability to create and view pipelines from the Azure portal will be removed. See [GitHub Actions](https://docs.github.com/en/actions) or [Azure DevOps](/azure/devops/pipelines/get-started/pipelines-get-started) to learn how to get started. No. All the created pipelines will still be available and functional in GitHub 3. How can I still configure CD pipelines directly through the Azure portal? -You can use Automated deployments available in the AKS blade in Azure portal. +You can use Automated deployments, available in the AKS blade in the Azure portal. |
aks | Howto Deploy Java Liberty App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-liberty-app.md | The following steps guide you through creating a Liberty runtime on AKS. After completi 1. This section allows you to select an existing AKS cluster and Azure Container Registry (ACR), instead of causing the deployment to create a new one, if desired. This capability enables you to leverage the sidecar pattern, as shown in the [Azure architecture center](/azure/architecture/patterns/sidecar). You can also adjust the settings for the size and number of the virtual machines in the AKS node pool. Leave all other values at the defaults and select **Next: Networking**. 1. Next to **Connect to Azure Application Gateway?** select **Yes**. This pane lets you customize the following deployment options. 1. You can customize the virtual network and subnet into which the deployment will place the resources. Leave these values at their defaults.- 1. You can provide the TLS/SSL certificate presented by the Azure Application Gateway. Leave the values at the default to cause the offer to generate a self-signed certificate. Do not go to production using a self-certificate. For more information about self-signed certificates, see [Create a self-signed public certificate to authenticate your application](/azure/active-directory/develop/howto-create-self-signed-certificate). + 1. You can provide the TLS/SSL certificate presented by the Azure Application Gateway. Leave the values at the default to cause the offer to generate a self-signed certificate. Do not go to production using a self-signed certificate. For more information about self-signed certificates, see [Create a self-signed public certificate to authenticate your application](../active-directory/develop/howto-create-self-signed-certificate.md). 1. You can enable cookie-based affinity, also known as sticky sessions. We want this enabled for this article, so ensure this option is selected.  1. Select **Review + create** to validate your selected options. az group delete --name <db-resource-group> --yes --no-wait * [Azure Kubernetes Service](https://azure.microsoft.com/free/services/kubernetes-service/) * [Open Liberty](https://openliberty.io/) * [Open Liberty Operator](https://github.com/OpenLiberty/open-liberty-operator)-* [Open Liberty Server Configuration](https://openliberty.io/docs/ref/config/) +* [Open Liberty Server Configuration](https://openliberty.io/docs/ref/config/) |
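If you'd rather supply your own certificate for testing instead of the offer-generated one, a self-signed certificate can be created locally with `openssl`. This is a minimal sketch, assuming a hypothetical `contoso.com` host name and a placeholder export password; use a CA-issued certificate for production:

```console
# Generate a self-signed certificate and private key (testing only)
openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes \
  -keyout appgw.key -out appgw.crt -subj "/CN=contoso.com"

# Bundle them into a PFX file, the format Application Gateway consumes
openssl pkcs12 -export -inkey appgw.key -in appgw.crt \
  -out appgw.pfx -passout pass:<your-password>
```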
aks | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Kubernetes Service (AKS) description: Lists Azure Policy Regulatory Compliance controls available for Azure Kubernetes Service (AKS). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/10/2022 Last updated : 10/12/2022 |
aks | Supported Kubernetes Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md | For the past release history, see [Kubernetes](https://en.wikipedia.org/wiki/Kub | 1.22 | Aug-04-21 | Sept 2021 | Dec 2021 | 1.25 GA | | 1.23 | Dec 2021 | Jan 2022 | Apr 2022 | 1.26 GA | | 1.24 | Apr-22-22 | May 2022 | Jul 2022 | 1.27 GA-| 1.25 | Aug 2022 | Oct 2022 | Nov 2022 | 1.28 GA +| 1.25 | Aug 2022 | Oct 2022 | Dec 2022 | 1.28 GA | 1.26 | Dec 2022 | Jan 2023 | Mar 2023 | 1.29 GA ## FAQ |
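The calendar above is the published plan; the versions actually available in a given region can be confirmed with the Azure CLI. A quick sketch (the region name is a placeholder):

```azurecli-interactive
# List the Kubernetes versions and upgrade paths available in a region
az aks get-versions --location eastus --output table
```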
aks | Upgrade Windows 2019 2022 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-windows-2019-2022.md | sample-7794bfcc4c-sh78c 1/1 Running 0 2m49s 10.240.0.228 ak If you are leveraging Group Managed Service Accounts (gMSA), you will need to update the Managed Identity configuration for the new node pool. gMSA uses a secret (user account and password) so the node on which the Windows pod is running can authenticate the container against Active Directory. To access that secret on Azure Key Vault, the node uses a Managed Identity that allows the node to access the resource. Since Managed Identities are configured per node pool, and the pod now resides on a new node pool, you need to update that configuration. Check out [Enable Group Managed Service Accounts (GMSA) for your Windows Server nodes on your Azure Kubernetes Service (AKS) cluster](./use-group-managed-service-accounts.md) for more information. -The same principle applies to Managed Identities used for any other pod/node pool when accessing other Azure resources. Any access provided via Managed Identity needs to be updated to reflect the new node pool. To view update and sign-in activities, see [How to view Managed Identity activity](/azure/active-directory/managed-identities-azure-resources/how-to-view-managed-identity-activity). +The same principle applies to Managed Identities used for any other pod/node pool when accessing other Azure resources. Any access provided via Managed Identity needs to be updated to reflect the new node pool. To view update and sign-in activities, see [How to view Managed Identity activity](../active-directory/managed-identities-azure-resources/how-to-view-managed-identity-activity.md). |
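As a hedged illustration of rechecking that configuration after the new node pool is added, the following sketch inspects the kubelet identity that the cluster's node pools use and regrants it read access to the Key Vault holding the gMSA secret (all resource names are placeholders):

```azurecli-interactive
# Inspect the kubelet identity used by the cluster's node pools
az aks show --resource-group myResourceGroup --name myAKSCluster \
    --query identityProfile.kubeletidentity

# Regrant that identity read access to the gMSA secret in Key Vault
az keyvault set-policy --name myKeyVault \
    --secret-permissions get \
    --spn <clientId-from-previous-output>
```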
aks | Web App Routing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/web-app-routing.md | The add-on deploys the following components: - An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free). - [Azure CLI installed](/cli/azure/install-azure-cli). - An Azure Key Vault to store certificates.-- A DNS solution, such as [Azure DNS](/azure/dns/dns-getstarted-portal).+- A DNS solution, such as [Azure DNS](../dns/dns-getstarted-portal.md). ### Install the `aks-preview` Azure CLI extension az keyvault certificate import --vault-name <KeyVaultName> -n <KeyVaultCertifica ### Create an Azure DNS zone -If you want the add-on to automatically manage creating hostnames via Azure DNS, you need to [create an Azure DNS zone](/azure/dns/dns-getstarted-cli) if you don't have one already. +If you want the add-on to automatically manage creating hostnames via Azure DNS, you need to [create an Azure DNS zone](../dns/dns-getstarted-cli.md) if you don't have one already. ```azurecli-interactive # Create a DNS zone The following additional add-ons are required: * **open-service-mesh**: If you require encrypted intra-cluster traffic (recommended) between the nginx ingress and your services, the Open Service Mesh add-on is required, which provides mutual TLS (mTLS). > [!IMPORTANT]-> To enable the add-on to reload certificates from Azure Key Vault when they change, you should to enable the [secret autorotation feature](/azure/aks/csi-secrets-store-driver#enable-and-disable-autorotation) of the Secret Store CSI driver with the `--enable-secret-rotation` argument. When the autorotation is enabled, the driver updates the pod mount and the Kubernetes secret by polling for changes periodically, based on the rotation poll interval you can define. The default rotation poll interval is 2 minutes. +> To enable the add-on to reload certificates from Azure Key Vault when they change, you should enable the [secret autorotation feature](./csi-secrets-store-driver.md#enable-and-disable-autorotation) of the Secret Store CSI driver with the `--enable-secret-rotation` argument. When autorotation is enabled, the driver updates the pod mount and the Kubernetes secret by polling for changes periodically, based on the rotation poll interval you can define. The default rotation poll interval is 2 minutes. ```azurecli-interactive az aks create -g <ResourceGroupName> -n <ClusterName> -l <Location> --enable-addons azure-keyvault-secrets-provider,open-service-mesh,web_application_routing --generate-ssh-keys --enable-secret-rotation The following additional add-on is required: * **azure-keyvault-secrets-provider**: The Secret Store CSI provider for Azure Key Vault is required to retrieve the certificates from Azure Key Vault. > [!IMPORTANT]-> To enable the add-on to reload certificates from Azure Key Vault when they change, you should to enable the [secret autorotation feature](/azure/aks/csi-secrets-store-driver#enable-and-disable-autorotation) of the Secret Store CSI driver with the `--enable-secret-rotation` argument. When the autorotation is enabled, the driver updates the pod mount and the Kubernetes secret by polling for changes periodically, based on the rotation poll interval you can define. The default rotation poll interval is 2 minutes. 
+> To enable the add-on to reload certificates from Azure Key Vault when they change, you should enable the [secret autorotation feature](./csi-secrets-store-driver.md#enable-and-disable-autorotation) of the Secret Store CSI driver with the `--enable-secret-rotation` argument. When autorotation is enabled, the driver updates the pod mount and the Kubernetes secret by polling for changes periodically, based on the rotation poll interval you can define. The default rotation poll interval is 2 minutes. ```azurecli-interactive az aks create -g <ResourceGroupName> -n <ClusterName> -l <Location> --enable-addons azure-keyvault-secrets-provider,web_application_routing --generate-ssh-keys --enable-secret-rotation When the Web Application Routing add-on is disabled, some Kubernetes resources m [kubectl-delete]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#delete [kubectl-logs]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs [ingress]: https://kubernetes.io/docs/concepts/services-networking/ingress/-[ingress-resource]: https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource +[ingress-resource]: https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource |
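The `az aks create` examples above provision a new cluster. For an existing cluster, the same add-ons can likely be enabled with `az aks enable-addons`; this is a sketch that assumes the add-on names and the `--enable-secret-rotation` flag behave as they do at creation time (the add-on is in preview, so flag support may change):

```azurecli-interactive
az aks enable-addons --resource-group <ResourceGroupName> --name <ClusterName> \
    --addons azure-keyvault-secrets-provider,web_application_routing \
    --enable-secret-rotation
```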
api-management | Api Management Howto Disaster Recovery Backup Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-disaster-recovery-backup-restore.md | This article shows how to automate backup and restore operations of your API Man * An API Management service instance. If you don't have one, see [Create an API Management service instance](get-started-create-service-instance.md). * An Azure storage account. If you don't have one, see [Create a storage account](../storage/common/storage-account-create.md).- * [Create a container](/azure/storage/blobs/storage-quickstart-blobs-portal#create-a-container) in the storage account to hold the backup data. + * [Create a container](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) in the storage account to hold the backup data. * The latest version of Azure PowerShell, if you plan to use Azure PowerShell cmdlets. If you haven't already, [install Azure PowerShell](/powershell/azure/install-az-ps). Check out the following related resources for the backup/restore process: [api-management-arm-token]: ./media/api-management-howto-disaster-recovery-backup-restore/api-management-arm-token.png [api-management-endpoint]: ./media/api-management-howto-disaster-recovery-backup-restore/api-management-endpoint.png [control-plane-ip-address]: virtual-network-reference.md#control-plane-ip-addresses-[azure-storage-ip-firewall]: ../storage/common/storage-network-security.md#grant-access-from-an-internet-ip-range +[azure-storage-ip-firewall]: ../storage/common/storage-network-security.md#grant-access-from-an-internet-ip-range |
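For the storage prerequisite, a minimal sketch of creating the account and the backup container with the Azure CLI (names and location are placeholders):

```azurecli-interactive
# Create a storage account to hold the API Management backups
az storage account create --name <storage-account> \
    --resource-group <resource-group> --location <location> --sku Standard_LRS

# Create the container for the backup blobs
az storage container create --name apim-backups \
    --account-name <storage-account> --auth-mode login
```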
api-management | Api Version Retirement Sep 2023 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/api-version-retirement-sep-2023.md | After 30 September 2023, if you prefer not to update your tools, scripts, and pr * [Azure CLI](/cli/azure/update-azure-cli) * [Azure PowerShell](/powershell/azure/install-az-ps)-* [Azure Resource Manager](/azure/azure-resource-manager/management/overview) +* [Azure Resource Manager](../../azure-resource-manager/management/overview.md) * [Terraform on Azure](/azure/developer/terraform/)-* [Bicep](/azure/azure-resource-manager/bicep/overview) +* [Bicep](../../azure-resource-manager/bicep/overview.md) * [Microsoft Q&A](/answers/topics/azure-api-management.html) ## Next steps |
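One way to confirm that a script or tool is using a supported API version is to call the Resource Manager endpoint with an explicit `api-version` parameter. A sketch using `az rest` (the resource path and the `2021-08-01` version are illustrative):

```azurecli-interactive
az rest --method get \
    --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<service-name>?api-version=2021-08-01"
```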
api-management | Devops Api Development Templates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/devops-api-development-templates.md | Review [Automated API deployments with APIOps][28] in the Azure Architecture Cen [14]: https://owasp.org/www-community/api_security_tools [15]: https://github.com/postmanlabs/newman [16]: https://learning.postman.com/docs/getting-started/creating-the-first-collection/-[17]: /azure/azure-resource-manager/templates/deployment-tutorial-pipeline +[17]: ../azure-resource-manager/templates/deployment-tutorial-pipeline.md [18]: https://github.com/marketplace/actions/deploy-azure-resource-manager-arm-template [19]: https://marketplace.visualstudio.com/items?itemName=charleszipp.azure-pipelines-tasks-terraform [20]: https://learn.hashicorp.com/tutorials/terraform/github-actions |
api-management | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure API Management description: Lists Azure Policy Regulatory Compliance controls available for Azure API Management. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/10/2022 Last updated : 10/12/2022 |
app-service | Configure Language Nodejs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-nodejs.md | This setting specifies the Node.js version to use, both at runtime and during au ## Get port number -You Node.js app needs to listen to the right port to receive incoming requests. +Your Node.js app needs to listen to the right port to receive incoming requests. ::: zone pivot="platform-windows" |
app-service | Deploy Azure Pipelines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-azure-pipelines.md | -Use [Azure Pipelines](/azure/devops/pipelines/) to automatically deploy your web app to [Azure App Service](/azure/app-service/overview) on every successful build. Azure Pipelines lets you build, test, and deploy with continuous integration (CI) and continuous delivery (CD) using [Azure DevOps](/azure/devops/). +Use [Azure Pipelines](/azure/devops/pipelines/) to automatically deploy your web app to [Azure App Service](./overview.md) on every successful build. Azure Pipelines lets you build, test, and deploy with continuous integration (CI) and continuous delivery (CD) using [Azure DevOps](/azure/devops/). YAML pipelines are defined using a YAML file in your repository. A step is the smallest building block of a pipeline and can be a script or task (pre-packaged script). [Learn about the key concepts and components that make up a pipeline](/azure/devops/pipelines/get-started/key-pipelines-concepts). By default, your deployment happens to the root application in the Azure Web App ``` * **VirtualApplication**: the name of the Virtual Application that has been configured in the Azure portal. See [Configure an App Service app in the Azure portal-](/azure/app-service/configure-common) for more details. +](./configure-common.md) for more details. # [Classic](#tab/classic/) You can control the order of deployment. To learn more, see [Stages](/azure/devo ## Make configuration changes -For most language stacks, [app settings](/azure/app-service/configure-common?toc=%252fazure%252fapp-service%252fcontainers%252ftoc.json#configure-app-settings) and [connection strings](/azure/app-service/configure-common?toc=%252fazure%252fapp-service%252fcontainers%252ftoc.json#configure-connection-strings) can be set as environment variables at runtime. +For most language stacks, [app settings](./configure-common.md?toc=%2fazure%2fapp-service%2fcontainers%2ftoc.json#configure-app-settings) and [connection strings](./configure-common.md?toc=%2fazure%2fapp-service%2fcontainers%2ftoc.json#configure-connection-strings) can be set as environment variables at runtime. -App settings can also be resolved from Key Vault using [Key Vault references](/azure/app-service/app-service-key-vault-references). +App settings can also be resolved from Key Vault using [Key Vault references](./app-service-key-vault-references.md). For ASP.NET and ASP.NET Core developers, setting app settings in App Service is like setting them in `<appSettings>` in Web.config. You might want to apply a specific configuration for your web app target before deploying to it. You're now ready to create a release, which means to run the release pipeline wi ## Next steps -- Customize your [Azure DevOps pipeline](/azure/devops/pipelines/customize-pipeline). +- Customize your [Azure DevOps pipeline](/azure/devops/pipelines/customize-pipeline). |
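Outside of the pipeline, app settings and connection strings can also be set from the Azure CLI, which is a convenient way to verify what the deployed app sees at runtime. A sketch with placeholder names and values:

```azurecli-interactive
# Set an app setting (surfaced to the app as an environment variable)
az webapp config appsettings set --resource-group <resource-group> \
    --name <app-name> --settings MY_SETTING="my-value"

# Set a connection string of type SQLAzure
az webapp config connection-string set --resource-group <resource-group> \
    --name <app-name> --connection-string-type SQLAzure \
    --settings MyDb="<connection-string>"
```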
app-service | Deploy Content Sync | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-content-sync.md | Invoke-AzureRmResourceAction -ResourceGroupName <group-name> -ResourceType Micro On September 30th, 2023 the integrations for Microsoft OneDrive and Dropbox for Azure App Service and Azure Functions will be retired. If you are using OneDrive or Dropbox, you should [disable content sync deployments](#disable-content-sync-deployment) from OneDrive and Dropbox. Then, you can set up deployments from any of the following alternatives - [GitHub Actions](deploy-github-actions.md)-- [Azure DevOps Pipelines](https://docs.microsoft.com/azure/devops/pipelines/targets/webapp?view=azure-devops)-- [Azure CLI](https://docs.microsoft.com/azure/app-service/deploy-zip?tabs=cli)-- [VS Code](https://docs.microsoft.com/azure/app-service/deploy-zip?tabs=cli)-- [Local Git Repository](https://docs.microsoft.com/azure/app-service/deploy-local-git?tabs=cli)+- [Azure DevOps Pipelines](/azure/devops/pipelines/targets/webapp?view=azure-devops) +- [Azure CLI](./deploy-zip.md?tabs=cli) +- [VS Code](./deploy-zip.md?tabs=cli) +- [Local Git Repository](./deploy-local-git.md?tabs=cli) ## Next steps > [!div class="nextstepaction"]-> [Deploy from local Git repo](deploy-local-git.md) +> [Deploy from local Git repo](deploy-local-git.md) |
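As a sketch of the Azure CLI alternative listed above, a zip package can be deployed directly (the app and package names are placeholders):

```azurecli-interactive
az webapp deploy --resource-group <resource-group> --name <app-name> \
    --src-path ./app.zip --type zip
```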
app-service | Overview Authentication Authorization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-authentication-authorization.md | When using Azure App Service with Easy Auth behind Azure Front Door or other rev 1) Disable Caching for the authentication workflow - See [Disable cache for auth workflow](/azure/static-web-apps/front-door-manual#disable-cache-for-auth-workflow) to learn more on how to configure rules in Azure Front Door to disable caching for authentication and authorization-related pages. + See [Disable cache for auth workflow](../static-web-apps/front-door-manual.md#disable-cache-for-auth-workflow) to learn more on how to configure rules in Azure Front Door to disable caching for authentication and authorization-related pages. 2) Use the Front Door endpoint for redirects Samples: - [Tutorial: Add authentication to your web app running on Azure App Service](scenario-secure-app-authentication-app-service.md) - [Tutorial: Authenticate and authorize users end-to-end in Azure App Service (Windows or Linux)](tutorial-auth-aad.md) - [.NET Core integration of Azure AppService EasyAuth (3rd party)](https://github.com/MaximRouiller/MaximeRouiller.Azure.AppService.EasyAuth)-- [Getting Azure App Service authentication working with .NET Core (3rd party)](https://github.com/kirkone/KK.AspNetCore.EasyAuthAuthentication)+- [Getting Azure App Service authentication working with .NET Core (3rd party)](https://github.com/kirkone/KK.AspNetCore.EasyAuthAuthentication) |
app-service | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview.md | Azure App Service is a fully managed platform as a service (PaaS) offering for d * **API and mobile features** - App Service provides turn-key CORS support for RESTful API scenarios, and simplifies mobile app scenarios by enabling authentication, offline data sync, push notifications, and more. * **Serverless code** - Run a code snippet or script on-demand without having to explicitly provision or manage infrastructure, and pay only for the compute time your code actually uses (see [Azure Functions](../azure-functions/index.yml)). -Besides App Service, Azure offers other services that can be used for hosting websites and web applications. For most scenarios, App Service is the best choice. For microservice architecture, consider [Azure Spring Apps](../spring-apps/index.yml) or [Service Fabric](/azure/service-fabric). If you need more control over the VMs on which your code runs, consider [Azure Virtual Machines](/azure/virtual-machines/). For more information about how to choose between these Azure services, see [Azure App Service, Virtual Machines, Service Fabric, and Cloud Services comparison](/azure/architecture/guide/technology-choices/compute-decision-tree). +Besides App Service, Azure offers other services that can be used for hosting websites and web applications. For most scenarios, App Service is the best choice. For microservice architecture, consider [Azure Spring Apps](../spring-apps/index.yml) or [Service Fabric](../service-fabric/index.yml). If you need more control over the VMs on which your code runs, consider [Azure Virtual Machines](../virtual-machines/index.yml). For more information about how to choose between these Azure services, see [Azure App Service, Virtual Machines, Service Fabric, and Cloud Services comparison](/azure/architecture/guide/technology-choices/compute-decision-tree). ## App Service on Linux Create your first web app. > [HTML](quickstart-html.md) > [!div class="nextstepaction"]-> [Custom container (Windows or Linux)](tutorial-custom-container.md) +> [Custom container (Windows or Linux)](tutorial-custom-container.md) |
app-service | Quickstart Wordpress | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-wordpress.md | -[WordPress](https://www.wordpress.org) is an open source content management system (CMS) used by over 40% of the web to create websites, blogs, and other applications. WordPress can be run on a few different Azure +[WordPress](https://www.wordpress.org) is an open source content management system (CMS) used by over 40% of the web to create websites, blogs, and other applications. WordPress can be run on a few different Azure -In this quickstart, you'll learn how to create and deploy your first [WordPress](https://www.wordpress.org/) site to [Azure App Service on Linux](overview.md#app-service-on-linux) with [Azure Database for MySQL - Flexible Server](/azure/mysql/flexible-server/) using the [WordPress Azure Marketplace item by App Service](https://azuremarketplace.microsoft.com/marketplace/apps/WordPress.WordPress?tab=Overview). This quickstart uses the **Basic** tier for your app and a **Burstable, B1ms** tier for your database, and incurs a cost for your Azure Subscription. For pricing, visit [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux/) and [Azure Database for MySQL pricing](https://azure.microsoft.com/pricing/details/mysql/flexible-server/). +In this quickstart, you'll learn how to create and deploy your first [WordPress](https://www.wordpress.org/) site to [Azure App Service on Linux](overview.md#app-service-on-linux) with [Azure Database for MySQL - Flexible Server](../mysql/flexible-server/index.yml) using the [WordPress Azure Marketplace item by App Service](https://azuremarketplace.microsoft.com/marketplace/apps/WordPress.WordPress?tab=Overview). This quickstart uses the **Basic** tier for your app and a **Burstable, B1ms** tier for your database, and incurs a cost for your Azure Subscription. For pricing, visit [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux/) and [Azure Database for MySQL pricing](https://azure.microsoft.com/pricing/details/mysql/flexible-server/). To complete this quickstart, you need an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs). When no longer needed, you can delete the resource group, App service, and all r ## Manage the MySQL flexible server, username, or password -- The MySQL Flexible Server is created behind a private [Virtual Network](/azure/virtual-network/virtual-networks-overview.md) and can't be accessed directly. To access or manage the database, use phpMyAdmin that's deployed with the WordPress site. You can access phpMyAdmin by following these steps:+- The MySQL Flexible Server is created behind a private [Virtual Network](../virtual-network/virtual-networks-overview.md) and can't be accessed directly. To access or manage the database, use phpMyAdmin, which is deployed with the WordPress site. You can access phpMyAdmin by following these steps: - Navigate to the URL: https://`<sitename>`.azurewebsites.net/phpmyadmin - Log in with the flexible server's username and password - Database username and password of the MySQL Flexible Server are generated automatically. To retrieve these values after the deployment, go to the Application Settings section of the Configuration page in Azure App Service. 
The WordPress configuration is modified to use these [Application Settings](reference-app-settings.md#wordpress) to connect to the MySQL database. -- To change the MySQL database password, see [Reset admin password](/azure/mysql/flexible-server/how-to-manage-server-portal#reset-admin-password). Whenever the MySQL database credentials are changed, the [Application Settings](reference-app-settings.md#wordpress) need to be updated. The [Application Settings for MySQL database](reference-app-settings.md#wordpress) begin with the **`DATABASE_`** prefix. For more information on updating MySQL passwords, see [WordPress on App Service](https://github.com/Azure/wordpress-linux-appservice/blob/main/WordPress/changing_mysql_database_password.md).+- To change the MySQL database password, see [Reset admin password](../mysql/flexible-server/how-to-manage-server-portal.md#reset-admin-password). Whenever the MySQL database credentials are changed, the [Application Settings](reference-app-settings.md#wordpress) need to be updated. The [Application Settings for MySQL database](reference-app-settings.md#wordpress) begin with the **`DATABASE_`** prefix. For more information on updating MySQL passwords, see [WordPress on App Service](https://github.com/Azure/wordpress-linux-appservice/blob/main/WordPress/changing_mysql_database_password.md). ## Change WordPress admin password Congratulations, you've successfully completed this quickstart! > [Tutorial: PHP app with MySQL](tutorial-php-mysql-app.md) > [!div class="nextstepaction"]-> [Configure PHP app](configure-language-php.md) +> [Configure PHP app](configure-language-php.md) |
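As a sketch of that rotation flow, the server password can be reset and the matching app setting updated from the Azure CLI. The `DATABASE_PASSWORD` setting name is an assumption based on the documented `DATABASE_` prefix; confirm the exact setting names in your app's configuration:

```azurecli-interactive
# Reset the MySQL flexible server administrator password
az mysql flexible-server update --resource-group <resource-group> \
    --name <server-name> --admin-password <new-password>

# Update the matching App Service application setting (assumed name)
az webapp config appsettings set --resource-group <resource-group> \
    --name <app-name> --settings DATABASE_PASSWORD=<new-password>
```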
app-service | Reference Dangling Subdomain Prevention | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/reference-dangling-subdomain-prevention.md | + + Title: Prevent subdomain takeovers +description: Describes options for dangling subdomain prevention on Azure App Service. + Last updated : 10/14/2022+++++# What is a subdomain takeover? ++Subdomain takeovers are a common threat for organizations that regularly create and delete many resources. A subdomain takeover can occur when you have a DNS record that points to a deprovisioned Azure resource. Such DNS records are also known as "dangling DNS" entries. Subdomain takeovers enable malicious actors to redirect traffic intended for an organization's domain to a site performing malicious activity. ++The risks of subdomain takeover include: ++- Loss of control over the content of the subdomain +- Cookie harvesting from unsuspecting visitors +- Phishing campaigns +- Further risks of classic attacks such as XSS, CSRF, CORS bypass ++Learn more about subdomain takeovers at [Dangling DNS and subdomain takeover](../security/fundamentals/subdomain-takeover.md). ++Azure App Service provides the [Name Reservation](#how-name-reservation-service-works) service and [domain verification tokens](#domain-verification-token) to prevent subdomain takeovers. +## How Name Reservation Service works ++Upon deletion of an App Service app, the corresponding DNS name is reserved. During the reservation period, re-use of the DNS name is forbidden except for subscriptions belonging to the tenant of the subscription that originally owned it. ++After the reservation expires, the DNS name is free to be claimed by any subscription. The Name Reservation Service gives the customer time to either clean up any associations/pointers to the DNS name or re-claim it in Azure. The reserved DNS name can be derived by appending 'azurewebsites.net'. Name Reservation Service is enabled by default on Azure App Service and doesn't require additional configuration. ++#### Example scenario ++Subscription 'A' and subscription 'B' are the only subscriptions belonging to tenant 'AB'. Subscription 'A' contains an App Service app 'test' with DNS name 'test.azurewebsites.net'. Upon deletion of the app, a reservation is taken on DNS name 'test.azurewebsites.net'. ++During the reservation period, only subscription 'A' or subscription 'B' will be able to claim the DNS name 'test.azurewebsites.net' by creating a web app named 'test'. No other subscriptions will be allowed to claim it. After the reservation period is complete, any subscription in Azure can claim 'test.azurewebsites.net'. +++## Domain verification token ++When creating DNS entries for Azure App Service, create an asuid.{subdomain} TXT record with the Domain Verification ID. When such a TXT record exists, no other Azure subscription can validate the custom domain or take it over unless they add their token verification ID to the DNS entries. ++These records prevent the creation of another App Service app using the same name from your CNAME entry. Without the ability to prove ownership of the domain name, threat actors can't receive traffic or control the content. ++DNS records should be updated before the site deletion to ensure bad actors can't take over the domain between deletion and re-creation. Be aware that DNS records take time to propagate. 
++To get a domain verification ID, see the [Map a custom domain tutorial](app-service-web-tutorial-custom-domain.md#2-get-a-domain-verification-id). |
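For a zone hosted in Azure DNS, the verification record can be created from the CLI. A sketch, assuming a hypothetical `contoso.com` zone and an `app` subdomain:

```azurecli-interactive
az network dns record-set txt add-record --resource-group <resource-group> \
    --zone-name contoso.com --record-set-name asuid.app \
    --value "<domain-verification-id>"
```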
app-service | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure App Service description: Lists Azure Policy Regulatory Compliance controls available for Azure App Service. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/10/2022 Last updated : 10/12/2022 |
app-service | Tutorial Java Tomcat Connect Managed Identity Postgresql Database | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-tomcat-connect-managed-identity-postgresql-database.md | -[Azure App Service](overview.md) provides a highly scalable, self-patching web hosting service in Azure. It also provides a [managed identity](overview-managed-identity.md) for your app, which is a turn-key solution for securing access to [Azure Database for PostgreSQL](/azure/postgresql/) and other Azure services. Managed identities in App Service make your app more secure by eliminating secrets from your app, such as credentials in the environment variables. In this tutorial, you will learn how to: +[Azure App Service](overview.md) provides a highly scalable, self-patching web hosting service in Azure. It also provides a [managed identity](overview-managed-identity.md) for your app, which is a turn-key solution for securing access to [Azure Database for PostgreSQL](../postgresql/index.yml) and other Azure services. Managed identities in App Service make your app more secure by eliminating secrets from your app, such as credentials in the environment variables. In this tutorial, you will learn how to: > [!div class="checklist"] > * Create a PostgreSQL database. az webapp browse \ Learn more about running Java apps on App Service on Linux in the developer guide. > [!div class="nextstepaction"]-> [Java in App Service Linux dev guide](configure-language-java.md?pivots=platform-linux) +> [Java in App Service Linux dev guide](configure-language-java.md?pivots=platform-linux) |
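If the managed identity isn't already enabled on the app, a minimal sketch with the Azure CLI follows (names are placeholders; the tutorial's own steps remain the authoritative path):

```azurecli-interactive
# Enable the system-assigned managed identity on the web app
az webapp identity assign --resource-group <resource-group> --name <app-name>
```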
application-gateway | Application Gateway Ilb Arm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-ilb-arm.md | If you want to configure SSL offload, see [Configure an application gateway for If you want more information about load balancing options in general, see: -* [Azure Load Balancer](/azure/load-balancer/) -* [Azure Traffic Manager](/azure/traffic-manager/) +* [Azure Load Balancer](../load-balancer/index.yml) +* [Azure Traffic Manager](../traffic-manager/index.yml) |
applied-ai-services | Concept Invoice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-invoice.md | See how data, including customer information, vendor details, and line items, is | PaymentTerm | String | The terms of payment for the invoice | | | SubTotal | Number | Subtotal field identified on this invoice | Integer | | TotalTax | Number | Total tax field identified on this invoice | Integer |-| TotalVAT | Number | Total VAT field identified on this invoice | Integer | | InvoiceTotal | Number (USD) | Total new charges associated with this invoice | Integer | | AmountDue | Number (USD) | Total Amount Due to the vendor | Integer | | ServiceAddress | String | Explicit service address or property address for the customer | | Following are the line items extracted from an invoice in the JSON output respon | Unit | String| The unit of the line item, e.g., kg, lb, etc. | Hours | | | Date | Date| Date corresponding to each line item. Often it's the date the line item was shipped | 3/4/2021| 2021-03-04 | | Tax | Number | Tax associated with each line item. Possible values include tax amount, tax %, and tax Y/N | 10% | |-| VAT | Number | Stands for Value added tax. VAT is a flat tax levied on an item. Common in European countries | €20.00 | | The invoice key-value pairs and line items extracted are in the `documentResults` section of the JSON output. |
applied-ai-services | Sdk Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/sdk-overview.md | There are two supported methods for authentication * Use a [Form Recognizer API key](#use-your-api-key) with AzureKeyCredential from azure.core.credentials. -* Use a [token credential from azure-identity](#use-an-azure-active-directory-azure-ad-token-credential) to authenticate with [Azure Active Directory](/azure/active-directory/fundamentals/active-directory-whatis). +* Use a [token credential from azure-identity](#use-an-azure-active-directory-azure-ad-token-credential) to authenticate with [Azure Active Directory](../../active-directory/fundamentals/active-directory-whatis.md). #### Use your API key async function main() { #### Use an Azure Active Directory (Azure AD) token credential > [!NOTE]-> Regional endpoints do not support AAD authentication. Create a [custom subdomain](/azure/cognitive-services/authentication?tabs=powershell#create-a-resource-with-a-custom-subdomain) for your resource in order to use this type of authentication. +> Regional endpoints do not support AAD authentication. Create a [custom subdomain](../../cognitive-services/authentication.md?tabs=powershell#create-a-resource-with-a-custom-subdomain) for your resource in order to use this type of authentication. Authorization is easiest using the `DefaultAzureCredential`. It provides a default token credential, based upon the running environment, capable of handling most Azure authentication scenarios. Here's how to acquire and use the [DefaultAzureCredential](/dotnet/api/azure.ide Install-Package Azure.Identity ``` -1. [Register an Azure AD application and create a new service principal](/azure/cognitive-services/authentication?tabs=powershell#assign-a-role-to-a-service-principal). +1. [Register an Azure AD application and create a new service principal](../../cognitive-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal). 1. Grant access to Form Recognizer by assigning the **`Cognitive Services User`** role to your service principal. Here's how to acquire and use the [DefaultAzureCredential](/java/api/com.azure.i </dependency> ``` -1. [Register an Azure AD application and create a new service principal](/azure/cognitive-services/authentication?tabs=powershell#assign-a-role-to-a-service-principal). +1. [Register an Azure AD application and create a new service principal](../../cognitive-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal). 1. Grant access to Form Recognizer by assigning the **`Cognitive Services User`** role to your service principal. Here's how to acquire and use the [DefaultAzureCredential](/javascript/api/@azur npm install @azure/identity ``` -1. [Register an Azure AD application and create a new service principal](/azure/cognitive-services/authentication?tabs=powershell#assign-a-role-to-a-service-principal). +1. [Register an Azure AD application and create a new service principal](../../cognitive-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal). 1. Grant access to Form Recognizer by assigning the **`Cognitive Services User`** role to your service principal. Here's how to acquire and use the [DefaultAzureCredential](/python/api/azure-ide ```python pip install azure-identity ```-1. [Register an Azure AD application and create a new service principal](/azure/cognitive-services/authentication?tabs=powershell#assign-a-role-to-a-service-principal). +1. 
[Register an Azure AD application and create a new service principal](../../cognitive-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal). 1. Grant access to Form Recognizer by assigning the **`Cognitive Services User`** role to your service principal. The [Microsoft Q&A](/answers/topics/azure-form-recognizer.html) and [Stack Overf > [**Try a Form Recognizer quickstart**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) > [!div class="nextstepaction"]-> [**Explore the Form Recognizer REST API v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) +> [**Explore the Form Recognizer REST API v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) |
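As a hedged sketch of the service principal steps referenced above, using the Azure CLI (the names and scope are placeholders):

```azurecli-interactive
# Register an application and create a service principal
az ad sp create-for-rbac --name <app-name>

# Grant the service principal access to the Form Recognizer resource
az role assignment create --assignee <appId-from-previous-output> \
    --role "Cognitive Services User" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.CognitiveServices/accounts/<form-recognizer-resource>"
```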
applied-ai-services | Tutorial Azure Function | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/tutorial-azure-function.md | In this tutorial, you learn how to: * [**Azure Functions extension**](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions). Once it's installed, you should see the Azure logo in the left-navigation pane. - * [**Azure Functions Core Tools**](/azure/azure-functions/functions-run-local?tabs=v3%2Cwindows%2Ccsharp%2Cportal%2Cbash) version 3.x (Version 4.x isn't supported for this project). + * [**Azure Functions Core Tools**](../../azure-functions/functions-run-local.md?tabs=v3%2cwindows%2ccsharp%2cportal%2cbash) version 3.x (Version 4.x isn't supported for this project). * [**Python Extension**](https://marketplace.visualstudio.com/items?itemName=ms-python.python) for Visual Studio code. For more information, *see* [Getting Started with Python in VS Code](https://code.visualstudio.com/docs/python/python-tutorial) In this tutorial, you learned how to use an Azure Function written in Python to > [Microsoft Power BI](https://powerbi.microsoft.com/integrations/azure-table-storage/) * [What is Form Recognizer?](overview.md)-* Learn more about the [layout model](concept-layout.md) +* Learn more about the [layout model](concept-layout.md) |
automation | Add User Assigned Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/add-user-assigned-identity.md | If you don't have an Azure subscription, create a [free account](https://azure.m ## Prerequisites -- An Azure Automation account. For instructions, see [Create an Azure Automation account](/azure/automation/quickstarts/create-azure-automation-account-portal).+- An Azure Automation account. For instructions, see [Create an Azure Automation account](./quickstarts/create-azure-automation-account-portal.md). - The user-assigned managed identity and the target Azure resources that your runbook manages using that identity can be in different Azure subscriptions. |
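A minimal sketch of creating the user-assigned managed identity prerequisite with the Azure CLI (names are placeholders):

```azurecli-interactive
az identity create --resource-group <resource-group> --name <identity-name>
```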
automation | Automation Availability Zones | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-availability-zones.md | If a zone goes down, no action is required from you to recover from the zone failure. - All existing Automation accounts become zone redundant automatically. It requires no action from your end. - In a zone-down scenario, you might expect a brief performance degradation until the service self-healing rebalances the underlying capacity to adjust to healthy zones. This isn't dependent on zone restoration; the service self-healing state will compensate for a lost zone, using the capacity from other zones. - In a zone-wide failure scenario, you must follow the guidance provided to set up disaster recovery for Automation accounts in a secondary region. - Availability zone support for Automation accounts supports only the [Process Automation](./overview.md#process-automation) feature, to provide improved resiliency for runbook automation. ## Supported regions with availability zones Automation accounts currently support the following regions in preview: ## Create a zone redundant Automation account You can create a zone redundant Automation account using:- [Azure portal](./automation-create-standalone-account.md?tabs=azureportal)- [Azure Resource Manager (ARM) template](./quickstart-create-automation-account-template.md) > [!Note] > There is no option to select or see Availability zone in the creation flow of the Automation Accounts. It's a default setting enabled and managed at the service level. |
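Because zone redundancy is applied by the service, creating the account in a supported region is all that's needed. A sketch using the Azure CLI `automation` extension (this assumes the extension is available; names are placeholders):

```azurecli-interactive
# One-time setup, if the extension isn't already installed
az extension add --name automation

# Create the Automation account; zone redundancy is handled by the service
az automation account create --resource-group <resource-group> \
    --name <automation-account> --location <supported-region>
```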
automation | Automation Managed Identity Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-managed-identity-faq.md | Automation Run As accounts will be supported until *September 30, 2023*. Althoug Existing users can still create a Run As account. You can go to the account properties and renew a certificate upon expiration until *January 30, 2023*. After that date, you won't be able to create a Run As account from the Azure portal. -You'll still be able to create a Run As account through a [PowerShell script](/azure/automation/create-run-as-account#create-account-using-powershell) until support ends. You can [use this script](https://github.com/azureautomation/runbooks/blob/master/Utility/AzRunAs/RunAsAccountAssessAndRenew.ps1) to renew the certificate after *January 30, 2023*, until *September 30, 2023*. This script will assess the Automation account that has configured Run As accounts and renew the certificate if you choose to do so. On confirmation, the script will renew the key credentials of the Azure Active Directory (Azure AD) app and upload new a self-signed certificate to the Azure AD app. +You'll still be able to create a Run As account through a [PowerShell script](./create-run-as-account.md#create-account-using-powershell) until support ends. You can [use this script](https://github.com/azureautomation/runbooks/blob/master/Utility/AzRunAs/RunAsAccountAssessAndRenew.ps1) to renew the certificate after *January 30, 2023*, until *September 30, 2023*. This script will assess Automation accounts that have Run As accounts configured and renew the certificate if you choose to do so. On confirmation, the script will renew the key credentials of the Azure Active Directory (Azure AD) app and upload a new self-signed certificate to the Azure AD app. ## Will existing runbooks that use the Run As account be able to authenticate? Yes, they'll be able to authenticate. There will be no impact to existing runbooks that use a Run As account. Yes, the runbooks will be able to authenticate until the Run As account certific ## What is a managed identity? Applications use managed identities in Azure AD when they're connecting to resources that support Azure AD authentication. Applications can use managed identities to obtain Azure AD tokens without managing credentials, secrets, certificates, or keys. -For more information about managed identities in Azure AD, see [Managed identities for Azure resources](/azure/active-directory/managed-identities-azure-resources/overview). +For more information about managed identities in Azure AD, see [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md). ## What can I do with a managed identity in Automation accounts? An Azure Automation managed identity from Azure AD allows your runbook to access other Azure AD-protected resources easily. This identity is managed by the Azure platform and doesn't require you to provision or rotate any secrets. A Run As account creates an Azure AD app that's used to manage the resources wit Run As accounts also have a management overhead that involves creating a service principal, Run As certificate, Run As connection, certificate renewal, and so on. Managed identities eliminate this overhead by providing a secure method for users to authenticate and access resources that support Azure AD authentication without worrying about any certificate or credential management. 
## Can a managed identity be used for both cloud and hybrid jobs?-Azure Automation supports [system-assigned managed identities](./automation-security-overview.md#managed-identities) for both cloud and hybrid jobs. Currently, Azure Automation [user-assigned managed identities](./automation-security-overview.md) can be used for cloud jobs only and can't be used for jobs that run on a hybrid worker. ## Can I use a Run As account for a new Automation account?-Yes, but only in a scenario where managed identities aren't supported for specific on-premises resources. We'll allow the creation of a Run As account through a [PowerShell script](./create-run-as-account.md#create-account-using-powershell). ## How can I migrate from an existing Run As account to a managed identity?-Follow the steps in [Migrate an existing Run As account to a managed identity](./migrate-run-as-accounts-managed-identity.md). ## How do I see the runbooks that are using a Run As account and know what permissions are assigned to that account? Use [this script](https://github.com/azureautomation/runbooks/blob/master/Utility/AzRunAs/Check-AutomationRunAsAccountRoleAssignments.ps1) to find out which Automation accounts are using a Run As account. If your Azure Automation accounts contain a Run As account, it will have the built-in Contributor role assigned to it by default. You can use the script to check the Azure Automation Run As accounts and determine if their role assignment is the default one or if it has been changed to a different role definition. Use [this script](https://github.com/azureautomation/runbooks/blob/master/Utilit If your question isn't answered here, you can refer to the following sources for more questions and answers: -- [Azure Automation](https://docs.microsoft.com/answers/topics/azure-automation.html)+- [Azure Automation](/answers/topics/azure-automation.html) - [Feedback forum](https://feedback.azure.com/d365community/forum/721a322e-bd25-ec11-b6e6-000d3a4f0f1c) |
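When replacing a Run As account's default Contributor assignment, grant the managed identity only the roles its runbooks need. A hedged sketch with the Azure CLI; the `identity.principalId` query assumes a system-assigned identity is already enabled on the account:

```azurecli-interactive
# Look up the Automation account's system-assigned identity
principalId=$(az automation account show --resource-group <resource-group> \
    --name <automation-account> --query identity.principalId --output tsv)

# Assign a narrowly scoped role instead of the old default Contributor role
az role assignment create --assignee-object-id "$principalId" \
    --assignee-principal-type ServicePrincipal --role "Reader" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
```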
automation | Automation Security Guidelines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-security-guidelines.md | -> Azure Automation Run As Account will retire on September 30, 2023 and will be replaced with Managed Identities. Before that date, you'll need to start migrating your runbooks to use [managed identities](automation-security-overview.md#managed-identities). For more information, see [migrating from an existing Run As accounts to managed identity](https://learn.microsoft.com/azure/automation/migrate-run-as-accounts-managed-identity?tabs=run-as-account#sample-scripts) to start migrating the runbooks from Run As account to managed identities before 30 September 2023. +> Azure Automation Run As Account will retire on September 30, 2023 and will be replaced with Managed Identities. Before that date, you'll need to start migrating your runbooks to use [managed identities](automation-security-overview.md#managed-identities). For more information, see [migrating from an existing Run As account to a managed identity](migrate-run-as-accounts-managed-identity.md?tabs=run-as-account#sample-scripts) to start migrating the runbooks from Run As accounts to managed identities before September 30, 2023. This article details the best practices to securely execute automation jobs. [Azure Automation](./overview.md) provides you with the platform to orchestrate frequent, time-consuming, error-prone infrastructure management and operational tasks, as well as mission-critical operations. This service allows you to execute scripts, known as automation runbooks, seamlessly across cloud and hybrid environments. -The platform components of Azure Automation Service are actively secured and hardened. The service goes through robust security and compliance checks. [Azure security benchmark](/security/benchmark/azure/overview) details the best practices and recommendations to help improve the security of workloads, data, and services on Azure. Also see [Azure security baseline for Azure Automation](/security/benchmark/azure/baselines/automation-security-baseline?toc=/azure/automation/TOC.json). +The platform components of Azure Automation Service are actively secured and hardened. The service goes through robust security and compliance checks. The [Microsoft cloud security benchmark](/security/benchmark/azure/overview) details the best practices and recommendations to help improve the security of workloads, data, and services on Azure. Also see [Azure security baseline for Azure Automation](/security/benchmark/azure/baselines/automation-security-baseline?toc=/azure/automation/TOC.json). ## Secure configuration of Automation account |
automation | Automation Security Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-security-overview.md | This article covers authentication scenarios supported by Azure Automation and t When you start Azure Automation for the first time, you must create at least one Automation account. Automation accounts allow you to isolate your Automation resources, runbooks, assets, and configurations from the resources of other accounts. You can use Automation accounts to separate resources into separate logical environments or delegated responsibilities. For example, you might use one account for development, another for production, and another for your on-premises environment. Or you might dedicate an Automation account to manage operating system updates across all of your machines with [Update Management](update-management/overview.md). -An Azure Automation account is different from your Microsoft account or accounts created in your Azure subscription. For an introduction to creating an Automation account, see [Create an Automation account](/azure/automation/quickstarts/create-azure-automation-account-portal). +An Azure Automation account is different from your Microsoft account or accounts created in your Azure subscription. For an introduction to creating an Automation account, see [Create an Automation account](./quickstarts/create-azure-automation-account-portal.md). ## Automation resources For runbooks that use Hybrid Runbook Workers on Azure VMs, you can use [runbook * To create an Automation account from the Azure portal, see [Create a standalone Azure Automation account](automation-create-standalone-account.md). * If you prefer to create your account using a template, see [Create an Automation account using an Azure Resource Manager template](quickstart-create-automation-account-template.md). * For authentication using Amazon Web Services, see [Authenticate runbooks with Amazon Web Services](automation-config-aws-account.md).-* For a list of Azure services that support the managed identities for Azure resources feature, see [Services that support managed identities for Azure resources](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md). +* For a list of Azure services that support the managed identities for Azure resources feature, see [Services that support managed identities for Azure resources](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md). |
automation | Enable Managed Identity For Automation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/enable-managed-identity-for-automation.md | If you don't have an Azure subscription, create a [free account](https://azure.m ## Prerequisites -- An Azure Automation account. For instructions, see [Create an Azure Automation account](/azure/automation/quickstarts/create-azure-automation-account-portal).+- An Azure Automation account. For instructions, see [Create an Azure Automation account](./quickstarts/create-azure-automation-account-portal.md). - The latest version of Az PowerShell modules Az.Accounts, Az.Resources, Az.Automation, Az.KeyVault. |
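The prerequisites above call for current versions of four Az modules; a hedged sketch of installing or updating them (assumes PowerShellGet is available in the session):

```powershell
# Install or update the Az modules named in the prerequisites,
# then confirm what is installed.
$modules = 'Az.Accounts', 'Az.Resources', 'Az.Automation', 'Az.KeyVault'
foreach ($module in $modules) {
    Install-Module -Name $module -Scope CurrentUser -Force
}
Get-Module -ListAvailable -Name $modules | Select-Object Name, Version
```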
automation | Manage Office 365 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/manage-office-365.md | Use of Office 365 within Azure Automation requires Microsoft Azure Active Direct ## Create an Azure Automation account -To complete the steps in this article, you need an account in Azure Automation. See [Create an Azure Automation account](/azure/automation/quickstarts/create-azure-automation-account-portal). +To complete the steps in this article, you need an account in Azure Automation. See [Create an Azure Automation account](./quickstarts/create-azure-automation-account-portal.md). ## Add MSOnline and MSOnlineExt as assets |
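One way to add MSOnline and MSOnlineExt as module assets is `New-AzAutomationModule`; this is a hedged sketch, and the PowerShell Gallery URLs and account names are assumptions to adapt to your environment.

```powershell
# Sketch: import the MSOnline and MSOnlineExt modules into an
# Automation account (account/resource names and URLs are placeholders).
foreach ($name in 'MSOnline', 'MSOnlineExt') {
    New-AzAutomationModule -ResourceGroupName 'rg-automation' `
        -AutomationAccountName 'aa-production' `
        -Name $name `
        -ContentLinkUri "https://www.powershellgallery.com/api/v2/package/$name"
}
```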
automation | Migrate Run As Accounts Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/migrate-run-as-accounts-managed-identity.md | Title: Migrate from a Run As account to a managed identity description: This article describes how to migrate from a Run As account to a managed identity in Azure Automation. Previously updated : 04/27/2022 Last updated : 10/17/2022 To migrate from an Automation Run As account to a managed identity for your runb For managed identity support, use the `Connect-AzAccount` cmdlet. To learn more about this cmdlet, see [Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount?branch=main&view=azps-8.3.0) in the PowerShell reference.
 - - If you're using Az modules, update to the latest version by following the steps in the [Update Azure PowerShell modules](/azure/automation/automation-update-azure-modules?branch=main#update-az-modules) article. + - If you're using Az modules, update to the latest version by following the steps in the [Update Azure PowerShell modules](./automation-update-azure-modules.md?branch=main#update-az-modules) article.
 - If you're using AzureRM modules, update `AzureRM.Profile` to the latest version and replace the `Add-AzureRMAccount` cmdlet with `Connect-AzureRMAccount -Identity`. To understand the changes to the runbook code that are required before you can use managed identities, use the [sample scripts](#sample-scripts). The following examples of runbook scripts fetch the Resource Manager resources b > Enable appropriate RBAC permissions for the system identity of this Automation account. Otherwise, the runbook might fail. ```powershell+ try { "Logging in to Azure..." Connect-AzAccount -Identity The following examples of runbook scripts fetch the Resource Manager resources b # [User-assigned managed identity](#tab/ua-managed-identity) ```powershell+try { "Logging in to Azure..." For more information, see the sample runbook name **AzureAutomationTutorialWithI - For information about Azure Automation account security, see [Azure Automation account authentication overview](automation-security-overview.md). - |
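The diff above shows only fragments of the updated runbook; a minimal, hedged reconstruction of the system-assigned pattern it describes looks like this (the resource query at the end is purely illustrative).

```powershell
# Authenticate inside the runbook with the Automation account's
# system-assigned managed identity instead of a Run As account.
try {
    "Logging in to Azure..."
    Connect-AzAccount -Identity | Out-Null
}
catch {
    Write-Error -Message $_.Exception.Message
    throw $_.Exception
}
# Illustrative call that fetches Resource Manager resources.
Get-AzResource | Select-Object -First 10 Name, ResourceType
```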
automation | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/overview.md | You can review the prices associated with Azure Automation on the [pricing](http ## Next steps > [!div class="nextstepaction"]-> [Create an Automation account](/azure/automation/quickstarts/create-azure-automation-account-portal) +> [Create an Automation account](./quickstarts/create-azure-automation-account-portal.md) |
automation | Enable Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/quickstarts/enable-managed-identity.md | This Quickstart shows you how to enable managed identities for an Azure Automati - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- An Azure Automation account. For instructions, see [Create an Automation account](/azure/automation/quickstarts/create-azure-automation-account-portal).+- An Azure Automation account. For instructions, see [Create an Automation account](./create-azure-automation-account-portal.md).
- A user-assigned managed identity. For instructions, see [Create a user-assigned managed identity](../../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md#create-a-user-assigned-managed-identity). The user-assigned managed identity and the target Azure resources that your runbook manages using that identity must be in the same Azure subscription. If you no longer need the system-assigned managed identity enabled for your Auto In this Quickstart, you enabled managed identities for an Azure Automation account. To use your Automation account with managed identities to execute a runbook, see the following tutorial. > [!div class="nextstepaction"]-> [Tutorial: Create Automation PowerShell runbook using managed identity](../learn/powershell-runbook-managed-identity.md) +> [Tutorial: Create Automation PowerShell runbook using managed identity](../learn/powershell-runbook-managed-identity.md) |
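For the system-assigned path, a hedged PowerShell equivalent of the portal steps is below; it assumes a recent Az.Automation version that exposes the `-AssignSystemIdentity` switch, and the names are placeholders.

```powershell
# Enable the system-assigned managed identity on an existing
# Automation account.
Set-AzAutomationAccount -ResourceGroupName 'rg-automation' `
    -Name 'aa-production' `
    -AssignSystemIdentity
```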
automation | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Automation description: Lists Azure Policy Regulatory Compliance controls available for Azure Automation. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/10/2022 Last updated : 10/12/2022 |
azure-app-configuration | Concept Geo Replication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-geo-replication.md | This team would benefit from geo-replication. They can create a replica of their - Geo-replication isn't available in the free tier. - Each replica has limits, as outlined in the [App Configuration pricing page](https://azure.microsoft.com/pricing/details/app-configuration/). These limits are isolated per replica. - Azure App Configuration also supports Azure availability zones to create a resilient and highly available store within an Azure Region. Availability zone support is automatically included for a replica if the replica's region has availability zone support. The combination of availability zones for redundancy within a region, and geo-replication across multiple regions, enhances both the availability and performance of a configuration store.-- Currently, you can only authenticate with replica endpoints with [Azure Active Directory (Azure AD)](/azure/app-service/overview-managed-identity).+- Currently, you can only authenticate with replica endpoints with [Azure Active Directory (Azure AD)](../app-service/overview-managed-identity.md). <!-- To add once these links become available: - Request handling for replicas will vary by configuration provider, for further information reference [.NET Geo-replication Reference](https://azure.microsoft.com/pricing/details/app-configuration/) and [Java Geo-replication Reference](https://azure.microsoft.com/pricing/details/app-configuration/). Each replica created will add extra charges. Reference the [App Configuration pr > [!div class="nextstepaction"] > [How to enable Geo replication](./howto-geo-replication.md) -> [Resiliency and Disaster Recovery](./concept-disaster-recovery.md) +> [Resiliency and Disaster Recovery](./concept-disaster-recovery.md) |
azure-app-configuration | Quickstart Aspnet Core App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-aspnet-core-app.md | dotnet new webapp --output TestAppConfig --framework netcoreapp3.1 > [!TIP] > Some shells will truncate the connection string unless it's enclosed in quotes. Ensure that the output of the `dotnet user-secrets list` command shows the entire connection string. If it doesn't, rerun the command, enclosing the connection string in quotes. - Secret Manager stores the secret outside of your project tree, which helps prevent the accidental sharing of secrets within source code. It's used only to test the web app locally. When the app is deployed to Azure like [App Service](/azure/app-service/overview), use the *Connection strings*, *Application settings* or environment variables to store the connection string. Alternatively, to avoid connection strings all together, you can [connect to App Configuration using managed identities](./howto-integrate-azure-managed-service-identity.md) or your other [Azure AD identities](./concept-enable-rbac.md). + Secret Manager stores the secret outside of your project tree, which helps prevent the accidental sharing of secrets within source code. It's used only to test the web app locally. When the app is deployed to an Azure service like [App Service](../app-service/overview.md), use the *Connection strings*, *Application settings* or environment variables to store the connection string. Alternatively, to avoid connection strings altogether, you can [connect to App Configuration using managed identities](./howto-integrate-azure-managed-service-identity.md) or your other [Azure AD identities](./concept-enable-rbac.md). 1. Open *Program.cs*, and add Azure App Configuration as an extra configuration source by calling the `AddAzureAppConfiguration` method. In this quickstart, you: To learn how to configure your ASP.NET Core web app to dynamically refresh configuration settings, continue to the next tutorial. > [!div class="nextstepaction"]-> [Enable dynamic configuration](./enable-dynamic-configuration-aspnet-core.md) +> [Enable dynamic configuration](./enable-dynamic-configuration-aspnet-core.md) |
azure-app-configuration | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure App Configuration description: Lists Azure Policy Regulatory Compliance controls available for Azure App Configuration. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/10/2022 Last updated : 10/12/2022 |
azure-arc | Private Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/private-link.md | -[Azure Private Link](/azure/private-link/private-link-overview) allows you to securely link Azure services to your virtual network using private endpoints. This means you can connect your on-premises Kubernetes clusters with Azure Arc and send all traffic over an Azure ExpressRoute or site-to-site VPN connection instead of using public networks. In Azure Arc, you can use a Private Link Scope model to allow multiple Kubernetes clusters to communicate with their Azure Arc resources using a single private endpoint. +[Azure Private Link](../../private-link/private-link-overview.md) allows you to securely link Azure services to your virtual network using private endpoints. This means you can connect your on-premises Kubernetes clusters with Azure Arc and send all traffic over an Azure ExpressRoute or site-to-site VPN connection instead of using public networks. In Azure Arc, you can use a Private Link Scope model to allow multiple Kubernetes clusters to communicate with their Azure Arc resources using a single private endpoint. This document covers when to use and how to set up Azure Arc Private Link (preview). With Private Link you can: * Securely connect your private on-premises network to Azure Arc using ExpressRoute and Private Link. * Keep all traffic inside the Microsoft Azure backbone network. -For more information, see [Key benefits of Azure Private Link](/azure/private-link/private-link-overview#key-benefits). +For more information, see [Key benefits of Azure Private Link](../../private-link/private-link-overview.md#key-benefits). ## How it works Azure Arc Private Link Scope connects private endpoints (and the virtual networks they're contained in) to an Azure resource, in this case Azure Arc-enabled Kubernetes clusters. When you enable any one of the Arc-enabled Kubernetes cluster supported extensions, such as Azure Monitor, then connection to other Azure resources may be required for these scenarios. For example, in the case of Azure Monitor, the logs collected from the cluster are sent to Log Analytics workspace. -Connectivity to the other Azure resources from an Arc-enabled Kubernetes cluster listed earlier requires configuring Private Link for each service. For an example, see [Private Link for Azure Monitor](/azure/azure-monitor/logs/private-link-security). +Connectivity to the other Azure resources from an Arc-enabled Kubernetes cluster listed earlier requires configuring Private Link for each service. For an example, see [Private Link for Azure Monitor](../../azure-monitor/logs/private-link-security.md). ## Current limitations Consider these current limitations when planning your Private Link setup. * You can associate at most one Azure Arc Private Link Scope with a virtual network. * An Azure Arc-enabled Kubernetes cluster can only connect to one Azure Arc Private Link Scope.-* All on-premises Kubernetes clusters need to use the same private endpoint by resolving the correct private endpoint information (FQDN record name and private IP address) using the same DNS forwarder. For more information, see [Azure Private Endpoint DNS configuration](/azure/private-link/private-endpoint-dns). The Azure Arc-enabled Kubernetes cluster, Azure Arc Private Link Scope, and virtual network must be in the same Azure region. 
The Private Endpoint and the virtual network must also be in the same Azure region, but this region can be different from that of your Azure Arc Private Link Scope and Arc-enabled Kubernetes cluster. +* All on-premises Kubernetes clusters need to use the same private endpoint by resolving the correct private endpoint information (FQDN record name and private IP address) using the same DNS forwarder. For more information, see [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md). The Azure Arc-enabled Kubernetes cluster, Azure Arc Private Link Scope, and virtual network must be in the same Azure region. The Private Endpoint and the virtual network must also be in the same Azure region, but this region can be different from that of your Azure Arc Private Link Scope and Arc-enabled Kubernetes cluster.
* Traffic to Azure Active Directory, Azure Resource Manager and Microsoft Container Registry service tags must be allowed through your on-premises network firewall during the preview.
* Other Azure services that you will use, for example Azure Monitor, require their own private endpoints in your virtual network.
Consider these current limitations when planning your Private Link setup.
On Azure Arc-enabled Kubernetes clusters configured with private links, the following extensions support end-to-end connectivity through private links. Refer to the guidance linked to each cluster extension for additional configuration steps and details on support for private links.
* [Azure GitOps](conceptual-gitops-flux2.md)-* [Azure Monitor](/azure/azure-monitor/logs/private-link-security) +* [Azure Monitor](../../azure-monitor/logs/private-link-security.md)
## Planning your Private Link setup
To connect your Kubernetes cluster to Azure Arc over a private link, you need to configure your network to accomplish the following:
-1. Establish a connection between your on-premises network and an Azure virtual network using a [site-to-site VPN](/azure/vpn-gateway/tutorial-site-to-site-portal) or [ExpressRoute](/azure/expressroute/expressroute-howto-linkvnet-arm) circuit. +1. Establish a connection between your on-premises network and an Azure virtual network using a [site-to-site VPN](../../vpn-gateway/tutorial-site-to-site-portal.md) or [ExpressRoute](../../expressroute/expressroute-howto-linkvnet-arm.md) circuit.
1. Deploy an Azure Arc Private Link Scope, which controls which Kubernetes clusters can communicate with Azure Arc over private endpoints and associate it with your Azure virtual network using a private endpoint.
1. Update the DNS configuration on your local network to resolve the private endpoint addresses.
1. Configure your local firewall to allow access to Azure Active Directory, Azure Resource Manager and Microsoft Container Registry.
Azure Arc-enabled Kubernetes integrates with several Azure services to bring clo There are two ways you can achieve this:
-* If your network is configured to route all internet-bound traffic through the Azure VPN or ExpressRoute circuit, you can configure the network security group (NSG) associated with your subnet in Azure to allow outbound TCP 443 (HTTPS) access to Azure AD, Azure Resource Manager, Azure Front Door and Microsoft Container Registry using [service tags](/azure/virtual-network/service-tags-overview).
The NSG rules should look like the following: +* If your network is configured to route all internet-bound traffic through the Azure VPN or ExpressRoute circuit, you can configure the network security group (NSG) associated with your subnet in Azure to allow outbound TCP 443 (HTTPS) access to Azure AD, Azure Resource Manager, Azure Front Door and Microsoft Container Registry using [service tags](../../virtual-network/service-tags-overview.md). The NSG rules should look like the following: | Setting | Azure AD rule | Azure Resource Manager rule | AzureFrontDoorFirstParty rule | Microsoft Container Registry rule | |-|||| The Private Endpoint on your virtual network allows it to reach Azure Arc-enable :::image type="content" source="media/private-link/create-private-endpoint-2.png" alt-text="Screenshot of the Configuration step to create a private endpoint in the Azure portal."::: > [!NOTE]- > If you choose **No** and prefer to manage DNS records manually, first complete setting up your Private Link, including this private endpoint and the Private Scope configuration. Next, configure your DNS according to the instructions in [Azure Private Endpoint DNS configuration](/azure/private-link/private-endpoint-dns). Make sure not to create empty records as preparation for your Private Link setup. The DNS records you create can override existing settings and impact your connectivity with Arc-enabled Kubernetes clusters. + > If you choose **No** and prefer to manage DNS records manually, first complete setting up your Private Link, including this private endpoint and the Private Scope configuration. Next, configure your DNS according to the instructions in [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md). Make sure not to create empty records as preparation for your Private Link setup. The DNS records you create can override existing settings and impact your connectivity with Arc-enabled Kubernetes clusters. 1. Select **Review + create**. 1. Let validation pass. 1. Select **Create**. Your on-premises Kubernetes clusters need to be able to resolve the private link If you set up private DNS zones for Azure Arc-enabled Kubernetes clusters when creating the private endpoint, your on-premises Kubernetes clusters must be able to forward DNS queries to the built-in Azure DNS servers to resolve the private endpoint addresses correctly. You need a DNS forwarder in Azure (either a purpose-built VM or an Azure Firewall instance with DNS proxy enabled), after which you can configure your on-premises DNS server to forward queries to Azure to resolve private endpoint IP addresses. -The private endpoint documentation provides guidance for configuring [on-premises workloads using a DNS forwarder](/azure/private-link/private-endpoint-dns#on-premises-workloads-using-a-dns-forwarder). +The private endpoint documentation provides guidance for configuring [on-premises workloads using a DNS forwarder](../../private-link/private-endpoint-dns.md#on-premises-workloads-using-a-dns-forwarder). ### Manual DNS server configuration If you run into problems, the following suggestions may help: ## Next steps -* Learn more about [Azure Private Endpoint](/azure/private-link/private-link-overview). -* Learn how to [troubleshoot Azure Private Endpoint connectivity problems](/azure/private-link/troubleshoot-private-endpoint-connectivity). -* Learn how to [configure Private Link for Azure Monitor](/azure/azure-monitor/logs/private-link-security). 
+* Learn more about [Azure Private Endpoint](../../private-link/private-link-overview.md). +* Learn how to [troubleshoot Azure Private Endpoint connectivity problems](../../private-link/troubleshoot-private-endpoint-connectivity.md). +* Learn how to [configure Private Link for Azure Monitor](../../azure-monitor/logs/private-link-security.md). |
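To illustrate the service-tag NSG guidance in the row above, here is a hedged sketch that adds one outbound TCP 443 rule per required tag; the NSG and resource group names, rule names, and priorities are placeholders, and the tag list mirrors the article's table.

```powershell
# Sketch: allow outbound TCP 443 to the service tags the article calls out.
$nsg = Get-AzNetworkSecurityGroup -Name 'nsg-arc-subnet' -ResourceGroupName 'rg-network'
$priority = 200
foreach ($tag in 'AzureActiveDirectory', 'AzureResourceManager',
                 'AzureFrontDoor.FirstParty', 'MicrosoftContainerRegistry') {
    $nsg | Add-AzNetworkSecurityRuleConfig -Name "Allow-$($tag.Replace('.','-'))-443" `
        -Access Allow -Protocol Tcp -Direction Outbound -Priority $priority `
        -SourceAddressPrefix '*' -SourcePortRange '*' `
        -DestinationAddressPrefix $tag -DestinationPortRange '443' | Out-Null
    $priority += 10
}
# Persist the updated rule set.
$nsg | Set-AzNetworkSecurityGroup
```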
azure-arc | Manage Automatic Vm Extension Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-automatic-vm-extension-upgrade.md | Title: Automatic extension upgrade (preview) for Azure Arc-enabled servers -description: Learn how to enable the automatic extension upgrades for your Azure Arc-enabled servers. + Title: Automatic extension upgrade for Azure Arc-enabled servers +description: Learn how to enable automatic extension upgrades for your Azure Arc-enabled servers. Previously updated : 06/02/2021 Last updated : 10/14/2022 -# Automatic extension upgrade (preview) for Azure Arc-enabled servers +# Automatic extension upgrade for Azure Arc-enabled servers -Automatic extension upgrade (preview) is available for Azure Arc-enabled servers that have supported VM extensions installed. When automatic extension upgrade is enabled on a machine, the extension is upgraded automatically whenever the extension publisher releases a new version for that extension. +Automatic extension upgrade is available for Azure Arc-enabled servers that have supported VM extensions installed. Automatic extension upgrades reduce the amount of operational overhead for you by scheduling the installation of new extension versions when they become available. The Azure Connected Machine agent takes care of upgrading the extension (preserving its settings along the way) and automatically rolling back to the previous version if something goes wrong during the upgrade process. - Automatic extension upgrade has the following features: +Automatic extension upgrade has the following features: -- You can opt in and out of automatic upgrades at any time.+- You can opt in and out of automatic upgrades at any time. By default, all extensions are opted into automatic extension upgrades. - Each supported extension is enrolled individually, and you can choose which extensions to upgrade automatically.-- Supported in all public cloud regions.--> [!NOTE] -> In this release, it is only possible to configure automatic extension upgrade with the Azure CLI and Azure PowerShell module. +- Supported in all Azure Arc regions. ## How does automatic extension upgrade work? If an extension upgrade fails, Azure will try to repair the extension by perform 1. If the rollback is successful, the extension status will show as **Succeeded** and the extension will be added to the automatic upgrade queue again. The next upgrade attempt can be as soon as the next hour and will continue until the upgrade is successful. 1. If the rollback fails, the extension status will show as **Failed** and the extension will no longer function as intended. You'll need to [remove](manage-vm-extensions-cli.md#remove-extensions) and [reinstall](manage-vm-extensions-cli.md#enable-extension) the extension to restore functionality. -If you continue to have trouble upgrading an extension, you can [disable automatic extension upgrade](#disable-automatic-extension-upgrade) to prevent the system from trying again while you troubleshoot the issue. You can [enable automatic extension upgrade](#enable-automatic-extension-upgrade) again when you're ready. +If you continue to have trouble upgrading an extension, you can [disable automatic extension upgrade](#manage-automatic-extension-upgrade) to prevent the system from trying again while you troubleshoot the issue. You can [enable automatic extension upgrade](#manage-automatic-extension-upgrade) again when you're ready. 
## Supported extensions
Automatic extension upgrade supports the following extensions (and more are adde - Key Vault Extension - Linux only - Log Analytics agent (OMS agent) - Linux only
-## Enable automatic extension upgrade
+## Manage automatic extension upgrade
++Automatic extension upgrade is enabled by default when you install extensions on Azure Arc-enabled servers. To enable automatic upgrades for an existing extension, you can use Azure CLI or Azure PowerShell to set the `enableAutomaticUpgrade` property on the extension to `true`. You'll need to repeat this process for every extension where you'd like to enable or disable automatic upgrades.
++### [Azure portal](#tab/azure-portal)
++Use the following steps to configure automatic extension upgrades using the Azure portal:
++1. Navigate to the [Azure portal](https://portal.azure.com) and type **Servers - Azure Arc** into the search bar.
+    :::image type="content" source="media/manage-automatic-vm-extension-upgrade/portal-search-arc-server.png" alt-text="Screenshot of Azure portal showing user typing in Servers - Azure Arc." border="true":::
+1. Select **Servers - Azure Arc** under the Services category, then select the individual server you wish to manage.
+1. In the navigation pane, select the **Extensions** tab to see a list of all extensions installed on the server.
+    :::image type="content" source="media/manage-automatic-vm-extension-upgrade/portal-navigation-extensions.png" alt-text="Screenshot of an Azure Arc-enabled server in the Azure portal showing where to navigate to extensions." border="true":::
+1. The **Automatic upgrade** column in the table shows whether upgrades are enabled, disabled, or not supported for each extension. Select the checkbox next to the extensions for which you want automatic upgrades enabled, then select **Enable automatic upgrade** to turn on the feature. Select **Disable automatic upgrade** to turn off the feature.
+    :::image type="content" source="media/manage-automatic-vm-extension-upgrade/portal-enable-auto-upgrade.png" alt-text="Screenshot of Azure portal showing how to select extensions and enable automatic upgrades." border="true":::
-Automatic extension upgrade is enabled by default when you install extensions on Azure Arc-enabled servers. To enable automatic extension upgrade for an existing extension, you can use Azure CLI or Azure PowerShell to set the `enableAutomaticUpgrade` property on the extension to `true`. You'll need to repeat this process for every extension where you'd like to enable automatic upgrades.
+### [Azure CLI](#tab/azure-cli) ++To check the status of automatic extension upgrade for all extensions on an Arc-enabled server, run the following command: ++```azurecli +az connectedmachine extension list --resource-group resourceGroupName --machine-name machineName --query "[].{Name:name, AutoUpgrade:properties.enableAutoUpgrade}" --output table +``` -Use the [az connectedmachine extension update](/cli/azure/connectedmachine/extension) command to enable automatic upgrade on an extension: +Use the [az connectedmachine extension update](/cli/azure/connectedmachine/extension) command to enable automatic upgrades on an extension: ```azurecli az connectedmachine extension update \ --resource-group resourceGroupName \ --machine-name machineName \- --name DependencyAgentLinux \ + --name extensionName \ --enable-auto-upgrade true ``` -To check the status of automatic extension upgrade for all extensions on an Arc-enabled server, run the following command: +To disable automatic upgrades, set the `--enable-auto-upgrade` parameter to `false`, as shown below: ```azurecli-az connectedmachine extension list --resource-group resourceGroupName --machine-name machineName --query "[].{Name:name, AutoUpgrade:properties.enableAutoUpgrade}" --output table +az connectedmachine extension update \ + --resource-group resourceGroupName \ + --machine-name machineName \ + --name extensionName \ + --enable-auto-upgrade false ``` -To enable automatic extension upgrade for an extension using Azure PowerShell, use the [Update-AzConnectedMachineExtension](/powershell/module/az.connectedmachine/update-azconnectedmachineextension) cmdlet: --```azurepowershell -Update-AzConnectedMachineExtension -ResourceGroup resourceGroupName -MachineName machineName -Name DependencyAgentLinux -EnableAutomaticUpgrade -``` +### [Azure PowerShell](#tab/azure-powershell) To check the status of automatic extension upgrade for all extensions on an Arc-enabled server, run the following command: To check the status of automatic extension upgrade for all extensions on an Arc- Get-AzConnectedMachineExtension -ResourceGroup resourceGroupName -MachineName machineName | Format-Table Name, EnableAutomaticUpgrade ``` -## Extension upgrades with multiple extensions +To enable automatic upgrades for an extension using Azure PowerShell, use the [Update-AzConnectedMachineExtension](/powershell/module/az.connectedmachine/update-azconnectedmachineextension) cmdlet: -A machine managed by Arc-enabled servers can have multiple extensions with automatic extension upgrade enabled. The same machine can also have other extensions without automatic extension upgrade enabled. +```azurepowershell +Update-AzConnectedMachineExtension -ResourceGroup resourceGroupName -MachineName machineName -Name extensionName -EnableAutomaticUpgrade +``` -If multiple extension upgrades are available for a machine, the upgrades may be batched together, but each extension upgrade is applied individually on a machine. A failure on one extension doesn't impact the other extension(s) to be upgraded. For example, if two extensions are scheduled for an upgrade, and the first extension upgrade fails, the second extension will still be upgraded. 
+To disable automatic upgrades, set `-EnableAutomaticUpgrade:$false` as shown in the example below: -## Disable automatic extension upgrade +```azurepowershell +Update-AzConnectedMachineExtension -ResourceGroup resourceGroupName -MachineName machineName -Name extensionName -EnableAutomaticUpgrade:$false +``` -To disable automatic extension upgrade for an extension, set the `enable-auto-upgrade` property to `false`. +> [!TIP] +> The cmdlets above come from the [Az.ConnectedMachine](/powershell/module/az.connectedmachine) PowerShell module. You can install this PowerShell module with `Install-Module Az.ConnectedMachine` on your computer or in Azure Cloud Shell. -With Azure CLI, use the [az connectedmachine extension update](/cli/azure/connectedmachine/extension) command to disable automatic upgrade on an extension: + -```azurecli -az connectedmachine extension update \ - --resource-group resourceGroupName \ - --machine-name machineName \ - --name DependencyAgentLinux \ - --enable-auto-upgrade false -``` +## Extension upgrades with multiple extensions -With Azure PowerShell, use the [Update-AzConnectedMachineExtension](/powershell/module/az.connectedmachine/update-azconnectedmachineextension) cmdlet: +A machine managed by Arc-enabled servers can have multiple extensions with automatic extension upgrade enabled. The same machine can also have other extensions without automatic extension upgrade enabled. -```azurepowershell -Update-AzConnectedMachineExtension -ResourceGroup resourceGroupName -MachineName machineName -Name DependencyAgentLinux -EnableAutomaticUpgrade:$false -``` +If multiple extension upgrades are available for a machine, the upgrades may be batched together, but each extension upgrade is applied individually on a machine. A failure on one extension doesn't impact the other extension(s) to be upgraded. For example, if two extensions are scheduled for an upgrade, and the first extension upgrade fails, the second extension will still be upgraded. ## Check automatic extension upgrade history You can use the Azure Activity Log to identify extensions that were automatically upgraded. You can find the Activity Log tab on individual Azure Arc-enabled server resources, resource groups, and subscriptions. Extension upgrades are identified by the `Upgrade Extensions on Azure Arc machines (Microsoft.HybridCompute/machines/upgradeExtensions/action)` operation. -To view automatic extension upgrade history, search for the **Azure Activity Log** in the Azure Portal. Select **Add filter** and choose the Operation filter. For the filter criteria, search for "Upgrade Extensions on Azure Arc machines" and select that option. You can optionally add a second filter for **Event initiated by** and set "Azure Regional Service Manager" as the filter criteria to only see automatic upgrade attempts and exclude upgrades manually initiated by users. +To view automatic extension upgrade history, search for the **Azure Activity Log** in the Azure portal. Select **Add filter** and choose the Operation filter. For the filter criteria, search for "Upgrade Extensions on Azure Arc machines" and select that option. You can optionally add a second filter for **Event initiated by** and set "Azure Regional Service Manager" as the filter criteria to only see automatic upgrade attempts and exclude upgrades manually initiated by users. 
:::image type="content" source="media/manage-automatic-vm-extension-upgrade/azure-activity-log-extension-upgrade.png" alt-text="Azure Activity Log showing attempts to automatically upgrade extensions on Azure Arc-enabled servers." border="true"::: |
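Building on the status commands shown in this row, a hedged sketch that reports the auto-upgrade flag for every extension on every Arc-enabled server in one resource group; it assumes the Az.ConnectedMachine module, and the resource group name is a placeholder.

```powershell
# Sketch: report enableAutomaticUpgrade for all extensions on all
# Arc-enabled servers in a resource group.
$rg = 'rg-arc-servers'
foreach ($machine in Get-AzConnectedMachine -ResourceGroupName $rg) {
    Get-AzConnectedMachineExtension -ResourceGroupName $rg -MachineName $machine.Name |
        Select-Object @{ n = 'Machine'; e = { $machine.Name } }, Name, EnableAutomaticUpgrade
}
```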
azure-arc | Migrate Azure Monitor Agent Ansible | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/migrate-azure-monitor-agent-ansible.md | This workflow performs the following tasks: ### Create template to install Azure Connected Machine agent -This template is responsible for installing the Azure Arc [Connected Machine agent](/azure/azure-arc/servers/agent-overview) on hosts within the provided inventory. A successful run will have installed the agent on all machines. +This template is responsible for installing the Azure Arc [Connected Machine agent](./agent-overview.md) on hosts within the provided inventory. A successful run will have installed the agent on all machines. Follow the steps below to create the template: After following the steps in this article, you have created an automation workfl ## Next steps -Learn more about [connecting machines using Ansible playbooks](onboard-ansible-playbooks.md). -+Learn more about [connecting machines using Ansible playbooks](onboard-ansible-playbooks.md). |
azure-arc | Onboard Group Policy Service Principal Encryption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-group-policy-service-principal-encryption.md | Title: Connect machines at scale using Group Policy with a PowerShell script description: In this article, you learn how to create a Group Policy Object to onboard Active Directory-joined Windows machines to Azure Arc-enabled servers. Previously updated : 07/20/2022 Last updated : 10/18/2022 The Group Policy Object, which is used to onboard Azure Arc-enabled servers, req * Assign the Azure Connected Machine Onboarding role to your service principal and limit the scope of the role to the target Azure landing zone. * Make a note of the Service Principal Secret; you'll need this value later. -1. For each of the scripts below, click to go to its GitHub directory and download the raw script to your local share using your browser's **Save as** function: - * [`EnableAzureArc.ps1`](https://raw.githubusercontent.com/Azure/ArcEnabledServersGroupPolicy/main/EnableAzureArc.ps1) - * [`DeployGPO.ps1`](https://raw.githubusercontent.com/Azure/ArcEnabledServersGroupPolicy/main/DeployGPO.ps1) - * [`AzureArcDeployment.psm1`](https://raw.githubusercontent.com/Azure/ArcEnabledServersGroupPolicy/main/AzureArcDeployment.psm1) +1. Download and unzip the folder **ArcEnabledServersGroupPolicy_v1.0.1** from [https://aka.ms/gp-onboard](https://aka.ms/gp-onboard). This folder contains the ArcGPO project structure with the scripts `EnableAzureArc.ps1`, `DeployGPO.ps1`, and `AzureArcDeployment.psm1`. These assets will be used for onboarding the machine to Azure Arc-enabled servers. - > [!NOTE] > The ArcGPO folder must be in the same directory as the downloaded script files above. The ArcGPO folder contains the files that define the Group Policy Object that's created when the DeployGPO script is run. When running the DeployGPO script, make sure you're in the same directory as the ps1 files and ArcGPO folder. --1. Modify the script `EnableAzureArc.ps1` by providing the parameter declarations for servicePrincipalClientId, tenantId, subscriptionId, ResourceGroup, Location, Tags, and ReportServerFQDN fields respectively. --1. Execute the deployment script `DeployGPO.ps1`, modifying the run parameters for the DomainFQDN, ReportServerFQDN, ArcRemoteShare, AgentProxy (if applicable), and Service Principal secret: +1. Execute the deployment script `DeployGPO.ps1`, modifying the run parameters for the DomainFQDN, ReportServerFQDN, ArcRemoteShare, Service Principal secret, Service Principal Client Id, Subscription Id, Resource Group, Region, Tenant, and AgentProxy (if applicable): ```- .\DeployGPO.ps1 -DomainFQDN <INSERT Domain FQDN> -ReportServerFQDN <INSERT Domain FQDN of Network Share> -ArcRemoteShare <INSERT Name of Network Share> -Spsecret <INSERT SPN SECRET> [-AgentProxy $AgentProxy] + .\DeployGPO.ps1 -DomainFQDN contoso.com -ReportServerFQDN Server.contoso.com -ArcRemoteShare AzureArcOnBoard -ServicePrincipalSecret $ServicePrincipalSecret -ServicePrincipalClientId $ServicePrincipalClientId -SubscriptionId $SubscriptionId -ResourceGroup $ResourceGroup -Location $Location -TenantId $TenantId [-AgentProxy $AgentProxy] ``` 1. Download the latest version of the [Azure Connected Machine agent Windows Installer package](https://aka.ms/AzureConnectedMachineAgent) from the Microsoft Download Center and save it to the remote share. |
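The `DeployGPO.ps1` invocation above references several variables; a hedged sketch of populating them first, where every value is a placeholder for your environment:

```powershell
# Placeholder values for the DeployGPO.ps1 run shown above.
$ServicePrincipalClientId = '00000000-0000-0000-0000-000000000000'
$ServicePrincipalSecret   = '<service-principal-secret>'
$SubscriptionId           = '11111111-1111-1111-1111-111111111111'
$ResourceGroup            = 'rg-arc-servers'
$Location                 = 'eastus'
$TenantId                 = '22222222-2222-2222-2222-222222222222'

.\DeployGPO.ps1 -DomainFQDN contoso.com -ReportServerFQDN Server.contoso.com `
    -ArcRemoteShare AzureArcOnBoard -ServicePrincipalSecret $ServicePrincipalSecret `
    -ServicePrincipalClientId $ServicePrincipalClientId -SubscriptionId $SubscriptionId `
    -ResourceGroup $ResourceGroup -Location $Location -TenantId $TenantId
```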
azure-arc | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Arc-enabled servers (preview) description: Lists Azure Policy Regulatory Compliance controls available for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/10/2022 Last updated : 10/12/2022 |
azure-arc | Troubleshoot Agent Onboard | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/troubleshoot-agent-onboard.md | Use the following table to identify and resolve issues when configuring the Azur | AZCM0019 | The path to the configuration file is incorrect | Ensure the path to the configuration file is correct and try again. | | AZCM0023 | The value provided for a parameter (argument) is invalid | Review the error message for more specific information. Refer to the syntax of the command (`azcmagent <command> --help`) for valid values or expected format for the arguments. | | AZCM0026 | There is an error in network configuration or some critical services are temporarily unavailable | Check if the required endpoints are reachable (for example, hostnames are resolvable, endpoints are not blocked). If the network is configured for Private Link Scope, a Private Link Scope resource ID must be provided for onboarding using the `--private-link-scope` parameter. |-| AZCM0041 | The credentials supplied are invalid | For device logins, verify that the user account specified has access to the tenant and subscription where the server resource will be created<sup>[1](#footnote3)</sup>.<br> For service principal logins, check the client ID and secret for correctness, the expiration date of the secret<sup>[2](#footnote4)</sup>, and that the service principal is from the same tenant where the server resource will be created<sup>[1](#footnote3)</sup>.<br> <a name="footnote3"></a><sup>1</sup>See [How to find your Azure Active Directory tenant ID](/azure/active-directory/fundamentals/active-directory-how-to-find-tenant).<br> <a name="footnote4"></a><sup>2</sup>In Azure portal, open Azure Active Directory and select the App registration blade. Select the application to be used and the Certificates and secrets within it. Check whether the expiration data has passed. If it has, create new credentials with sufficient roles and try again. See [Connected Machine agent prerequisites-required permissions](prerequisites.md#required-permissions). | +| AZCM0041 | The credentials supplied are invalid | For device logins, verify that the user account specified has access to the tenant and subscription where the server resource will be created<sup>[1](#footnote3)</sup>.<br> For service principal logins, check the client ID and secret for correctness, the expiration date of the secret<sup>[2](#footnote4)</sup>, and that the service principal is from the same tenant where the server resource will be created<sup>[1](#footnote3)</sup>.<br> <a name="footnote3"></a><sup>1</sup>See [How to find your Azure Active Directory tenant ID](../../active-directory/fundamentals/active-directory-how-to-find-tenant.md).<br> <a name="footnote4"></a><sup>2</sup>In Azure portal, open Azure Active Directory and select the App registration blade. Select the application to be used and the Certificates and secrets within it. Check whether the expiration date has passed. If it has, create new credentials with sufficient roles and try again. See [Connected Machine agent prerequisites-required permissions](prerequisites.md#required-permissions). | | AZCM0042 | Creation of the Azure Arc-enabled server resource failed | Review the error message in the output to identify the cause of the failure to create the resource and the suggested remediation. For permission issues, see [Connected Machine agent prerequisites-required permissions](prerequisites.md#required-permissions) for more information.
| | AZCM0043 | Deletion of the Azure Arc-enabled server resource failed | Verify that the user/service principal specified has permissions to delete Azure Arc-enabled server/resources in the specified group. See [Connected Machine agent prerequisites-required permissions](prerequisites.md#required-permissions).<br> If the resource no longer exists in Azure, use the `--force-local-only` flag to proceed. | | AZCM0044 | A resource with the same name already exists | Specify a different name for the `--resource-name` parameter or delete the existing Azure Arc-enabled server in Azure and try again. | If you don't see your problem here or you can't resolve your issue, try one of t * Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts.
-* File an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/), and select **Get Support**. +* File an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/), and select **Get Support**. |
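For the AZCM0041 secret-expiration check above, a hedged sketch using the Microsoft Graph-based Az.Resources cmdlets; the application display name is a placeholder, and the credential property names may differ on older module versions.

```powershell
# List the client secrets of the onboarding service principal's app
# registration and inspect their expiry dates.
$app = Get-AzADApplication -DisplayName 'Arc-Onboarding-App'
Get-AzADAppCredential -ApplicationId $app.AppId |
    Select-Object KeyId, StartDateTime, EndDateTime
```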
azure-arc | Quickstart Connect System Center Virtual Machine Manager To Arc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc.md | This QuickStart shows you how to connect your SCVMM management server to Azure A ## Prerequisites +>[!Note] +>If the VMM server is running on a Windows Server 2016 machine, ensure that the [Open SSH package](https://github.com/PowerShell/Win32-OpenSSH/releases) is installed. + | **Requirement** | **Details** | | | | | **Azure** | An Azure subscription <br/><br/> A resource group in the above subscription where you have the *Owner/Contributor* role. | |
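A quick, hedged way to verify the OpenSSH prerequisite on a Windows Server 2016 VMM server before onboarding; both commands simply return nothing if OpenSSH isn't present.

```powershell
# Check for the OpenSSH client binary and the sshd service.
Get-Command ssh.exe -ErrorAction SilentlyContinue
Get-Service -Name sshd -ErrorAction SilentlyContinue
```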
azure-cache-for-redis | Cache How To Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-monitor.md | The types **Count** and **Sum** can be misleading for certain metrics (connec - The number of items expired from the cache during the specified reporting interval. This value maps to `expired_keys` from the Redis INFO command. > [!IMPORTANT]-> Geo-replication metrics are affected by monthly internal maintenance operations. The Azure Cache for Redis service periodically patches all caches with the latest platform features and improvements. During these updates, each cache node is taken offline, which temporarily disables the geo-replication link. If your geo replication link is unhealthy, check to see if it was caused by a patching event on either the geo-primary or geo-secondary cache by using **Diagnose and Solve Problems** from the Resource menu in the portal. Depending on the amount of data in the cache, the downtime from patching can take anywhere from a few minutes to an hour. If the geo-replication link is unhealthy for over an hour, [file a support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). +> Geo-replication metrics are affected by monthly internal maintenance operations. The Azure Cache for Redis service periodically patches all caches with the latest platform features and improvements. During these updates, each cache node is taken offline, which temporarily disables the geo-replication link. If your geo replication link is unhealthy, check to see if it was caused by a patching event on either the geo-primary or geo-secondary cache by using **Diagnose and Solve Problems** from the Resource menu in the portal. Depending on the amount of data in the cache, the downtime from patching can take anywhere from a few minutes to an hour. If the geo-replication link is unhealthy for over an hour, [file a support request](../azure-portal/supportability/how-to-create-azure-support-request.md). > - Geo Replication Connectivity Lag (preview) The types **Count** and **Sum** can be misleading for certain metrics (connec - This metric is only available in the Premium tier for caches with geo-replication enabled. - This metric may indicate a disconnected/unhealthy replication status for several reasons, including: monthly patching, host OS updates, network misconfiguration, or failed geo-replication link provisioning. - A value of 0 does not mean that data on the geo-replica is lost. It just means that the link between geo-primary and geo-secondary is unhealthy. - If the geo-replication link is unhealthy for over an hour, [file a support request](../azure-portal/supportability/how-to-create-azure-support-request.md). - Gets - The number of get operations from the cache during the specified reporting interval. This value is the sum of the following values from the Redis INFO all command: `cmdstat_get`, `cmdstat_hget`, `cmdstat_hgetall`, `cmdstat_hmget`, `cmdstat_mget`, `cmdstat_getbit`, and `cmdstat_getrange`, and is equivalent to the sum of cache hits and misses during the reporting interval.
For information on creating a metric, see [Create your own metrics](#create-your - [Azure Monitor for Azure Cache for Redis](redis-cache-insights-overview.md) - [Azure Monitor Metrics REST API](../azure-monitor/essentials/stream-monitoring-data-event-hubs.md)-- [`INFO`](https://redis.io/commands/info)+- [`INFO`](https://redis.io/commands/info) |
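To pull one of the metrics discussed above programmatically, a hedged Az.Monitor sketch; the resource ID is a placeholder, and `connectedclients` is assumed as the internal name of the Connected Clients metric.

```powershell
# Fetch the Connected Clients metric for a cache over 5-minute grains.
$cacheId = '/subscriptions/<sub-id>/resourceGroups/rg-cache/providers/Microsoft.Cache/Redis/mycache'
Get-AzMetric -ResourceId $cacheId -MetricName 'connectedclients' `
    -TimeGrain ([TimeSpan]::FromMinutes(5)) -AggregationType Maximum |
    Select-Object -ExpandProperty Data
```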
azure-cache-for-redis | Cache How To Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-upgrade.md | For more details on how to export, see [Import and Export data in Azure Cache fo - Upgrading a Basic tier cache results in brief unavailability and data loss. - Upgrading a geo-replicated cache isn't supported. You must manually unlink the cache instances before upgrading.-- Upgrading a cache with a dependency on Cloud Services isn't supported. You should migrate your cache instance to virtual machine scale set before upgrading. For more information, see [Caches with a dependency on Cloud Services (classic)](/azure/azure-cache-for-redis/cache-faq) for details on cloud services hosted caches.+- Upgrading a cache with a dependency on Cloud Services isn't supported. You should migrate your cache instance to a virtual machine scale set before upgrading. For more information, see [Caches with a dependency on Cloud Services (classic)](./cache-faq.yml) for details on cloud services hosted caches. ### Check the version of a cache Set-AzRedisCache -Name "CacheName" -ResourceGroupName "ResourceGroupName" -Redis - To learn more about Azure Cache for Redis versions, see [Set Redis version for Azure Cache for Redis](cache-how-to-version.md) - To learn more about Redis 6 features, see [Diving Into Redis 6.0 by Redis](https://redis.com/blog/diving-into-redis-6/)-- To learn more about Azure Cache for Redis features: [Azure Cache for Redis Premium service tiers](cache-overview.md#service-tiers)+- To learn more about Azure Cache for Redis features: [Azure Cache for Redis Premium service tiers](cache-overview.md#service-tiers) |
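The upgrade command in this row is truncated by the digest; a hedged reconstruction, assuming a `-RedisVersion` parameter on `Set-AzRedisCache` and placeholder cache and resource group names:

```powershell
# Request an upgrade of an existing cache to Redis 6.
Set-AzRedisCache -Name 'CacheName' -ResourceGroupName 'ResourceGroupName' -RedisVersion '6'
```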
azure-cache-for-redis | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Cache for Redis description: Lists Azure Policy Regulatory Compliance controls available for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/10/2022 Last updated : 10/12/2022 |
azure-functions | Create First Function Vs Code Csharp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-csharp.md | Title: "Create a C# function using Visual Studio Code - Azure Functions" description: "Learn how to create a C# function, then publish the local project to serverless hosting in Azure Functions using the Azure Functions extension in Visual Studio Code. " Previously updated : 06/11/2022 Last updated : 10/11/2022 ms.devlang: csharp adobe-target: true adobe-target-content: ./create-first-function-vs-code-csharp-ieux In this article, you use Visual Studio Code to create a C# function that responds to HTTP requests. After testing the code locally, you deploy it to the serverless environment of Azure Functions. This article creates an HTTP triggered function that runs on .NET 6.0. There's also a [CLI-based version](create-first-function-cli-csharp.md) of this article. -By default, this article shows you how to create C# functions that run on .NET 6 [in the same process as the Functions host](functions-dotnet-class-library.md). These _in-process_ C# functions are only supported on Long Term Support (LTS) versions of .NET, such as .NET 6. To create C# functions on .NET 6 that can also run on .NET 5.0 and .NET Framework 4.8 (in preview) [in an isolated process](dotnet-isolated-process-guide.md), see the [alternate version of this article](create-first-function-vs-code-csharp.md?tabs=isolated-process). +By default, this article shows you how to create C# functions that run [in the same process as the Functions host](functions-dotnet-class-library.md). These _in-process_ C# functions are only supported on Long Term Support (LTS) versions of .NET, such as .NET 6. To create C# functions on .NET 6 that can also run on [other supported versions](functions-versions.md) for Azure Functions [in an isolated process](dotnet-isolated-process-guide.md), see the [alternate version of this article](create-first-function-vs-code-csharp.md?tabs=isolated-process). Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account. |
azure-functions | Functions App Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md | In version 2.x and later versions of the Functions runtime, configures app behav In version 2.x and later versions of the Functions runtime, application settings can override [host.json](functions-host-json.md) settings in the current environment. These overrides are expressed as application settings named `AzureFunctionsJobHost__path__to__setting`. For more information, see [Override host.json values](functions-host-json.md#override-hostjson-values). -## AzureFunctionsWebHost__hostId +## AzureFunctionsWebHost__hostid Sets the host ID for a given function app, which should be a unique ID. This setting overrides the automatically generated host ID value for your app. Use this setting only when you need to prevent host ID collisions between function apps that share the same storage account. A host ID must be between 1 and 32 characters, contain only lowercase letters, n |Key|Sample value| |||-|AzureFunctionsWebHost__hostId|`myuniquefunctionappname123456789`| +|AzureFunctionsWebHost__hostid|`myuniquefunctionappname123456789`| For more information, see [Host ID considerations](storage-considerations.md#host-id-considerations). |
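A hedged sketch of setting the `AzureFunctionsWebHost__hostid` override with Az.Functions; the app and resource group names are placeholders, and the sample host ID mirrors the table in the row above.

```powershell
# Set an explicit host ID to avoid collisions between function apps
# sharing one storage account.
Update-AzFunctionAppSetting -Name 'myfunctionapp' -ResourceGroupName 'rg-functions' `
    -AppSetting @{ 'AzureFunctionsWebHost__hostid' = 'myuniquefunctionappname123456789' }
```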
azure-functions | Functions Proxies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-proxies.md | After you have your function app endpoints exposed by using API Management, the | [Edit an API](../api-management/edit-api.md) | Shows you how to work with an existing API hosted in API Management. | | [Policies in Azure API Management](../api-management/api-management-howto-policies.md) | In API Management, publishers can change API behavior through configuration using policies. Policies are a collection of statements that are run sequentially on the request or response of an API. | | [API Management policy reference](../api-management/api-management-policies.md) | Reference that details all supported API Management policies. |-| [API Management policy samples](/azure/api-management/policies/) | Helpful collection of samples using API Management policies in key scenarios. | +| [API Management policy samples](../api-management/policies/index.md) | Helpful collection of samples using API Management policies in key scenarios. | ## Legacy Functions Proxies Some basic hints for how to perform equivalent tasks using API Management have b ## Next steps > [!div class="nextstepaction"]-> [Expose serverless APIs from HTTP endpoints using Azure API Management](functions-openapi-definition.md) +> [Expose serverless APIs from HTTP endpoints using Azure API Management](functions-openapi-definition.md) |
azure-functions | Functions Reference Csharp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-csharp.md | Azure Functions lets you develop functions using C# in one of the following ways | Type | Execution process | Code extension | Development environment | Reference | | | - | | | | | C# script | in-process | .csx | [Portal](functions-create-function-app-portal.md)<br/>[Core Tools](functions-run-local.md) | This article | -| C# class library | in-process | .cs | [Visual Studio](functions-develop-vs.md)<br/>[Visual Studio Code](functions-develop-vs-code.md)<br />[Core Tools](functions-run-local.md)s | [In-process C# class library functions](functions-dotnet-class-library.md) | -| C# class library (isolated process)| out-of-process | .cs | [Visual Studio](functions-develop-vs.md)<br/>[Visual Studio Code](functions-develop-vs-code.md)<br />[Core Tools](functions-run-local.md) | [.NET isolated process functions](dotnet-isolated-process-guide.md) | +| C# class library | in-process | .cs | [Visual Studio](functions-develop-vs.md)<br/>[Visual Studio Code](functions-develop-vs-code.md)<br />[Core Tools](functions-run-local.md)| [In-process C# class library functions](functions-dotnet-class-library.md) | +| C# class library (isolated process)| in an isolated process | .cs | [Visual Studio](functions-develop-vs.md)<br/>[Visual Studio Code](functions-develop-vs-code.md)<br />[Core Tools](functions-run-local.md) | [.NET isolated process functions](dotnet-isolated-process-guide.md) | This article assumes that you've already read the [Azure Functions developers guide](functions-reference.md). |
azure-functions | Functions Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-versions.md | zone_pivot_groups: programming-languages-set-functions | Version | Support level | Description | | | | |-| 4.x | GA | **_Recommended runtime version for functions in all languages._** Use this version to [run C# functions on .NET 6.0, .NET 7.0, and .NET Framework 4.8](functions-dotnet-class-library.md#supported-versions). | -| 3.x | GA | Supports all languages. Use this version to [run C# functions on .NET Core 3.1 and .NET 5.0](functions-dotnet-class-library.md#supported-versions).| +| 4.x | GA | **_Recommended runtime version for functions in all languages._** Check out [Supported language versions](#languages). | +| 3.x | GA | Supports all languages. Check out [Supported language versions](#languages).| | 2.x | GA | Supported for [legacy version 2.x apps](#pinning-to-version-20). This version is in maintenance mode, with enhancements provided only in later versions.| | 1.x | GA | Recommended only for C# apps that must use .NET Framework and only supports development in the Azure portal, Azure Stack Hub portal, or locally on Windows computers. This version is in maintenance mode, with enhancements provided only in later versions. | |
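As a hedged illustration of targeting a runtime version, the following Azure CLI sketch pins a function app to version 4.x through the `FUNCTIONS_EXTENSION_VERSION` app setting; the resource names are placeholders, and you should verify language and framework compatibility before changing versions.

```azurecli
# Pin a function app to the 4.x runtime (placeholder names; use ~3 for version 3.x).
az functionapp config appsettings set \
    --resource-group MyResourceGroup \
    --name MyFunctionApp \
    --settings "FUNCTIONS_EXTENSION_VERSION=~4"
```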
azure-functions | Security Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/security-concepts.md | In many ways, planning for secure development, deployment, and operation of serv [!INCLUDE [app-service-security-intro](../../includes/app-service-security-intro.md)] -For a set of security recommendations that follow the [Azure Security Benchmark](../security/benchmarks/overview.md), see [Azure Security Baseline for Azure Functions](security-baseline.md). +For a set of security recommendations that follow the [Microsoft cloud security benchmark](/security/benchmark/azure/introduction), see [Azure Security Baseline for Azure Functions](/security/benchmark/azure/baselines/functions-security-baseline). ## Secure operation |
azure-functions | Storage Considerations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/storage-considerations.md | You can use the following strategies to avoid host ID collisions: ### Override the host ID -You can explicitly set a specific host ID for your function app in the application settings by using the `AzureFunctionsWebHost__hostId` setting. For more information, see [AzureFunctionsWebHost__hostId](functions-app-settings.md#azurefunctionswebhost__hostid). +You can explicitly set a specific host ID for your function app in the application settings by using the `AzureFunctionsWebHost__hostid` setting. For more information, see [AzureFunctionsWebHost__hostid](functions-app-settings.md#azurefunctionswebhost__hostid). When a collision occurs between slots, you may need to mark this setting as a slot setting. To learn how to create app settings, see [Work with application settings](functions-how-to-use-azure-function-app-settings.md#settings). |
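A minimal sketch of marking the host ID as a slot setting, assuming a staging slot and placeholder names; `--slot-settings` keeps the value sticky to the slot so it doesn't swap with production.

```azurecli
# Give the staging slot its own sticky host ID (placeholder names and value).
az functionapp config appsettings set \
    --resource-group MyResourceGroup \
    --name MyFunctionApp \
    --slot staging \
    --slot-settings "AzureFunctionsWebHost__hostid=mystagingslothostid123"
```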
azure-functions | Storage Considerations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-glossary-cloud-terminology.md | The collection of virtual machines in an availability set that are updated at th See [Manage the availability of Windows virtual machines](./virtual-machines/availability.md?toc=/azure/virtual-machines/windows/toc.json) and [Manage the availability of Linux virtual machines](./virtual-machines/availability.md?toc=/azure/virtual-machines/linux/toc.json) ## <a name="vm"></a>virtual machine-The software implementation of a physical computer that runs an operating system. Multiple virtual machines can run simultaneously on the same hardware. In Azure, virtual machines are available in a variety of sizes. For more information, see [Virtual Machines documentation](/azure/virtual-machines/) +The software implementation of a physical computer that runs an operating system. Multiple virtual machines can run simultaneously on the same hardware. In Azure, virtual machines are available in a variety of sizes. For more information, see [Virtual Machines documentation](./virtual-machines/index.yml) ## <a name="vm-extension"></a>virtual machine extension A resource that implements behaviors or features that either help other programs work or provide the ability for you to interact with a running computer. For example, you could use the VM Access extension to reset or modify remote access values on an Azure virtual machine. Another name for [App Service App](#app-service-app). * [Get started with Azure](https://azure.microsoft.com/get-started/) * [Cloud resource center](https://azure.microsoft.com/resources/) * [Azure for your business application](https://azure.microsoft.com/overview/business-apps-on-azure/)-* [Azure in your datacenter](https://azure.microsoft.com/overview/business-apps-on-azure/) +* [Azure in your datacenter](https://azure.microsoft.com/overview/business-apps-on-azure/) |
azure-monitor | Agents Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md | description: Overview of the Azure Monitor Agent, which collects monitoring data Previously updated : 9/15/2022 Last updated : 10/17/2022 In addition to the generally available data collection listed above, Azure Monit | Azure service | Current support | Other extensions installed | More information | | : | : | : | : |-| [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) | Public preview | <ul><li>Azure Security Agent extension</li><li>SQL Advanced Threat Protection extension</li><li>SQL Vulnerability Assessment extension</li></ul> | [Auto-deployment of Azure Monitor Agent (Preview)](/azure/defender-for-cloud/auto-deploy-azure-monitoring-agent) | -| [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows Security Events: [Generally available](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li><li>Windows Forwarding Event (WEF): [Public preview](../../sentinel/data-connectors-reference.md#windows-forwarded-events-preview)</li><li>Windows DNS logs: [Public preview](/azure/sentinel/connect-dns-ama)</li><li>Linux Syslog CEF: Preview</li></ul> | Sentinel DNS extension, if you’re collecting DNS logs. For all other data types, you just need the Azure Monitor Agent extension. | <ul><li>[Sign-up link for Linux Syslog CEF](https://aka.ms/amadcr-privatepreviews)</li><li>No sign-up needed for Windows Forwarding Event (WEF), Windows Security Events and Windows DNS events</li></ul> | +| [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) | Public preview | <ul><li>Azure Security Agent extension</li><li>SQL Advanced Threat Protection extension</li><li>SQL Vulnerability Assessment extension</li></ul> | [Auto-deployment of Azure Monitor Agent (Preview)](../../defender-for-cloud/auto-deploy-azure-monitoring-agent.md) | +| [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows Security Events: [Generally available](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li><li>Windows Forwarding Event (WEF): [Public preview](../../sentinel/data-connectors-reference.md#windows-forwarded-events-preview)</li><li>Windows DNS logs: [Public preview](../../sentinel/connect-dns-ama.md)</li><li>Linux Syslog CEF: Preview</li></ul> | Sentinel DNS extension, if you’re collecting DNS logs. For all other data types, you just need the Azure Monitor Agent extension. | <ul><li>[Sign-up link for Linux Syslog CEF](https://aka.ms/amadcr-privatepreviews)</li><li>No sign-up needed for Windows Forwarding Event (WEF), Windows Security Events and Windows DNS events</li></ul> | | [Change Tracking](../../automation/change-tracking/overview.md) | Change Tracking: Preview.
| Change Tracking extension | [Sign-up link](https://aka.ms/amadcr-privatepreviews) | | [Update Management](../../automation/update-management/overview.md) (available without Azure Monitor Agent) | Use Update Management v2 - Public preview | None | [Update management center (Public preview) documentation](../../update-center/index.yml) | | [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Connection Monitor: Preview | Azure NetworkWatcher extension | [Sign-up link](https://aka.ms/amadcr-privatepreviews) | The tables below provide a comparison of Azure Monitor Agent with the legacy the | | Azure | X | X | X | | | Other cloud (Azure Arc) | X | X | | | | On-premises (Azure Arc) | X | X | |-| | Windows Client OS | X (Public preview) | | | +| | Windows Client OS | X | | | | **Data collected** | | | | | | | Event Logs | X | X | X | | | Performance | X | X | X | The tables below provide a comparison of Azure Monitor Agent with the legacy the ### Supported operating systems -The following tables list the operating systems that Azure Monitor Agent and the legacy agents support. All operating systems are assumed to be x64. x86 isn't supported for any operating system. +The following tables list the operating systems that Azure Monitor Agent and the legacy agents support. All operating systems are assumed to be x64. x86 isn't supported for any operating system. +View [supported operating systems for Azure Arc Connected Machine agent](../../azure-arc/servers/prerequisites.md#supported-operating-systems), which is a prerequisite to run Azure Monitor agent on physical servers and virtual machines hosted outside of Azure (that is, on-premises) or in other clouds. #### Windows -| Operating system | Azure Monitor agent | Log Analytics agent | Diagnostics extension | -|:|::|::|::|::| +| Operating system | Azure Monitor agent | Log Analytics agent (legacy) | Diagnostics extension | +|:|::|::|::| | Windows Server 2022 | X | | | | Windows Server 2022 Core | X | | | | Windows Server 2019 | X | X | X | The following tables list the operating systems that Azure Monitor Agent and the | Azure Stack HCI | | X | | <sup>1</sup> Running the OS on server hardware, for example, machines that are always connected, always turned on, and not running other workloads (PC, office, browser).<br>-<sup>2</sup> Using the Azure Monitor agent [client installer (Public preview)](./azure-monitor-agent-windows-client.md).<br> +<sup>2</sup> Using the Azure Monitor agent [client installer](./azure-monitor-agent-windows-client.md).<br> <sup>3</sup> Also supported on Arm64-based machines. #### Linux -| Operating system | Azure Monitor agent <sup>1</sup> | Log Analytics agent <sup>1</sup> | Diagnostics extension <sup>2</sup>| -|:|::|::|::|:: +| Operating system | Azure Monitor agent <sup>1</sup> | Log Analytics agent (legacy) <sup>1</sup> | Diagnostics extension <sup>2</sup>| +|:|::|::|::| | AlmaLinux 8 | X<sup>3</sup> | X | | | Amazon Linux 2017.09 | | X | | | Amazon Linux 2 | | X | | The following tables list the operating systems that Azure Monitor Agent and the ## Next steps - [Install the Azure Monitor Agent](azure-monitor-agent-manage.md) on Windows and Linux virtual machines.-- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.+- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor. |
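For context on how the agent reaches a machine in the first place, here's a minimal Azure CLI sketch that installs the Azure Monitor Agent extension on an Azure VM; the VM and resource group names are placeholders.

```azurecli
# Install the Azure Monitor Agent extension on a Windows VM (placeholder names).
# Use the AzureMonitorLinuxAgent extension name for Linux machines.
az vm extension set \
    --resource-group MyResourceGroup \
    --vm-name MyVM \
    --name AzureMonitorWindowsAgent \
    --publisher Microsoft.Azure.Monitor \
    --enable-auto-upgrade true
```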
azure-monitor | Azure Monitor Agent Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md | Remove-AzVMExtension -Name AzureMonitorLinuxAgent -ResourceGroupName <resource-g ### Update on Azure virtual machines -To perform a one-time update of the agent, you must first uninstall the existing agent version,. Then install the new version as described. +To perform a one-time update of the agent, you must first uninstall the existing agent version, then install the new version as described. We recommend that you enable automatic update of the agent by enabling the [Automatic Extension Upgrade](../../virtual-machines/automatic-extension-upgrade.md) feature by using the following PowerShell commands. Update-AzConnectedExtension -ResourceGroupName $env.ResourceGroupName -MachineNa ``` -We recommend that you enable automatic update of the agent by enabling the [Automatic Extension Upgrade (preview)](../../azure-arc/servers/manage-automatic-vm-extension-upgrade.md#enable-automatic-extension-upgrade) feature by using the following PowerShell commands. +We recommend that you enable automatic update of the agent by enabling the [Automatic Extension Upgrade](../../azure-arc/servers/manage-automatic-vm-extension-upgrade.md#manage-automatic-extension-upgrade) feature by using the following PowerShell commands. # [Windows](#tab/PowerShellWindowsArc) az vm extension delete --resource-group <resource-group-name> --vm-name <virtual ### Update on Azure virtual machines -To perform a one-time update of the agent, you must first uninstall the existing agent version,. Then install the new version as described. +To perform a one-time update of the agent, you must first uninstall the existing agent version, then install the new version as described. We recommend that you enable automatic update of the agent by enabling the [Automatic Extension Upgrade](../../virtual-machines/automatic-extension-upgrade.md) feature by using the following CLI commands. az connectedmachine upgrade-extension --extension-targets "{\"Microsoft.Azure.Mo ``` - We recommend that you enable automatic update of the agent by enabling the [Automatic Extension Upgrade (preview)](../../azure-arc/servers/manage-automatic-vm-extension-upgrade.md#enable-automatic-extension-upgrade) feature by using the following PowerShell commands. + We recommend that you enable automatic update of the agent by enabling the [Automatic Extension Upgrade](../../azure-arc/servers/manage-automatic-vm-extension-upgrade.md#manage-automatic-extension-upgrade) feature by using the following PowerShell commands. # [Windows](#tab/CLIWindowsArc) Policy initiatives for Windows and Linux virtual machines, scale sets consist of #### Known issues - Managed Identity default behavior. [Learn more](../../active-directory/managed-identities-azure-resources/managed-identities-faq.md#what-identity-will-imds-default-to-if-dont-specify-the-identity-in-the-request).-- Possible race condition with using built-in user-assigned identity creation policy. [Learn more](../../active-directory/managed-identities-azure-resources/how-to-assign-managed-identity-via-azure-policy.md#known-issues).+- Possible rare condition with using built-in user-assigned identity creation policy. [Learn more](../../active-directory/managed-identities-azure-resources/how-to-assign-managed-identity-via-azure-policy.md#known-issues). - Assigning policy to resource groups. 
If the assignment scope of the policy is a resource group and not a subscription, the identity used by policy assignment (different from the user-assigned identity used by agent) must be manually granted [these roles](../../active-directory/managed-identities-azure-resources/how-to-assign-managed-identity-via-azure-policy.md#required-authorization) prior to assignment/remediation. Failing to do this step will result in *deployment failures*. - Other [Managed Identity limitations](../../active-directory/managed-identities-azure-resources/managed-identities-faq.md#limitations). |
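As a sketch of the manual role grant described above, the following Azure CLI command assigns a role to the identity used by the policy assignment at resource group scope; the principal ID, role, and scope are placeholders, and the exact roles to grant are the ones listed in the linked article.

```azurecli
# Grant a required role to the identity used by the policy assignment
# (placeholder principal ID and role; use the roles from the linked guidance).
az role assignment create \
    --assignee 00000000-0000-0000-0000-000000000000 \
    --role "Contributor" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup"
```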
azure-monitor | Azure Monitor Agent Migration Tools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration-tools.md | Title: Tools for migrating to Azure Monitor Agent from legacy agents description: This article describes various migration tools and helpers available for migrating from existing legacy agents to the new Azure Monitor agent (AMA) and data collection rules (DCR). -+ Last updated 8/18/2022 -Azure Monitor Agent (AMA) replaces the Log Analytics agent (MMA); the benefits of migrating include enhanced security, cost-effectiveness, performance, manageability, and reliability. This article explains how to use the AMA Migration Helper and DCR Config Generator tools to help automate and track the migration from Log Analytics Agent to Azure Monitor Agent. +Azure Monitor Agent (AMA) replaces the Log Analytics agent (MMA); the benefits of migrating include enhanced security, cost-effectiveness, performance, manageability, and reliability. This article explains how to use the AMA Migration Helper and DCR Config Generator tools to help automate and track the migration from Log Analytics Agent to Azure Monitor Agent. |
azure-monitor | Azure Monitor Agent Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md | Your migration plan to the Azure Monitor Agent should take into account: ## Prerequisites -Review the [prerequisites](./azure-monitor-agent-manage.md#prerequisites) for use with Azure Monitor Agent. For non-Azure servers, [installing the Azure Arc agent](/azure/azure-arc/servers/agent-overview) is an important prerequisite that then helps to install the agent extension and other required extensions. Using Azure Arc for this purpose comes at no added cost. It's not mandatory to use Azure Arc for server management overall. You can continue using your existing non-Azure management solutions. After the Azure Arc agent is installed, you can follow the same guidance in this article across Azure and non-Azure for migration. +Review the [prerequisites](./azure-monitor-agent-manage.md#prerequisites) for use with Azure Monitor Agent. For non-Azure servers, [installing the Azure Arc agent](../../azure-arc/servers/agent-overview.md) is an important prerequisite that then helps to install the agent extension and other required extensions. Using Azure Arc for this purpose comes at no added cost. It's not mandatory to use Azure Arc for server management overall. You can continue using your existing non-Azure management solutions. After the Azure Arc agent is installed, you can follow the same guidance in this article across Azure and non-Azure for migration. ## Migration testing For more information, see: - [Azure Monitor Agent overview](agents-overview.md) - [Azure Monitor Agent migration for Microsoft Sentinel](../../sentinel/ama-migrate.md)-- [Frequently asked questions for Azure Monitor Agent migration](/azure/azure-monitor/faq#azure-monitor-agent)+- [Frequently asked questions for Azure Monitor Agent migration](/azure/azure-monitor/faq#azure-monitor-agent) |
azure-monitor | Azure Monitor Agent Windows Client | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-windows-client.md | description: This article describes the instructions to install the agent on Win Previously updated : 10/10/2022 Last updated : 10/18/2022 The image below demonstrates how this works: Then, proceed with the instructions below to create and associate them to a Monitored Object, using REST APIs or PowerShell commands. +### Permissions required +Since MO is a tenant level resource, the scope of the permission would be higher than a subscription scope. Therefore, an Azure tenant admin may be needed to perform this step. [Follow these steps to elevate Azure AD Tenant Admin as Azure Tenant Admin](../../role-based-access-control/elevate-access-global-admin.md). It will give the Azure AD admin 'owner' permissions at the root scope. This is needed for all methods described below in this section. + ### Using REST APIs #### 1. Assign ‘Monitored Object Contributor’ role to the operator This step grants the ability to create and link a monitored object to a user.-**Permissions required:** Since MO is a tenant level resource, the scope of the permission would be higher than a subscription scope. Therefore, an Azure tenant admin may be needed to perform this step. [Follow these steps to elevate Azure AD Tenant Admin as Azure Tenant Admin](../../role-based-access-control/elevate-access-global-admin.md). It will give the Azure AD admin 'owner' permissions at the root scope. **Request URI** ```HTTP |
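Because the elevation step above is a one-off, tenant-scope action, it can also be done from the command line. Here's a hedged sketch using `az rest` against the public elevate-access API; run it as the Azure AD Global Administrator being elevated.

```azurecli
# Elevate the signed-in Global Administrator to User Access Administrator
# at the root scope, a prerequisite for creating a monitored object.
az rest --method post \
    --url "https://management.azure.com/providers/Microsoft.Authorization/elevateAccess?api-version=2016-07-01"
```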
azure-monitor | Action Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md | An action group is a **global** service, so there's no dependency on a specific | Option | Behavior | | | -- |- | Global | The action groups service decides where to store the action group. The action group is persisted in at least two regions to ensure regional resiliency. Processing of actions may be done in any [geographic region](https://azure.microsoft.com/explore/global-infrastructure/geographies/#overview).<br></br>Voice, SMS and email actions performed as the result of [service health alerts](/azure/service-health/alerts-activity-log-service-notifications-portal) are resilient to Azure live-site-incidents. | - | Regional | The action group is stored within the selected region. The action group is [zone-redundant](/azure/availability-zones/az-region#highly-available-services). Processing of actions is performed within the region.</br></br>Use this option if you want to ensure that the processing of your action group is performed within a specific [geographic boundary](https://azure.microsoft.com/explore/global-infrastructure/geographies/#overview). | + | Global | The action groups service decides where to store the action group. The action group is persisted in at least two regions to ensure regional resiliency. Processing of actions may be done in any [geographic region](https://azure.microsoft.com/explore/global-infrastructure/geographies/#overview).<br></br>Voice, SMS and email actions performed as the result of [service health alerts](../../service-health/alerts-activity-log-service-notifications-portal.md) are resilient to Azure live-site-incidents. | + | Regional | The action group is stored within the selected region. The action group is [zone-redundant](../../availability-zones/az-region.md#highly-available-services). Processing of actions is performed within the region.</br></br>Use this option if you want to ensure that the processing of your action group is performed within a specific [geographic boundary](https://azure.microsoft.com/explore/global-infrastructure/geographies/#overview). | The action group is saved in the subscription, region and resource group that you select. For source IP address ranges, see [Action group IP addresses](../app/ip-addresse - Learn more about [ITSM Connector](./itsmc-overview.md). - Learn more about [rate limiting](./alerts-rate-limiting.md) on alerts. - Get an [overview of activity log alerts](./alerts-overview.md), and learn how to receive alerts.-- Learn how to [configure alerts whenever a Service Health notification is posted](../../service-health/alerts-activity-log-service-notifications-portal.md).+- Learn how to [configure alerts whenever a Service Health notification is posted](../../service-health/alerts-activity-log-service-notifications-portal.md). |
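As an illustrative sketch with placeholder names, this Azure CLI command creates an action group with one email receiver; unless you choose the regional option, the action groups service decides placement as described above.

```azurecli
# Create an action group with a single email receiver (placeholder names).
az monitor action-group create \
    --resource-group MyResourceGroup \
    --name MyActionGroup \
    --short-name myag \
    --action email OnCallTeam oncall@contoso.com
```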
azure-monitor | Convert Classic Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md | If you don't need to migrate an existing resource, and instead want to create a - Check your current retention settings under **General** > **Usage and estimated costs** > **Data Retention** for your Log Analytics workspace. This setting will affect how long any new ingested data is stored after you migrate your Application Insights resource. > [!NOTE]- > - If you currently store Application Insights data for longer than the default 90 days and want to retain this longer retention period after migration, adjust your [workspace retention settings](/azure/azure-monitor/logs/data-retention-archive?tabs=portal-1%2Cportal-2#set-retention-and-archive-policy-by-table) from the default 90 days to the desired longer retention period. + > - If you currently store Application Insights data for longer than the default 90 days and want to retain this longer retention period after migration, adjust your [workspace retention settings](../logs/data-retention-archive.md?tabs=portal-1%2cportal-2#set-retention-and-archive-policy-by-table) from the default 90 days to the desired longer retention period. > - If you've selected data retention longer than 90 days on data ingested into the classic Application Insights resource prior to migration, data retention will continue to be billed through that Application Insights resource until the data exceeds the retention period. > - If the retention setting for your Application Insights instance under **Configure** > **Usage and estimated costs** > **Data Retention** is enabled, use that setting to control the retention days for the telemetry data still saved in your classic resource's storage. Legacy table: traces ## Next steps * [Explore metrics](../essentials/metrics-charts.md)-* [Write Log Analytics queries](../logs/log-query-overview.md) +* [Write Log Analytics queries](../logs/log-query-overview.md) |
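To adjust the workspace retention mentioned above before migrating, a minimal Azure CLI sketch follows; the names are placeholders and 120 days is only an example value.

```azurecli
# Raise the workspace retention from the default 90 days to 120 days (example value).
az monitor log-analytics workspace update \
    --resource-group MyResourceGroup \
    --workspace-name MyWorkspace \
    --retention-time 120
```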
azure-monitor | Export Telemetry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/export-telemetry.md | To migrate to diagnostic settings export: > [!CAUTION] > If you want to store diagnostic logs in a Log Analytics workspace, there are two things to consider to avoid seeing duplicate data in Application Insights: > * The destination can't be the same Log Analytics workspace that your Application Insights resource is based on.-> * The Application Insights user can't have access to both workspaces. This can be done by setting the Log Analytics [Access control mode](/azure/azure-monitor/logs/log-analytics-workspace-overview#permissions) to **Requires workspace permissions** and ensuring through [Azure role-based access control (Azure RBAC)](./resources-roles-access-control.md) that the user only has access to the Log Analytics workspace the Application Insights resource is based on. +> * The Application Insights user can't have access to both workspaces. This can be done by setting the Log Analytics [Access control mode](../logs/log-analytics-workspace-overview.md#permissions) to **Requires workspace permissions** and ensuring through [Azure role-based access control (Azure RBAC)](./resources-roles-access-control.md) that the user only has access to the Log Analytics workspace the Application Insights resource is based on. > > These steps are necessary because Application Insights accesses telemetry across Application Insight resources (including Log Analytics workspaces) to provide complete end-to-end transaction operations and accurate application maps. Because diagnostic logs use the same table names, duplicate telemetry can be displayed if the user has access to multiple resources containing the same data. <!--Link references--> [exportasa]: ../../stream-analytics/app-insights-export-sql-stream-analytics.md-[roles]: ./resources-roles-access-control.md +[roles]: ./resources-roles-access-control.md |
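A minimal sketch of creating the diagnostic settings export discussed above, assuming placeholder resource IDs and a placeholder log category; confirm the categories your Application Insights resource actually exposes before using this.

```azurecli
# Route an Application Insights log category to a Log Analytics workspace
# (placeholder IDs and category; the destination must not be the workspace
# that the Application Insights resource itself is based on).
az monitor diagnostic-settings create \
    --name export-to-workspace \
    --resource "/subscriptions/<sub-id>/resourceGroups/MyRG/providers/microsoft.insights/components/MyAppInsights" \
    --workspace "/subscriptions/<sub-id>/resourceGroups/MyRG/providers/Microsoft.OperationalInsights/workspaces/MyOtherWorkspace" \
    --logs '[{"category":"AppRequests","enabled":true}]'
```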
azure-monitor | Autoscale Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-best-practices.md | We recommend that you do NOT explicitly set your agent to use only TLS 1.2 unless absolutely necessary. ## Next Steps-- [Autoscale flapping](/azure/azure-monitor/autoscale/autoscale-flapping)+- [Autoscale flapping](./autoscale-flapping.md) - [Create an Activity Log Alert to monitor all autoscale engine operations on your subscription.](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-alert)-- [Create an Activity Log Alert to monitor all failed autoscale scale in/scale out operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-failed-alert)+- [Create an Activity Log Alert to monitor all failed autoscale scale in/scale out operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-failed-alert) |
azure-monitor | Autoscale Flapping | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-flapping.md | Below is an example of an activity log record for flapping: To learn more about autoscale, see the following resources: -* [Overview of common autoscale patterns](/azure/azure-monitor/autoscale/autoscale-common-scale-patterns) -* [Automatically scale a virtual machine scale set](/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell) -* [Use autoscale actions to send email and webhook alert notifications](/azure/azure-monitor/autoscale/autoscale-webhook-email) +* [Overview of common autoscale patterns](./autoscale-common-scale-patterns.md) +* [Automatically scale a virtual machine scale set](../../virtual-machine-scale-sets/tutorial-autoscale-powershell.md) +* [Use autoscale actions to send email and webhook alert notifications](./autoscale-webhook-email.md) |
azure-monitor | Autoscale Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-overview.md | This article describes Microsoft Azure autoscale and its benefits. Azure autoscale supports many resource types. For more information about supported resources, see [autoscale supported resources](#supported-services-for-autoscale). > [!NOTE]-> [Availability sets](/archive/blogs/kaevans/autoscaling-azurevirtual-machines) are an older scaling feature for virtual machines with limited support. We recommend migrating to [virtual machine scale sets](/azure/virtual-machine-scale-sets/overview) for faster and more reliable autoscale support. +> [Availability sets](/archive/blogs/kaevans/autoscaling-azurevirtual-machines) are an older scaling feature for virtual machines with limited support. We recommend migrating to [virtual machine scale sets](../../virtual-machine-scale-sets/overview.md) for faster and more reliable autoscale support. ## What is autoscale Resources generate metrics that are used in autoscale rules to trigger scale eve ### Custom metrics -Use your own custom metrics that your application generates. Configure your application to send metrics to [Application Insights](/azure/azure-monitor/app/app-insights-overview) so you can use those metrics to decide when to scale. +Use your own custom metrics that your application generates. Configure your application to send metrics to [Application Insights](../app/app-insights-overview.md) so you can use those metrics to decide when to scale. ### Time Rules can trigger one or more actions. Actions include: * Scale - Scale resources in or out. * Email - Send an email to the subscription admins, co-admins, and/or any other email address. * Webhooks - Call webhooks to trigger multiple complex actions inside or outside Azure. In Azure, you can:- * Start an [Azure Automation runbook](/azure/automation/overview). - * Call an [Azure Function](/azure/azure-functions/functions-overview). - * Trigger an [Azure Logic App](/azure/logic-apps/logic-apps-overview). + * Start an [Azure Automation runbook](../../automation/overview.md). - * Call an [Azure Function](../../azure-functions/functions-overview.md). + * Trigger an [Azure Logic App](../../logic-apps/logic-apps-overview.md). ## Autoscale settings |
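To make the scale actions above concrete, here's a hedged Azure CLI sketch that creates an autoscale setting and a scale-out rule for a virtual machine scale set; all names and thresholds are illustrative.

```azurecli
# Create an autoscale setting for a scale set (placeholder names),
# then add a rule that scales out by one instance when average CPU exceeds 70%.
az monitor autoscale create \
    --resource-group MyResourceGroup \
    --resource MyScaleSet \
    --resource-type Microsoft.Compute/virtualMachineScaleSets \
    --name MyAutoscaleSetting \
    --min-count 2 --max-count 10 --count 2

az monitor autoscale rule create \
    --resource-group MyResourceGroup \
    --autoscale-name MyAutoscaleSetting \
    --condition "Percentage CPU > 70 avg 5m" \
    --scale out 1
```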
azure-monitor | Container Insights Enable Aks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-aks.md | kubectl get deployment ama-logs-rs -n=kube-system The output should resemble the following, which indicates that it was deployed properly: ```output-User@aksuser:~$ kubectl get deployment omsagent-rs -n=kube-system +User@aksuser:~$ kubectl get deployment ama-logs-rs -n=kube-system NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE ama-logs-rs 1 1 1 1 3h ``` |
azure-monitor | Metrics Supported | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md | This latest update adds a new column and reorders the metrics to be alphabetical |||||||| |AddRegion|Yes|Region Added|Count|Count|Region Added|Region| |AutoscaleMaxThroughput|No|Autoscale Max Throughput|Count|Maximum|Autoscale Max Throughput|DatabaseName, CollectionName|-|AvailableStorage|No|(deprecated) Available Storage|Bytes|Total|"Available Storage" will be removed from Azure Monitor at the end of September 2023. Azure Cosmos DB collection storage size is now unlimited. The only restriction is that the storage size for each logical partition key is 20GB. You can enable PartitionKeyStatistics in Diagnostic Log to know the storage consumption for top partition keys. For more information, see [Azure Cosmos DB service quotas](/azure/cosmos-db/concepts-limits). After deprecation, the remaining alert rules still defined on the deprecated metric will be automatically disabled post the deprecation date.|CollectionName, DatabaseName, Region| +|AvailableStorage|No|(deprecated) Available Storage|Bytes|Total|"Available Storage" will be removed from Azure Monitor at the end of September 2023. Azure Cosmos DB collection storage size is now unlimited. The only restriction is that the storage size for each logical partition key is 20GB. You can enable PartitionKeyStatistics in Diagnostic Log to know the storage consumption for top partition keys. For more information, see [Azure Cosmos DB service quotas](../../cosmos-db/concepts-limits.md). After deprecation, the remaining alert rules still defined on the deprecated metric will be automatically disabled post the deprecation date.|CollectionName, DatabaseName, Region| |CassandraConnectionClosures|No|Cassandra Connection Closures|Count|Total|Number of Cassandra connections that were closed, reported at a 1 minute granularity|Region, ClosureReason| |CassandraConnectorAvgReplicationLatency|No|Cassandra Connector Average ReplicationLatency|MilliSeconds|Average|Cassandra Connector Average ReplicationLatency|No Dimensions| |CassandraConnectorReplicationHealthStatus|No|Cassandra Connector Replication Health Status|Count|Count|Cassandra Connector Replication Health Status|NotStarted, ReplicationInProgress, Error| This latest update adds a new column and reorders the metrics to be alphabetical - [Read about metrics in Azure Monitor](../data-platform.md) - [Create alerts on metrics](../alerts/alerts-overview.md)-- [Export metrics to storage, Event Hub, or Log Analytics](../essentials/platform-logs-overview.md)+- [Export metrics to storage, Event Hub, or Log Analytics](../essentials/platform-logs-overview.md) |
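As a hedged example of reading one of the metrics listed above, this Azure CLI sketch retrieves the `AddRegion` metric for an Azure Cosmos DB account; the resource ID is a placeholder.

```azurecli
# List the AddRegion metric for a Cosmos DB account (placeholder resource ID).
az monitor metrics list \
    --resource "/subscriptions/<sub-id>/resourceGroups/MyRG/providers/Microsoft.DocumentDB/databaseAccounts/MyCosmosAccount" \
    --metric "AddRegion" \
    --aggregation Count
```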
azure-monitor | Basic Logs Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md | Title: Configure Basic Logs in Azure Monitor -description: Configure a table for Basic Logs in Azure Monitor. +description: Learn how to configure a table for Basic Logs in Azure Monitor. Last updated 10/01/2022 Last updated 10/01/2022 # Configure Basic Logs in Azure Monitor -Setting a table's [log data plan](log-analytics-workspace-overview.md#log-data-plans) to *Basic Logs* lets you save on the cost of storing high-volume verbose logs you use for debugging, troubleshooting and auditing, but not for analytics and alerts. This article describes how to configure Basic Logs for a particular table in your Log Analytics workspace. +Setting a table's [log data plan](log-analytics-workspace-overview.md#log-data-plans) to **Basic Logs** lets you save on the cost of storing high-volume verbose logs you use for debugging, troubleshooting, and auditing, but not for analytics and alerts. This article describes how to configure Basic Logs for a particular table in your Log Analytics workspace. > [!IMPORTANT]-> You can switch a table's plan once a week. The Basic Logs feature is not available for workspaces in [legacy pricing tiers](cost-logs.md#legacy-pricing-tiers). +> You can switch a table's plan once a week. The Basic Logs feature isn't available for workspaces in [legacy pricing tiers](cost-logs.md#legacy-pricing-tiers). ## Which tables support Basic Logs?-By default, all tables in your Log Analytics are Analytics tables, and available for query and alerts. -You can currently configure the following tables for Basic Logs: -- All custom tables created with or migrated to the [Data Collection Rule (DCR)-based logs ingestion API.](logs-ingestion-api-overview.md) -- [ContainerLogV2](/azure/azure-monitor/reference/tables/containerlogv2) - Used in [Container Insights](../containers/container-insights-overview.md) and includes verbose text-based log records.-- [AppTraces](/azure/azure-monitor/reference/tables/apptraces) - Freeform Application Insights traces.-- [ContainerAppConsoleLogs](/azure/azure-monitor/reference/tables/ContainerAppConsoleLogs) - Logs generated by Container Apps, within a Container App environment.+By default, all tables in your Log Analytics workspace are Analytics tables, and they're available for query and alerts. You can currently configure the following tables for Basic Logs: ++- Custom tables: All custom tables created with or migrated to the [data collection rule (DCR)-based logs ingestion API.](logs-ingestion-api-overview.md) +- [ContainerLogV2](/azure/azure-monitor/reference/tables/containerlogv2): Used in [Container insights](../containers/container-insights-overview.md) and includes verbose text-based log records. +- [AppTraces](/azure/azure-monitor/reference/tables/apptraces): Freeform Application Insights traces. +- [ContainerAppConsoleLogs](/azure/azure-monitor/reference/tables/ContainerAppConsoleLogs): Logs generated by Azure Container Apps, within a Container Apps environment. > [!NOTE]-> Tables created with the [Data Collector API](data-collector-api.md) do not support Basic Logs. +> Tables created with the [Data Collector API](data-collector-api.md) don't support Basic Logs. ## Set table configuration To configure a table for Basic Logs or Analytics Logs in the Azure portal: 1. From the **Log Analytics workspaces** menu, select **Tables**. - The **Tables** screen lists all of the tables in the workspace. 
+ The **Tables** screen lists all the tables in the workspace. 1. Select the context menu for the table you want to configure and select **Manage table**. - :::image type="content" source="media/basic-logs-configure/log-analytics-table-configuration.png" lightbox="media/basic-logs-configure/log-analytics-table-configuration.png" alt-text="Screenshot showing the Manage table button for one of the tables in a workspace."::: + :::image type="content" source="media/basic-logs-configure/log-analytics-table-configuration.png" lightbox="media/basic-logs-configure/log-analytics-table-configuration.png" alt-text="Screenshot that shows the Manage table button for one of the tables in a workspace."::: 1. From the **Table plan** dropdown on the table configuration screen, select **Basic** or **Analytics**. The **Table plan** dropdown is enabled only for [tables that support Basic Logs](#which-tables-support-basic-logs). - :::image type="content" source="media/basic-logs-configure/log-analytics-configure-table-plan.png" lightbox="media/basic-logs-configure/log-analytics-configure-table-plan.png" alt-text="Screenshot showing the Table plan dropdown on the table configuration screen."::: + :::image type="content" source="media/basic-logs-configure/log-analytics-configure-table-plan.png" lightbox="media/basic-logs-configure/log-analytics-configure-table-plan.png" alt-text="Screenshot that shows the Table plan dropdown on the table configuration screen."::: 1. Select **Save**. To configure a table for Basic Logs or Analytics Logs, call the **Tables - Updat ```http PATCH https://management.azure.com/subscriptions/<subscriptionId>/resourcegroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>/tables/<tableName>?api-version=2021-12-01-preview ```+ > [!IMPORTANT]-> Use the Bearer token for authentication. Read more about [using Bearer tokens](https://social.technet.microsoft.com/wiki/contents/articles/51140.azure-rest-management-api-the-quickest-way-to-get-your-bearer-token.aspx). +> Use the bearer token for authentication. Learn more about [using bearer tokens](https://social.technet.microsoft.com/wiki/contents/articles/51140.azure-rest-management-api-the-quickest-way-to-get-your-bearer-token.aspx). **Request body** |Name | Type | Description | | | | |-|properties.plan | string | The table plan. Possible values are *Analytics* and *Basic*.| +|properties.plan | string | The table plan. Possible values are `Analytics` and `Basic`.| **Example** This example configures the `ContainerLogV2` table for Basic Logs. -Container Insights uses ContainerLog by default. To switch to using ContainerLogV2 for Container Insights, [enable the ContainerLogV2 schema](../containers/container-insights-logging-v2.md) before you convert the table to Basic Logs. +Container insights uses `ContainerLog` by default. To switch to using `ContainerLogV2` for Container insights, [enable the ContainerLogV2 schema](../containers/container-insights-logging-v2.md) before you convert the table to Basic Logs. **Sample request** Use this request body to change to Analytics Logs: **Sample response** -This is the response for a table changed to Basic Logs. 
+This sample is the response for a table changed to Basic Logs: Status code: 200 For example: ```azurecli az monitor log-analytics workspace table update --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name ContainerLogV2 --plan Analytics ```- + ## Check table configuration+ # [Portal](#tab/portal-2) -To check table configuration in the Azure portal, you can open the table configuration screen, as described in [Set table configuration](#set-table-configuration). +To check table configuration in the Azure portal, you can open the table configuration screen, as described in [Set table configuration](#set-table-configuration). -Alternatively: +Alternatively: -1. From the **Azure Monitor** menu, select **Logs** and select your workspace for the [scope](scope.md). See [Log Analytics tutorial](log-analytics-tutorial.md#view-table-information) for a walkthrough. -1. Open the **Tables** tab, which lists all tables in the workspace. +1. From the **Azure Monitor** menu, select **Logs** and select your workspace for the [scope](scope.md). See the [Log Analytics tutorial](log-analytics-tutorial.md#view-table-information) for a walkthrough. +1. Open the **Tables** tab, which lists all tables in the workspace. - Basic Logs tables have a unique icon: + Basic Logs tables have a unique icon: - :::image type="content" source="media/basic-logs-configure/table-icon.png" alt-text="Screenshot of the Basic Logs table icon in the table list." lightbox="media/basic-logs-configure/table-icon.png"::: + :::image type="content" source="media/basic-logs-configure/table-icon.png" alt-text="Screenshot that shows the Basic Logs table icon in the table list." lightbox="media/basic-logs-configure/table-icon.png"::: You can also hover over a table name for the table information view, which indicates whether the table is configured as Basic Logs: - :::image type="content" source="media/basic-logs-configure/table-info.png" alt-text="Screenshot of the Basic Logs table indicator in the table details." lightbox="media/basic-logs-configure/table-info.png"::: - + :::image type="content" source="media/basic-logs-configure/table-info.png" alt-text="Screenshot that shows the Basic Logs table indicator in the table details." lightbox="media/basic-logs-configure/table-info.png"::: + # [API](#tab/api-2) To check the configuration of a table, call the **Tables - Get** API: To check the configuration of a table, call the **Tables - Get** API: GET https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/{tableName}?api-version=2021-12-01-preview ``` -**Response Body** +**Response body** |Name | Type | Description | | | | |-|properties.plan | string | The table plan. Either "Analytics" or "Basic". | -|properties.retentionInDays | integer | The table's data retention in days. In _Basic Logs_, the value is 8 days, fixed. In _Analytics Logs_, the value is between 7 and 730.| -|properties.totalRetentionInDays | integer | The table's data retention including Archive period| +|properties.plan | string | The table plan. Either `Analytics` or `Basic`. | +|properties.retentionInDays | integer | The table's data retention in days. In `Basic Logs`, the value is 8 days, fixed. 
In `Analytics Logs`, the value is between 7 and 730 days.| +|properties.totalRetentionInDays | integer | The table's data retention that also includes the archive period.| |properties.archiveRetentionInDays|integer|The table's archive period (read-only, calculated).|-|properties.lastPlanModifiedDate|String|Last time when plan was set for this table. Null if no change was ever done from the default settings (read-only) +|properties.lastPlanModifiedDate|String|Last time when the plan was set for this table. Null if no change was ever done from the default settings (read-only). -**Sample Request** +**Sample request** ```http GET https://management.azure.com/subscriptions/ContosoSID/resourcegroups/ContosoRG/providers/Microsoft.OperationalInsights/workspaces/ContosoWorkspace/tables/ContainerLogV2?api-version=2021-12-01-preview ``` --**Sample Response** +**Sample response** Status code: 200 ```http az monitor log-analytics workspace table show --subscription ContosoSID --resour -## Retention and archiving of Basic Logs +## Retain and archive Basic Logs Analytics tables retain data based on a [retention and archive policy](data-retention-archive.md) you set. -Basic Logs tables retain data for eight days. When you change an existing table's plan to Basic Logs, Azure archives data that is more than eight days old but still within the table's original retention period. +Basic Logs tables retain data for eight days. When you change an existing table's plan to Basic Logs, Azure archives data that's more than eight days old but still within the table's original retention period. ## Next steps -- [Learn more about the different log plans.](log-analytics-workspace-overview.md#log-data-plans)-- [Query data in Basic Logs.](basic-logs-query.md)+- [Learn more about the different log plans](log-analytics-workspace-overview.md#log-data-plans) +- [Query data in Basic Logs](basic-logs-query.md) |
azure-monitor | Cost Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md | ms.reviwer: dalek git # Azure Monitor Logs pricing details-The most significant charges for most Azure Monitor implementations will typically be ingestion and retention of data in your Log Analytics workspaces. Several features in Azure Monitor do not have a direct cost but add to the workspace data that's collected. This article describes how data charges are calculated for your Log Analytics workspaces and Application Insights resources and the different configuration options that affect your costs. ++The most significant charges for most Azure Monitor implementations will typically be ingestion and retention of data in your Log Analytics workspaces. Several features in Azure Monitor don't have a direct cost but add to the workspace data that's collected. This article describes how data charges are calculated for your Log Analytics workspaces and Application Insights resources and the different configuration options that affect your costs. ## Pricing model-The default pricing for Log Analytics is a Pay-As-You-Go model that's based on ingested data volume and data retention. Each Log Analytics workspace is charged as a separate service and contributes to the bill for your Azure subscription. [Pricing for Log Analytics](https://azure.microsoft.com/pricing/details/monitor/) is set regionally. The amount of data ingestion can be considerable, depending on the following factors: -- The set of management solutions enabled and their configuration-- The number and type of monitored resources-- The types of data collected from each monitored resource+The default pricing for Log Analytics is a pay-as-you-go model that's based on ingested data volume and data retention. Each Log Analytics workspace is charged as a separate service and contributes to the bill for your Azure subscription. [Pricing for Log Analytics](https://azure.microsoft.com/pricing/details/monitor/) is set regionally. The amount of data ingestion can be considerable, depending on: ++- The set of management solutions enabled and their configuration. +- The number and type of monitored resources. +- The types of data collected from each monitored resource. ## Data size calculation-Data volume is measured as the size of the data that will be stored in GB (10^9 bytes). The data size of a single record is calculated from a string representation of the columns that are stored in the Log Analytics workspace for that record, regardless of whether the data is sent from an agent or added during the ingestion process. This includes any custom columns added by the [logs ingestion API](logs-ingestion-api-overview.md), [transformations](../essentials/data-collection-transformations.md), or [custom fields](custom-fields.md) that are added as data is collected and then stored in the workspace. ++Data volume is measured as the size of the data that will be stored in GB (10^9 bytes). The data size of a single record is calculated from a string representation of the columns that are stored in the Log Analytics workspace for that record. It doesn't matter whether the data is sent from an agent or added during the ingestion process. This calculation includes any custom columns added by the [logs ingestion API](logs-ingestion-api-overview.md), [transformations](../essentials/data-collection-transformations.md) or [custom fields](custom-fields.md) that are added as data is collected and then stored in the workspace. 
>[!NOTE]->The billable data volume calculation is generally substantially smaller than the size of the entire incoming JSON-packaged event. Including the effect of the standard columns excluded from billing, on average across all event types the billed size is around 25% less than the incoming data size. This can be up to 50% for small events. It is essential to understand this calculation of billed data size when estimating costs and comparing to other pricing models. +>The billable data volume calculation is generally substantially smaller than the size of the entire incoming JSON-packaged event. On average, across all event types, the billed size is around 25 percent less than the incoming data size. It can be up to 50 percent for small events. The percentage includes the effect of the standard columns excluded from billing. It's essential to understand this calculation of billed data size when you estimate costs and compare other pricing models. ### Excluded columns-The following [standard columns](log-standard-columns.md) that are common to all tables, are excluded in the calculation of the record size. All other columns stored in Log Analytics are included in the calculation of the record size. ++The following [standard columns](log-standard-columns.md) are common to all tables and are excluded in the calculation of the record size. All other columns stored in Log Analytics are included in the calculation of the record size. The standard columns are: - `_ResourceId` - `_SubscriptionId` The following [standard columns](log-standard-columns.md) that are common to all - `_BilledSize` - `Type` - ### Excluded tables-Some tables are free from data ingestion charges altogether, including [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity), [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat), [Usage](/azure/azure-monitor/reference/tables/usage), [Operation](/azure/azure-monitor/reference/tables/operation). This will always be indicated by the [_IsBillable](log-standard-columns.md#_isbillable) column, which indicates whether a record was excluded from billing for data ingestion. - +Some tables are free from data ingestion charges altogether, including [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity), [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat), [Usage](/azure/azure-monitor/reference/tables/usage), and [Operation](/azure/azure-monitor/reference/tables/operation). This information will always be indicated by the [_IsBillable](log-standard-columns.md#_isbillable) column, which indicates whether a record was excluded from billing for data ingestion. ### Charges for other solutions and services-Some solutions have more specific policies about free data ingestion. For example [Azure Migrate](https://azure.microsoft.com/pricing/details/azure-migrate/) makes dependency visualization data free for the first 180-days of a Server Assessment. Services such as [Microsoft Defender for Cloud](https://azure.microsoft.com/pricing/details/azure-defender/), [Microsoft Sentinel](https://azure.microsoft.com/pricing/details/azure-sentinel/), and [Configuration management](https://azure.microsoft.com/pricing/details/automation/) have their own pricing models. ++Some solutions have more specific policies about free data ingestion. For example, [Azure Migrate](https://azure.microsoft.com/pricing/details/azure-migrate/) makes dependency visualization data free for the first 180 days of a Server Assessment. 
Services such as [Microsoft Defender for Cloud](https://azure.microsoft.com/pricing/details/azure-defender/), [Microsoft Sentinel](https://azure.microsoft.com/pricing/details/azure-sentinel/), and [configuration management](https://azure.microsoft.com/pricing/details/automation/) have their own pricing models. See the documentation for different services and solutions for any unique billing calculations. -## Commitment Tiers -In addition to the Pay-As-You-Go model, Log Analytics has **Commitment Tiers**, which can save you as much as 30 percent compared to the Pay-As-You-Go price. With commitment tier pricing, you can commit to buy data ingestion for a workspace, starting at 100 GB/day, at a lower price than Pay-As-You-Go pricing. Any usage above the commitment level (overage) is billed at that same price per GB as provided by the current commitment tier. The commitment tiers have a 31-day commitment period from the time a commitment tier is selected. +## Commitment tiers -- During the commitment period, you can change to a higher commitment tier (which restarts the 31-day commitment period), but you can't move back to Pay-As-You-Go or to a lower commitment tier until after you finish the commitment period. -- At the end of the commitment period, the workspace retains the selected commitment tier, and the workspace can be moved to Pay-As-You-Go or to a different commitment tier at any time. - -Billing for the commitment tiers is done per workspace on a daily basis. If the workspace is part of a [dedicated cluster](#dedicated-clusters), the billing is done for the cluster (see below). See [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) for a detailed listing of the commitment tiers and their prices. +In addition to the pay-as-you-go model, Log Analytics has *commitment tiers*, which can save you as much as 30 percent compared to the pay-as-you-go price. With commitment tier pricing, you can commit to buy data ingestion for a workspace, starting at 100 GB per day, at a lower price than pay-as-you-go pricing. Any usage above the commitment level (overage) is billed at that same price per GB as provided by the current commitment tier. The commitment tiers have a 31-day commitment period from the time a commitment tier is selected. -Azure Commitment Discounts such as those received from [Microsoft Enterprise Agreements](https://www.microsoft.com/licensing/licensing-programs/enterprise) are applied to Azure Monitor Logs Commitment Tier pricing just as they are to Pay-As-You-Go pricing (whether the usage is being billed per workspace or per dedicated cluster). +- During the commitment period, you can change to a higher commitment tier, which restarts the 31-day commitment period. You can't move back to pay-as-you-go or to a lower commitment tier until after you finish the commitment period. +- At the end of the commitment period, the workspace retains the selected commitment tier, and the workspace can be moved to pay-as-you-go or to a different commitment tier at any time. ++Billing for the commitment tiers is done per workspace on a daily basis. If the workspace is part of a [dedicated cluster](#dedicated-clusters), the billing is done for the cluster. See the following "Dedicated clusters" section. For a list of the commitment tiers and their prices, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/). 
++Azure Commitment Discounts, such as discounts received from [Microsoft Enterprise Agreements](https://www.microsoft.com/licensing/licensing-programs/enterprise), are applied to Azure Monitor Logs commitment-tier pricing just as they are to pay-as-you-go pricing. Discounts are applied whether the usage is being billed per workspace or per dedicated cluster. > [!TIP]-> The **Usage and estimated costs** menu item for each Log Analytics workspace shows an estimate of your monthly charges at each commitment level. You should periodically review this information to determine if you can reduce your charges by moving to another tier. See [Usage and estimated costs](../usage-estimated-costs.md#usage-and-estimated-costs) for information on this view. +> The **Usage and estimated costs** menu item for each Log Analytics workspace shows an estimate of your monthly charges at each commitment level. Review this information periodically to determine if you can reduce your charges by moving to another tier. For information on this view, see [Usage and estimated costs](../usage-estimated-costs.md#usage-and-estimated-costs). ## Dedicated clusters-An [Azure Monitor Logs dedicated cluster](logs-dedicated-clusters.md) is a collection of workspaces in a single managed Azure Data Explorer cluster. Dedicated clusters support advanced features such as [customer-managed keys](customer-managed-keys.md) and use the same commitment tier pricing model as workspaces although they must have a commitment level of at least 500 GB/day. Any usage above the commitment level (overage) is billed at that same price per GB as provided by the current commitment tier. There is no Pay-As-You-Go option for clusters. -The cluster commitment tier has a 31-day commitment period after the commitment level is increased. During the commitment period, the commitment tier level can't be reduced, but it can be increased at any time. When workspaces are associated to a cluster, the data ingestion billing for those workspaces is done at the cluster level using the configured commitment tier level. +An [Azure Monitor Logs dedicated cluster](logs-dedicated-clusters.md) is a collection of workspaces in a single managed Azure Data Explorer cluster. Dedicated clusters support advanced features, such as [customer-managed keys](customer-managed-keys.md), and use the same commitment-tier pricing model as workspaces, although they must have a commitment level of at least 500 GB per day. Any usage above the commitment level (overage) is billed at that same price per GB as provided by the current commitment tier. There's no pay-as-you-go option for clusters. -There are two modes of billing for a cluster that you specify when you create the cluster. +The cluster commitment tier has a 31-day commitment period after the commitment level is increased. During the commitment period, the commitment tier level can't be reduced, but it can be increased at any time. When workspaces are associated to a cluster, the data ingestion billing for those workspaces is done at the cluster level by using the configured commitment tier level. -- **Cluster (default)**: Billing for ingested data is done at the cluster level. The ingested data quantities from each workspace associated to a cluster are aggregated to calculate the daily bill for the cluster. Per-node allocations from [Microsoft Defender for Cloud](../../security-center/index.yml) are applied at the workspace level prior to this aggregation of data across all workspaces in the cluster. 
+There are two modes of billing for a cluster that you specify when you create the cluster: -- **Workspaces**: Commitment tier costs for your cluster are attributed proportionately to the workspaces in the cluster, by each workspace's data ingestion volume (after accounting for per-node allocations from [Microsoft Defender for Cloud](../../security-center/index.yml) for each workspace.)<br><br>If the total data volume ingested into a cluster for a day is less than the commitment tier, each workspace is billed for its ingested data at the effective per-GB commitment tier rate by billing them a fraction of the commitment tier. The unused part of the commitment tier is then billed to the cluster resource.<br><br>If the total data volume ingested into a cluster for a day is more than the commitment tier, each workspace is billed for a fraction of the commitment tier, based on its fraction of the ingested data that day and each workspace for a fraction of the ingested data above the commitment tier. If the total data volume ingested into a workspace for a day is above the commitment tier, nothing is billed to the cluster resource.+- **Cluster (default)**: Billing for ingested data is done at the cluster level. The ingested data quantities from each workspace associated to a cluster are aggregated to calculate the daily bill for the cluster. Per-node allocations from [Microsoft Defender for Cloud](../../security-center/index.yml) are applied at the workspace level prior to this aggregation of data across all workspaces in the cluster. ++- **Workspaces**: Commitment tier costs for your cluster are attributed proportionately to the workspaces in the cluster, by each workspace's data ingestion volume (after accounting for per-node allocations from [Microsoft Defender for Cloud](../../security-center/index.yml) for each workspace).<br><br>If the total data volume ingested into a cluster for a day is less than the commitment tier, each workspace is billed for its ingested data at the effective per-GB rate of the commitment tier, so each workspace pays for a fraction of the commitment tier. The unused part of the commitment tier is then billed to the cluster resource.<br><br>If the total data volume ingested into a cluster for a day is more than the commitment tier, each workspace is billed for a fraction of the commitment tier based on its fraction of the ingested data that day, plus a fraction of the ingested data above the commitment tier. In this case, nothing is billed to the cluster resource. A sketch of this attribution appears after the following paragraphs. In cluster billing options, data retention is billed for each workspace. Cluster billing starts when the cluster is created, regardless of whether workspaces are associated with the cluster. -When you link workspaces to a cluster, the pricing tier is changed to cluster, and ingestion is billed based on the cluster's commitment tier. Workspaces associated to a cluster no longer have their own pricing tier. Workspaces can be unlinked from a cluster at any time, and pricing tier change to per-GB. +When you link workspaces to a cluster, the pricing tier is changed to cluster, and ingestion is billed based on the cluster's commitment tier. Workspaces associated to a cluster no longer have their own pricing tier. Workspaces can be unlinked from a cluster at any time, and the pricing tier can be changed to per GB.
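Here's the sketch mentioned in the **Workspaces** billing mode description: a rough Python illustration of how a cluster's commitment tier cost is attributed across workspaces, assuming no Defender for Cloud per-node allocations and a hypothetical tier price.

```python
# Hypothetical sketch of the "Workspaces" billing mode on a dedicated
# cluster. Tier size, price, and ingestion volumes are placeholders.

def attribute_cluster_costs(ingested_gb, tier_gb, tier_price):
    """Attribute commitment-tier costs proportionally to workspaces.

    Below the tier, each workspace pays the effective per-GB tier rate
    for its own data and the unused remainder is billed to the cluster
    resource. Above the tier, workspaces also share the overage and the
    cluster resource is billed nothing.
    """
    per_gb = tier_price / tier_gb
    total = sum(ingested_gb.values())
    if total <= tier_gb:
        bills = {ws: gb * per_gb for ws, gb in ingested_gb.items()}
        bills["cluster"] = (tier_gb - total) * per_gb
    else:
        overage_cost = (total - tier_gb) * per_gb
        bills = {ws: (gb / total) * (tier_price + overage_cost)
                 for ws, gb in ingested_gb.items()}
        bills["cluster"] = 0.0
    return bills

# A 500 GB/day cluster tier at a hypothetical $900/day:
print(attribute_cluster_costs({"ops": 150, "soc": 250}, 500, 900))
# Total is 400 GB (under the tier), so the cluster resource picks up
# the unused 100 GB at the effective $1.80/GB rate.
```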
-If your linked workspace is using legacy Per Node pricing tier, it will be billed based on data ingested against the cluster's Commitment Tier, and no longer Per Node. Per-node data allocations from Microsoft Defender for Cloud will continue to be applied. +If your linked workspace is using the legacy Per Node pricing tier, it will be billed based on data ingested against the cluster's commitment tier, and no longer Per Node. Per-node data allocations from Microsoft Defender for Cloud will continue to be applied. -See [Create a dedicated cluster](logs-dedicated-clusters.md#create-a-dedicated-cluster) for details on creating a dedicated cluster and specifying its billing type. +For more information on how to create a dedicated cluster and specify its billing type, see [Create a dedicated cluster](logs-dedicated-clusters.md#create-a-dedicated-cluster). ## Basic Logs-You can configure certain tables in a Log Analytics workspace to use [Basic Logs](basic-logs-configure.md). Data in these tables has a significantly reduced ingestion charge and a limited retention period. There is a charge though to search against these tables. Basic Logs are intended for high-volume verbose logs you use for debugging, troubleshooting and auditing, but not for analytics and alerts. -The charge for searching against Basic Logs is based on the GB of data scanned in performing the search. +You can configure certain tables in a Log Analytics workspace to use [Basic Logs](basic-logs-configure.md). Data in these tables has a significantly reduced ingestion charge and a limited retention period. There's a charge to search against these tables. Basic Logs are intended for high-volume verbose logs you use for debugging, troubleshooting, and auditing, but not for analytics and alerts. ++The charge for searching against Basic Logs is based on the GB of data scanned in performing the search. -See [Configure Basic Logs in Azure Monitor](basic-logs-configure.md) for details on Basic Logs including how to configure them and query their data. +For more information on Basic Logs, including how to configure them and query their data, see [Configure Basic Logs in Azure Monitor](basic-logs-configure.md). ## Log data retention and archive-In addition to data ingestion, there is a charge for the retention of data in each Log Analytics workspace. You can set the retention period for the entire workspace or for each table. After this period, the data is either removed or archived. Archived Logs have a reduced retention charge, and there is a charge to search against them. Use Archive Logs to reduce your costs for data that you must store for compliance or occasional investigation. -See [Configure data retention and archive policies in Azure Monitor Logs](data-retention-archive.md) for details on data retention and archiving including how to configure these settings and access archived data. +In addition to data ingestion, there's a charge for the retention of data in each Log Analytics workspace. You can set the retention period for the entire workspace or for each table. After this period, the data is either removed or archived. Archived logs have a reduced retention charge, and there's a charge to search against them. Use archived logs to reduce your costs for data that you must store for compliance or occasional investigation. 
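As a rough illustration of the savings, the following sketch compares keeping a year of compliance data in interactive retention against archiving it and occasionally searching it. Every rate below is an invented placeholder; substitute real rates from the pricing page.

```python
# Hypothetical cost comparison: interactive retention vs. archive.
# Every rate below is a placeholder, not a published price.

data_gb = 1000.0           # volume kept beyond the interactive period
months_kept = 12
interactive_rate = 0.10    # $/GB/month, interactive retention (assumed)
archive_rate = 0.02        # $/GB/month, archived logs (assumed)
search_rate = 0.005        # $/GB scanned by a search job (assumed)
searches_per_year = 2      # occasional investigations

interactive_cost = data_gb * interactive_rate * months_kept
archive_cost = (data_gb * archive_rate * months_kept
                + searches_per_year * data_gb * search_rate)

# Archiving wins whenever the reduced retention rate plus occasional
# search charges stays below the interactive retention rate.
print(f"interactive: ${interactive_cost:,.0f}, archive: ${archive_cost:,.0f}")
```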
++For more information on data retention and archiving, including how to configure these settings and access archived data, see [Configure data retention and archive policies in Azure Monitor Logs](data-retention-archive.md). ## Search jobs-Searching against Archived Logs uses [search jobs](search-jobs.md). Search jobs are asynchronous queries that fetch records into a new search table within your workspace for further analytics. Search jobs are billed by the number of GB of data scanned on each day that is accessed to perform the search. ++Searching against archived logs uses [search jobs](search-jobs.md). Search jobs are asynchronous queries that fetch records into a new search table within your workspace for further analytics. Search jobs are billed by the number of GB of data scanned on each day that's accessed to perform the search. ## Log data restore-For situations in which older or archived logs need to be intensively queried with the full analytic query capabilities, the [data restore](restore.md) feature is a powerful tool. The restore operation makes a specific time range of data in a table available in the hot cache for high-performance queries. You can later dismiss the data when you're done. Log data restore is billed by the amount of data restored, and by the time the restore is kept active. The minimal values billed for any data restore are 2 TB and 12 hours. Data restored of more than 2 TB and/or more than 12 hours in duration are billed on a pro-rated basis. ++For situations in which older or archived logs must be intensively queried with the full analytic query capabilities, the [data restore](restore.md) feature is a powerful tool. The restore operation makes a specific time range of data in a table available in the hot cache for high-performance queries. You can later dismiss the data when you're finished. Log data restore is billed by the amount of data restored, and by the time the restore is kept active. The minimal values billed for any data restore are 2 TB and 12 hours. Data restored of more than 2 TB and/or more than 12 hours in duration is billed on a pro-rated basis. ## Log data export-[Data export](logs-data-export.md) in Log Analytics workspace lets you continuously export data per selected tables in your workspace, to an Azure Storage Account or Azure Event Hubs as it arrives to Azure Monitor pipeline. Charges for the use of data export are based on the amount of data exported. The size of data exported is the number of bytes in the exported JSON formatted data. -## Application insights billing -Since [workspace-based Application Insights resources](../app/create-workspace-resource.md) store their data in a Log Analytics workspace, the billing for data ingestion and retention is done by the workspace where the Application Insights data is located. This enables you to leverage all options of the Log Analytics pricing model, including [commitment tiers](#commitment-tiers) in addition to Pay-As-You-Go. +[Data export](logs-data-export.md) in a Log Analytics workspace lets you continuously export data from selected tables in your workspace to an Azure Storage account or Azure Event Hubs as it arrives at the Azure Monitor pipeline. Charges for the use of data export are based on the amount of data exported. The size of data exported is the number of bytes in the exported JSON-formatted data.
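Because the exported size is the byte count of the JSON-formatted data, you can estimate export volume from a representative record. A minimal sketch follows; the record's field names are invented for illustration.

```python
import json

# Estimate daily data-export volume from one representative record.
# The record shape below is invented for illustration only.
sample_record = {
    "TimeGenerated": "2022-10-01T12:00:00Z",
    "Computer": "web-01",
    "Message": "Request completed in 42 ms",
}

bytes_per_record = len(json.dumps(sample_record).encode("utf-8"))
records_per_day = 50_000_000  # assumed ingestion rate
print(f"~{bytes_per_record * records_per_day / 1e9:.1f} GB exported per day")
```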
++## Application Insights billing -Data ingestion and data retention for a [classic Application Insights resource](../app/create-new-resource.md) follow the same Pay-As-You-Go pricing as workspace-based resources, but they can't leverage commitment tiers. +Because [workspace-based Application Insights resources](../app/create-workspace-resource.md) store their data in a Log Analytics workspace, the billing for data ingestion and retention is done by the workspace where the Application Insights data is located. For this reason, you can use all options of the Log Analytics pricing model, including [commitment tiers](#commitment-tiers), along with pay-as-you-go. -Telemetry from ping tests and multi-step tests is charged the same as data usage for other telemetry from your app. Use of web tests and enabling alerting on custom metric dimensions is still reported through Application Insights. There's no data volume charge for using the [Live Metrics Stream](../app/live-stream.md). +Data ingestion and data retention for a [classic Application Insights resource](../app/create-new-resource.md) follow the same pay-as-you-go pricing as workspace-based resources, but they can't use commitment tiers. -See [Application Insights legacy enterprise (per node) pricing tier](../app/legacy-pricing.md) for details about legacy tiers that are available to early adopters of Application Insights. +Telemetry from ping tests and multi-step tests is charged the same as data usage for other telemetry from your app. Use of web tests and enabling alerting on custom metric dimensions is still reported through Application Insights. There's no data volume charge for using [Live Metrics Stream](../app/live-stream.md). ++For more information about legacy tiers that are available to early adopters of Application Insights, see [Application Insights legacy enterprise (per node) pricing tier](../app/legacy-pricing.md). ## Workspaces with Microsoft Sentinel-When Microsoft Sentinel is enabled in a Log Analytics workspace, all data collected in that workspace is subject to Sentinel charges in addition to Log Analytics charges. For this reason, you will often separate your security and operational data in different workspaces so that you don't incur [Sentinel charges](../../sentinel/billing.md) for operational data. For some particular situations though, combining this data can actually result in a cost savings. This is typically when you aren't collecting enough security and operational data to each reach a commitment tier on their own, but the combined data is enough to reach a commitment tier. See **Combining your SOC and non-SOC data** in [Design your Microsoft Sentinel workspace architecture](../../sentinel/design-your-workspace-architecture.md#decision-tree) for details and a sample cost calculation. ++When Microsoft Sentinel is enabled in a Log Analytics workspace, all data collected in that workspace is subject to Microsoft Sentinel charges along with Log Analytics charges. For this reason, you'll often separate your security and operational data in different workspaces so that you don't incur [Microsoft Sentinel charges](../../sentinel/billing.md) for operational data. ++In some scenarios, combining this data can result in cost savings. Typically, this situation occurs when you aren't collecting enough security and operational data for each to reach a commitment tier on their own, but the combined data is enough to reach a commitment tier. 
For more information and a sample cost calculation, see the section "Combining your SOC and non-SOC data" in [Design your Microsoft Sentinel workspace architecture](../../sentinel/design-your-workspace-architecture.md#decision-tree). + ## Workspaces with Microsoft Defender for Cloud-[Microsoft Defender for Servers (part of Defender for Cloud)](../../security-center/index.yml) [bills by the number of monitored services](https://azure.microsoft.com/pricing/details/azure-defender/) and provides 500 MB/server/day data allocation that is applied to the following subset of [security data types](/azure/azure-monitor/reference/tables/tables-category#security): ++[Microsoft Defender for Servers (part of Defender for Cloud)](../../security-center/index.yml) [bills by the number of monitored services](https://azure.microsoft.com/pricing/details/azure-defender/). It provides 500 MB per server per day of data allocation that's applied to the following subset of [security data types](/azure/azure-monitor/reference/tables/tables-category#security): - [WindowsEvent](/azure/azure-monitor/reference/tables/windowsevent) - [SecurityAlert](/azure/azure-monitor/reference/tables/securityalert) When Microsoft Sentinel is enabled in a Log Analytics workspace, all data collec - [LinuxAuditLog](/azure/azure-monitor/reference/tables/linuxauditlog) - [SysmonEvent](/azure/azure-monitor/reference/tables/sysmonevent) - [ProtectionStatus](/azure/azure-monitor/reference/tables/protectionstatus)-- [Update](/azure/azure-monitor/reference/tables/update) and [UpdateSummary](/azure/azure-monitor/reference/tables/updatesummary) when the Update Management solution isn't running in the workspace or solution targeting is enabled. See [What data types are included in the 500-MB data daily allowance?](../../defender-for-cloud/enhanced-security-features-overview.md#what-data-types-are-included-in-the-500-mb-data-daily-allowance)- +- [Update](/azure/azure-monitor/reference/tables/update) and [UpdateSummary](/azure/azure-monitor/reference/tables/updatesummary) when the Update Management solution isn't running in the workspace or solution targeting is enabled. See [What data types are included in the 500-MB data daily allowance?](../../defender-for-cloud/enhanced-security-features-overview.md#what-data-types-are-included-in-the-500-mb-data-daily-allowance). + The count of monitored servers is calculated on an hourly granularity. The daily data allocation contributions from each monitored server are aggregated at the workspace level. If the workspace is in the legacy Per Node pricing tier, the Microsoft Defender for Cloud and Log Analytics allocations are combined and applied jointly to all billable ingested data. ## Legacy pricing tiers-Subscriptions that contained a Log Analytics workspace or Application Insights resource on April 2, 2018, or are linked to an Enterprise Agreement that started before February 1, 2019 and is still active, will continue to have access to use the following legacy pricing tiers: ++Subscriptions that contained a Log Analytics workspace or Application Insights resource on April 2, 2018, or are linked to an Enterprise Agreement that started before February 1, 2019, and is still active, will continue to have access to use the following legacy pricing tiers: - Standalone (Per GB)-- Per Node (OMS)+- Per Node (Operations Management Suite [OMS]) -Access to the legacy Free Trial pricing tier will be further limited starting July 1, 2022 (see below.) 
+Access to the legacy Free Trial pricing tier was limited on July 1, 2022. ### Free Trial pricing tier-Workspaces in the **Free Trial** pricing tier will have daily data ingestion limited to 500 MB (except for security data types collected by [Microsoft Defender for Cloud](../../security-center/index.yml)), and the data retention is limited to seven days. The Free Trial pricing tier is intended only for evaluation purposes. No SLA is provided for the Free tier. ++Workspaces in the Free Trial pricing tier will have daily data ingestion limited to 500 MB (except for security data types collected by [Microsoft Defender for Cloud](../../security-center/index.yml)). The data retention is limited to seven days. The Free Trial pricing tier is intended only for evaluation purposes. No SLA is provided for the Free Trial tier. > [!NOTE]-> Creating new workspaces in, or moving existing workspaces into, the legacy Free Trial pricing tier is possible only until July 1, 2022. +> Creating new workspaces in, or moving existing workspaces into, the legacy Free Trial pricing tier was possible only until July 1, 2022. ### Standalone pricing tier-Usage on the **Standalone** pricing tier is billed by the ingested data volume. It is reported in the **Log Analytics** service and the meter is named "Data Analyzed". Workspaces in the Standalone pricing tier have user-configurable retention from 30 to 730 days. Workspaces in the Standalone pricing tier do not support the use of [Basic Logs](basic-logs-configure.md). ++Usage on the Standalone pricing tier is billed by the ingested data volume. It's reported in the **Log Analytics** service and the meter is named "Data Analyzed." Workspaces in the Standalone pricing tier have user-configurable retention from 30 to 730 days. Workspaces in the Standalone pricing tier don't support the use of [Basic Logs](basic-logs-configure.md). ### Per Node pricing tier-The **Per Node** pricing tier charges per monitored VM (node) on an hour granularity. For each monitored node, the workspace is allocated 500 MB of data per day that's not billed. This allocation is calculated with hourly granularity and is aggregated at the workspace level each day. Data ingested above the aggregate daily data allocation is billed per GB as data overage. On your bill, the service will be **Insight and Analytics** for Log Analytics usage if the workspace is in the Per Node pricing tier. Workspaces in the Per Node pricing tier have user-configurable retention from 30 to 730 days. Workspaces in the Per Node pricing tier do not support the use of [Basic Logs](basic-logs-configure.md). Usage is reported on three meters: -- **Node**: this is usage for the number of monitored nodes (VMs) in units of node months.-- **Data Overage per Node**: this is the number of GB of data ingested in excess of the aggregated data allocation.-- **Data Included per Node**: this is the amount of ingested data that was covered by the aggregated data allocation. This meter is also used when the workspace is in all pricing tiers to show the amount of data covered by the Microsoft Defender for Cloud.+The Per Node pricing tier charges per monitored VM (node) at hourly granularity. For each monitored node, the workspace is allocated 500 MB of data per day that's not billed. This allocation is calculated with hourly granularity and is aggregated at the workspace level each day. Data ingested above the aggregate daily data allocation is billed per GB as data overage.
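The allocation arithmetic is straightforward; the sketch below shows it with hypothetical volumes. Each monitored node contributes 500 MB per day, accrued hourly and pooled at the workspace level, and only ingestion above the pool is billed as overage.

```python
# Hypothetical sketch of the Per Node tier's pooled daily allocation.

NODE_ALLOCATION_GB_PER_DAY = 0.5  # 500 MB per monitored node per day

def per_node_overage_gb(node_hours: float, ingested_gb: float) -> float:
    """Overage after the pooled allocation; node_hours is the sum of
    monitored hours across all nodes, since allocation accrues hourly."""
    allocation_gb = (node_hours / 24.0) * NODE_ALLOCATION_GB_PER_DAY
    return max(0.0, ingested_gb - allocation_gb)

# Ten nodes monitored all day pool a 5 GB allocation; ingesting 7 GB
# leaves 2 GB billed as data overage.
print(per_node_overage_gb(node_hours=10 * 24, ingested_gb=7.0))
```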
++On your bill, the service will be **Insight and Analytics** for Log Analytics usage if the workspace is in the Per Node pricing tier. Workspaces in the Per Node pricing tier have user-configurable retention from 30 to 730 days. Workspaces in the Per Node pricing tier don't support the use of [Basic Logs](basic-logs-configure.md). Usage is reported on three meters: ++- **Node**: The usage for the number of monitored nodes (VMs) in units of node months. +- **Data Overage per Node**: The number of GB of data ingested in excess of the aggregated data allocation. +- **Data Included per Node**: The amount of ingested data that was covered by the aggregated data allocation. This meter is also used for workspaces in all pricing tiers to show the amount of data covered by Microsoft Defender for Cloud. > [!TIP]-> If your workspace has access to the **Per Node** pricing tier but you're wondering whether it would cost less in a Pay-As-You-Go tier, you can [use the query below](#evaluate-the-legacy-per-node-pricing-tier) for a recommendation. +> If your workspace has access to the **Per Node** pricing tier but you're wondering whether it would cost less in a pay-as-you-go tier, [use the following query](#evaluate-the-legacy-per-node-pricing-tier) for a recommendation. -### Standard and Premium pricing tiers +### Standard and Premium pricing tiers -Workspaces created before April 2016 can continue to use the **Standard** and **Premium** pricing tiers that have fixed data retention of 30 days and 365 days, respectively. New workspaces can't be created in the **Standard** or **Premium** pricing tiers, and if a workspace is moved out of these tiers, it can't be moved back. Workspaces in these pricing tiers do not support the use of [Basic Logs](basic-logs-configure.md). Data ingestion meters on your Azure bill for these legacy tiers are called "Data analyzed." +Workspaces created before April 2016 can continue to use the **Standard** and **Premium** pricing tiers that have fixed data retention of 30 days and 365 days, respectively. New workspaces can't be created in the **Standard** or **Premium** pricing tiers. If a workspace is moved out of these tiers, it can't be moved back. Workspaces in these pricing tiers don't support the use of [Basic Logs](basic-logs-configure.md). Data ingestion meters on your Azure bill for these legacy tiers are called "Data Analyzed." -### Microsoft Defender for Cloud with legacy pricing tiers -Following are considerations between legacy Log Analytics tiers and how usage is billed for [Microsoft Defender for Cloud](../../security-center/index.yml). +### Microsoft Defender for Cloud with legacy pricing tiers ++The following considerations pertain to legacy Log Analytics tiers and how usage is billed for [Microsoft Defender for Cloud](../../security-center/index.yml): - If the workspace is in the legacy Standard or Premium tier, Microsoft Defender for Cloud is billed only for Log Analytics data ingestion, not per node.-- If the workspace is in the legacy Per Node tier, Microsoft Defender for Cloud is billed using the current [Microsoft Defender for Cloud node-based pricing model](https://azure.microsoft.com/pricing/details/security-center/).
- In other pricing tiers (including commitment tiers), if Microsoft Defender for Cloud was enabled before June 19, 2017, Microsoft Defender for Cloud is billed only for Log Analytics data ingestion. Otherwise, Microsoft Defender for Cloud is billed using the current Microsoft Defender for Cloud node-based pricing model. -More details of pricing tier limitations are available at [Azure subscription and service limits, quotas, and constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md#log-analytics-workspaces). +More information on pricing tier limitations is available at [Azure subscription and service limits, quotas, and constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md#log-analytics-workspaces). -None of the legacy pricing tiers have regional-based pricing. +None of the legacy pricing tiers have regional-based pricing. > [!NOTE]-> To use the entitlements that come from purchasing OMS E1 Suite, OMS E2 Suite, or OMS Add-On for System Center, choose the Log Analytics *Per Node* pricing tier. +> To use the entitlements that come from purchasing OMS E1 Suite, OMS E2 Suite, or OMS Add-On for System Center, choose the Log Analytics Per Node pricing tier. ## Evaluate the legacy Per Node pricing tier-It's often difficult to determine whether workspaces with access to the legacy **Per Node** pricing tier are better off in that tier or in a current **Pay-As-You-Go** or **Commitment Tier**. This involves understanding the trade-off between the fixed cost per monitored node in the Per Node pricing tier and its included data allocation of 500 MB/node/day and the cost of just paying for ingested data in the Pay-As-You-Go (Per GB) tier. -The following query can be used to make a recommendation for the optimal pricing tier based on a workspace's usage patterns. This query looks at the monitored nodes and data ingested into a workspace in the last seven days, and for each day, it evaluates which pricing tier would have been optimal. To use the query, you need to specify: +It's often difficult to determine whether workspaces with access to the legacy Per Node pricing tier are better off in that tier or in a current pay-as-you-go or commitment tier. You need to weigh the fixed cost per monitored node in the Per Node pricing tier, with its included data allocation of 500 MB per node per day, against the cost of simply paying for ingested data in the pay-as-you-go (per GB) tier. ++Use the following query to make a recommendation for the optimal pricing tier based on a workspace's usage patterns. This query looks at the monitored nodes and data ingested into a workspace in the last seven days. For each day, it evaluates which pricing tier would have been optimal. To use the query, you must specify: -- Whether the workspace is using Microsoft Defender for Cloud by setting **workspaceHasSecurityCenter** to **true** or **false**. +- Whether the workspace is using Microsoft Defender for Cloud by setting `workspaceHasSecurityCenter` to `true` or `false`. - Update the prices if you have specific discounts.-- Specify the number of days to look back and analyze by setting **daysToEvaluate**. This is useful if the query is taking too long trying to look at seven days of data.+- Specify the number of days to look back and analyze by setting `daysToEvaluate`. This option is useful if the query is taking too long trying to look at seven days of data.
```kusto // Set these parameters before running query-// For Pay-As-You-Go (per-GB) and commitment tier pricing details, see https://azure.microsoft.com/pricing/details/monitor/. +// For pay-as-you-go (per-GB) and commitment tier pricing details, see https://azure.microsoft.com/pricing/details/monitor/. // You can see your per-node costs in your Azure usage and charge data. For more information, see https://learn.microsoft.com/azure/cost-management-billing/understand/download-azure-daily-usage. let PerNodePrice = 15.; // Monthly price per monitored node let PerNodeOveragePrice = 2.30; // Price per GB for data overage in the Per Node pricing tier-let PerGBPrice = 2.30; // Enter the Pay-as-you-go price for your workspace's region (from https://azure.microsoft.com/pricing/details/monitor/) +let PerGBPrice = 2.30; // Enter the pay-as-you-go price for your workspace's region (from https://azure.microsoft.com/pricing/details/monitor/) let CommitmentTier100Price = 196.; // Enter your price for the 100 GB/day commitment tier let CommitmentTier200Price = 368.; // Enter your price for the 200 GB/day commitment tier let CommitmentTier300Price = 540.; // Enter your price for the 300 GB/day commitment tier union * | sort by day asc ``` -This query isn't an exact replication of how usage is calculated, but it provides pricing tier recommendations in most cases. +This query isn't an exact replication of how usage is calculated, but it provides pricing tier recommendations in most cases. > [!NOTE]-> To use the entitlements that come from purchasing OMS E1 Suite, OMS E2 Suite, or OMS Add-On for System Center, choose the Log Analytics *Per Node* pricing tier. -+> To use the entitlements that come from purchasing OMS E1 Suite, OMS E2 Suite, or OMS Add-On for System Center, choose the Log Analytics Per Node pricing tier. ## Next steps+ - See [Azure Monitor cost and usage](../usage-estimated-costs.md) for a description of the different types of Azure Monitor charges and how to analyze them on your Azure bill.-- See [Analyze usage in Log Analytics workspace](analyze-usage.md) for details on analyzing the data in your workspace to determine to source of any higher than expected usage and opportunities to reduce your amount of data collected.-- See [Set daily cap on Log Analytics workspace](daily-cap.md) to control your costs by configuring a maximum volume that may be ingested in a workspace each day.+- See [Analyze usage in Log Analytics workspace](analyze-usage.md) for details on analyzing the data in your workspace to determine the source of any higher-than-expected usage and opportunities to reduce your amount of data collected. +- See [Set daily cap on Log Analytics workspace](daily-cap.md) to control your costs by configuring a maximum volume that might be ingested in a workspace each day. - See [Azure Monitor best practices - Cost management](../best-practices-cost.md) for best practices on configuring and managing Azure Monitor to minimize your charges. |
azure-monitor | Data Retention Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-retention-archive.md | Title: Configure data retention and archive in Azure Monitor Logs (Preview) + Title: Configure data retention and archive in Azure Monitor Logs (preview) description: Configure archive settings for a table in a Log Analytics workspace in Azure Monitor. Last updated 10/01/2022 # Configure data retention and archive policies in Azure Monitor Logs-Retention policies define when to remove or archive data in a [Log Analytics workspace](log-analytics-workspace-overview.md). Archiving lets you keep older, less used data in your workspace at a reduced cost. ++Retention policies define when to remove or archive data in a [Log Analytics workspace](log-analytics-workspace-overview.md). Archiving lets you keep older, less used data in your workspace at a reduced cost. This article describes how to configure data retention and archiving. ## How retention and archiving work+ Each workspace has a default retention policy that's applied to all tables. You can set a different retention policy on individual tables. -During the interactive retention period, data is available for monitoring, troubleshooting and analytics. When you no longer use the logs, but still need to keep the data for compliance or occasional investigation, archive the logs to save costs. +During the interactive retention period, data is available for monitoring, troubleshooting, and analytics. When you no longer use the logs, but still need to keep the data for compliance or occasional investigation, archive the logs to save costs. -Archived data stays in the same table, alongside the data that's available for interactive queries. -When you set a total retention period that's longer than the interactive retention period, Log Analytics automatically archives the relevant data immediately at the end of the retention period. +Archived data stays in the same table, alongside the data that's available for interactive queries. When you set a total retention period that's longer than the interactive retention period, Log Analytics automatically archives the relevant data immediately at the end of the retention period. -If you change the archive settings on a table with existing data, the relevant data in the table is also affected immediately. For example, if you have an existing table with 30 days of interactive retention and no archive period and you change the retention policy to eight days of interactive retention and one year total retention, Log Analytics immediately archives any data that's older than eight days. +If you change the archive settings on a table with existing data, the relevant data in the table is also affected immediately. For example, you might have an existing table with 30 days of interactive retention and no archive period. You decide to change the retention policy to eight days of interactive retention and one year total retention. Log Analytics immediately archives any data that's older than eight days. ++You can access archived data by [running a search job](search-jobs.md) or [restoring archived logs](restore.md). -You can access archived data by [running a search job](search-jobs.md) or [restoring archived logs](restore.md). - > [!NOTE] > The archive period can only be set at the table level, not at the workspace level. 
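The archive period isn't set directly: you configure the interactive retention and the total retention, and the archive period is the difference. A minimal sketch of the arithmetic, using the eight-day/one-year example above:

```python
# The archive period is derived from the two values you configure:
# interactive retention (retentionInDays) and total retention
# (totalRetentionInDays).

def archive_days(retention_in_days: int, total_retention_in_days: int) -> int:
    """Days a record spends archived after interactive retention ends."""
    if total_retention_in_days < retention_in_days:
        raise ValueError("Total retention can't be shorter than interactive retention.")
    return total_retention_in_days - retention_in_days

print(archive_days(8, 365))  # 357 days in the archive
```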
## Configure the default workspace retention policy-You can set the workspace default retention policy in the Azure portal to 30, 31, 60, 90, 120, 180, 270, 365, 550, and 730 days. You can set a different policy for specific tables by [configuring retention and archive policy at the table level](#set-retention-and-archive-policy-by-table). If you're on the *free* tier, you'll need to upgrade to the paid tier to change the data retention period. ++You can set the workspace default retention policy in the Azure portal to 30, 31, 60, 90, 120, 180, 270, 365, 550, and 730 days. You can set a different policy for specific tables by [configuring the retention and archive policy at the table level](#set-retention-and-archive-policy-by-table). If you're on the *free* tier, you'll need to upgrade to the paid tier to change the data retention period. To set the default workspace retention policy: -1. From the **Logs Analytics workspaces** menu in the Azure portal, select your workspace. -1. Select **Usage and estimated costs** in the left pane. -1. Select **Data Retention** at the top of the page. +1. From the **Log Analytics workspaces** menu in the Azure portal, select your workspace. +1. Select **Usage and estimated costs** in the left pane. +1. Select **Data Retention** at the top of the page. - :::image type="content" source="media/manage-cost-storage/manage-cost-change-retention-01.png" alt-text="Change workspace data retention setting"::: - -1. Move the slider to increase or decrease the number of days, and then select **OK**. + :::image type="content" source="media/manage-cost-storage/manage-cost-change-retention-01.png" alt-text="Screenshot that shows changing the workspace data retention setting."::: ++1. Move the slider to increase or decrease the number of days, and then select **OK**. ## Set retention and archive policy by table By default, all tables in your workspace inherit the workspace's interactive retention setting and have no archive policy. You can modify the retention and archive policies of individual tables, except for workspaces in the legacy Free Trial pricing tier. -You can keep data in interactive retention between 4 and 730 days. You can set the archive period for a total retention time of up to 2,556 days (seven years). +You can keep data in interactive retention between 4 and 730 days. You can set the archive period for a total retention time of up to 2,556 days (seven years). # [Portal](#tab/portal-1) To set the retention and archive duration for a table in the Azure portal: -1. From the **Log Analytics workspaces** menu, select **Tables **. +1. From the **Log Analytics workspaces** menu, select **Tables**. - The **Tables** screen lists all of the tables in the workspace. + The **Tables** screen lists all the tables in the workspace. 1. Select the context menu for the table you want to configure and select **Manage table**. - :::image type="content" source="media/basic-logs-configure/log-analytics-table-configuration.png" lightbox="media/basic-logs-configure/log-analytics-table-configuration.png" alt-text="Screenshot showing the Manage table button for one of the tables in a workspace."::: + :::image type="content" source="media/basic-logs-configure/log-analytics-table-configuration.png" lightbox="media/basic-logs-configure/log-analytics-table-configuration.png" alt-text="Screenshot that shows the Manage table button for one of the tables in a workspace."::: -1. 
Configure the retention and archive duration in **Data retention settings** section of the table configuration screen. +1. Configure the retention and archive duration in the **Data retention settings** section of the table configuration screen. - :::image type="content" source="media/data-retention-configure/log-analytics-configure-table-retention-archive.png" lightbox="media/data-retention-configure/log-analytics-configure-table-retention-archive.png" alt-text="Screenshot showing the data retention settings on the table configuration screen."::: + :::image type="content" source="media/data-retention-configure/log-analytics-configure-table-retention-archive.png" lightbox="media/data-retention-configure/log-analytics-configure-table-retention-archive.png" alt-text="Screenshot that shows the data retention settings on the table configuration screen."::: # [API](#tab/api-1) -To set the retention and archive duration for a table, call the **Tables - Update** API: +To set the retention and archive duration for a table, call the **Tables - Update** API: ```http PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/{tableName}?api-version=2021-12-01-preview PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups > [!NOTE] > You don't explicitly specify the archive duration in the API call. Instead, you set the total retention, which is the sum of the interactive retention plus the archive duration. +You can use either PUT or PATCH, with the following difference: -You can use either PUT or PATCH, with the following difference: --- The **PUT** API sets *retentionInDays* and *totalRetentionInDays* to the default value if you don't set non-null values.-- The **PATCH** API doesn't change the *retentionInDays* or *totalRetentionInDays* values if you don't specify values. +- The **PUT** API sets `retentionInDays` and `totalRetentionInDays` to the default value if you don't set non-null values. +- The **PATCH** API doesn't change the `retentionInDays` or `totalRetentionInDays` values if you don't specify values. **Request body** The request body includes the values in the following table. |Name | Type | Description | | | | |-|properties.retentionInDays | integer | The table's data retention in days. This value can be between 4 and 730. <br/>Setting this property to null will default to the workspace retention. For a Basic Logs table, the value is always 8. | -|properties.totalRetentionInDays | integer | The table's total data retention including archive period. This value can be between 4 and 730; or 1095, 1460, 1826, 2191, or 2556. Set this property to null if you don't want to archive data. | +|properties.retentionInDays | integer | The table's data retention in days. This value can be between 4 and 730. <br/>Setting this property to null will default to the workspace retention. For a Basic Logs table, the value is always 8. | +|properties.totalRetentionInDays | integer | The table's total data retention including archive period. This value can be between 4 and 730; or 1095, 1460, 1826, 2191, or 2556. Set this property to null if you don't want to archive data. 
| **Example** PATCH https://management.azure.com/subscriptions/00000000-0000-0000-0000-0000000 ``` **Request body**+ ```http { "properties": { Status code: 200 To set the retention and archive duration for a table, run the [az monitor log-analytics workspace table update](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-update) command and pass the `--retention-time` and `--total-retention-time` parameters. -This example sets table's interactive retention to 30 days, and the total retention to two years, which means that the archive duration is 23 months: +This example sets the table's interactive retention to 30 days, and the total retention to two years, which means that the archive duration is 23 months: ```azurecli az monitor log-analytics workspace table update --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name AzureMetrics --retention-time 30 --total-retention-time 730 az monitor log-analytics workspace table update --subscription ContosoSID --reso ``` - + ## Get retention and archive policy by table # [Portal](#tab/portal-2) To view the retention and archive duration for a table in the Azure portal, from the **Log Analytics workspaces** menu, select **Tables**. -The **Tables** screen shows the interactive retention and archive period for all of the tables in the workspace. +The **Tables** screen shows the interactive retention and archive period for all the tables in the workspace. # [API](#tab/api-2) To get the retention policy of a particular table (in this example, `SecurityEve GET /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables/SecurityEvent?api-version=2021-12-01-preview ``` -To get all table-level retention policies in your workspace, don't set a table name; for example: +To get all table-level retention policies in your workspace, don't set a table name. ++For example: ```JSON GET /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables?api-version=2021-12-01-preview az monitor log-analytics workspace table show --subscription ContosoSID --resour ## Purge retained data-When you shorten an existing retention policy, it takes several days for Azure Monitor to remove data that you no longer want to keep. -If you set the data retention policy to 30 days, you can purge older data immediately using the `immediatePurgeDataOn30Days` parameter in Azure Resource Manager. The purge functionality is useful when you need to remove personal data immediately. The immediate purge functionality isn't available through the Azure portal. - -Note that workspaces with a 30-day retention policy might actually keep data for 31 days if you don't set the `immediatePurgeDataOn30Days` parameter. +When you shorten an existing retention policy, it takes several days for Azure Monitor to remove data that you no longer want to keep. ++If you set the data retention policy to 30 days, you can purge older data immediately by using the `immediatePurgeDataOn30Days` parameter in Azure Resource Manager. The purge functionality is useful when you need to remove personal data immediately. The immediate purge functionality isn't available through the Azure portal. ++Workspaces with a 30-day retention policy might keep data for 31 days if you don't set the `immediatePurgeDataOn30Days` parameter. 
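As a rough sketch of setting that flag, the following call patches the workspace resource through Azure Resource Manager. The resource names, the api-version, and the token acquisition are placeholders; `features.immediatePurgeDataOn30Days` is the workspace setting this article refers to, but verify the supported api-version in the workspaces ARM reference.

```python
import requests

# Hypothetical sketch: enable immediate purge on a workspace via ARM.
# Names and api-version are placeholders.
subscription = "00000000-0000-0000-0000-000000000000"
resource_group = "MyResourceGroupName"
workspace = "MyWorkspaceName"
url = (
    "https://management.azure.com"
    f"/subscriptions/{subscription}/resourceGroups/{resource_group}"
    f"/providers/Microsoft.OperationalInsights/workspaces/{workspace}"
)
body = {"properties": {"features": {"immediatePurgeDataOn30Days": True}}}

token = "<ARM bearer token>"  # obtain through your usual Azure AD flow
resp = requests.patch(
    url,
    params={"api-version": "2021-06-01"},  # assumed version
    headers={"Authorization": f"Bearer {token}"},
    json=body,
)
resp.raise_for_status()
```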
-You can also purge data from a workspace using the [purge feature](personal-data-mgmt.md#exporting-and-deleting-personal-data), which removes personal data. You can't purge data from archived logs. +You can also purge data from a workspace by using the [purge feature](personal-data-mgmt.md#exporting-and-deleting-personal-data), which removes personal data. You can't purge data from archived logs. -The Log Analytics [Purge API](/rest/api/loganalytics/workspacepurge/purge) doesn't affect retention billing. **To lower retention costs, decrease the retention period for the workspace or for specific tables.** +The Log Analytics [Purge API](/rest/api/loganalytics/workspacepurge/purge) doesn't affect retention billing. To lower retention costs, *decrease the retention period for the workspace or for specific tables*. ## Tables with unique retention policies-By default, two data types - `Usage` and `AzureActivity` - keep data for at least 90 days at no charge. When you increase the workspace retention to more than 90 days, you also increase the retention of these data types, and you'll be charged for retaining this data beyond the 90-day period. These tables are also free from data ingestion charges. -Tables related to Application Insights resources also keep data for 90 days at no charge. You can adjust the retention policy of each of these tables individually. +By default, two data types, `Usage` and `AzureActivity`, keep data for at least 90 days at no charge. When you increase the workspace retention to more than 90 days, you also increase the retention of these data types. You'll be charged for retaining this data beyond the 90-day period. These tables are also free from data ingestion charges. ++Tables related to Application Insights resources also keep data for 90 days at no charge. You can adjust the retention policy of each of these tables individually: - `AppAvailabilityResults` - `AppBrowserTimings` Tables related to Application Insights resources also keep data for 90 days at n - `AppEvents` - `AppMetrics` - `AppPageViews`-- `AppPerformanceCounters`, +- `AppPerformanceCounters` - `AppRequests` - `AppSystemEvents` - `AppTraces` The charge for maintaining archived logs is calculated based on the volume of da For more information, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/). -## Set data retention for Classic Application Insights resources -Workspace-based Application Insights resources store data in a Log Analytics workspace, so it's included in the data retention and archive settings for the workspace. However, classic Application Insights resources have separate retention settings. +## Set data retention for classic Application Insights resources ++Workspace-based Application Insights resources store data in a Log Analytics workspace, so it's included in the data retention and archive settings for the workspace. Classic Application Insights resources have separate retention settings. -The default retention for Application Insights resources is 90 days. You can select different retention periods for each Application Insights resource. The full set of available retention periods is 30, 60, 90, 120, 180, 270, 365, 550 or 730 days. +The default retention for Application Insights resources is 90 days. You can select different retention periods for each Application Insights resource. The full set of available retention periods is 30, 60, 90, 120, 180, 270, 365, 550, or 730 days.
-To change the retention, from your Application Insights resource, go to the **Usage and Estimated Costs** page and select the **Data Retention** option: +To change the retention, from your Application Insights resource, go to the **Usage and estimated costs** page and select the **Data retention** option.  A several-day grace period begins when the retention is lowered before the oldest data is removed. -The retention can also be [set programatically using PowerShell](../app/powershell.md#set-the-data-retention) using the `retentionInDays` parameter. If you set the data retention to 30 days, you can trigger an immediate purge of older data using the `immediatePurgeDataOn30Days` parameter, which may be useful for compliance-related scenarios. This purge functionality is only exposed via Azure Resource Manager and should be used with extreme care. The daily reset time for the data volume cap can be configured using Azure Resource Manager to set the `dailyQuotaResetTime` parameter. +The retention can also be [set programmatically with PowerShell](../app/powershell.md#set-the-data-retention) by using the `retentionInDays` parameter. If you set the data retention to 30 days, you can trigger an immediate purge of older data by using the `immediatePurgeDataOn30Days` parameter. This approach might be useful for compliance-related scenarios. This purge functionality is only exposed via Azure Resource Manager and should be used with extreme care. The daily reset time for the data volume cap can be configured by using Azure Resource Manager to set the `dailyQuotaResetTime` parameter. ## Next steps-- [Learn more about Log Analytics workspaces and data retention and archive.](log-analytics-workspace-overview.md)-- [Create a search job to retrieve archive data matching particular criteria.](search-jobs.md)-- [Restore archive data within a particular time range.](restore.md)++- [Learn more about Log Analytics workspaces and data retention and archive](log-analytics-workspace-overview.md) +- [Create a search job to retrieve archive data matching particular criteria](search-jobs.md) +- [Restore archive data within a particular time range](restore.md) |
azure-monitor | Logs Ingestion Api Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-ingestion-api-overview.md | Title: Logs ingestion API in Azure Monitor (Preview) -description: Send data to Log Analytics workspace using REST API. + Title: Logs Ingestion API in Azure Monitor (preview) +description: Send data to a Log Analytics workspace by using a REST API. Last updated 06/27/2022 -# Logs ingestion API in Azure Monitor (Preview) -The Logs ingestion API in Azure Monitor lets you send data to a Log Analytics workspace from any REST API client. This allows you to send data from virtually any source to [supported built-in tables](#supported-tables) or to custom tables that you create. You can even extend the schema of built-in tables with custom columns. +# Logs Ingestion API in Azure Monitor (preview) -> [!NOTE] -> The Logs ingestion API was previously referred to as the custom logs API. +The Logs Ingestion API in Azure Monitor lets you send data to a Log Analytics workspace from any REST API client. By using this API, you can send data from almost any source to [supported built-in tables](#supported-tables) or to custom tables that you create. You can even extend the schema of built-in tables with custom columns. +> [!NOTE] +> The Logs Ingestion API was previously referred to as the custom logs API. ## Basic operation-Your application sends data to a [data collection endpoint](../essentials/data-collection-endpoint-overview.md) which is a unique connection point for your subscription. The payload of your API call includes the source data formatted in JSON. The call specifies a [data collection rule](../essentials/data-collection-rule-overview.md) that understands the format of the source data, potentially filters and transforms it for the target table, and then directs it to a specific table in a specific workspace. You can modify the target table and workspace by modifying the data collection rule without any change to the REST API call or source data. +Your application sends data to a [data collection endpoint (DCE)](../essentials/data-collection-endpoint-overview.md), which is a unique connection point for your subscription. The payload of your API call includes the source data formatted in JSON. The call: ++- Specifies a [data collection rule (DCR)](../essentials/data-collection-rule-overview.md) that understands the format of the source data. +- Potentially filters and transforms it for the target table. +- Directs it to a specific table in a specific workspace. ++You can modify the target table and workspace by modifying the DCR without any change to the REST API call or source data. > [!NOTE]-> See [Migrate from Data Collector API and custom fields-enabled tables to DCR-based custom logs](custom-logs-migrate.md) to migrate solutions from the [Data Collector API](data-collector-api.md). +> To migrate solutions from the [Data Collector API](data-collector-api.md), see [Migrate from Data Collector API and custom fields-enabled tables to DCR-based custom logs](custom-logs-migrate.md). ## Supported tables +The following tables are supported. + ### Custom tables-Logs ingestion API can send data to any custom table that you create and to certain built-in tables in your Log Analytics workspace. The target table must exist before you can send data to it. ++The Logs Ingestion API can send data to any custom table that you create and to certain built-in tables in your Log Analytics workspace. 
The target table must exist before you can send data to it. ### Built-in tables-Logs ingestion API can send data to the following built-in tables. Other tables may be added to this list as support for them is implemented. ++The Logs Ingestion API can send data to the following built-in tables. Other tables might be added to this list as support for them is implemented: - [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) - [SecurityEvents](/azure/azure-monitor/reference/tables/securityevent) Logs ingestion API can send data to the following built-in tables. Other tables ### Table limits -* Custom tables must have the `_CL` suffix. -* Column names can consist of alphanumeric characters as well as the characters `_` and `-`. They must start with a letter. -* Columns extended on top of built-in tables must have the suffix `_CF`. Columns in a custom table do not need this suffix. +Tables have the following limitations: +* Custom tables must have the `_CL` suffix. +* Column names can consist of alphanumeric characters and the characters `_` and `-`. They must start with a letter. +* Columns extended on top of built-in tables must have the suffix `_CF`. Columns in a custom table don't need this suffix. ## Authentication-Authentication for the logs ingestion API is performed at the data collection endpoint which uses standard Azure Resource Manager authentication. A common strategy is to use an Application ID and Application Key as described in [Tutorial: Add ingestion-time transformation to Azure Monitor Logs (preview)](tutorial-logs-ingestion-portal.md). ++Authentication for the Logs Ingestion API is performed at the DCE, which uses standard Azure Resource Manager authentication. A common strategy is to use an application ID and application key as described in [Tutorial: Add ingestion-time transformation to Azure Monitor Logs (preview)](tutorial-logs-ingestion-portal.md). ## Source data-The source data sent by your application is formatted in JSON and must match the structure expected by the data collection rule. It doesn't necessarily need to match the structure of the target table since the DCR can include a [transformation](../essentials//data-collection-transformations.md) to convert the data to match the table's structure. ++The source data sent by your application is formatted in JSON and must match the structure expected by the DCR. It doesn't necessarily need to match the structure of the target table because the DCR can include a [transformation](../essentials//data-collection-transformations.md) to convert the data to match the table's structure. ## Data collection rule+ [Data collection rules](../essentials/data-collection-rule-overview.md) define data collected by Azure Monitor and specify how and where that data should be sent or stored. The REST API call must specify a DCR to use. A single DCE can support multiple DCRs, so you can specify a different DCR for different sources and target tables. -The DCR must understand the structure of the input data and the structure of the target table. If the two don't match, it can use a [transformation](../essentials/data-collection-transformations.md) to convert the source data to match the target table. You may also use the transformation to filter source data and perform any other calculations or conversions. +The DCR must understand the structure of the input data and the structure of the target table. 
If the two don't match, it can use a [transformation](../essentials/data-collection-transformations.md) to convert the source data to match the target table. You can also use the transformation to filter source data and perform any other calculations or conversions. -## Sending data -To send data to Azure Monitor with the logs ingestion API, make a POST call to the data collection endpoint over HTTP. Details of the call are as follows: +## Send data ++To send data to Azure Monitor with the Logs Ingestion API, make a POST call to the DCE over HTTP. Details of the call are described in the following sections. ### Endpoint URI+ The endpoint URI uses the following format, where the `Data Collection Endpoint` and `DCR Immutable ID` identify the DCE and DCR. `Stream Name` refers to the [stream](../essentials/data-collection-rule-structure.md#custom-logs) in the DCR that should handle the custom data. ``` The endpoint URI uses the following format, where the `Data Collection Endpoint` ``` > [!NOTE]-> You can retrieve the immutable ID from the JSON view of the DCR. See [Collect information from DCR](tutorial-logs-ingestion-portal.md#collect-information-from-dcr). +> You can retrieve the immutable ID from the JSON view of the DCR. For more information, see [Collect information from DCR](tutorial-logs-ingestion-portal.md#collect-information-from-dcr). ### Headers | Header | Required? | Value | Description | |:|:|:|:|-| Authorization | Yes | Bearer {Bearer token obtained through the Client Credentials Flow} | | +| Authorization | Yes | Bearer (bearer token obtained through the client credentials flow) | | | Content-Type | Yes | `application/json` | |-| Content-Encoding | No | `gzip` | Use the GZip compression scheme for performance optimization. | +| Content-Encoding | No | `gzip` | Use the gzip compression scheme for performance optimization. | | x-ms-client-request-id | No | String-formatted GUID | Request ID that can be used by Microsoft for any troubleshooting purposes. | ### Body+ The body of the call includes the custom data to be sent to Azure Monitor. The shape of the data must be a JSON object or array with a structure that matches the format expected by the stream in the DCR. ## Sample call-For sample data and API call using the logs ingestion API, see either [Send custom logs to Azure Monitor Logs using the Azure portal (preview)](tutorial-logs-ingestion-portal.md) or [Send custom logs to Azure Monitor Logs using Resource Manager templates](tutorial-logs-ingestion-api.md) ++For sample data and an API call using the Logs Ingestion API, see either [Send custom logs to Azure Monitor Logs using the Azure portal (preview)](tutorial-logs-ingestion-portal.md) or [Send custom logs to Azure Monitor Logs using Resource Manager templates](tutorial-logs-ingestion-api.md). ## Limits and restrictions-For limits related to Logs ingestion API, see [Azure Monitor service limits](../service-limits.md#logs-ingestion-api). - +For limits related to the Logs Ingestion API, see [Azure Monitor service limits](../service-limits.md#logs-ingestion-api). 
## Next steps -- [Walk through a tutorial sending custom logs using the Azure portal.](tutorial-logs-ingestion-portal.md)-- [Walk through a tutorial sending custom logs using Resource Manager templates and REST API.](tutorial-logs-ingestion-api.md)+- [Walk through a tutorial sending custom logs using the Azure portal](tutorial-logs-ingestion-portal.md) +- [Walk through a tutorial sending custom logs using Resource Manager templates and REST API](tutorial-logs-ingestion-api.md) |
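As a rough illustration of the Logs Ingestion API row above (a POST to the DCE with bearer authorization and a JSON body shaped for the DCR stream), here is a minimal Python sketch. The DCE URI, DCR immutable ID, stream name, token, and the `2021-11-01-preview` API version are placeholder assumptions to verify against the linked article before use.

```python
# Minimal sketch of a Logs Ingestion API call; placeholders throughout.
import json

import requests  # assumes the 'requests' package is installed

DCE_URI = "https://my-dce.eastus-1.ingest.monitor.azure.com"  # placeholder
DCR_IMMUTABLE_ID = "dcr-00000000000000000000000000000000"     # placeholder
STREAM_NAME = "Custom-MyTable_CL"                             # placeholder stream name
TOKEN = "<bearer token from the client credentials flow>"     # placeholder

url = (
    f"{DCE_URI}/dataCollectionRules/{DCR_IMMUTABLE_ID}"
    f"/streams/{STREAM_NAME}?api-version=2021-11-01-preview"  # assumed API version
)

# The body must be a JSON object or array that matches the shape the
# DCR stream expects; the DCR's transformation maps it to the table.
body = [{"TimeGenerated": "2022-10-18T12:00:00Z", "RawData": "sample event"}]

headers = {
    "Authorization": f"Bearer {TOKEN}",
    "Content-Type": "application/json",
    # Optional: "Content-Encoding": "gzip" when sending a gzip-compressed body.
}

response = requests.post(url, headers=headers, data=json.dumps(body))
response.raise_for_status()  # a successful call returns an empty 204 response
```

Because the DCR can filter and transform the payload, the body only has to match the stream's declared columns, not the target table's final schema.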
azure-monitor | Manage Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/manage-access.md | Grant a user access to log data from their resources and read all Azure AD sign- ## Set table-level read access -To create a role that lets users or groups read data from specific tables in a workspace: +To create a [custom role](../../role-based-access-control/custom-roles.md) that lets specific users or groups read data from specific tables in a workspace: 1. Create a custom role that grants read access to table data, based on the built-in Azure Monitor Logs **Reader** role: |
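The table-level access row above introduces a custom role built from table read actions. The sketch below is a loose illustration only: the role name, the `Heartbeat` example table, and the exact action strings follow the per-table pattern the linked article describes, but all of them are unverified assumptions here.

```python
# Hypothetical custom role definition for table-level read access.
# Verify the action strings against the linked article before use.
import json

SCOPE = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
)  # placeholder scope

role_definition = {
    "Name": "Example Table Reader",  # hypothetical role name
    "IsCustom": True,
    "Description": "Read only the Heartbeat table in one workspace.",
    "Actions": [
        "Microsoft.OperationalInsights/workspaces/read",
        "Microsoft.OperationalInsights/workspaces/query/read",
        # Assumed per-table pattern: .../workspaces/query/<TableName>/read
        "Microsoft.OperationalInsights/workspaces/query/Heartbeat/read",
    ],
    "NotActions": [],
    "AssignableScopes": [SCOPE],
}

# Write the definition to a file so it can be passed to
# `az role definition create --role-definition @table-reader-role.json`.
with open("table-reader-role.json", "w") as f:
    json.dump(role_definition, f, indent=2)
```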
azure-monitor | Monitor Workspace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/monitor-workspace.md | Title: Monitor health of Log Analytics workspace in Azure Monitor -description: Describes how to monitor the health of your Log Analytics workspace using data in the Operation table. +description: The article describes how to monitor the health of your Log Analytics workspace by using data in the Operation table. Last updated 03/21/2022 -# Monitor health of Log Analytics workspace in Azure Monitor -To maintain the performance and availability of your Log Analytics workspace in Azure Monitor, you need to be able to proactively detect any issues that arise. This article describes how to monitor the health of your Log Analytics workspace using data in the [Operation](/azure/azure-monitor/reference/tables/operation) table. This table is included in every Log Analytics workspace and contains error and warnings that occur in your workspace. It is recommended to create alerts for issues in level "Warning" and "Error". +# Monitor health of a Log Analytics workspace in Azure Monitor ++To maintain the performance and availability of your Log Analytics workspace in Azure Monitor, you need to be able to proactively detect any issues that arise. This article describes how to monitor the health of your Log Analytics workspace by using data in the [Operation](/azure/azure-monitor/reference/tables/operation) table. This table is included in every Log Analytics workspace. It contains error messages and warnings that occur in your workspace. We recommend that you create alerts for issues with the level of Warning and Error. ## _LogOperation function -Azure Monitor Logs sends details on any issues to the [Operation](/azure/azure-monitor/reference/tables/operation) table in the workspace where the issue occurred. The **_LogOperation** system function is based on the **Operation** table and provides a simplified set of information for analysis and alerting. +Azure Monitor Logs sends information on any issues to the [Operation](/azure/azure-monitor/reference/tables/operation) table in the workspace where the issue occurred. The `_LogOperation` system function is based on the **Operation** table and provides a simplified set of information for analysis and alerting. ## Columns -The **_LogOperation** function returns the columns in the following table. +The `_LogOperation` function returns the columns in the following table. | Column | Description | |:|:| | TimeGenerated | Time that the incident occurred in UTC. |-| Category | Operation category group. Can be used to filter on types of operations and help create more precise system auditing and alerts. See the section below for a list of categories. | -| Operation | Description of the operation type. The operation can indicate that one of the Log Analytics limits was reached, a backend process related issue, or any other service message. | -| Level | Severity level of the issue:<br>- Info: No specific attention needed.<br>- Warning: Process was not completed as expected, and attention is needed.<br>- Error: Process failed, attention needed. +| Category | Operation category group. Can be used to filter on types of operations and help create more precise system auditing and alerts. See the following section for a list of categories. | +| Operation | Description of the operation type. The operation can indicate that one of the Log Analytics limits was reached, a back-end process related issue, or any other service message. 
| +| Level | Severity level of the issue:<br>- Info: No specific attention needed.<br>- Warning: Process wasn't completed as expected, and attention is needed.<br>- Error: Process failed, and attention is needed. | Detail | Detailed description of the operation, includes the specific error message. | | _ResourceId | Resource ID of the Azure resource related to the operation. | | Computer | Computer name if the operation is related to an Azure Monitor agent. | | CorrelationId | Used to group consecutive related operations. | - ## Categories -The following table describes the categories from the _LogOperation function. +The following table describes the categories from the `_LogOperation` function. | Category | Description | |:|:| | Ingestion | Operations that are part of the data ingestion process. | | Agent | Indicates an issue with agent installation. |-| Data collection | Operations related to data collections processes. | -| Solution targeting | Operation of type *ConfigurationScope* was processed. | +| Data collection | Operations related to data collection processes. | +| Solution targeting | Operation of type `ConfigurationScope` was processed. | | Assessment solution | An assessment process was executed. | - ### Ingestion-Ingestion operations are issues that occurred during data ingestion including notification about reaching the Azure Log Analytics workspace limits. Error conditions in this category might suggest data loss, so they are important to monitor. The table below provides details on these operations. See [Azure Monitor service limits](../service-limits.md#log-analytics-workspaces) for service limits for Log Analytics workspaces. - -#### Operation: Data collection stopped +Ingestion operations are issues that occurred during data ingestion and include notification about reaching the Log Analytics workspace limits. Error conditions in this category might suggest data loss, so they're important to monitor. For service limits for Log Analytics workspaces, see [Azure Monitor service limits](../service-limits.md#log-analytics-workspaces). ++#### Operation: Data collection stopped + "Data collection stopped due to daily limit of free data reached. Ingestion status = OverQuota" -In the past 7 days, logs collection reached the daily set limit. The limit is set either as the workspace is set to "free tier", or daily collection limit was configured for this workspace. -Note, after reaching the set limit, your data collection will automatically stop for the day and will resume only during the next collection day. - -Recommended Actions: -* Check _LogOperation table for collection stopped and collection resumed events.</br> +In the past seven days, logs collection reached the daily set limit. The limit is set either as the workspace is set to **Free tier** or the daily collection limit was configured for this workspace. +After your data collection reaches the set limit, it automatically stops for the day and will resume only during the next collection day. ++Recommended actions: ++* Check the `_LogOperation` table for collection stopped and collection resumed events:</br> `_LogOperation | where TimeGenerated >= ago(7d) | where Category == "Ingestion" | where Operation has "Data collection"`-* [Create an alert](daily-cap.md#alert-when-daily-cap-is-reached) on "Data collection stopped" Operation event, this alert will allow you to get notified when the collection limit was reached. 
-* Data collected after the daily collection limit is reached will be lost, use 'workspace insights' blade to review usage rates from each source. -Or, you can decide to ([Manage your maximum daily data volume](daily-cap.md) \ [change the pricing tier](cost-logs.md#commitment-tiers) to one that will suite your collection rates pattern). -* Data collection rate is calculated per day, and will reset at the start of the next day, you can also monitor collection resume event by [Create an alert](./daily-cap.md#alert-when-daily-cap-is-reached) on "Data collection resumed" Operation event. +* [Create an alert](daily-cap.md#alert-when-daily-cap-is-reached) on the "Data collection stopped" Operation event. This alert notifies you when the collection limit is reached. +* Data collected after the daily collection limit is reached will be lost. Use the **Workspace insights** pane to review usage rates from each source. Or you can decide to [manage your maximum daily data volume](daily-cap.md) or [change the pricing tier](cost-logs.md#commitment-tiers) to one that suits your collection rates pattern. +* The data collection rate is calculated per day and resets at the start of the next day. You can also monitor a collection resume event by [creating an alert](./daily-cap.md#alert-when-daily-cap-is-reached) on the "Data collection resumed" Operation event. #### Operation: Ingestion rate-"The data ingestion volume rate crossed the threshold in your workspace: {0:0.00} MB per one minute and data has been dropped." -Recommended Actions: -* Check _LogOperation table for ingestion rate event -`_LogOperation | where TimeGenerated >= ago(7d) | where Category == "Ingestion" | where Operation has "Ingestion rate"` - Note: Operation table in the workspace every 6 hours while the threshold continues to be exceeded. -* [Create an alert](daily-cap.md#alert-when-daily-cap-is-reached) on "Data collection stopped" Operation event, this alert will allow you to get notified when the limit is reached. -* Data collected while ingestion rate reached 100% will be dropped and lost. +"The data ingestion volume rate crossed the threshold in your workspace: {0:0.00} MB per one minute and data has been dropped." ++Recommended actions: -'workspace insights' blade to review your usage patterns and try to reduce them.</br> -For further information: </br> -[Azure Monitor service limits](../service-limits.md#data-ingestion-volume-rate) </br> -[Analyze usage in Log Analytics workspace](analyze-usage.md) +* Check the `_LogOperation` table for an ingestion rate event:</br> +`_LogOperation | where TimeGenerated >= ago(7d) | where Category == "Ingestion" | where Operation has "Ingestion rate"` + </br>An event is sent to the **Operation** table in the workspace every six hours while the threshold continues to be exceeded. +* [Create an alert](daily-cap.md#alert-when-daily-cap-is-reached) on the "Data collection stopped" Operation event. This alert notifies you when the limit is reached. +* Data collected while the ingestion rate reached 100 percent will be dropped and lost. 
Use the **Workspace insights** pane to review your usage patterns and try to reduce them.</br> +For more information, see: </br> + - [Azure Monitor service limits](../service-limits.md#data-ingestion-volume-rate) </br> + - [Analyze usage in Log Analytics workspace](analyze-usage.md) - #### Operation: Maximum table column count-"Data of type \<**table name**\> was dropped because number of fields \<**new fields count**\> is above the limit of \<**current field count limit**\> custom fields per data type." -Recommended Actions: -For custom tables, you can move to [Parsing the data](./parse-text.md) in queries. +"Data of type \<**table name**\> was dropped because number of fields \<**new fields count**\> is above the limit of \<**current field count limit**\> custom fields per data type." ++Recommended action: For custom tables, you can move to [parsing the data](./parse-text.md) in queries. #### Operation: Field content validation-"The following fields' values \<**field name**\> of type \<**table name**\> have been trimmed to the max allowed size, \<**field size limit**\> bytes. Please adjust your input accordingly." -Field larger than the limit size was proccessed by Azure logs, the field was trimmed to the allowed field limit. We don't recommend sending fields larger than the allowed limit as this will result in data loss. +"The following fields' values \<**field name**\> of type \<**table name**\> have been trimmed to the max allowed size, \<**field size limit**\> bytes. Please adjust your input accordingly." ++A field larger than the limit size was processed by Azure logs. The field was trimmed to the allowed field limit. We don't recommend sending fields larger than the allowed limit because it results in data loss. ++Recommended actions: -Recommended Actions: Check the source of the affected data type:-* If the data is being sent through the HTTP Data Collector API, you will need to change your code\script to split the data before it's ingested. -* For custom logs, collected by Log Analytics agent, change the logging settings of the application\tool. -* For any other data type, raise a support case. -</br>Read more: [Azure Monitor service limits](../service-limits.md#data-ingestion-volume-rate) ++* If the data is being sent through the HTTP Data Collector API, you need to change your code\script to split the data before it's ingested. +* For custom logs, collected by a Log Analytics agent, change the logging settings of the application or tool. +* For any other data type, raise a support case. For more information, see [Azure Monitor service limits](../service-limits.md#data-ingestion-volume-rate). ### Data collection++The following section provides information on data collection. + #### Operation: Azure Activity Log collection-"Access to the subscription was lost. Ensure that the \<**subscription id**\> subscription is in the \<**tenant id**\> Azure Active Directory tenant. 
If the subscription is transferred to another tenant, there is no impact to the services, but information for the tenant could take up to an hour to propagate." ++In some situations, like moving a subscription to a different tenant, the Azure activity logs might stop flowing into the workspace. In those situations, you need to reconnect the subscription following the process described in this article. -Recommended Actions: -* If the subscription mentioned on the warning message no longer exists, navigate to the 'Azure Activity log' blade under 'Workspace Data Sources', select the relevant subscription, and finally select the 'Disconnect' button. -* If you no longer have access to the subscription mentioned on the warning message: - * Follow step 1 to disconnect the subscription. - * To continue collecting logs from this subscription, contact the subscription owner to fix the permissions, re-enable activity log collection. -* [Create a diagnostic setting](../essentials/activity-log.md#send-to-log-analytics-workspace) to send the Activity log to a Log Analytics workspace. +Recommended actions: ++* If the subscription mentioned in the warning message no longer exists, go to the **Azure Activity log** pane under **Workspace Data Sources**. Select the relevant subscription, and then select the **Disconnect** button. +* If you no longer have access to the subscription mentioned in the warning message: + * Follow the preceding step to disconnect the subscription. + * To continue collecting logs from this subscription, contact the subscription owner to fix the permissions and re-enable activity log collection. +* [Create a diagnostic setting](../essentials/activity-log.md#send-to-log-analytics-workspace) to send the activity log to a Log Analytics workspace. ### Agent++The following section provides information on agents. + #### Operation: Linux Agent-"Two successive configuration applications from OMS Settings failed" -Config settings on the portal have changed. +"Two successive configuration applications from OMS Settings failed." -Recommended Action -This issue is raised in case there is an issue for the Agent to retrieve the new config settings. -To mitigate this issue, you will need to reinstall the agent. -Check _LogOperation table for agent event.</br> +Configuration settings on the portal have changed. - `_LogOperation | where TimeGenerated >= ago(6h) | where Category == "Agent" | where Operation == "Linux Agent" | distinct _ResourceId` +Recommended action: +This issue is raised in case there's an issue for the agent to retrieve the new config settings. To mitigate this issue, reinstall the agent. +Check the `_LogOperation` table for the agent event:</br> -The list will list the resource IDs where the Agent has the wrong configuration. -To mitigate the issue, you will need to reinstall the Agents listed. + `_LogOperation | where TimeGenerated >= ago(6h) | where Category == "Agent" | where Operation == "Linux Agent" | distinct _ResourceId` - +The list will show the resource IDs where the agent has the wrong configuration. To mitigate the issue, reinstall the agents listed. ## Alert rules-Use [log query alerts](../alerts/alerts-log-query.md) in Azure Monitor to be proactively notified when an issue is detected in your Log Analytics workspace. Use a strategy that allows you to respond in a timely manner to issues while minimizing your costs. 
Your subscription will be charged for each alert rule as listed in [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs). -A recommended strategy is to start with two alert rules based on the level of the issue. Use a short frequency such as every 5 minutes for Errors and a longer frequency such as 24 hours for Warnings. Since Errors indicate potential data loss, you want to respond to them quickly to minimize any loss. Warnings typically indicate an issue that does not require immediate attention, so you can review them daily. +Use [log query alerts](../alerts/alerts-log-query.md) in Azure Monitor to be proactively notified when an issue is detected in your Log Analytics workspace. Use a strategy that allows you to respond in a timely manner to issues while minimizing your costs. Your subscription will be charged for each alert rule as listed in [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs). -Use the process in [Create, view, and manage log alerts using Azure Monitor](../alerts/alerts-log.md) to create the log alert rules. The following sections describe the details for each rule. +A recommended strategy is to start with two alert rules based on the level of the issue. Use a short frequency such as every 5 minutes for Errors and a longer frequency such as 24 hours for Warnings. Because Errors indicate potential data loss, you want to respond to them quickly to minimize any loss. Warnings typically indicate an issue that doesn't require immediate attention, so you can review them daily. +Use the process in [Create, view, and manage log alerts by using Azure Monitor](../alerts/alerts-log.md) to create the log alert rules. The following sections describe the details for each rule. | Query | Threshold value | Period | Frequency | |:|:|:|:| | `_LogOperation | where Level == "Error"` | 0 | 5 | 5 |-| `_LogOperation | where Level == "Warning"` | 0 | 1440 | 1440 | +| `_LogOperation | where Level == "Warning"` | 0 | 1,440 | 1,440 | -These alert rules will respond the same to all operations with Error or Warning. As you become more familiar with the operations that are generating alerts, you may want to respond differently for particular operations. For example, you may want to send notifications to different people for particular operations. +These alert rules will respond the same to all operations with Error or Warning. As you become more familiar with the operations that are generating alerts, you might want to respond differently for particular operations. For example, you might want to send notifications to different people for particular operations. -To create an alert rule for a specific operation, use a query that includes the **Category** and **Operation** columns. +To create an alert rule for a specific operation, use a query that includes the **Category** and **Operation** columns. -The following example creates a warning alert when the ingestion volume rate has reached 80% of the limit. +The following example creates a Warning alert when the ingestion volume rate has reached 80 percent of the limit: - Target: Select your Log Analytics workspace - Criteria: The following example creates a warning alert when the ingestion volume rate has - Alert rule name: Daily data limit reached - Severity: Warning (Sev 1) --The following example creates a warning alert when the data collection has reached the daily limit. 
+The following example creates a Warning alert when the data collection has reached the daily limit: - Target: Select your Log Analytics workspace - Criteria: The following example creates a warning alert when the data collection has reach - Frequency: 5 (minutes) - Alert rule name: Daily data limit reached - Severity: Warning (Sev 1)- + ## Next steps - Learn more about [log alerts](../alerts/alerts-log.md). |
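The monitor-workspace row above recommends alerting on `_LogOperation` at the Error and Warning levels. For ad hoc checks outside the portal, a small sketch using the `azure-monitor-query` SDK follows; it assumes that package plus `azure-identity` are installed and that the workspace ID placeholder is replaced.

```python
# Sketch: run the recommended Error-level health query with the
# azure-monitor-query SDK and print any matching operations.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"  # placeholder

client = LogsQueryClient(DefaultAzureCredential())

# Same query the five-minute Error alert rule in the row above uses.
results = client.query_workspace(
    WORKSPACE_ID,
    '_LogOperation | where Level == "Error"',
    timespan=timedelta(minutes=5),
)

for table in results.tables:
    for row in table.rows:
        print(row)
```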
azure-monitor | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Monitor description: Lists Azure Policy Regulatory Compliance controls available for Azure Monitor. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/10/2022 Last updated : 10/12/2022 |
azure-monitor | Workbooks Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-overview.md | Workbooks are helpful for scenarios such as: - Sharing the results of a resizing experiment of your VM with other members of your team. You can explain the goals for the experiment with text. Then you can show each usage metric and the analytics queries used to evaluate the experiment, along with clear call-outs for whether each metric was above or below target. - Reporting the impact of an outage on the usage of your VM. You can combine data, text explanation, and a discussion of next steps to prevent outages in the future. +Watch this video to see how you can use Azure Workbooks to get insights and visualize your data. +> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE5a1su] + ## The gallery The gallery lists all the saved workbooks and templates for your workspace. You can easily organize, sort, and manage workbooks of all types. |
azure-monitor | Monitor Virtual Machine Workloads | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-workloads.md | Azure Monitor has no ability to monitor the status of a service or daemon. There > [!NOTE] > The Change Tracking and Analysis solution is different from the [Change Analysis](vminsights-change-analysis.md) feature in VM insights. This feature is in public preview and not yet included in this scenario. -For different options to enable the Change Tracking solution on your virtual machines, see [Enable Change Tracking and Inventory](../../automation/change-tracking/overview.md#enable-change-tracking-and-inventory). This solution includes methods to configure virtual machines at scale. You'll have to [create an Azure Automation account](/azure/automation/quickstarts/create-azure-automation-account-portal) to support the solution. +For different options to enable the Change Tracking solution on your virtual machines, see [Enable Change Tracking and Inventory](../../automation/change-tracking/overview.md#enable-change-tracking-and-inventory). This solution includes methods to configure virtual machines at scale. You'll have to [create an Azure Automation account](../../automation/quickstarts/create-azure-automation-account-portal.md) to support the solution. When you enable Change Tracking and Inventory, two new tables are created in your Log Analytics workspace. Use these tables for log query alert rules. Use [SQL Insights (preview)](/azure/azure-sql/database/sql-insights-overview) to ## Next steps * [Learn how to analyze data in Azure Monitor logs using log queries](../logs/get-started-queries.md)-* [Learn about alerts using metrics and logs in Azure Monitor](../alerts/alerts-overview.md) +* [Learn about alerts using metrics and logs in Azure Monitor](../alerts/alerts-overview.md) |
azure-monitor | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md | This article lists significant changes to Azure Monitor documentation. | Article | Description | |||-|[Azure Monitor Agent overview](https://docs.microsoft.com/azure/azure-monitor/agents/agents-overview)|Added Azure Monitor Agent support for ARM64-based virtual machines for a number of distributions. <br><br>Azure Monitor Agent and legacy agents don't support machines and appliances that run heavily customized or stripped-down versions of operating system distributions. <br><br>Azure Monitor Agent versions 1.15.2 and higher now support syslog RFC formats, including Cisco Meraki, Cisco ASA, Cisco FTD, Sophos XG, Juniper Networks, Corelight Zeek, CipherTrust, NXLog, McAfee, and Common Event Format (CEF).| +|[Azure Monitor Agent overview](./agents/agents-overview.md)|Added Azure Monitor Agent support for ARM64-based virtual machines for a number of distributions. <br><br>Azure Monitor Agent and legacy agents don't support machines and appliances that run heavily customized or stripped-down versions of operating system distributions. <br><br>Azure Monitor Agent versions 1.15.2 and higher now support syslog RFC formats, including Cisco Meraki, Cisco ASA, Cisco FTD, Sophos XG, Juniper Networks, Corelight Zeek, CipherTrust, NXLog, McAfee, and Common Event Format (CEF).| ### Alerts | Article | Description | |||-|[Convert ITSM actions that send events to ServiceNow to secure webhook actions](https://docs.microsoft.com/azure/azure-monitor/alerts/itsm-convert-servicenow-to-webhook)|As of September 2022, we're starting the 3-year process of deprecating support of using ITSM actions to send events to ServiceNow. Learn how to convert ITSM actions that send events to ServiceNow to secure webhook actions| -|[Create a new alert rule](https://docs.microsoft.com/azure/azure-monitor/alerts/alerts-create-new-alert-rule)|Added description of all available monitoring services to create a new alert rule and alert processing rules pages. <br><br>Added support for regional processing for metric alert rules that monitor a custom metric with the scope defined as one of the supported regions. <br><br> Clarified that selecting the **Automatically resolve alerts** setting makes log alerts stateful.<| +|[Convert ITSM actions that send events to ServiceNow to secure webhook actions](./alerts/itsm-convert-servicenow-to-webhook.md)|As of September 2022, we're starting the 3-year process of deprecating support of using ITSM actions to send events to ServiceNow. Learn how to convert ITSM actions that send events to ServiceNow to secure webhook actions| +|[Create a new alert rule](./alerts/alerts-create-new-alert-rule.md)|Added description of all available monitoring services to create a new alert rule and alert processing rules pages. <br><br>Added support for regional processing for metric alert rules that monitor a custom metric with the scope defined as one of the supported regions. 
<br><br> Clarified that selecting the **Automatically resolve alerts** setting makes log alerts stateful.<| |[Types of Azure Monitor alerts](https://learn.microsoft.com/azure/azure-monitor/alerts/alerts-types)|Azure Database for PostgreSQL - Flexible Servers is supported for monitoring multiple resources.|-|[Upgrade legacy rules management to the current Log Alerts API from legacy Log Analytics Alert API](https://docs.microsoft.com/azure/azure-monitor/alerts/alerts-log-api-switch)|The process of moving legacy log alert rules management from the legacy API to the current API is now supported by the government cloud.| +|[Upgrade legacy rules management to the current Log Alerts API from legacy Log Analytics Alert API](./alerts/alerts-log-api-switch.md)|The process of moving legacy log alert rules management from the legacy API to the current API is now supported by the government cloud.| ### Application insights | Article | Description | |||-|[Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](https://docs.microsoft.com/azure/azure-monitor/app/java-in-process-agent)|New OpenTelemetry `@WithSpan` annotation guidance.| -|[Capture Application Insights custom metrics with .NET and .NET Core](https://docs.microsoft.com/azure/azure-monitor/app/tutorial-asp-net-custom-metrics)|Tutorial steps and images have been updated.| -|[Configuration options - Azure Monitor Application Insights for Java](https://learn.microsoft.com/azure/azure-monitor/app/java-in-process-agent)|Connection string guidance updated.| -|[Enable Application Insights for ASP.NET Core applications](https://docs.microsoft.com/azure/azure-monitor/app/tutorial-asp-net-core)|Tutorial steps and images have been updated.| -|[Enable Azure Monitor OpenTelemetry Exporter for .NET, Node.js, and Python applications (preview)](https://docs.microsoft.com/azure/azure-monitor/app/opentelemetry-enable)|Our product feedback link at the bottom of each document has been fixed.| -|[Filter and preprocess telemetry in the Application Insights SDK](https://docs.microsoft.com/azure/azure-monitor/app/api-filtering-sampling)|Added sample initializer to control which client IP gets used as part of geo-location mapping.| -|[Java Profiler for Azure Monitor Application Insights](https://docs.microsoft.com/azure/azure-monitor/app/java-standalone-profiler)|Our new Java Profiler was announced at Ignite. Read all about it!| -|[Release notes for Azure Web App extension for Application Insights](https://docs.microsoft.com/azure/azure-monitor/app/web-app-extension-release-notes)|Added release notes for 2.8.44 and 2.8.43.| -|[Resource Manager template samples for creating Application Insights resources](https://docs.microsoft.com/azure/azure-monitor/app/resource-manager-app-resource)|Fixed inaccurate tagging of workspace-based resources as still in Preview.| -|[Unified cross-component transaction diagnostics](https://docs.microsoft.com/azure/azure-monitor/app/transaction-diagnostics)|A complete FAQ section is added to help troubleshoot Azure portal errors, such as "error retrieving data".| -|[Upgrading from Application Insights Java 2.x SDK](https://docs.microsoft.com/azure/azure-monitor/app/java-standalone-upgrade-from-2x)|Additional upgrade guidance added. 
Java 2.x has been deprecated.| -|[Using Azure Monitor Application Insights with Spring Boot](https://docs.microsoft.com/azure/azure-monitor/app/java-spring-boot)|Configuration options have been updated.| +|[Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](./app/java-in-process-agent.md)|New OpenTelemetry `@WithSpan` annotation guidance.| +|[Capture Application Insights custom metrics with .NET and .NET Core](./app/tutorial-asp-net-custom-metrics.md)|Tutorial steps and images have been updated.| +|[Configuration options - Azure Monitor Application Insights for Java](/azure/azure-monitor/app/java-in-process-agent)|Connection string guidance updated.| +|[Enable Application Insights for ASP.NET Core applications](./app/tutorial-asp-net-core.md)|Tutorial steps and images have been updated.| +|[Enable Azure Monitor OpenTelemetry Exporter for .NET, Node.js, and Python applications (preview)](./app/opentelemetry-enable.md)|Our product feedback link at the bottom of each document has been fixed.| +|[Filter and preprocess telemetry in the Application Insights SDK](./app/api-filtering-sampling.md)|Added sample initializer to control which client IP gets used as part of geo-location mapping.| +|[Java Profiler for Azure Monitor Application Insights](./app/java-standalone-profiler.md)|Our new Java Profiler was announced at Ignite. Read all about it!| +|[Release notes for Azure Web App extension for Application Insights](./app/web-app-extension-release-notes.md)|Added release notes for 2.8.44 and 2.8.43.| +|[Resource Manager template samples for creating Application Insights resources](./app/resource-manager-app-resource.md)|Fixed inaccurate tagging of workspace-based resources as still in Preview.| +|[Unified cross-component transaction diagnostics](./app/transaction-diagnostics.md)|A complete FAQ section is added to help troubleshoot Azure portal errors, such as "error retrieving data".| +|[Upgrading from Application Insights Java 2.x SDK](./app/java-standalone-upgrade-from-2x.md)|Additional upgrade guidance added. 
Java 2.x has been deprecated.| +|[Using Azure Monitor Application Insights with Spring Boot](./app/java-spring-boot.md)|Configuration options have been updated.| ### Autoscale | Article | Description | |||-|[Autoscale with multiple profiles](https://docs.microsoft.com/azure/azure-monitor/autoscale/autoscale-multiprofile)|New article: Using multiple profiles in autoscale with CLI PowerShell and templates.| -|[Flapping in Autoscale](https://docs.microsoft.com/azure/azure-monitor/autoscale/autoscale-flapping)|New Article: Flapping in autoscale.| -|[Understand Autoscale settings](https://docs.microsoft.com/azure/azure-monitor/autoscale/autoscale-understanding-settings)|Clarified how often autoscale runs.| +|[Autoscale with multiple profiles](./autoscale/autoscale-multiprofile.md)|New article: Using multiple profiles in autoscale with CLI PowerShell and templates.| +|[Flapping in Autoscale](./autoscale/autoscale-flapping.md)|New Article: Flapping in autoscale.| +|[Understand Autoscale settings](./autoscale/autoscale-understanding-settings.md)|Clarified how often autoscale runs.| ### Change analysis | Article | Description | |||-|[Troubleshoot Azure Monitor's Change Analysis](https://docs.microsoft.com/azure/azure-monitor/change/change-analysis-troubleshoot)|Added section about partial data and how to mitigate to the troubleshooting guide.| +|[Troubleshoot Azure Monitor's Change Analysis](./change/change-analysis-troubleshoot.md)|Added section about partial data and how to mitigate to the troubleshooting guide.| ### Essentials | Article | Description | |||-|[Structure of transformation in Azure Monitor (preview)](https://docs.microsoft.com/azure/azure-monitor/essentials/data-collection-transformations-structure)|New KQL functions supported.| +|[Structure of transformation in Azure Monitor (preview)](./essentials/data-collection-transformations-structure.md)|New KQL functions supported.| ### Virtual Machines | Article | Description | |||-|[Migrate from Service Map to Azure Monitor VM insights](https://docs.microsoft.com/azure/azure-monitor/vm/vminsights-migrate-from-service-map)|Added a new article with guidance for migrating from the Service Map solution to Azure Monitor VM insights.| +|[Migrate from Service Map to Azure Monitor VM insights](./vm/vminsights-migrate-from-service-map.md)|Added a new article with guidance for migrating from the Service Map solution to Azure Monitor VM insights.| ### Network Insights This article lists significant changes to Azure Monitor documentation. ### Visualizations | Article | Description | |||-|[Access deprecated Troubleshooting guides in Azure Workbooks](https://docs.microsoft.com/azure/azure-monitor/visualize/workbooks-access-troubleshooting-guide)|New article: Access deprecated Troubleshooting guides in Azure Workbooks.| +|[Access deprecated Troubleshooting guides in Azure Workbooks](./visualize/workbooks-access-troubleshooting-guide.md)|New article: Access deprecated Troubleshooting guides in Azure Workbooks.| ## August 2022 All references to unsupported versions of .NET and .NET CORE have been scrubbed | Article | Description | |:|:| | [Migrate from VM insights guest health (preview) to Azure Monitor log alerts](vm/vminsights-health-migrate.md) | New article describing process to replace VM guest health with alert rules |-| [VM insights guest health (preview)](vm/vminsights-health-overview.md) | Added deprecation statement | +| [VM insights guest health (preview)](vm/vminsights-health-overview.md) | Added deprecation statement | |
azure-netapp-files | Azure Netapp Files Create Volumes Smb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md | Before creating an SMB volume, you need to create an Active Directory connection * **Volume name** Specify the name for the volume that you are creating. - A volume name must be unique within each capacity pool. It must be at least three characters long. The name must begin with a letter. It can contain letters, numbers, underscores ('_'), and hyphens ('-') only. -- You can't use `default` or `bin` as the volume name. + Refer to [Naming rules and restrictions for Azure resources](../azure-resource-manager/management/resource-name-rules.md#microsoftnetapp) for naming conventions on volumes. Additionally, you cannot use `default` or `bin` as the volume name. * **Capacity pool** Specify the capacity pool where you want the volume to be created. |
azure-netapp-files | Azure Netapp Files Create Volumes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes.md | This article shows you how to create an NFS volume. For SMB volumes, see [Create 2. In the Create a Volume window, click **Create**, and provide information for the following fields under the Basics tab: * **Volume name** - Specify the name for the volume that you are creating. + Specify the name for the volume that you are creating. - A volume name must be unique within each capacity pool. It must be at least three characters long. The name must begin with a letter. It can contain letters, numbers, underscores ('_'), and hyphens ('-') only. -- You cannot use `default` or `bin` as the volume name. + Refer to [Naming rules and restrictions for Azure resources](../azure-resource-manager/management/resource-name-rules.md#microsoftnetapp) for naming conventions on volumes. Additionally, you cannot use `default` or `bin` as the volume name. * **Capacity pool** Specify the capacity pool where you want the volume to be created. |
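Both volume-creation rows above now defer naming guidance to the Azure resource naming rules (see also the resource-name-rules row later in this table: 1-64 characters, alphanumerics, underscores, and hyphens, starting with an alphanumeric, and never `default` or `bin`). A small client-side validator sketch based only on those stated rules:

```python
# Client-side check for Azure NetApp Files volume names, derived only
# from the rules stated in these rows; the service remains authoritative.
import re

_VOLUME_NAME = re.compile(r"^[A-Za-z0-9][A-Za-z0-9_-]{0,63}$")  # 1-64 chars
_RESERVED = {"default", "bin"}

def is_valid_volume_name(name: str) -> bool:
    """Return True if 'name' passes the documented naming rules."""
    return name not in _RESERVED and bool(_VOLUME_NAME.fullmatch(name))

assert is_valid_volume_name("vol-01")
assert not is_valid_volume_name("bin")
assert not is_valid_volume_name("-starts-wrong")
```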
azure-netapp-files | Azure Netapp Files Solution Architectures | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md | This section provides references for Windows applications and SQL Server solutio * [SQL Server on Azure Virtual Machines with Azure NetApp Files - Azure Example Scenarios](/azure/architecture/example-scenario/file-storage/sql-server-azure-netapp-files) * [SQL Server on Azure Deployment Guide Using Azure NetApp Files](https://techcommunity.microsoft.com/t5/azure-architecture-blog/deploying-sql-server-on-azure-using-azure-netapp-files/ba-p/3023143) * [Benefits of using Azure NetApp Files for SQL Server deployment](solutions-benefits-azure-netapp-files-sql-server.md)+* [Managing SQL Server 2022 T-SQL snapshot backup with Azure NetApp Files snapshots](https://techcommunity.microsoft.com/t5/azure-architecture-blog/managing-sql-server-2022-t-sql-snapshot-backup-with-azure-netapp/ba-p/3654798) * [Deploy SQL Server Over SMB with Azure NetApp Files](https://www.youtube.com/watch?v=x7udfcYbibs) * [Deploy SQL Server Always-On Failover Cluster over SMB with Azure NetApp Files](https://www.youtube.com/watch?v=zuNJ5E07e8Q) * [Deploy Always-On Availability Groups with Azure NetApp Files](https://www.youtube.com/watch?v=y3VQmzzeyvc) This section provides references to SAP on Azure solutions. * [SAP HANA scale-out with HSR and Pacemaker on RHEL - Azure Virtual Machines](../virtual-machines/workloads/sap/sap-hana-high-availability-scale-out-hsr-rhel.md) * [Implementing Azure NetApp Files with Kerberos for SAP HANA](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/implementing-azure-netapp-files-with-kerberos/ba-p/3142010) * [Azure Application Consistent Snapshot tool (AzAcSnap)](azacsnap-introduction.md)+* [Protecting HANA databases configured with HSR on Azure NetApp Files with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/protecting-hana-databases-configured-with-hsr-on-azure-netapp/ba-p/3654620) * [Manual Recovery Guide for SAP HANA on Azure VMs from Azure NetApp Files snapshot with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/manual-recovery-guide-for-sap-hana-on-azure-vms-from-azure/ba-p/3290161) * [SAP HANA Disaster Recovery with Azure NetApp Files](https://docs.netapp.com/us-en/netapp-solutions-sap/pdfs/sidebar/SAP_HANA_Disaster_Recovery_with_Azure_NetApp_Files.pdf) * [SAP HANA backup and recovery on Azure NetApp Files with SnapCenter Service](https://docs.netapp.com/us-en/netapp-solutions-sap/pdfs/sidebar/SAP_HANA_backup_and_recovery_on_Azure_NetApp_Files_with_SnapCenter_Service.pdf) |
azure-netapp-files | Create Volumes Dual Protocol | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-volumes-dual-protocol.md | To create NFS volumes, see [Create an NFS volume](azure-netapp-files-create-volu | NFSv3 | `Unix` | None | UNIX (mode bits or NFSv4.x ACLs) <br><br> NFSv4.x ACLs can be applied using an NFSv4.x administrative client and honored by NFSv3 clients. | | NFS | `Ntfs` | UNIX to Windows | NTFS ACLs (based on mapped Windows user SID) | -* The LDAP with extended groups feature supports the dual protocol of both [NFSv3 and SMB] and [NFSv4.1 and SMB] with the Unix security style. See [Configure ADDS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md) for more information. +* The LDAP with extended groups feature supports the dual protocol of both [NFSv3 and SMB] and [NFSv4.1 and SMB] with the Unix security style. See [Configure AD DS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md) for more information. -* If you have large topologies, and you use the Unix security style with a dual-protocol volume or LDAP with extended groups, you should use the **LDAP Search Scope** option on the Active Directory Connections page to avoid "access denied" errors on Linux clients for Azure NetApp Files. See [Configure ADDS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md#ldap-search-scope) for more information. +* If you have large topologies, and you use the Unix security style with a dual-protocol volume or LDAP with extended groups, you should use the **LDAP Search Scope** option on the Active Directory Connections page to avoid "access denied" errors on Linux clients for Azure NetApp Files. See [Configure AD DS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md#ldap-search-scope) for more information. * You don't need a server root CA certificate for creating a dual-protocol volume. It is required only if LDAP over TLS is enabled. To create NFS volumes, see [Create an NFS volume](azure-netapp-files-create-volu * **Volume name** Specify the name for the volume that you are creating. - A volume name must be unique within each capacity pool. It must be at least three characters long. The name must begin with a letter. It can contain letters, numbers, underscores ('_'), and hyphens ('-') only. -- You cannot use `default` or `bin` as the volume name. + Refer to [Naming rules and restrictions for Azure resources](../azure-resource-manager/management/resource-name-rules.md#microsoftnetapp) for naming conventions on volumes. Additionally, you can't use `default` or `bin` as the volume name. * **Capacity pool** Specify the capacity pool where you want the volume to be created. The **Allow local NFS users with LDAP** option in Active Directory connections e > [!NOTE] > Before enabling this option, you should understand the [considerations](#considerations). -> The **Allow local NFS users with LDAP** option is part of the **LDAP with extended groups** feature and requires registration. See [Configure ADDS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md) for details. +> The **Allow local NFS users with LDAP** option is part of the **LDAP with extended groups** feature and requires registration. See [Configure AD DS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md) for details. 1. Click **Active Directory connections**. 
On an existing Active Directory connection, click the context menu (the three dots `…`), and select **Edit**. Follow instructions in [Configure an NFS client for Azure NetApp Files](configur * [Configure NFSv4.1 Kerberos encryption](configure-kerberos-encryption.md) * [Configure an NFS client for Azure NetApp Files](configure-nfs-clients.md) * [Configure Unix permissions and change ownership mode](configure-unix-permissions-change-ownership-mode.md). -* [Configure ADDS LDAP over TLS for Azure NetApp Files](configure-ldap-over-tls.md) -* [Configure ADDS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md) +* [Configure AD DS LDAP over TLS for Azure NetApp Files](configure-ldap-over-tls.md) +* [Configure AD DS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md) * [Troubleshoot volume errors for Azure NetApp Files](troubleshoot-volumes.md) |
azure-resource-manager | Delete Resource Group | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/delete-resource-group.md | Title: Delete resource group and resources description: Describes how to delete resource groups and resources. It describes how Azure Resource Manager orders the deletion of resources when a deleting a resource group. It describes the response codes and how Resource Manager handles them to determine if the deletion succeeded. Previously updated : 09/28/2021 Last updated : 10/13/2022 For a list of operations, see [Azure resource provider operations](../../role-ba If you have the required access, but the delete request fails, it may be because there's a [lock on the resources or resource group](lock-resources.md). Even if you didn't manually lock a resource group, it may have been [automatically locked by a related service](lock-resources.md#managed-applications-and-locks). Or, the deletion can fail if the resources are connected to resources in other resource groups that aren't being deleted. For example, you can't delete a virtual network with subnets that are still in use by a virtual machine. +## Accidental deletion ++If you accidentally delete a resource group or resource, in some situations it might be possible to recover it. ++Some resource types support *soft delete*. You might have to configure soft delete before you can use it. For more information about enabling soft delete, see the documentation for [Azure Key Vault](../../key-vault/general/soft-delete-overview.md), [Azure Backup](../../backup/backup-azure-delete-vault.md), and [Azure Storage](../../storage/blobs/soft-delete-container-overview.md). ++You can also [open an Azure support case](../../azure-portal/supportability/how-to-create-azure-support-request.md). Provide as much detail as you can about the deleted resources, including their resource IDs, types, and resource names, and request that the support engineer check if the resources can be restored. ++> [!NOTE] +> Recovery of deleted resources is not possible under all circumstances. A support engineer will investigate your scenario and advise you whether it's possible. + ## Next steps * To understand Resource Manager concepts, see [Azure Resource Manager overview](overview.md). |
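The delete-resource-group row above adds guidance on accidental deletion, a reminder that deletions are worth gating in automation. As a minimal sketch, assuming the `azure-mgmt-resource` and `azure-identity` packages and placeholder names:

```python
# Sketch: delete a resource group programmatically. As the article
# notes, deletion is generally irreversible unless a service-specific
# soft delete applies, so confirm the target before running this.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
GROUP_NAME = "<resource-group-name>"   # placeholder

client = ResourceManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

if client.resource_groups.check_existence(GROUP_NAME):
    poller = client.resource_groups.begin_delete(GROUP_NAME)
    poller.wait()  # blocks until Resource Manager completes (or fails on a lock)
```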
azure-resource-manager | Resource Name Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md | description: Shows the rules and restrictions for naming Azure resources. Previously updated : 10/05/2022 Last updated : 10/18/2022 # Naming rules and restrictions for Azure resources In the following tables, the term alphanumeric refers to: > | netAppAccounts / capacityPools | NetApp account | 1-64 | Alphanumerics, underscores, and hyphens.<br><br>Start with alphanumeric. | > | netAppAccounts / snapshots | NetApp account | 1-255 | Alphanumerics, underscores, and hyphens. <br><br> Start with alphanumeric. | > | netAppAccounts / snapshotPolicies | NetApp account | 1-64 | Alphanumerics, underscores, and hyphens.<br><br>Start with alphanumeric. |-> | netAppAccounts / volumes | NetApp account | 1-64 | Alphanumerics, underscores, and hyphens. <br><br> Start with alphanumeric. | +> | netAppAccounts / volumes | NetApp account | 1-64 | Alphanumerics, underscores, and hyphens. <br><br> Start with alphanumeric. <br><br> Volume cannot be named `bin` or `default`. | > | netAppAccounts / volumeGroups | NetApp account | 3-64 | Alphanumerics, underscores, and hyphens.<br><br>Start with alphanumeric. | ## Microsoft.Network |
azure-resource-manager | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Resource Manager description: Lists Azure Policy Regulatory Compliance controls available for Azure Resource Manager. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/10/2022 Last updated : 10/12/2022 |
azure-signalr | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure SignalR description: Lists Azure Policy Regulatory Compliance controls available for Azure SignalR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/10/2022 Last updated : 10/12/2022 |
azure-signalr | Signalr Howto Reverse Proxy Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-reverse-proxy-overview.md | -A reverse proxy server can be used in front of Azure SignalR Service. Reverse proxy servers sit in between the clients and the Azure SignalR service and other services can help in various scenarios. For example, reverse proxy servers can load balance different client requests to different backend services, you can usually configure different routing rules for different client requests, and provide seamless user experience for users accessing different backend services. They can also protect your backend servers from common exploits vulnerabilities with centralized protection control. Services such as [Azure Application Gateway](/azure/application-gateway/overview), [Azure API Management](/azure/api-management/api-management-key-concepts) or [Akamai](https://www.akamai.com) can act as reverse proxy servers. +A reverse proxy server can be used in front of Azure SignalR Service. Reverse proxy servers sit in between the clients and the Azure SignalR service and other services can help in various scenarios. For example, reverse proxy servers can load balance different client requests to different backend services, you can usually configure different routing rules for different client requests, and provide seamless user experience for users accessing different backend services. They can also protect your backend servers from common exploits vulnerabilities with centralized protection control. Services such as [Azure Application Gateway](../application-gateway/overview.md), [Azure API Management](../api-management/api-management-key-concepts.md) or [Akamai](https://www.akamai.com) can act as reverse proxy servers. A common architecture using a reverse proxy server with Azure SignalR is as below: There are several general practices to follow when using a reverse proxy in fron - Learn [how to work with Application Gateway](./signalr-howto-work-with-app-gateway.md). -- Learn more about [the internals of Azure SignalR](./signalr-concept-internals.md).+- Learn more about [the internals of Azure SignalR](./signalr-concept-internals.md). |
azure-video-indexer | Limited Access Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/limited-access-features.md | This section talks about limited access features in Azure Video Indexer. FAQ about Limited Access can be found [here](https://aka.ms/limitedaccesscogservices). -If you need help with Azure Video Indexer, find support [here](/azure/cognitive-services/cognitive-services-support-options). +If you need help with Azure Video Indexer, find support [here](../cognitive-services/cognitive-services-support-options.md). [Report Abuse](https://msrc.microsoft.com/report/abuse) of Azure Video Indexer. If you need help with Azure Video Indexer, find support [here](/azure/cognitive- Learn more about the legal terms that apply to this service [here](https://azure.microsoft.com/support/legal/). - |
azure-video-indexer | Observed People Featured Clothing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/observed-people-featured-clothing.md | Title: Enable featured clothing of an observed person description: When indexing a video using Azure Video Indexer advanced video settings, you can view the featured clothing of an observed person. Previously updated : 09/10/2022 Last updated : 10/10/2022 When indexing a video using Azure Video Indexer advanced video settings, you can This article discusses how to view the featured clothing insight and how the featured clothing images are ranked. +## View an intro video ++You can view the following short video that discusses how to view and use the featured clothing insight. ++[An intro video](https://www.youtube.com/watch?v=x33fND286eE). + ## Viewing featured clothing The featured clothing insight is available when indexing your file by choosing the Advanced option -> Advanced video or Advanced video + audio preset (under Video + audio indexing). Standard indexing will not include this insight. |
azure-vmware | Attach Azure Netapp Files To Azure Vmware Solution Hosts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md | Title: Attach Azure NetApp Files datastores to Azure VMware Solution hosts (Prev description: Learn how to create Azure NetApp Files-based NFS datastores for Azure VMware Solution hosts. Previously updated : 08/08/2022 Last updated : 10/18/2022 Before you begin the prerequisites, review the [Performance best practices](#per Azure VMware Solution currently supports the following regions: -**America** : East US, West US, Central US, South Central US, North Central US, Canada East, Canada Central . +**America** : East US, East US 2, West US, Central US, South Central US, North Central US, Canada East, Canada Central. **Europe** : West Europe, North Europe, UK West, UK South, France Central, Switzerland West, Germany West Central. -**Asia** : Southeast Asia, Japan West. +**Asia** : East Asia, Southeast Asia, Japan East, Japan West. **Australia** : Australia East, Australia Southeast. |
azure-vmware | Concepts Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-storage.md | vSAN datastores use data-at-rest encryption by default using keys stored in Azur ## Datastore capacity expansion options -The vSAN datastore capacity can be expanded by connecting Azure storage resources such as [Azure NetApp Files volumes as datastores](/azure/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts). Virtual machines can be migrated between vSAN and Azure NetApp Files datastores using storage vMotion. -Azure NetApp Files is available in [Ultra, Premium and Standard performance tiers](/azure/azure-netapp-files/azure-netapp-files-service-levels) to allow for adjusting performance and cost to the requirements of the workloads. +The vSAN datastore capacity can be expanded by connecting Azure storage resources such as [Azure NetApp Files volumes as datastores](./attach-azure-netapp-files-to-azure-vmware-solution-hosts.md). Virtual machines can be migrated between vSAN and Azure NetApp Files datastores using storage vMotion. +Azure NetApp Files is available in [Ultra, Premium and Standard performance tiers](../azure-netapp-files/azure-netapp-files-service-levels.md) to allow for adjusting performance and cost to the requirements of the workloads. ## Azure storage integration Now that you've covered Azure VMware Solution storage concepts, you may want to - [Scale clusters in the private cloud][tutorial-scale-private-cloud] - You can scale the clusters and hosts in a private cloud as required for your application workload. Performance and availability limitations for specific services should be addressed on a case by case basis. -- [Azure NetApp Files with Azure VMware Solution](netapp-files-with-azure-vmware-solution.md) - You can use Azure NetApp Files to migrate and run the most demanding enterprise file-workloads in the cloud: databases, and general purpose computing applications, with no code changes. Azure NetApp Files volumes can be attached to virtual machines, and as [datastores](/azure/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts) to extend the vSAN datastore capacity without adding more nodes.+- [Azure NetApp Files with Azure VMware Solution](netapp-files-with-azure-vmware-solution.md) - You can use Azure NetApp Files to migrate and run the most demanding enterprise file-workloads in the cloud: databases, and general purpose computing applications, with no code changes. Azure NetApp Files volumes can be attached to virtual machines, and as [datastores](./attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) to extend the vSAN datastore capacity without adding more nodes. - [vSphere role-based access control for Azure VMware Solution](concepts-identity.md) - You use vCenter Server to manage VM workloads and NSX-T Manager to manage and extend the private cloud. Access and identity management use the CloudAdmin role for vCenter Server and restricted administrator rights for NSX-T Manager. Now that you've covered Azure VMware Solution storage concepts, you may want to <!-- LINKS - internal --> [tutorial-scale-private-cloud]: ./tutorial-scale-private-cloud.md-[concepts-identity]: ./concepts-identity.md +[concepts-identity]: ./concepts-identity.md |
azure-vmware | Configure Customer Managed Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-customer-managed-keys.md | The following diagram shows how Azure VMware Solution uses Azure Active Director Before you begin to enable customer-managed key (CMK) functionality, ensure the following listed requirements are met: -- You'll need an Azure Key Vault to use CMK functionality. If you don't have an Azure Key Vault, you can create one using [Quickstart: Create a key vault using the Azure portal](https://docs.microsoft.com/azure/key-vault/general/quick-create-portal).-- If you enabled restricted access to key vault, you'll need to allow Microsoft Trusted Services to bypass the Azure Key Vault firewall. Go to [Configure Azure Key Vault networking settings](https://docs.microsoft.com/azure/key-vault/general/how-to-azure-key-vault-network-security?tabs=azure-portal) to learn more.+- You'll need an Azure Key Vault to use CMK functionality. If you don't have an Azure Key Vault, you can create one using [Quickstart: Create a key vault using the Azure portal](../key-vault/general/quick-create-portal.md). +- If you enabled restricted access to key vault, you'll need to allow Microsoft Trusted Services to bypass the Azure Key Vault firewall. Go to [Configure Azure Key Vault networking settings](../key-vault/general/how-to-azure-key-vault-network-security.md?tabs=azure-portal) to learn more. >[!NOTE]- >After firewall rules are in effect, users can only perform Key Vault [data plane](https://docs.microsoft.com/azure/key-vault/general/security-features#privileged-access) operations when their requests originate from allowed VMs or IPv4 address ranges. This also applies to accessing key vault from the Azure portal. This also affects the key vault Picker by Azure VMware Solution. Users may be able to see a list of key vaults, but not list keys, if firewall rules prevent their client machine or user does not have list permission in key vault. + >After firewall rules are in effect, users can only perform Key Vault [data plane](../key-vault/general/security-features.md#privileged-access) operations when their requests originate from allowed VMs or IPv4 address ranges. This also applies to accessing key vault from the Azure portal. This also affects the key vault Picker by Azure VMware Solution. Users may be able to see a list of key vaults, but not list keys, if firewall rules prevent their client machine or user does not have list permission in key vault. - Enable **System Assigned identity** on your Azure VMware Solution private cloud if you didn't enable it during software-defined data center (SDDC) provisioning. Before you begin to enable customer-managed key (CMK) functionality, ensure the privateCloudId=$(az vmware private-cloud show --name $privateCloudName --resource-group $resourceGroupName --query id | tr -d '"') ``` - To configure the system-assigned identity on Azure VMware Solution private cloud with Azure CLI, call [az-resource-update](https://docs.microsoft.com/cli/azure/resource?view=azure-cli-latest#az-resource-update) and provide the variable for the private cloud resource ID that you previously retrieved. + To configure the system-assigned identity on Azure VMware Solution private cloud with Azure CLI, call [az-resource-update](/cli/azure/resource?view=azure-cli-latest#az-resource-update) and provide the variable for the private cloud resource ID that you previously retrieved. 
```azurecli-interactive az resource update --ids $privateCloudId --set identity.type=SystemAssigned --api-version "2021-12-01" Before you begin to enable customer-managed key (CMK) functionality, ensure the principalId=$(az vmware private-cloud show --name $privateCloudName --resource-group $resourceGroupName --query identity.principalId | tr -d '"') ``` - To configure the key vault access policy with Azure CLI, call [az keyvault set-policy](https://docs.microsoft.com/cli/azure/keyvault#az-keyvault-set-policy) and provide the variable for the principal ID that you previously retrieved for the managed identity. + To configure the key vault access policy with Azure CLI, call [az keyvault set-policy](/cli/azure/keyvault#az-keyvault-set-policy) and provide the variable for the principal ID that you previously retrieved for the managed identity. ```azurecli-interactive az keyvault set-policy --name $keyVault --resource-group $resourceGroupName --object-id $principalId --key-permissions get unwrapKey wrapKey ``` - Learn more about how to [Assign an Azure Key Vault access policy](https://docs.microsoft.com/azure/key-vault/general/assign-access-policy?tabs=azure-portal). + Learn more about how to [Assign an Azure Key Vault access policy](../key-vault/general/assign-access-policy.md?tabs=azure-portal). ## Customer-managed key version lifecycle Navigate to your **Azure Key Vault** and provide access to the SDDC on Azure Key # [Azure CLI](#tab/azure-cli) -To configure customer-managed keys for an Azure VMware Solution private cloud with automatic updating of the key version, call [az vmware private-cloud add-cmk-encryption](https://docs.microsoft.com/cli/azure/vmware/private-cloud?view=azure-cli-latest#az-vmware-private-cloud-add-cmk-encryption). Get the key vault URL and save it to a variable. You'll need this value in the next step to enable CMK. +To configure customer-managed keys for an Azure VMware Solution private cloud with automatic updating of the key version, call [az vmware private-cloud add-cmk-encryption](/cli/azure/vmware/private-cloud?view=azure-cli-latest#az-vmware-private-cloud-add-cmk-encryption). Get the key vault URL and save it to a variable. You'll need this value in the next step to enable CMK. ```azurecli-interactive keyVaultUrl=$(az keyvault show --name <keyvault_name> --resource-group <resource_group_name> --query properties.vaultUri --output tsv) ``` If you accidentally delete the Managed System Identity (MSI) associated with pri ## Next steps -Learn about [Azure Key Vault backup and restore](https://docs.microsoft.com/azure/key-vault/general/backup?tabs=azure-cli) +Learn about [Azure Key Vault backup and restore](../key-vault/general/backup.md?tabs=azure-cli) -Learn about [Azure Key Vault recovery](https://docs.microsoft.com/azure/key-vault/general/key-vault-recovery?tabs=azure-portal#list-recover-or-purge-a-soft-deleted-key-vault) +Learn about [Azure Key Vault recovery](../key-vault/general/key-vault-recovery.md?tabs=azure-portal#list-recover-or-purge-a-soft-deleted-key-vault) |
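The CMK entry above stops just before the final enable step. A minimal sketch of that step, reusing the `$privateCloudName`, `$resourceGroupName`, and `$keyVaultUrl` variables from the snippets above; the `--enc-kv-key-name` and `--enc-kv-url` parameter names are assumptions to verify against `az vmware private-cloud add-cmk-encryption --help`:

```azurecli-interactive
# Enable CMK on the private cloud. Omitting a key version lets Azure VMware
# Solution pick up new key versions automatically when the key is rotated.
az vmware private-cloud add-cmk-encryption \
  --private-cloud $privateCloudName \
  --resource-group $resourceGroupName \
  --enc-kv-key-name <key_name> \
  --enc-kv-url $keyVaultUrl
```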
azure-vmware | Deploy Vsan Stretched Clusters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-vsan-stretched-clusters.md | To request support, send an email request to **avsStretchedCluster@microsoft.com - Number of nodes in first stretched cluster (minimum 6, maximum 16 - in multiples of two) - Estimated provisioning date (used for billing purposes) -When the request support details are received, quota will be reserved for a stretched cluster environment in the region requested. The subscription gets enabled to deploy a stretched cluster SDDC through the Azure portal. A confirmation email will be sent to the designated point of contact within two business days upon which you should be able to [self-deploy a stretched cluster private cloud via the Azure portal](/azure/azure-vmware/tutorial-create-private-cloud?tabs=azure-portal#create-a-private-cloud). Be sure to select **Hosts in two availability zones** to ensure that a stretched cluster gets deployed in the region of your choice. +When the request support details are received, quota will be reserved for a stretched cluster environment in the region requested. The subscription gets enabled to deploy a stretched cluster SDDC through the Azure portal. A confirmation email will be sent to the designated point of contact within two business days upon which you should be able to [self-deploy a stretched cluster private cloud via the Azure portal](./tutorial-create-private-cloud.md?tabs=azure-portal#create-a-private-cloud). Be sure to select **Hosts in two availability zones** to ensure that a stretched cluster gets deployed in the region of your choice. :::image type="content" source="media/stretch-clusters/stretched-clusters-hosts-two-availability-zones.png" alt-text="Screenshot shows where to select hosts in two availability zones."::: -Once the private cloud is created, you can peer both availability zones (AZs) to your on-premises ExpressRoute circuit with Global Reach that helps connect your on-premises data center to the private cloud. Peering both the AZs will ensure that an AZ failure doesn't result in a loss of connectivity to your private cloud. Since an ExpressRoute Auth Key is valid for only one connection, repeat the [Create an ExpressRoute auth key in the on-premises ExpressRoute circuit](/azure/azure-vmware/tutorial-expressroute-global-reach-private-cloud#create-an-expressroute-auth-key-in-the-on-premises-expressroute-circuit) process to generate another authorization. +Once the private cloud is created, you can peer both availability zones (AZs) to your on-premises ExpressRoute circuit with Global Reach that helps connect your on-premises data center to the private cloud. Peering both the AZs will ensure that an AZ failure doesn't result in a loss of connectivity to your private cloud. Since an ExpressRoute Auth Key is valid for only one connection, repeat the [Create an ExpressRoute auth key in the on-premises ExpressRoute circuit](./tutorial-expressroute-global-reach-private-cloud.md#create-an-expressroute-auth-key-in-the-on-premises-expressroute-circuit) process to generate another authorization. 
:::image type="content" source="media/stretch-clusters/express-route-availability-zones.png" alt-text="Screenshot shows how to generate Express Route authorizations for both availability zones."lightbox="media/stretch-clusters/express-route-availability-zones.png"::: -Next, repeat the process to [peer ExpressRoute Global Reach](/azure/azure-vmware/tutorial-expressroute-global-reach-private-cloud#peer-private-cloud-to-on-premises) two availability zones to the on-premises ExpressRoute circuit. +Next, repeat the process to [peer ExpressRoute Global Reach](./tutorial-expressroute-global-reach-private-cloud.md#peer-private-cloud-to-on-premises) two availability zones to the on-premises ExpressRoute circuit. :::image type="content" source="media/stretch-clusters/express-route-global-reach-peer-availability-zones.png" alt-text="Screenshot shows page to peer both availability zones to on-premises Express Route Global Reach."lightbox="media/stretch-clusters/express-route-global-reach-peer-availability-zones.png"::: No. While in (preview), customers won't see a charge for the witness node and th ### Which SKUs are available? -Stretched clusters will solely be supported on the AV36 SKU. +Stretched clusters will solely be supported on the AV36 SKU. |
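Because each Global Reach connection consumes its own authorization, the second auth key described in the stretched-clusters entry above can also be generated with Azure CLI. A sketch, with placeholder circuit and resource group names:

```azurecli-interactive
# Create a second authorization on the on-premises ExpressRoute circuit,
# one per availability zone that will be peered with Global Reach.
az network express-route auth create \
  --circuit-name <onprem-circuit> \
  --resource-group <circuit-rg> \
  --name stretched-cluster-az2

# Read back the generated key to use when peering the second AZ.
az network express-route auth show \
  --circuit-name <onprem-circuit> \
  --resource-group <circuit-rg> \
  --name stretched-cluster-az2 \
  --query authorizationKey --output tsv
```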
backup | Backup Mabs Protection Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-mabs-protection-matrix.md | For more information, see the [ExpressRoute routing requirements](../expressrout Support for the following operating systems and applications in MABS is deprecated. We recommend that you upgrade them to continue protecting your data. -If the existing commitments prevent upgrading Windows Server or SQL Server, migrate them to Azure and [use Azure Backup to protect the servers](/azure/backup/). For more information, see [migration of Windows Server, apps and workloads](https://azure.microsoft.com/migration/windows-server/). +If existing commitments prevent upgrading Windows Server or SQL Server, migrate them to Azure and [use Azure Backup to protect the servers](./index.yml). For more information, see [migration of Windows Server, apps and workloads](https://azure.microsoft.com/migration/windows-server/). For on-premises or hosted environments that you can't upgrade or migrate to Azure, activate Extended Security Updates for the machines for protection and support. Note that only limited editions are eligible for Extended Security Updates. For more information, see [Frequently asked questions](https://www.microsoft.com/windows-server/extended-security-updates). MABS doesn't support protecting the following data types: ## Next steps -* [Support matrix for backup with Microsoft Azure Backup Server or System Center DPM](backup-support-matrix-mabs-dpm.md) +* [Support matrix for backup with Microsoft Azure Backup Server or System Center DPM](backup-support-matrix-mabs-dpm.md) |
backup | Microsoft Azure Backup Server Protection V3 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/microsoft-azure-backup-server-protection-v3.md | The following matrix lists what can be protected with Azure Backup Server V3 RTM Support for the following operating systems and applications in MABS is deprecated. We recommend that you upgrade them to continue protecting your data. -If the existing commitments prevent upgrading Windows Server or SQL Server, migrate them to Azure and [use Azure Backup to protect the servers](/azure/backup/). For more information, see [migration of Windows Server, apps and workloads](https://azure.microsoft.com/migration/windows-server/). +If existing commitments prevent upgrading Windows Server or SQL Server, migrate them to Azure and [use Azure Backup to protect the servers](./index.yml). For more information, see [migration of Windows Server, apps and workloads](https://azure.microsoft.com/migration/windows-server/). For on-premises or hosted environments that you can't upgrade or migrate to Azure, activate Extended Security Updates for the machines for protection and support. Note that only limited editions are eligible for Extended Security Updates. For more information, see [Frequently asked questions](https://www.microsoft.com/windows-server/extended-security-updates). Azure Backup Server can protect data in the following clustered applications: * SQL Server - Azure Backup Server doesn't support backing up SQL Server databases hosted on cluster-shared volumes (CSVs). -Azure Backup Server can protect cluster workloads that are located in the same domain as the MABS server, and in a child or trusted domain. If you want to protect data sources in untrusted domains or workgroups, use NTLM or certificate authentication for a single server, or certificate authentication only for a cluster. +Azure Backup Server can protect cluster workloads that are located in the same domain as the MABS server, and in a child or trusted domain. If you want to protect data sources in untrusted domains or workgroups, use NTLM or certificate authentication for a single server, or certificate authentication only for a cluster. |
backup | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Backup description: Lists Azure Policy Regulatory Compliance controls available for Azure Backup. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/10/2022 Last updated : 10/12/2022 |
bastion | Bastion Connect Vm Rdp Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-connect-vm-rdp-windows.md | description: Learn how to use Azure Bastion to connect to Windows VM using RDP. Previously updated : 08/08/2022 Last updated : 10/18/2022 Before you begin, verify that you've met the following criteria: * Reader role on the virtual machine. * Reader role on the NIC with private IP of the virtual machine. * Reader role on the Azure Bastion resource.+* Reader role on the virtual network of the target virtual machine (if the Bastion deployment is in a peered virtual network). ### Ports To connect to the Windows VM, you must have the following ports open on your Win > [!NOTE] > If you want to specify a custom port value, Azure Bastion must be configured using the Standard SKU. The Basic SKU does not allow you to specify custom ports.-> +++See the [Azure Bastion FAQ](bastion-faq.md) for additional requirements. ## <a name="rdp"></a>Connect To connect to the Windows VM, you must have the following ports open on your Win ## Next steps -Read the [Bastion FAQ](bastion-faq.md). +Read the [Bastion FAQ](bastion-faq.md) for additional connection information. |
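The Reader-role prerequisites listed in the Bastion entry above can be granted with a short script. A sketch, assuming placeholder resource IDs and that the connecting user is identified by UPN:

```azurecli-interactive
# Grant Reader on the target VM, its NIC, the Bastion resource, and
# (for peered-network deployments) the target virtual network.
for scope in <vm-id> <nic-id> <bastion-id> <vnet-id>; do
  az role assignment create \
    --assignee user@contoso.com \
    --role Reader \
    --scope $scope
done
```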
batch | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Batch description: Lists Azure Policy Regulatory Compliance controls available for Azure Batch. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/10/2022 Last updated : 10/12/2022 |
center-sap-solutions | Deploy S4hana | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/deploy-s4hana.md | In this how-to guide, you'll learn how to deploy S/4HANA infrastructure in *Azur - An Azure subscription. - Register the **Microsoft.Workloads** Resource Provider on the subscription in which you are deploying the SAP system. - An Azure account with **Contributor** role access to the subscriptions and resource groups in which you'll create the Virtual Instance for SAP solutions (VIS) resource.-- A **User-assigned managed identity** which has Contributor role access to the resource groups of the SAP system. +- A **User-assigned managed identity** which has Contributor role access on the subscription, or at least on all resource groups (Compute, Network, Storage). If you wish to install SAP software through Azure Center for SAP solutions, also grant the identity the Storage Blob Data Reader and the Reader and Data Access roles on the storage account where you store the SAP media. - A [network set up for your infrastructure deployment](prepare-network.md). ## Deployment types |
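A sketch of the identity and role assignments described in the updated prerequisite above, with placeholder names and scopes (the role names are the ones the entry itself lists):

```azurecli-interactive
# Create the user-assigned managed identity and capture its object ID.
az identity create --name sap-deploy-identity --resource-group <rg>
principalId=$(az identity show --name sap-deploy-identity \
  --resource-group <rg> --query principalId --output tsv)

# Contributor on the subscription (or repeat per compute/network/storage RG).
az role assignment create --assignee-object-id $principalId \
  --assignee-principal-type ServicePrincipal \
  --role Contributor --scope /subscriptions/<subscription-id>

# Storage roles on the account that holds the SAP media, needed when you
# install SAP software through Azure Center for SAP solutions.
for role in "Storage Blob Data Reader" "Reader and Data Access"; do
  az role assignment create --assignee-object-id $principalId \
    --assignee-principal-type ServicePrincipal \
    --role "$role" --scope <sap-media-storage-account-id>
done
```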
cloud-services | Cloud Services Guestos Msrc Releases | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md | The following tables show the Microsoft Security Response Center (MSRC) updates | Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | | | | | | |-| Rel 22-10 | [5016623] | Latest Cumulative Update(LCU) | 6.49 | Aug 9, 2022 | -| Rel 22-10 | [5016618] | IE Cumulative Updates | 2.129, 3.116, 4.109 | Aug 9, 2022 | -| Rel 22-10 | [5016627] | Latest Cumulative Update(LCU) | 7.17 | Aug 9, 2022 | -| Rel 22-10 | [5016622] | Latest Cumulative Update(LCU) | 5.73 | Aug 9, 2022 | -| Rel 22-10 | [5013637] | .NET Framework 3.5 Security and Quality Rollup LKG | 2.129 | Oct 11, 2022 | -| Rel 22-10 | [5013644] | .NET Framework 4.6.2 Security and Quality Rollup LKG | 2.129 | May 10, 2022 | -| Rel 22-10 | [5013638] | .NET Framework 3.5 Security and Quality Rollup LKG | 4.109 | Jun 14, 2022 | -| Rel 22-10 | [5013643] | .NET Framework 4.6.2 Security and Quality Rollup LKG | 4.109 | May 10, 2022 | -| Rel 22-10 | [5013635] | .NET Framework 3.5 Security and Quality Rollup LKG | 3.116 | Oct 11, 2022 | -| Rel 22-10 | [5013642] | .NET Framework 4.6.2 Security and Quality Rollup LKG | 3.116 | May 10, 2022 | -| Rel 22-10 | [5013641] | . NET Framework 3.5 and 4.7.2 Cumulative Update LKG | 6.49 | May 10, 2022 | -| Rel 22-10 | [5017028] | .NET Framework 4.8 Security and Quality Rollup LKG | 7.17 | Sep 13, 2022 | -| Rel 22-10 | [5018454] | Monthly Rollup | 2.129 | Oct 11, 2022 | -| Rel 22-10 | [5018457] | Monthly Rollup | 3.116 | Oct 11, 2022 | -| Rel 22-10 | [5018474] | Monthly Rollup | 4.109 | Oct 11, 2022 | -| Rel 22-10 | [5016263] | Servicing Stack update | 3.116 | Jul 12, 2022 | -| Rel 22-10 | [5018922] | Servicing Stack update | 4.109 | Oct 11, 2022 | -| Rel 22-10 | [4578013] | OOB Standalone Security Update | 4.109 | Aug 19, 2020 | -| Rel 22-10 | [5017396] | Servicing Stack update | 5.73 | Sep 13, 2022 | -| Rel 22-10 | [5017397] | Servicing Stack update | 2.129 | Sep 13, 2022 | -| Rel 22-10 | [4494175] | Microcode | 5.73 | Sep 1, 2020 | -| Rel 22-10 | [4494174] | Microcode | 6.49 | Sep 1, 2020 | --[5016623]: https://support.microsoft.com/kb/5016623 -[5016618]: https://support.microsoft.com/kb/5016618 -[5016627]: https://support.microsoft.com/kb/5016627 -[5016622]: https://support.microsoft.com/kb/5016622 +| Rel 22-10 | [5020438] | Latest Cumulative Update(LCU) | 6.50 | Oct 17, 2022 | +| Rel 22-10 | [5018413] | IE Cumulative Updates | 2.130, 3.117, 4.110 | Oct 11, 2022 | +| Rel 22-10 | [5020436] | Latest Cumulative Update(LCU) | 7.18 | Oct 17, 2022 | +| Rel 22-10 | [5020439] | Latest Cumulative Update(LCU) | 5.74 | Aug 9, 2022 | +| Rel 22-10 | [5013637] | .NET Framework 3.5 Security and Quality Rollup LKG | 2.130 | Oct 11, 2022 | +| Rel 22-10 | [5013644] | .NET Framework 4.6.2 Security and Quality Rollup LKG | 2.130 | May 10, 2022 | +| Rel 22-10 | [5013638] | .NET Framework 3.5 Security and Quality Rollup LKG | 4.110 | Jun 14, 2022 | +| Rel 22-10 | [5013643] | .NET Framework 4.6.2 Security and Quality Rollup LKG | 4.110 | May 10, 2022 | +| Rel 22-10 | [5013635] | .NET Framework 3.5 Security and Quality Rollup LKG | 3.117 | Oct 11, 2022 | +| Rel 22-10 | [5013642] | .NET Framework 4.6.2 Security and Quality Rollup LKG | 3.117 | May 10, 2022 | +| Rel 22-10 | [5013641] | . 
NET Framework 3.5 and 4.7.2 Cumulative Update LKG | 6.50 | May 10, 2022 | +| Rel 22-10 | [5013626] | .NET Framework 4.8 Security and Quality Rollup LKG | 6.50 | May 10, 2022 | +| Rel 22-10 | [5017028] | .NET Framework 4.8 Security and Quality Rollup LKG | 7.18 | Sep 13, 2022 | +| Rel 22-10 | [5018454] | Monthly Rollup | 2.130 | Oct 11, 2022 | +| Rel 22-10 | [5020448] | OOB Monthly Rollup | 2.130 | Oct 17, 2022 | +| Rel 22-10 | [5018457] | Monthly Rollup | 3.117 | Oct 11, 2022 | +| Rel 22-10 | [5020449] | OOB Monthly Rollup | 3.117 | Oct 17, 2022 | +| Rel 22-10 | [5018474] | Monthly Rollup | 4.110 | Oct 11, 2022 | +| Rel 22-10 | [5020447] | OOB Monthly Rollup | 4.110 | Oct 17, 2022 | +| Rel 22-10 | [5016263] | Servicing Stack update | 3.117 | Jul 12, 2022 | +| Rel 22-10 | [5018922] | Servicing Stack update | 4.110 | Oct 11, 2022 | +| Rel 22-10 | [4578013] | OOB Standalone Security update | 4.110 | Aug 19, 2020 | +| Rel 22-10 | [5017396] | Servicing Stack update | 5.74 | Sep 13, 2022 | +| Rel 22-10 | [5017397] | Servicing Stack update | 2.130 | Sep 13, 2022 | +| Rel 22-10 | [4494175] | Microcode | 5.74 | Sep 1, 2020 | +| Rel 22-10 | [4494174] | Microcode | 6.50 | Sep 1, 2020 | ++[5020438]: https://support.microsoft.com/kb/5020438 +[5018413]: https://support.microsoft.com/kb/5018413 +[5020436]: https://support.microsoft.com/kb/5020436 +[5020439]: https://support.microsoft.com/kb/5020439 [5013637]: https://support.microsoft.com/kb/5013637 [5013644]: https://support.microsoft.com/kb/5013644 [5013638]: https://support.microsoft.com/kb/5013638 The following tables show the Microsoft Security Response Center (MSRC) updates [5013635]: https://support.microsoft.com/kb/5013635 [5013642]: https://support.microsoft.com/kb/5013642 [5013641]: https://support.microsoft.com/kb/5013641+[5013626]: https://support.microsoft.com/kb/5013626 [5017028]: https://support.microsoft.com/kb/5017028 [5018454]: https://support.microsoft.com/kb/5018454+[5020448]: https://support.microsoft.com/kb/5020448 [5018457]: https://support.microsoft.com/kb/5018457+[5020449]: https://support.microsoft.com/kb/5020449 [5018474]: https://support.microsoft.com/kb/5018474+[5020447]: https://support.microsoft.com/kb/5020447 [5016263]: https://support.microsoft.com/kb/5016263 [5018922]: https://support.microsoft.com/kb/5018922 [4578013]: https://support.microsoft.com/kb/4578013 |
cloud-services | Cloud Services Nodejs Develop Deploy App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-nodejs-develop-deploy-app.md | For more information, see the [Node.js Developer Center]. [Azure SDK for .NET 3.0]: https://www.microsoft.com/download/details.aspx?id=54917 [Connect PowerShell]: /powershell/azure/ [nodejs.org]: https://nodejs.org/-[Overview of Creating a Hosted Service for Azure]: /azure/cloud-services/ +[Overview of Creating a Hosted Service for Azure]: ./index.yml [Node.js Developer Center]: https://azure.microsoft.com/develop/nodejs/ <!-- IMG List --> For more information, see the [Node.js Developer Center]. [The output of the Publish-AzureService command]: ./media/cloud-services-nodejs-develop-deploy-app/node19.png [A browser window displaying the hello world page; the URL indicates the page is hosted on Azure.]: ./media/cloud-services-nodejs-develop-deploy-app/node21.png [The status of the Stop-AzureService command]: ./media/cloud-services-nodejs-develop-deploy-app/node48.png-[The status of the Remove-AzureService command]: ./media/cloud-services-nodejs-develop-deploy-app/node49.png +[The status of the Remove-AzureService command]: ./media/cloud-services-nodejs-develop-deploy-app/node49.png |
cognitive-services | Overview Univariate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/overview-univariate.md | The Anomaly Detector API enables you to monitor and detect abnormalities in your  -Using the Anomaly Detector doesn't require any prior experience in machine learning, and the REST API enables you to easily integrate the service into your applications and processes. -+Using Anomaly Detector doesn't require any prior experience in machine learning, and the REST API enables you to easily integrate the service into your applications and processes. ## Features After signing up: 1. Send a request to the Anomaly Detector API with your data. 1. Process the API response by parsing the returned JSON message. - ## Algorithms * See the following technical blogs for information about the algorithms used: |
cognitive-services | Multivariate Anomaly Detection Synapse | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/tutorials/multivariate-anomaly-detection-synapse.md | In this section, you'll create the following resources in the Azure portal: * Create a key vault and configure secrets and access 1. Create a [key vault](https://portal.azure.com/#create/Microsoft.KeyVault) in the Azure portal.- 2. Go to Key Vault > Access policies, and grant the [Azure Synapse workspace](/azure/data-factory/data-factory-service-identity?context=/azure/synapse-analytics/context/context&tabs=synapse-analytics) permission to read secrets from Azure Key Vault. + 2. Go to Key Vault > Access policies, and grant the [Azure Synapse workspace](../../../data-factory/data-factory-service-identity.md?context=%2fazure%2fsynapse-analytics%2fcontext%2fcontext&tabs=synapse-analytics) permission to read secrets from Azure Key Vault. If you have the need to run training code and inference code in separate noteboo ### About Synapse -* Quick start: [Configure prerequisites for using Cognitive Services in Azure Synapse Analytics](/azure/synapse-analytics/machine-learning/tutorial-configure-cognitive-services-synapse#create-a-key-vault-and-configure-secrets-and-access). +* Quick start: [Configure prerequisites for using Cognitive Services in Azure Synapse Analytics](../../../synapse-analytics/machine-learning/tutorial-configure-cognitive-services-synapse.md#create-a-key-vault-and-configure-secrets-and-access). * Visit the new [SynapseML website](https://microsoft.github.io/SynapseML/) for the latest docs, demos, and examples.-* Learn more about [Synapse Analytics](/azure/synapse-analytics/). -* Read about the [SynapseML v0.9.5 release](https://github.com/microsoft/SynapseML/releases/tag/v0.9.5) on GitHub. +* Learn more about [Synapse Analytics](../../../synapse-analytics/index.yml). +* Read about the [SynapseML v0.9.5 release](https://github.com/microsoft/SynapseML/releases/tag/v0.9.5) on GitHub. |
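The key vault access-policy step in the Synapse tutorial entry above can also be scripted. A sketch, assuming the workspace uses its system-assigned managed identity:

```azurecli-interactive
# Look up the Synapse workspace's managed identity object ID.
msiObjectId=$(az synapse workspace show \
  --name <workspace-name> --resource-group <rg> \
  --query identity.principalId --output tsv)

# Allow that identity to read secrets from the key vault.
az keyvault set-policy \
  --name <keyvault-name> \
  --object-id $msiObjectId \
  --secret-permissions get list
```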
cognitive-services | Vehicle Analysis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/vehicle-analysis.md | In addition to exposing the vehicle location, other estimated attributes for **c ### Operation parameters for vehicle analysis -The following table shows the parameters required by each of the vehicle analysis operations. Many are shared with Spatial Analysis; the only one not shared is the `PARKING_REGIONS` setting. The full list of Spatial Analysis operation parameters can be found in the [Spatial Analysis container](/azure/cognitive-services/computer-vision/spatial-analysis-container?tabs=azure-stack-edge#iot-deployment-manifest) guide. +The following table shows the parameters required by each of the vehicle analysis operations. Many are shared with Spatial Analysis; the only one not shared is the `PARKING_REGIONS` setting. The full list of Spatial Analysis operation parameters can be found in the [Spatial Analysis container](./spatial-analysis-container.md?tabs=azure-stack-edge#iot-deployment-manifest) guide. | Operation parameters| Description| ||| Azure Cognitive Services containers aren't licensed to run without being connect ## Next steps -* Set up a [Spatial Analysis container](spatial-analysis-container.md) +* Set up a [Spatial Analysis container](spatial-analysis-container.md) |
cognitive-services | Role Based Access Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/role-based-access-control.md | A user that should only be validating and reviewing LUIS applications, typically * [LUIS Programmatic v2.0 APIs](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c2f) All the APIs under: - * [LUIS Endpoint APIs v2.0](https://chinaeast2.dev.cognitive.azure.cn/docs/services/5819c76f40a6350ce09de1ac/operations/5819c77140a63516d81aee78) + * [LUIS Endpoint APIs v2.0](/azure/cognitive-services/LUIS/luis-migration-api-v1-to-v2) * [LUIS Endpoint APIs v3.0](https://westcentralus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0/operations/5cb0a9459a1fe8fa44c28dd8) * [LUIS Endpoint APIs v3.0-preview](https://westcentralus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0-preview/operations/5cb0a9459a1fe8fa44c28dd8) |
cognitive-services | Call Center Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/call-center-overview.md | Some example scenarios for the implementation of Azure Cognitive Services in call centers: > [!TIP] > Try the [Language Studio](https://language.cognitive.azure.com) or [Speech Studio](https://aka.ms/speechstudio/callcenter) for a demonstration on how to use the Language and Speech services to analyze call center conversations. > -> To deploy a call center transcription solution to Azure with a no-code approach, try the [Ingestion Client](/azure/cognitive-services/speech-service/ingestion-client). +> To deploy a call center transcription solution to Azure with a no-code approach, try the [Ingestion Client](./ingestion-client.md). ## Cognitive Services features for call centers Once you've transcribed your audio with the Speech service, you can use the Lang The Speech service offers the following features that can be used for call center use cases: -- [Real-time speech-to-text](/azure/cognitive-services/speech-service/how-to-recognize-speech): Recognize and transcribe audio in real-time from multiple inputs. For example, with virtual agents or agent-assist, you can continuously recognize audio input and control how to process results based on multiple events.-- [Batch speech-to-text](/azure/cognitive-services/speech-service/batch-transcription): Transcribe large amounts of audio files asynchronously including speaker diarization and is typically used in post-call analytics scenarios. Diarization is the process of recognizing and separating speakers in mono channel audio data.-- [Text-to-speech](/azure/cognitive-services/speech-service/text-to-speech): Text-to-speech enables your applications, tools, or devices to convert text into humanlike synthesized speech.-- [Speaker identification](/azure/cognitive-services/speech-service/speaker-recognition-overview): Helps you determine an unknown speaker's identity within a group of enrolled speakers and is typically used for call center customer verification scenarios or fraud detection.-- [Language Identification](/azure/cognitive-services/speech-service/language-identification): Identify languages spoken in audio and can be used in real-time and post-call analysis for insights or to control the environment (such as output language of a virtual agent).+- [Real-time speech-to-text](./how-to-recognize-speech.md): Recognize and transcribe audio in real-time from multiple inputs. For example, with virtual agents or agent-assist, you can continuously recognize audio input and control how to process results based on multiple events. +- [Batch speech-to-text](./batch-transcription.md): Transcribe large amounts of audio files asynchronously including speaker diarization and is typically used in post-call analytics scenarios. Diarization is the process of recognizing and separating speakers in mono channel audio data. +- [Text-to-speech](./text-to-speech.md): Text-to-speech enables your applications, tools, or devices to convert text into humanlike synthesized speech. +- [Speaker identification](./speaker-recognition-overview.md): Helps you determine an unknown speaker's identity within a group of enrolled speakers and is typically used for call center customer verification scenarios or fraud detection. 
+- [Language Identification](./language-identification.md): Identify languages spoken in audio and can be used in real-time and post-call analysis for insights or to control the environment (such as output language of a virtual agent). The Speech service works well with prebuilt models. However, you might want to further customize and tune the experience for your product or environment. Typical examples for Speech customization include: | Speech customization | Description | | -- | -- |-| [Custom Speech](/azure/cognitive-services/speech-service/custom-speech-overview) | A speech-to-text feature used evaluate and improve the speech recognition accuracy of use-case specific entities (such as alpha-numeric customer, case, and contract IDs, license plates, and names). You can also train a custom model with your own product names and industry terminology. | -| [Custom Neural Voice](/azure/cognitive-services/speech-service/custom-neural-voice) | A text-to-speech feature that lets you create a one-of-a-kind, customized, synthetic voice for your applications. | +| [Custom Speech](./custom-speech-overview.md) | A speech-to-text feature used to evaluate and improve the speech recognition accuracy of use-case specific entities (such as alpha-numeric customer, case, and contract IDs, license plates, and names). You can also train a custom model with your own product names and industry terminology. | +| [Custom Neural Voice](./custom-neural-voice.md) | A text-to-speech feature that lets you create a one-of-a-kind, customized, synthetic voice for your applications. | ### Language service The Language service offers the following features that can be used for call center use cases: -- [Personally Identifiable Information (PII) extraction and redaction](/azure/cognitive-services/language-service/personally-identifiable-information/how-to-call-for-conversations): Identify, categorize, and redact sensitive information in conversation transcription.-- [Conversation summarization](/azure/cognitive-services/language-service/summarization/overview?tabs=conversation-summarization): Summarize in abstract text what each conversation participant said about the issues and resolutions. For example, a call center can group product issues that have a high volume.-- [Sentiment analysis and opinion mining](/azure/cognitive-services/language-service/sentiment-opinion-mining/overview): Analyze transcriptions and associate positive, neutral, or negative sentiment at the utterance and conversation-level.+- [Personally Identifiable Information (PII) extraction and redaction](../language-service/personally-identifiable-information/how-to-call-for-conversations.md): Identify, categorize, and redact sensitive information in conversation transcription. +- [Conversation summarization](../language-service/summarization/overview.md?tabs=conversation-summarization): Summarize in abstract text what each conversation participant said about the issues and resolutions. For example, a call center can group product issues that have a high volume. +- [Sentiment analysis and opinion mining](../language-service/sentiment-opinion-mining/overview.md): Analyze transcriptions and associate positive, neutral, or negative sentiment at the utterance and conversation-level. While the Language service works well with prebuilt models, you might want to further customize and tune models to extract more information from your data. 
Typical examples for Language customization include: | Language customization | Description | | -- | -- |-| [Custom NER (named entity recognition)](/azure/cognitive-services/language-service/custom-named-entity-recognition/overview) | Improve the detection and extraction of entities in transcriptions. | -| [Custom text classification](/azure/cognitive-services/language-service/custom-text-classification/overview) | Classify and label transcribed utterances with either single or multiple classifications. | +| [Custom NER (named entity recognition)](../language-service/custom-named-entity-recognition/overview.md) | Improve the detection and extraction of entities in transcriptions. | +| [Custom text classification](../language-service/custom-text-classification/overview.md) | Classify and label transcribed utterances with either single or multiple classifications. | -You can find an overview of all Language service features and customization options [here](/azure/cognitive-services/language-service/overview#available-features). +You can find an overview of all Language service features and customization options [here](../language-service/overview.md#available-features). ## Next steps -* [Post-call transcription and analytics quickstart](/azure/cognitive-services/speech-service/call-center-quickstart) +* [Post-call transcription and analytics quickstart](./call-center-quickstart.md) * [Try out the Language Studio](https://language.cognitive.azure.com)-* [Try out the Speech Studio](https://aka.ms/speechstudio/callcenter) +* [Try out the Speech Studio](https://aka.ms/speechstudio/callcenter) |
cognitive-services | Call Center Telephony Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/call-center-telephony-integration.md | Usually the telephony client handles the incoming audio stream from the SIP/RTP For easier integration the Speech Service also supports "ALAW in WAV container" and "MULAW in WAV container" for audio streaming. -To build this integration we recommend using the [Speech SDK](/azure/cognitive-services/speech-service/speech-sdk). +To build this integration we recommend using the [Speech SDK](./speech-sdk.md). > [!TIP]-> For guidance on reducing Text to Speech latency check out the **[How to lower speech synthesis latency](/azure/cognitive-services/speech-service/how-to-lower-speech-synthesis-latency?pivots=programming-language-csharp)** guide. +> For guidance on reducing Text to Speech latency check out the **[How to lower speech synthesis latency](./how-to-lower-speech-synthesis-latency.md?pivots=programming-language-csharp)** guide. > > In addition, consider implementing a Text to Speech cache to store all synthesized audio and playback from the cache in case a string has previously been synthesized. ## Next steps -* [Learn about Speech SDK](/azure/cognitive-services/speech-service/speech-sdk) -* [How to lower speech synthesis latency](/azure/cognitive-services/speech-service/how-to-lower-speech-synthesis-latency) +* [Learn about Speech SDK](./speech-sdk.md) +* [How to lower speech synthesis latency](./how-to-lower-speech-synthesis-latency.md) |
cognitive-services | Ingestion Client | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/ingestion-client.md | -The Ingestion Client is a tool released by Microsoft on [GitHub](/azure/cognitive-services/speech-service/ingestion-client) that helps you quickly deploy a call center transcription solution to Azure with a no-code approach. +The Ingestion Client is a tool released by Microsoft on GitHub that helps you quickly deploy a call center transcription solution to Azure with a no-code approach. > [!TIP] > You can use the tool and resulting solution in production to process a high volume of audio. -Ingestion Client uses the [Azure Cognitive Service for Language](/azure/cognitive-services/language-service/), [Azure Cognitive Service for Speech](/azure/cognitive-services/speech-service/), [Azure storage](https://azure.microsoft.com/product-categories/storage/), and [Azure Functions](https://azure.microsoft.com/services/functions/). +Ingestion Client uses the [Azure Cognitive Service for Language](../language-service/index.yml), [Azure Cognitive Service for Speech](./index.yml), [Azure storage](https://azure.microsoft.com/product-categories/storage/), and [Azure Functions](https://azure.microsoft.com/services/functions/). ## Get started with the Ingestion Client Internally, the tool uses Speech and Language services, and follows best practic The following Speech service features are used by the Ingestion Client: -- [Batch speech-to-text](/azure/cognitive-services/speech-service/batch-transcription): Transcribe large amounts of audio files asynchronously including speaker diarization and is typically used in post-call analytics scenarios. Diarization is the process of recognizing and separating speakers in mono channel audio data.-- [Speaker identification](/azure/cognitive-services/speech-service/speaker-recognition-overview): Helps you determine an unknown speaker's identity within a group of enrolled speakers and is typically used for call center customer verification scenarios or fraud detection.+- [Batch speech-to-text](./batch-transcription.md): Transcribe large amounts of audio files asynchronously including speaker diarization and is typically used in post-call analytics scenarios. Diarization is the process of recognizing and separating speakers in mono channel audio data. +- [Speaker identification](./speaker-recognition-overview.md): Helps you determine an unknown speaker's identity within a group of enrolled speakers and is typically used for call center customer verification scenarios or fraud detection. Language service features used by the Ingestion Client: -- [Personally Identifiable Information (PII) extraction and redaction](/azure/cognitive-services/language-service/personally-identifiable-information/how-to-call-for-conversations): Identify, categorize, and redact sensitive information in conversation transcription.-- [Sentiment analysis and opinion mining](/azure/cognitive-services/language-service/sentiment-opinion-mining/overview): Analyze transcriptions and associate positive, neutral, or negative sentiment at the utterance and conversation-level.+- [Personally Identifiable Information (PII) extraction and redaction](../language-service/personally-identifiable-information/how-to-call-for-conversations.md): Identify, categorize, and redact sensitive information in conversation transcription. 
+- [Sentiment analysis and opinion mining](../language-service/sentiment-opinion-mining/overview.md): Analyze transcriptions and associate positive, neutral, or negative sentiment at the utterance and conversation-level. Besides Cognitive Services, these Azure products are used to complete the solution: The tool is built to show customers results quickly. You can customize the tool ## Next steps -* [Learn more about Cognitive Services features for call center](/azure/cognitive-services/speech-service/call-center-overview) -* [Explore the Language service features](/azure/cognitive-services/language-service/overview#available-features) -* [Explore the Speech service features](/azure/cognitive-services/speech-service/overview) +* [Learn more about Cognitive Services features for call center](./call-center-overview.md) +* [Explore the Language service features](../language-service/overview.md#available-features) +* [Explore the Speech service features](./overview.md) |
cognitive-services | Get Started With Document Translation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/get-started-with-document-translation.md | gradle run > [!IMPORTANT] >-> For the code samples below, you'll hard-code your Shared Access Signature (SAS) URL where indicated. Remember to remove the SAS URL from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Managed Identity](managed-identity.md). See the Azure Storage [security](/azure/storage/common/authorize-data-access) article for more information. +> For the code samples below, you'll hard-code your Shared Access Signature (SAS) URL where indicated. Remember to remove the SAS URL from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Managed Identity](managed-identity.md). See the Azure Storage [security](../../../storage/common/authorize-data-access.md) article for more information. > > You may need to update the following fields, depending upon the operation: >>> Document Translation can't be used to translate secured documents such as those > [!div class="nextstepaction"] > [Create a customized language system using Custom Translator](../custom-translator/overview.md) >-> +> |
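For the hard-coded SAS URLs the Document Translation entry above warns about, a sketch of generating a short-lived container SAS for testing; the account and container names are placeholders, and the permission set differs between source (read/list) and target (write/list) containers:

```azurecli-interactive
# Generate a read+list SAS on the source container with a fixed expiry.
sas=$(az storage container generate-sas \
  --account-name <storage-account> \
  --name <source-container> \
  --permissions rl \
  --expiry 2022-12-31T00:00:00Z \
  --auth-mode key \
  --output tsv)

echo "https://<storage-account>.blob.core.windows.net/<source-container>?$sas"
```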
cognitive-services | Getting Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/big-data/getting-started.md | To get started on Azure Kubernetes Service, follow these steps: ## Try a sample -After you set up your Spark cluster and environment, you can run a short sample. This sample assumes Azure Databricks and the `mmlspark.cognitive` package. For an example using `synapseml.cognitive`, see [Add search to AI-enriched data from Apache Spark using SynapseML](/azure/search/search-synapseml-cognitive-services). +After you set up your Spark cluster and environment, you can run a short sample. This sample assumes Azure Databricks and the `mmlspark.cognitive` package. For an example using `synapseml.cognitive`, see [Add search to AI-enriched data from Apache Spark using SynapseML](../../search/search-synapseml-cognitive-services.md). First, you can create a notebook in Azure Databricks. For other Spark cluster providers, use their notebooks or Spark Submit. First, you can create a notebook in Azure Databricks. For other Spark cluster pr - [Short Python Examples](samples-python.md) - [Short Scala Examples](samples-scala.md) - [Recipe: Predictive Maintenance](recipes/anomaly-detection.md)-- [Recipe: Intelligent Art Exploration](recipes/art-explorer.md)+- [Recipe: Intelligent Art Exploration](recipes/art-explorer.md) |
cognitive-services | Multi Region Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/custom-features/multi-region-deployment.md | The same request body to each of those different URLs serves the exact same resp ## Validations and requirements -Assigning deployment resources requires Microsoft Azure Active Directory (Azure AD) authentication. Azure AD is used to confirm you have access to the resources you are interested in assigning to your project for multi-region deployment. In the Language Studio, you can automatically [enable Azure AD authentication](https://aka.ms/rbac-language) by assigning yourself the _Cognitive Services Language Owner_ role to your original resource. To programmatically use Azure AD authentication, learn more from the [Cognitive Services documentation](/azure/cognitive-services/authentication?tabs=powershell&tryIt=true&source=docs#authenticate-with-azure-active-directory). +Assigning deployment resources requires Microsoft Azure Active Directory (Azure AD) authentication. Azure AD is used to confirm you have access to the resources you are interested in assigning to your project for multi-region deployment. In the Language Studio, you can automatically [enable Azure AD authentication](https://aka.ms/rbac-language) by assigning yourself the _Cognitive Services Language Owner_ role to your original resource. To programmatically use Azure AD authentication, learn more from the [Cognitive Services documentation](../../../authentication.md?source=docs&tabs=powershell&tryIt=true#authenticate-with-azure-active-directory). Your project name and resource are used as its main identifiers. Therefore, a Language resource can only have a specific project name in each resource. Any other projects with the same name will not be deployable to that resource. Learn how to deploy models for: * [Conversational language understanding](../../conversational-language-understanding/how-to/deploy-model.md) * [Custom text classification](../../custom-text-classification/how-to/deploy-model.md) * [Custom NER](../../custom-named-entity-recognition/how-to/deploy-model.md)-* [Orchestration workflow](../../orchestration-workflow/how-to/deploy-model.md) +* [Orchestration workflow](../../orchestration-workflow/how-to/deploy-model.md) |
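Granting the _Cognitive Services Language Owner_ role mentioned in the multi-region deployment entry above can also be done from the CLI. A sketch with placeholder names:

```azurecli-interactive
# Assign the Cognitive Services Language Owner role on the Language resource.
az role assignment create \
  --assignee <user-or-principal-id> \
  --role "Cognitive Services Language Owner" \
  --scope $(az cognitiveservices account show \
      --name <language-resource> \
      --resource-group <rg> \
      --query id --output tsv)
```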
cognitive-services | Developer Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/developer-guide.md | As you use these features in your application, use the following documentation a | Language → Latest GA version | Reference documentation |Samples | ||||-| [C#/.NET → v1.0.0](https://www.nuget.org/packages/Azure.AI.Language.Conversations/1.0.0) | [C# documentation](/dotnet/api/overview/azure/ai.language.conversations-readme-pre) | [C# samples](https://aka.ms/sdk-sample-conversation-dot-net) | +| [C#/.NET → v1.0.0](https://www.nuget.org/packages/Azure.AI.Language.Conversations/1.0.0) | [C# documentation](/dotnet/api/overview/azure/ai.language.conversations-readme) | [C# samples](https://aka.ms/sdk-sample-conversation-dot-net) | | [Python → v1.0.0](https://pypi.org/project/azure-ai-language-conversations/) | [Python documentation](/python/api/overview/azure/ai-language-conversations-readme) | [Python samples](https://aka.ms/sdk-samples-conversation-python) | ### Azure.AI.Language.QuestionAnswering As you use these features in your application, use the following documentation a | Language → Latest GA version |Reference documentation |Samples | ||||-| [C#/.NET → v1.0.0](https://www.nuget.org/packages/Azure.AI.Language.QuestionAnswering/1.0.0#readme-body-tab) | [C# documentation](/dotnet/api/overview/azure/ai.language.questionanswering-readme-pre) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.QuestionAnswering) | +| [C#/.NET → v1.0.0](https://www.nuget.org/packages/Azure.AI.Language.QuestionAnswering/1.0.0#readme-body-tab) | [C# documentation](/dotnet/api/overview/azure/ai.language.questionanswering-readme) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.QuestionAnswering) | | [Python → v1.0.0](https://pypi.org/project/azure-ai-language-questionanswering/1.0.0/) | [Python documentation](/python/api/overview/azure/ai-language-questionanswering-readme) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-language-questionanswering) | # [REST API](#tab/rest-api) As you use this API in your application, see the following reference documentati ## See also -[Azure Cognitive Service for Language overview](../overview.md) +[Azure Cognitive Service for Language overview](../overview.md) |
cognitive-services | Model Lifecycle | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/model-lifecycle.md | -Language service features utilize AI models that are versioned. We update the language service with new model versions to improve accuracy, support, and quality. As models become older, they're retired. Use this article for information on that process, and what you can expect for your applications. +Language service features utilize AI models. We update the language service with new model versions to improve accuracy, support, and quality. As models become older, they are retired. Use this article for information on that process, and what you can expect for your applications. ## Prebuilt features -### Expiration timeline --Our standard (not customized) language service features are built upon AI models that we call pre-trained models. We update the language service with new model versions every few months to improve model accuracy, support, and quality. -As new models and functionalities become available, older less accurate models are deprecated. To ensure you're using the latest model version and avoid interruptions to your applications, we highly recommend using the default model-version parameter (`latest`) in your API calls. After their deprecation date, pre-built model versions will no longer be functional, and your implementation may be broken. +Our standard (not customized) language service features are built on AI models that we call pre-trained models. -Stable (not preview) model versions are deprecated six months after the release of another stable model version. Features in preview don't maintain a minimum retirement period and may be deprecated at any time. +We regularly update the language service with new model versions to improve model accuracy, support, and quality. +By default, all API requests will use the latest Generally Available (GA) model. #### Choose the model-version used on your data -By default, API requests will use the latest Generally Available model. You can use an optional parameter to select the version of the model to be used (not recommended). --> [!TIP] -> If you're using the SDK for C#, Java, JavaScript or Python, see the reference documentation for information on the appropriate model-version parameter. +We recommend using the `latest` model version to utilize the latest and highest quality models. As our models improve, it's possible that some of your model results may change. -For synchronous endpoints, use the `model-version` query parameter. For example: +Preview models used for preview features do not maintain a minimum retirement period and may be deprecated at any time. -`POST <your-language-resource-endpoint>/language/:analyze-text?api-version=2022-05-01&model-version=2022-06-01`. --For asynchronous endpoints, use the `model-version` property in the request body under task properties. - -The model-version used in your API request will be included in the response object. +By default, API and SDK requests will use the latest Generally Available model. You can use an optional parameter to select the version of the model to be used (not recommended). > [!NOTE] > If you are using a model version that is not listed in the table, then it was subjected to the expiration policy.
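A minimal sketch of the synchronous call shown above, passing `model-version` as a query parameter with `requests`. The endpoint, key, and the request body shape (a sentiment-analysis task is assumed here) are placeholders to adjust for your resource and feature.

```python
import requests

endpoint = "https://<your-language-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-key>"                                                          # placeholder

url = f"{endpoint}/language/:analyze-text"
# Pin a specific version such as "2022-06-01" instead of "latest" if needed.
params = {"api-version": "2022-05-01", "model-version": "latest"}
body = {
    "kind": "SentimentAnalysis",
    "analysisInput": {
        "documents": [{"id": "1", "language": "en", "text": "Great documentation!"}]
    },
}

response = requests.post(
    url, params=params, headers={"Ocp-Apim-Subscription-Key": key}, json=body
)
print(response.json())
```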
Use the table below to find which model versions are supported by each feature: --| Feature | Supported versions | Model versions to be deprecated | +| Feature | Supported versions | Model versions to be deprecated on October 30, 2022 | |--|--|| | Sentiment Analysis and opinion mining | `2021-10-01`, `2022-06-01*` | `2019-10-01`, `2020-04-01` | | Language Detection | `2021-11-20*` | `2019-10-01`, `2020-07-01`, `2020-09-01`, `2021-01-05` | |
cognitive-services | Role Based Access Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/role-based-access-control.md | -Azure Cognitive Service for Language supports Azure role-based access control (Azure RBAC), an authorization system for managing individual access to Azure resources. Using Azure RBAC, you assign different team members different levels of permissions for your project's authoring resources. See the [Azure RBAC documentation](/azure/role-based-access-control/) for more information. +Azure Cognitive Service for Language supports Azure role-based access control (Azure RBAC), an authorization system for managing individual access to Azure resources. Using Azure RBAC, you assign different team members different levels of permissions for your project's authoring resources. See the [Azure RBAC documentation](../../../role-based-access-control/index.yml) for more information. ## Enable Azure Active Directory authentication Azure RBAC can be assigned to a Language resource. To grant access to an Azure r 1. On the **Members** tab, select a user, group, service principal, or managed identity. 1. On the **Review + assign** tab, select **Review + assign** to assign the role. -Within a few minutes, the target will be assigned the selected role at the selected scope. For help with these steps, see [Assign Azure roles using the Azure portal](/azure/role-based-access-control/role-assignments-portal). +Within a few minutes, the target will be assigned the selected role at the selected scope. For help with these steps, see [Assign Azure roles using the Azure portal](../../../role-based-access-control/role-assignments-portal.md). ## Language role types These users are the gatekeepers for the Language applications in production envi * [question answering projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects) :::column-end::: |
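Once a role is assigned, Azure AD authentication can be used in place of a resource key. Below is a minimal sketch (an assumption, not an official sample) using `azure-identity` with the Text Analytics client; it presumes your Language resource has a custom subdomain, which Azure AD authentication requires, and that the signed-in identity holds a suitable Language role.

```python
from azure.identity import DefaultAzureCredential
from azure.ai.textanalytics import TextAnalyticsClient

# Placeholder: your resource's custom-subdomain endpoint.
endpoint = "https://<your-custom-subdomain>.cognitiveservices.azure.com"

# No key needed: calls authenticate with the caller's Azure AD identity
# and whatever RBAC role it was assigned above.
client = TextAnalyticsClient(endpoint, DefaultAzureCredential())

result = client.detect_language(["Ce document est rédigé en français."])
print(result[0].primary_language.name)
```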
cognitive-services | Data Formats | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/concepts/data-formats.md | If you're [importing a project](../how-to/create-project.md#import-project) into | `length` | ` ` | The character length of the entity. |`5`| | `listKey`| ` ` | A normalized value for the list of synonyms to map back to in prediction. | `Microsoft` | | `values`| `{VALUES-FOR-LIST}` | A list of comma-separated strings that will be matched exactly for extraction and map to the list key. | `"msft", "microsoft", "MS"` |-| `regexKey`| `{REGEX-PATTERN}` | A regular expression. | `ProductPattern1` | +| `regexKey`| `{REGEX-PATTERN}` | A normalized value for the regular expression to map back to in prediction. | `ProductPattern1` | | `regexPattern`| `{REGEX-PATTERN}` | A regular expression. | `^pre` | | `prebuilts`| `{PREBUILT-COMPONENTS}` | The prebuilt components that can extract common types. You can find the list of prebuilts you can add [here](../prebuilt-component-reference.md). | `Quantity.Number` | | `requiredComponents` | `{REQUIRED-COMPONENTS}` | A setting that specifies a requirement that a specific component be present to return the entity. You can learn more [here](./entity-components.md#required-components). The possible values are `learned`, `regex`, `list`, or `prebuilts` |`"learned", "prebuilt"`| |
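To make the fields in the table above concrete, here's an illustrative entity fragment expressed as a Python dict. The category, keys, and values are hypothetical, and the shape is a simplified sketch of the import format rather than the full schema.

```python
# A hypothetical "Product" entity combining list, regex, prebuilt, and
# required-component settings from the table above.
entity = {
    "category": "Product",
    "list": {
        "sublists": [
            {
                # Normalized value returned at prediction time.
                "listKey": "Microsoft",
                "synonyms": [
                    {"language": "en-us", "values": ["msft", "microsoft", "MS"]}
                ],
            }
        ]
    },
    "regex": {
        "expressions": [
            # regexKey is the normalized value; regexPattern does the matching.
            {"regexKey": "ProductPattern1", "language": "en-us", "regexPattern": "^pre"}
        ]
    },
    "prebuilts": [{"category": "Quantity.Number"}],
    "requiredComponents": ["learned", "prebuilt"],
}
```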
cognitive-services | Migrate From Luis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/migrate-from-luis.md | The following table presents a side-by-side comparison between the features of L |Single training mode| Standard and advanced [training modes](#how-are-the-training-times-different-in-clu-how-is-standard-training-different-from-advanced-training) | Training will be required after application migration. | |Two publishing slots and version publishing |Ten deployment slots with custom naming | Deployment will be required after the application's migration and training. | |LUIS authoring APIs and SDK support in .NET, Python, Java, and Node.js |[CLU Authoring REST APIs](/rest/api/language/conversational-analysis-authoring). | See the [quickstart article](../quickstart.md?pivots=rest-api) for more information on the CLU authoring APIs. [Refactoring](#do-i-have-to-refactor-my-code-if-i-migrate-my-applications-from-luis-to-clu) will be necessary to use the CLU authoring APIs. |-|LUIS Runtime APIs and SDK support in .NET, Python, Java, and Node.js |[CLU Runtime APIs](/rest/api/language/conversation-analysis-runtime). CLU Runtime SDK support for [.NET](/dotnet/api/overview/azure/ai.language.conversations-readme-pre?view=azure-dotnet-preview&preserve-view=true) and [Python](/python/api/overview/azure/ai-language-conversations-readme?view=azure-python-preview&preserve-view=true). | See [how to call the API](../how-to/call-api.md#use-the-client-libraries-azure-sdk) for more information. [Refactoring](#do-i-have-to-refactor-my-code-if-i-migrate-my-applications-from-luis-to-clu) will be necessary to use the CLU runtime API response. | +|LUIS Runtime APIs and SDK support in .NET, Python, Java, and Node.js |[CLU Runtime APIs](/rest/api/language/conversation-analysis-runtime). CLU Runtime SDK support for [.NET](/dotnet/api/overview/azure/ai.language.conversations-readme) and [Python](/python/api/overview/azure/ai-language-conversations-readme?view=azure-python-preview&preserve-view=true). | See [how to call the API](../how-to/call-api.md#use-the-client-libraries-azure-sdk) for more information. [Refactoring](#do-i-have-to-refactor-my-code-if-i-migrate-my-applications-from-luis-to-clu) will be necessary to use the CLU runtime API response. | ## Migrate your LUIS applications If you have any questions that were unanswered in this article, consider leaving ## Next steps * [Quickstart: create a CLU project](../quickstart.md) * [CLU language support](../language-support.md)-* [CLU FAQ](../faq.md) +* [CLU FAQ](../faq.md) |
cognitive-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/overview.md | As you use CLU, see the following reference documentation and samples for Azure |||| |REST APIs (Authoring) | [REST API documentation](https://aka.ms/clu-authoring-apis) | | |REST APIs (Runtime) | [REST API documentation](https://aka.ms/clu-apis) | |-|C# (Runtime) | [C# documentation](/dotnet/api/overview/azure/ai.language.conversations-readme-pre?view=azure-dotnet-preview&preserve-view=true) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples) | +|C# (Runtime) | [C# documentation](/dotnet/api/overview/azure/ai.language.conversations-readme) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples) | |Python (Runtime)| [Python documentation](/python/api/overview/azure/ai-language-conversations-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-language-conversations/samples) | ## Responsible AI |
cognitive-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/overview.md | As you use orchestration workflow, see the following reference documentation and |||| |REST APIs (Authoring) | [REST API documentation](https://aka.ms/clu-authoring-apis) | | |REST APIs (Runtime) | [REST API documentation](https://aka.ms/clu-runtime-api) | |-|C# (Runtime) | [C# documentation](/dotnet/api/overview/azure/ai.language.conversations-readme-pre?view=azure-dotnet-preview&preserve-view=true) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples) | +|C# (Runtime) | [C# documentation](/dotnet/api/overview/azure/ai.language.conversations-readme) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples) | |Python (Runtime)| [Python documentation](/python/api/overview/azure/ai-language-conversations-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-language-conversations/samples) | ## Responsible AI |
cognitive-services | Power Virtual Agents | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/tutorials/power-virtual-agents.md | + + Title: "Tutorial: Add your Question Answering project to Power Virtual Agents" +description: In this tutorial, you will learn how to add your Question Answering project to Power Virtual Agents. +++++ Last updated : 10/11/2022++++# Add your Question Answering project to Power Virtual Agents ++Create and extend a [Power Virtual Agents](https://powervirtualagents.microsoft.com/) bot to provide answers from your knowledge base. ++> [!NOTE] +> The integration demonstrated in this tutorial is in preview and is not intended for deployment to production environments. ++In this tutorial, you learn how to: +> [!div class="checklist"] +> * Create a Power Virtual Agents bot +> * Create a system fallback topic +> * Add Question Answering as an action to a topic as a Power Automate flow +> * Create a Power Automate solution +> * Add a Power Automate flow to your solution +> * Publish Power Virtual Agents +> * Test Power Virtual Agents, and receive an answer from your Question Answering project ++> [!Note] +> The QnA Maker service is being retired on March 31, 2025. A newer version of the question and answering capability is now available as part of [Azure Cognitive Service for Language](/azure/cognitive-services/language-service/). For question answering capabilities within the Language Service, see [question answering](../overview.md). Starting October 1, 2022, you won't be able to create new QnA Maker resources. For information on migrating existing QnA Maker knowledge bases to question answering, consult the [migration guide](../how-to/migrate-qnamaker.md). ++## Create and publish a project +1. Follow the [quickstart](../quickstart/sdk.md?pivots=studio) to create and deploy a Question Answering project. +2. After deploying your project from Language Studio, select **Get Prediction URL**. +3. Get your Site URL from the hostname of the Prediction URL, and your Account key from the Ocp-Apim-Subscription-Key value. ++> [!div class="mx-imgBorder"] +> [  ]( ../media/power-virtual-agents/get-prediction-url.png#lightbox) ++4. Create a Custom Question Answering connector: Follow the [connector documentation](/connectors/languagequestionansw/) to create a connection to Question Answering. +5. Use this tutorial to create a bot with Power Virtual Agents instead of creating a bot from Language Studio. ++## Create a bot in Power Virtual Agents +[Power Virtual Agents](https://powervirtualagents.microsoft.com/) allows teams to create powerful bots by using a guided, no-code graphical interface. You don't need data scientists or developers. ++Create a bot by following the steps in [Create and delete Power Virtual Agents bots](/power-virtual-agents/authoring-first-bot). ++## Create the system fallback topic +In Power Virtual Agents, you create a bot with a series of topics (subject areas), in order to answer user questions by performing actions. ++Although the bot can connect to your project from any topic, this tutorial uses the system fallback topic. The fallback topic is used when the bot can't find an answer. The bot passes the user's text to the Question Answering query knowledge base API, receives the answer from your project, and displays it to the user as a message; a sketch of that API call is shown below.
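A minimal sketch of that query knowledge base call, made directly with `requests` using the Site URL and Account key collected in the steps above; the project name, deployment name, and question are placeholders.

```python
import requests

site_url = "https://<your-language-resource>.cognitiveservices.azure.com"  # placeholder
account_key = "<your-Ocp-Apim-Subscription-Key>"                           # placeholder

url = f"{site_url}/language/:query-knowledgebases"
params = {
    "projectName": "<your-project>",   # placeholder
    "deploymentName": "production",
    "api-version": "2021-10-01",
}
body = {"question": "How do I set up the connector?", "top": 1}

response = requests.post(
    url, params=params, headers={"Ocp-Apim-Subscription-Key": account_key}, json=body
)
# The top-ranked answer is what the flow will return to the bot.
print(response.json()["answers"][0]["answer"])
```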
++Create a fallback topic by following the steps in [Configure the system fallback topic in Power Virtual Agents](/power-virtual-agents/authoring-system-fallback-topic). ++## Use the authoring canvas to add an action +Use the Power Virtual Agents authoring canvas to connect the fallback topic to your project. The topic starts with the unrecognized user text. Add an action that passes that text to Question Answering, and then shows the answer as a message. The last step of displaying an answer is handled as a [separate step](../../../QnAMaker/Tutorials/integrate-with-power-virtual-assistant-fallback-topic.md#add-your-solutions-flow-to-power-virtual-agents), later in this tutorial. ++This section creates the fallback topic conversation flow. ++The new fallback action might already have conversation flow elements. Delete the **Escalate** item by selecting the **Options** menu. ++> [!div class="mx-imgBorder"] +> [  ]( ../media/power-virtual-agents/delete-action.png#lightbox) ++Below the *Message* node, select the (**+**) icon, then select **Call an action**. ++> [!div class="mx-imgBorder"] +> [  ]( ../media/power-virtual-agents/trigger-action-for-power-automate.png#lightbox) ++Select **Create a flow**. This takes you to the Power Automate portal. ++> [!div class="mx-imgBorder"] +> [  ]( ../media/power-virtual-agents/create-flow.png#lightbox) ++Power Automate opens a new template as shown below. ++> [!div class="mx-imgBorder"] +> [  ]( ../media/power-virtual-agents/power-automate-actions.png#lightbox) +**Do not use the template shown above.** ++Instead, follow the steps below to create a Power Automate flow. This flow: +- Takes the incoming user text as a question, and sends it to Question Answering. +- Returns the top response back to your bot. ++Click **Create** in the left panel, then click "OK" to leave the page. ++> [!div class="mx-imgBorder"] +> [  ]( ../media/power-virtual-agents/power-automate-create-new.png#lightbox) ++Select "Instant cloud flow". ++> [!div class="mx-imgBorder"] +> [  ]( ../media/power-virtual-agents/create-instant-cloud-flow.png#lightbox) ++To test this connector, click "When Power Virtual Agents calls a flow" and then click **Create**. ++> [!div class="mx-imgBorder"] +> [  ]( ../media/power-virtual-agents/create-trigger.png#lightbox) ++Click on "New Step" and search for "Power Virtual Agents". Choose "Add an input" and select text. Next, provide the keyword and the value. ++> [!div class="mx-imgBorder"] +> [  ]( ../media/power-virtual-agents/flow-step-1.png#lightbox) ++Click on "New Step" and search for "Language - Question Answering", then choose "Generate answer from Project" from the three actions. ++> [!div class="mx-imgBorder"] +> [  ]( ../media/power-virtual-agents/flow-step-2.png#lightbox) ++This option helps in answering the specified question using your project. Type in the project name, deployment name, and API version, and select the question from the previous step. ++> [!div class="mx-imgBorder"] +> [  ]( ../media/power-virtual-agents/flow-step-3.png#lightbox) ++Click on "New Step" and search for "Initialize variable". Choose a name for your variable, and select the "String" type. ++> [!div class="mx-imgBorder"] +> [  ]( ../media/power-virtual-agents/flow-step-4.png#lightbox) ++Click on "New Step" again, and search for "Apply to each", then select the output from the previous steps and add an action of "Set variable" and select the connector action.
++> [!div class="mx-imgBorder"] +> [  ]( ../media/power-virtual-agents/flow-step-5.png#lightbox) ++Click on "New Step" and search for "Return value(s) to Power Virtual Agents" and type in a keyword, then choose the previous variable name in the answer. ++> [!div class="mx-imgBorder"] +> [  ]( ../media/power-virtual-agents/flow-step-6.png#lightbox) ++The list of completed steps should look like this. ++> [!div class="mx-imgBorder"] +> [  ]( ../media/power-virtual-agents/flow-step-7.png#lightbox) ++Select **Save** to save the flow. ++## Create a solution and add the flow ++For the bot to find and connect to the flow, the flow must be included in a Power Automate solution. ++1. While still in the Power Automate portal, select Solutions from the left-side navigation. +2. Select **+ New solution**. +3. Enter a display name. The list of solutions includes every solution in your organization or school. Choose a naming convention that helps you filter to just your solutions. For example, you might prefix your email to your solution name: jondoe-power-virtual-agent-question-answering-fallback. +4. Select your publisher from the list of choices. +5. Accept the default values for the name and version. +6. Select **Create** to finish the process. ++**Add your flow to the solution** ++1. In the list of solutions, select the solution you just created. It should be at the top of the list. If it isn't, search by your email name, which is part of the solution name. +2. In the solution, select **+ Add existing**, and then select Flow from the list. +3. Find your flow from the **Outside solutions** list, and then select Add to finish the process. If there are many flows, look at the **Modified** column to find the most recent flow. ++## Add your solution's flow to Power Virtual Agents ++1. Return to the browser tab with your bot in Power Virtual Agents. The authoring canvas should still be open. +2. To insert a new step in the flow, above the **Message** action box, select the plus (+) icon. Then select **Call an action**. +3. From the **Flow** pop-up window, select the new flow named **Generate answers using Question Answering Project...**. The new action appears in the flow. ++> [!div class="mx-imgBorder"] +> [  ]( ../media/power-virtual-agents/flow-step-8.png#lightbox) ++4. To correctly set the input variable to the QnA Maker action, select **Select a variable**, then select **bot.UnrecognizedTriggerPhrase**. ++> [!div class="mx-imgBorder"] +> [  ]( ../media/power-virtual-agents/flow-step-9.png#lightbox) ++5. To correctly set the output variable to the Question Answering action, in the **Message** action, select **UnrecognizedTriggerPhrase**, then select the icon to insert a variable, {x}, then select **FinalAnswer**. +6. From the context toolbar, select **Save**, to save the authoring canvas details for the topic. ++Here's what the final bot canvas looks like: ++> [!div class="mx-imgBorder"] +> [  ]( ../media/power-virtual-agents/flow-step-10.png#lightbox) ++## Test the bot ++As you design your bot in Power Virtual Agents, you can use the [Test bot pane](/power-virtual-agents/authoring-test-bot) to see how the bot leads a customer through the bot conversation. ++1. In the test pane, toggle **Track between topics**. This allows you to watch the progression between topics, as well as within a single topic. +2. Test the bot by entering the user text in the following order. The authoring canvas reports the successful steps with a green check mark. 
++|**Question order**|**Test questions** |**Purpose** | +||-|--| +|1 |Hello |Begin conversation | +|2 |Store hours |Sample topic. This is configured for you without any additional work on your part. | +|3 |Yes |In reply to "Did that answer your question?" | +|4 |Excellent |In reply to "Please rate your experience." | +|5 |Yes |In reply to "Can I help with anything else?" | +|6 |How can I improve the throughput performance for query predictions?|This question triggers the fallback action, which sends the text to your knowledge base to answer. Then the answer is shown. The green check marks for the individual actions indicate success for each action.| ++> [!div class="mx-imgBorder"] +> [  ]( ../media/power-virtual-agents/flow-step-11.png#lightbox) ++## Publish your bot ++To make the bot available to all members of your organization, you need to publish it. ++Publish your bot by following the steps in [Publish your bot](/power-virtual-agents/publication-fundamentals-publish-channels). ++## Share your bot ++To make your bot available to others, you first need to publish it to a channel. For this tutorial, we'll use the demo website. ++Configure the demo website by following the steps in [Configure a chatbot for a live or demo website](/power-virtual-agents/publication-connect-bot-to-web-channels). ++Then you can share your website URL with your school or organization members. ++## Clean up resources ++When you are done with the knowledge base, remove the QnA Maker resources in the Azure portal. ++## See also ++* [Tutorial: Create an FAQ bot](../tutorials/bot-service.md) |
cognitive-services | Concepts Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concepts-features.md | Enabling inference explainability will add a collection to the JSON response fro In the example above, three action IDs are returned in the _ranking_ collection along with their respective probabilities scores. The action with the largest probability is the _best action_ as determined by the model trained on data sent to the Personalizer APIs, which in this case is `"id": "EntertainmentArticle"`. The action ID can be seen again in the _inferenceExplanation_ collection, along with the feature names and scores determined by the model for that action and the features and values sent to the Rank API. -Recall that Personalizer will either return the _best action_ or an _exploratory action_ chosen by the exploration policy. The best action is the one that the model has determined has the highest probability of maximizing the average reward, whereas exploratory actions are chosen among the set of all possible actions provided in the Rank API call. Actions taken during exploration do not leverage the feature scores in determining which action to take, therefore **feature scores for exploratory actions should not be used to gain an understanding of why the action was taken.** [You can learn more about exploration here](/azure/cognitive-services/personalizer/concepts-exploration). +Recall that Personalizer will either return the _best action_ or an _exploratory action_ chosen by the exploration policy. The best action is the one that the model has determined has the highest probability of maximizing the average reward, whereas exploratory actions are chosen among the set of all possible actions provided in the Rank API call. Actions taken during exploration do not leverage the feature scores in determining which action to take, therefore **feature scores for exploratory actions should not be used to gain an understanding of why the action was taken.** [You can learn more about exploration here](./concepts-exploration.md). For the best actions returned by Personalizer, the feature scores can provide general insight where: * Larger positive scores provide more support for the model choosing this action. For the best actions returned by Personalizer, the feature scores can provide ge ## Next steps -[Reinforcement learning](concepts-reinforcement-learning.md) +[Reinforcement learning](concepts-reinforcement-learning.md) |
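As a sketch of how a client might read these scores, assuming a parsed Rank response shaped like the example discussed above; the exact field names and values here are illustrative and should be checked against your API version.

```python
# `response` stands in for the parsed JSON body of a Rank call with
# inference explainability enabled; values are illustrative.
response = {
    "rewardActionId": "EntertainmentArticle",
    "ranking": [
        {"id": "EntertainmentArticle", "probability": 0.80},
        {"id": "SportsArticle", "probability": 0.15},
        {"id": "NewsArticle", "probability": 0.05},
    ],
    "inferenceExplanation": [
        {
            "id": "EntertainmentArticle",
            "features": [
                {"name": "user.profileType", "score": 0.42},
                {"name": "context.timeOfDay", "score": -0.11},
            ],
        }
    ],
}

best = max(response["ranking"], key=lambda action: action["probability"])
explanation = next(e for e in response["inferenceExplanation"] if e["id"] == best["id"])

# Only interpret scores for the best action; as noted above, exploratory
# actions don't use feature scores when they're chosen.
for feature in sorted(explanation["features"], key=lambda f: f["score"], reverse=True):
    print(f"{feature['name']}: {feature['score']:+.2f}")
```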
cognitive-services | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Cognitive Services description: Lists Azure Policy Regulatory Compliance controls available for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/10/2022 Last updated : 10/12/2022 |
cognitive-services | Security Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/security-features.md | For a comprehensive list of Azure service security recommendations see the [Cogn |Feature | Description | |:|:| | [Transport Layer Security (TLS)](/dotnet/framework/network-programming/tls) | All of the Cognitive Services endpoints exposed over HTTP enforce the TLS 1.2 protocol. With an enforced security protocol, consumers attempting to call a Cognitive Services endpoint should follow these guidelines: </br>- The client operating system (OS) needs to support TLS 1.2.</br>- The language (and platform) used to make the HTTP call need to specify TLS 1.2 as part of the request. Depending on the language and platform, specifying TLS is done either implicitly or explicitly.</br>- For .NET users, consider the [Transport Layer Security best practices](/dotnet/framework/network-programming/tls). |-| [Authentication options](./authentication.md)| Authentication is the act of verifying a user's identity. Authorization, by contrast, is the specification of access rights and privileges to resources for a given identity. An identity is a collection of information about a <a href="https://en.wikipedia.org/wiki/Principal_(computer_security)" target="_blank">principal</a>, and a principal can be either an individual user or a service.</br></br>By default, you authenticate your own calls to Cognitive Services using the subscription keys provided; this is the simplest method but not the most secure. The most secure authentication method is to use managed roles in Azure Active Directory. To learn about this and other authentication options, see [Authenticate requests to Cognitive Services](/azure/cognitive-services/authentication). | +| [Authentication options](./authentication.md)| Authentication is the act of verifying a user's identity. Authorization, by contrast, is the specification of access rights and privileges to resources for a given identity. An identity is a collection of information about a <a href="https://en.wikipedia.org/wiki/Principal_(computer_security)" target="_blank">principal</a>, and a principal can be either an individual user or a service.</br></br>By default, you authenticate your own calls to Cognitive Services using the subscription keys provided; this is the simplest method but not the most secure. The most secure authentication method is to use managed roles in Azure Active Directory. To learn about this and other authentication options, see [Authenticate requests to Cognitive Services](./authentication.md). | | [Environment variables](cognitive-services-environment-variables.md) | Environment variables are name-value pairs that are stored within a specific development environment. You can store your credentials in this way as a more secure alternative to using hardcoded values in your code. However, if your environment is compromised, the environment variables are compromised as well, so this is not the most secure approach.</br></br> For instructions on how to use environment variables in your code, see the [Environment variables guide](cognitive-services-environment-variables.md). | | [Customer-managed keys (CMK)](./encryption/cognitive-services-encryption-keys-portal.md) | This feature is for services that store customer data at rest (longer than 48 hours). While this data is already double-encrypted on Azure servers, users can get extra security by adding another layer of encryption, with keys they manage themselves. 
You can link your service to Azure Key Vault and manage your data encryption keys there. </br></br>You need special approval to get the E0 SKU for your service, which enables CMK. Within 3-5 business days after you submit the [request form](https://aka.ms/cogsvc-cmk), you'll get an update on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once you're approved for using the E0 SKU, you'll need to create a new resource from the Azure portal and select E0 as the Pricing Tier. You won't be able to upgrade from F0 to the new E0 SKU. </br></br>Only some services can use CMK; look for your service on the [Customer-managed keys](./encryption/cognitive-services-encryption-keys-portal.md) page.| | [Virtual networks](./cognitive-services-virtual-networks.md) | Virtual networks allow you to specify which endpoints can make API calls to your resource. The Azure service will reject API calls from devices outside of your network. You can set a formula-based definition of the allowed network, or you can define an exhaustive list of endpoints to allow. This is another layer of security that can be used in combination with others. | | [Data loss prevention](./cognitive-services-data-loss-prevention.md) | The data loss prevention feature lets an administrator decide what types of URIs their Azure resource can take as inputs (for those API calls that take URIs as input). This can be done to prevent the possible exfiltration of sensitive company data: If a company stores sensitive information (such as a customer's private data) in URL parameters, a bad actor inside that company could submit the sensitive URLs to an Azure service, which surfaces that data outside the company. Data loss prevention lets you configure the service to reject certain URI forms on arrival.| | [Customer Lockbox](../security/fundamentals/customer-lockbox-overview.md) |The Customer Lockbox feature provides an interface for customers to review and approve or reject data access requests. It's used in cases where a Microsoft engineer needs to access customer data during a support request. For information on how Customer Lockbox requests are initiated, tracked, and stored for later reviews and audits, see the [Customer Lockbox guide](../security/fundamentals/customer-lockbox-overview.md).</br></br>Customer Lockbox is available for the following -| [Bring your own storage (BYOS)](/azure/cognitive-services/speech-service/speech-encryption-of-data-at-rest)| The Speech service doesn't currently support Customer Lockbox. However, you can arrange for your service-specific data to be stored in your own storage resource using bring-your-own-storage (BYOS). BYOS allows you to achieve similar data controls to Customer Lockbox. Keep in mind that Speech service data stays and is processed in the Azure region where the Speech resource was created. This applies to any data at rest and data in transit. For customization features like Custom Speech and Custom Voice, all customer data is transferred, stored, and processed in the same region where the Speech service resource and BYOS resource (if used) reside. </br></br>To use BYOS with Speech, follow the [Speech encryption of data at rest](/azure/cognitive-services/speech-service/speech-encryption-of-data-at-rest) guide.</br></br> Microsoft does not use customer data to improve its Speech models. Additionally, if endpoint logging is disabled and no customizations are used, then no customer data is stored by Speech. 
| +| [Bring your own storage (BYOS)](./speech-service/speech-encryption-of-data-at-rest.md)| The Speech service doesn't currently support Customer Lockbox. However, you can arrange for your service-specific data to be stored in your own storage resource using bring-your-own-storage (BYOS). BYOS allows you to achieve similar data controls to Customer Lockbox. Keep in mind that Speech service data stays and is processed in the Azure region where the Speech resource was created. This applies to any data at rest and data in transit. For customization features like Custom Speech and Custom Voice, all customer data is transferred, stored, and processed in the same region where the Speech service resource and BYOS resource (if used) reside. </br></br>To use BYOS with Speech, follow the [Speech encryption of data at rest](./speech-service/speech-encryption-of-data-at-rest.md) guide.</br></br> Microsoft does not use customer data to improve its Speech models. Additionally, if endpoint logging is disabled and no customizations are used, then no customer data is stored by Speech. | ## Next steps -* Explore [Cognitive Services](./what-are-cognitive-services.md) and choose a service to get started. +* Explore [Cognitive Services](./what-are-cognitive-services.md) and choose a service to get started. |
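One practical note on the TLS 1.2 requirement in the security features table above: most modern clients already negotiate TLS 1.2 or later, but you can pin a floor explicitly. A minimal Python sketch follows; the hostname is a placeholder.

```python
import ssl
import urllib.error
import urllib.request

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse anything older than TLS 1.2

url = "https://<your-resource>.cognitiveservices.azure.com/"  # placeholder endpoint

try:
    with urllib.request.urlopen(url, context=context) as response:
        print(response.status)
except urllib.error.HTTPError as err:
    # Any HTTP status (even 404) means the TLS 1.2 handshake succeeded.
    print(err.code)
```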
cognitive-services | Use Key Vault | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/use-key-vault.md | zone_pivot_groups: programming-languages-set-twenty-eight # Develop Azure Cognitive Services applications with Key Vault -Use this article to learn how to develop Cognitive Services applications securely by using [Azure Key Vault](/azure/key-vault/general/overview). +Use this article to learn how to develop Cognitive Services applications securely by using [Azure Key Vault](../key-vault/general/overview.md). Key Vault reduces the chances that secrets may be accidentally leaked, because you won't store security information in your application. Key Vault reduces the chances that secrets may be accidentally leaked, because y * A valid Azure subscription - [Create one for free](https://azure.microsoft.com/free) * [Visual Studio IDE](https://visualstudio.microsoft.com/vs/)-* An [Azure Key Vault](/azure/key-vault/general/quick-create-portal) +* An [Azure Key Vault](../key-vault/general/quick-create-portal.md) * [A multi-service resource or a resource for a specific service](./cognitive-services-apis-create-account.md) ::: zone-end Key Vault reduces the chances that secrets may be accidentally leaked, because y * A valid Azure subscription - [Create one for free](https://azure.microsoft.com/free). * [Python 3.7 or later](https://www.python.org/) * [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps)-* An [Azure Key Vault](/azure/key-vault/general/quick-create-portal) +* An [Azure Key Vault](../key-vault/general/quick-create-portal.md) * [A multi-service resource or a resource for a specific service](./cognitive-services-apis-create-account.md) ::: zone-end Key Vault reduces the chances that secrets may be accidentally leaked, because y * A valid Azure subscription - [Create one for free](https://azure.microsoft.com/free). * [Java Development Kit (JDK) version 8 or above](/azure/developer/java/fundamentals/) * [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps)-* An [Azure Key Vault](/azure/key-vault/general/quick-create-portal) +* An [Azure Key Vault](../key-vault/general/quick-create-portal.md) * [A multi-service resource or a resource for a specific service](./cognitive-services-apis-create-account.md) ::: zone-end Key Vault reduces the chances that secrets may be accidentally leaked, because y * A valid Azure subscription - [Create one for free](https://azure.microsoft.com/free). * [Current Node.js v14 LTS or later](https://nodejs.org/) * [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps)-* An [Azure Key Vault](/azure/key-vault/general/quick-create-portal) +* An [Azure Key Vault](../key-vault/general/quick-create-portal.md) * [A multi-service resource or a resource for a specific service](./cognitive-services-apis-create-account.md) ::: zone-end Some Cognitive Services require different information to authenticate API calls, ## Add your credentials to your key vault -For your application to retrieve and use your credentials to authenticate API calls, you will need to add them to your [key vault secrets](/azure/key-vault/secrets/about-secrets). +For your application to retrieve and use your credentials to authenticate API calls, you will need to add them to your [key vault secrets](../key-vault/secrets/about-secrets.md). Repeat these steps to generate a secret for each required resource credential. 
For example, a key and endpoint. These secret names will be used later to authenticate your application. If you're using a multi-service resource or Language resource, you can update [y ## Next steps -* See [What are Cognitive Services](./what-are-cognitive-services.md) for available features you can develop along with [Azure key vault](/azure/key-vault/general/). +* See [What are Cognitive Services](./what-are-cognitive-services.md) for available features you can develop along with [Azure key vault](../key-vault/general/index.yml). * For additional information on secure application development, see:- * [Best practices for using Azure Key Vault](/azure/key-vault/general/best-practices) + * [Best practices for using Azure Key Vault](../key-vault/general/best-practices.md) * [Cognitive Services security](cognitive-services-security.md)- * [Azure security baseline for Cognitive Services](/security/benchmark/azure/baselines/cognitive-services-security-baseline?toc=/azure/cognitive-services/TOC.json) + * [Azure security baseline for Cognitive Services](/security/benchmark/azure/baselines/cognitive-services-security-baseline?toc=/azure/cognitive-services/TOC.json) |
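To make the retrieval side of this workflow concrete, here's a minimal sketch with the `azure-keyvault-secrets` library: reading the key and endpoint back out of Key Vault at startup instead of hardcoding them. The vault URL and secret names are the hypothetical ones you would have chosen when creating the secrets above.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

vault_url = "https://<your-key-vault>.vault.azure.net"  # placeholder
client = SecretClient(vault_url=vault_url, credential=DefaultAzureCredential())

# Hypothetical secret names chosen when the secrets were created.
key = client.get_secret("CognitiveServicesKey").value
endpoint = client.get_secret("CognitiveServicesEndpoint").value

# `key` and `endpoint` can now be passed to any Cognitive Services client
# without the credentials ever appearing in source code.
```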
communication-services | Call Logs Azure Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/call-logs-azure-monitor.md | Diagnostic log for audio stream from Server Endpoint to VoIP Endpoint 3: "packetLossRateAvg": "0", ``` ### Error Codes-The `participantEndReason` will contain a value from the set of Calling SDK error codes. You can refer to these codes to troubleshoot issues during the call, per Endpoint. See [troubleshooting in Azure communication Calling SDK error codes](https://docs.microsoft.com/azure/communication-services/concepts/troubleshooting-info?tabs=csharp%2Cios%2Cdotnet#calling-sdk-error-codes) +The `participantEndReason` will contain a value from the set of Calling SDK error codes. You can refer to these codes to troubleshoot issues during the call, per Endpoint. See [troubleshooting in Azure communication Calling SDK error codes](../troubleshooting-info.md?tabs=csharp%2cios%2cdotnet#calling-sdk-error-codes) |
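If these diagnostic logs are routed to a Log Analytics workspace, the end reasons can be pulled programmatically. A sketch with `azure-monitor-query`; the workspace ID is a placeholder, and the table and column names in the KQL are assumptions to validate against your own workspace schema.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Assumed table/column names - check them against your workspace before use.
query = """
ACSCallDiagnostics
| where ParticipantEndReason !in ("0", "")
| project TimeGenerated, CorrelationId, EndpointId, ParticipantEndReason
"""

result = client.query_workspace("<workspace-id>", query, timespan=timedelta(days=1))
for table in result.tables:
    for row in table.rows:
        print(row)  # each non-zero end reason maps to a Calling SDK error code
```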
communication-services | Government Cloud | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/government-cloud.md | The following table shows pairs of government clouds that are currently supported | Microsoft 365 cloud| Azure cloud| Support | | | | | | [GCC](/office365/servicedescriptions/office-365-platform-service-description/office-365-us-government/gcc) | Public | ❌ |-| [GCC-H](/office365/servicedescriptions/office-365-platform-service-description/office-365-us-government/gcc-high-and-dod) | [US Government](/azure/azure-government/documentation-government-welcome) | ✔️ | +| [GCC-H](/office365/servicedescriptions/office-365-platform-service-description/office-365-us-government/gcc-high-and-dod) | [US Government](../../../../azure-government/documentation-government-welcome.md) | ✔️ | ## Supported use cases The following table shows supported use cases for Gov Cloud users with Azure Commu | Join Teams 1:1 or group call | ❌ | | Join Teams 1:1 or group chat | ❌ | -- [1] Gov cloud users can join a channel Teams meeting with audio and video, but they won't be able to send or receive any chat messages+- [1] Gov cloud users can join a channel Teams meeting with audio and video, but they won't be able to send or receive any chat messages |
communication-services | Azure Ad Api Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user/azure-ad-api-permissions.md | None. - Application admin - Cloud application admin -Find more details in [Azure Active Directory documentation](/azure/active-directory/roles/permissions-reference). +Find more details in [Azure Active Directory documentation](../../../../active-directory/roles/permissions-reference.md). |
communication-services | Government Cloud | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user/government-cloud.md | The following table shows pairs of government clouds that are currently supported | Microsoft 365 cloud| Azure cloud| Support | | | | | | [GCC](/office365/servicedescriptions/office-365-platform-service-description/office-365-us-government/gcc) | Public | ❌ |-| [GCC-H](/office365/servicedescriptions/office-365-platform-service-description/office-365-us-government/gcc-high-and-dod) | [US Government](/azure/azure-government/documentation-government-welcome) | ❌ | +| [GCC-H](/office365/servicedescriptions/office-365-platform-service-description/office-365-us-government/gcc-high-and-dod) | [US Government](../../../../azure-government/documentation-government-welcome.md) | ❌ | ## Supported use cases |
communication-services | Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/router/concepts.md | An exception policy controls the behavior of a Job based on a trigger and execut [cla]: https://cla.microsoft.com [nuget]: https://www.nuget.org/ [netstandars2mappings]: https://github.com/dotnet/standard/blob/master/docs/versions.md-[useraccesstokens]: /azure/communication-services/quickstarts/access-tokens?pivots=programming-language-csharp +[useraccesstokens]: ../../quickstarts/access-tokens.md?pivots=programming-language-csharp [communication_resource_docs]: ../../quickstarts/create-communication-resource.md?pivots=platform-azp&tabs=windows [communication_resource_create_portal]: ../../quickstarts/create-communication-resource.md?pivots=platform-azp&tabs=windows [communication_resource_create_power_shell]: /powershell/module/az.communication/new-azcommunicationservice An exception policy controls the behavior of a Job based on a trigger and execut [offer_declined_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterworkerofferdeclined [offer_expired_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterworkerofferexpired [offer_revoked_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterworkerofferrevoked-[worker-scoring]: ../../how-tos/router-sdk/customize-worker-scoring.md +[worker-scoring]: ../../how-tos/router-sdk/customize-worker-scoring.md |
communication-services | Teams Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/teams-endpoint.md | Use Graph API to integrate 1:1 chat, group chat, meeting chat, and channel capab The following articles will guide you in implementing the chat for Teams users: - [Authenticate as Teams user](/graph/auth-v2-user) - [Send message as Teams user](/graph/api/chat-post-messages)-- [Receive message as Teams user on webhook](/graph/teams-changenotifications-chatMessage) and then push message to the client with, for example, [SignalR](/azure/azure-signalr/signalr-overview).+- [Receive message as Teams user on webhook](/graph/teams-changenotifications-chatMessage) and then push message to the client with, for example, [SignalR](../../azure-signalr/signalr-overview.md). - [Poll messages for Teams user](/graph/api/chat-list-messages) ## Supported use cases Teams users can join the Teams meeting experience, manage calls, and manage chat Find more details in the following articles: - [Teams interoperability](./teams-interop.md) - [Issue a Teams access token](../quickstarts/manage-teams-identity.md)-- [Start a call to Teams user as a Teams user](../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md)+- [Start a call to Teams user as a Teams user](../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md) |
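For the "poll messages for Teams user" option in the list above, here's a minimal sketch of the underlying Graph call. Acquiring the delegated Teams user token (with the `Chat.Read` permission) is out of scope here, and the chat ID is a placeholder.

```python
import requests

access_token = "<delegated-teams-user-token>"  # placeholder; obtain via your auth flow
chat_id = "<chat-id>"                          # placeholder

url = f"https://graph.microsoft.com/v1.0/chats/{chat_id}/messages"
response = requests.get(url, headers={"Authorization": f"Bearer {access_token}"})

# Print sender and body of each message in the chat.
for message in response.json().get("value", []):
    print(message.get("from"), message.get("body", {}).get("content"))
```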
communication-services | Call Recording | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-recording.md | For example, you can record 1:1 or 1:N scenarios for audio and video calls enabl You can also use Call Recording to record complex PSTN or VoIP inbound and outbound calling workflows managed by [Call Automation](https://learn.microsoft.com/azure/communication-services/concepts/voice-video-calling/call-automation). Regardless of how you established the call, Call Recording allows you to produce mixed or unmixed media files that are stored for 48 hours in built-in temporary storage. You can retrieve the files and take them to the long-term storage solution of your choice. Call Recording supports all Azure Communication Services data regions. ## Media output and Channel types supported |
communication-services | Add Chat Push Notifications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/add-chat-push-notifications.md | Access the sample code for this tutorial on [GitHub](https://github.com/Azure-Sa ## Prerequisites -1. Finish all the prerequisite steps in [Chat Quickstart](/azure/communication-services/quickstarts/chat/get-started?pivots=programming-language-swift) +1. Finish all the prerequisite steps in [Chat Quickstart](../quickstarts/chat/get-started.md?pivots=programming-language-swift) 2. ANH setup: Create an Azure Notification Hub within the same subscription as your Communication Services resource and link the Notification Hub to your Communication Services resource. See [Notification Hub provisioning](../concepts/notifications.md#notification-hub-provisioning). In the protocol extension, the chat SDK provides the implementation of `decryptPayload(n 5. Plug the iOS device into your Mac, run the program, and click "Allow" when asked to authorize push notifications on the device. -6. As User B, send a chat message. You (User A) should be able to receive a push notification on your iOS device. +6. As User B, send a chat message. You (User A) should be able to receive a push notification on your iOS device. |
container-apps | Get Started Existing Container Image | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/get-started-existing-container-image.md | This article demonstrates how to deploy an existing container to Azure Container - An Azure account with an active subscription. - If you don't have one, you [can create one for free](https://azure.microsoft.com/free/). - Install the [Azure CLI](/cli/azure/install-azure-cli).-- Access to a public or private container registry, such as the [Azure Container Registry](/azure/container-registry/).+- Access to a public or private container registry, such as the [Azure Container Registry](../container-registry/index.yml). [!INCLUDE [container-apps-create-cli-steps.md](../../includes/container-apps-create-cli-steps.md)] Remove-AzResourceGroup -Name $ResourceGroupName -Force ## Next steps > [!div class="nextstepaction"]-> [Environments in Azure Container Apps](environment.md) +> [Environments in Azure Container Apps](environment.md) |
container-apps | Tutorial Java Quarkus Connect Managed Identity Postgresql Database | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-java-quarkus-connect-managed-identity-postgresql-database.md | Last updated 09/26/2022 # Tutorial: Connect to PostgreSQL Database from a Java Quarkus Container App without secrets using a managed identity -[Azure Container Apps](overview.md) provides a [managed identity](managed-identity.md) for your app, which is a turn-key solution for securing access to [Azure Database for PostgreSQL](/azure/postgresql/) and other Azure services. Managed identities in Container Apps make your app more secure by eliminating secrets from your app, such as credentials in the environment variables. +[Azure Container Apps](overview.md) provides a [managed identity](managed-identity.md) for your app, which is a turn-key solution for securing access to [Azure Database for PostgreSQL](../postgresql/index.yml) and other Azure services. Managed identities in Container Apps make your app more secure by eliminating secrets from your app, such as credentials in the environment variables. This tutorial walks you through the process of building, configuring, deploying, and scaling Java container apps on Azure. At the end of this tutorial, you'll have a [Quarkus](https://quarkus.io) application storing data in a [PostgreSQL](../postgresql/index.yml) database with a managed identity running on [Container Apps](overview.md). When the new webpage shows your list of fruits, your app is connecting to the da Learn more about running Java apps on Azure in the developer guide. > [!div class="nextstepaction"]-> [Azure for Java Developers](/java/azure/) +> [Azure for Java Developers](/java/azure/) |
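The tutorial itself is Java/Quarkus; purely to illustrate the underlying pattern it relies on (exchanging the managed identity for an Azure AD token and presenting that token as the database password), here is a language-neutral sketch in Python. The host, database, and user values are placeholders, and the exact user format depends on your PostgreSQL offering.

```python
import psycopg2
from azure.identity import DefaultAzureCredential

# On Container Apps with a managed identity, DefaultAzureCredential resolves
# to that identity; the scope below is the Azure Database for PostgreSQL audience.
credential = DefaultAzureCredential()
token = credential.get_token("https://ossrdbms-aad.database.windows.net/.default")

connection = psycopg2.connect(
    host="<your-server>.postgres.database.azure.com",  # placeholder
    dbname="<your-database>",                          # placeholder
    user="<your-aad-identity-name>",                   # placeholder; format varies by offering
    password=token.token,  # the short-lived Azure AD token replaces a stored secret
    sslmode="require",
)
print(connection.get_dsn_parameters()["host"])
```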
container-registry | Container Registry Enable Conditional Access Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-enable-conditional-access-policy.md | -The [Conditional Access policy](/azure/active-directory/conditional-access/overview) is designed to enforce strong authentication. The authentication is based on the location, trusted and compliant devices, user assigned roles, authorization method, and the client applications. The policy enables security that meets the organization's compliance requirements and keeps the data and user accounts safe. +The [Conditional Access policy](../active-directory/conditional-access/overview.md) is designed to enforce strong authentication. The authentication is based on the location, trusted and compliant devices, user assigned roles, authorization method, and the client applications. The policy enables security that meets the organization's compliance requirements and keeps the data and user accounts safe. -Learn more about [Conditional Access policy](/azure/active-directory/conditional-access/overview), and the [conditions](/azure/active-directory/conditional-access/overview#common-signals) you'll take into consideration to make [policy decisions](/azure/active-directory/conditional-access/overview#common-decisions). +Learn more about [Conditional Access policy](../active-directory/conditional-access/overview.md), and the [conditions](../active-directory/conditional-access/overview.md#common-signals) you'll take into consideration to make [policy decisions](../active-directory/conditional-access/overview.md#common-decisions). The Conditional Access policy applies after the first-factor authentication to the Azure Container Registry is complete. The purpose of Conditional Access for ACR is for user authentication only. The policy enables the user to choose the controls and further blocks or grants access based on the policy decisions. Create a Conditional Access policy and assign your test group of users as follow 1. Under **Grant**, filter and choose from options to enforce grant access or block access, during a sign-in event to the Azure portal. In this case, grant access with *Require multifactor authentication*, then choose **Select**. >[!TIP]- > To configure and grant multi-factor authentication, see [configure and conditions for multi-factor authentication.](/azure/active-directory/authentication/tutorial-enable-azure-mfa#configure-the-conditions-for-multi-factor-authentication) + > To configure and grant multi-factor authentication, see [configure and conditions for multi-factor authentication.](../active-directory/authentication/tutorial-enable-azure-mfa.md#configure-the-conditions-for-multi-factor-authentication) 1. Under **Session**, filter and choose from options to enable any control on the session-level experience of the cloud apps. Create a Conditional Access policy and assign your test group of users as follow ## Next steps * Learn more about [Azure Policy definitions](../governance/policy/concepts/definition-structure.md) and [effects](../governance/policy/concepts/effects.md).-* Learn more about [common access concerns that Conditional Access policies can help with](/azure/active-directory/conditional-access/concept-conditional-access-policy-common). -* Learn more about [Conditional Access policy components](/azure/active-directory/conditional-access/concept-conditional-access-policies).
+* Learn more about [common access concerns that Conditional Access policies can help with](../active-directory/conditional-access/concept-conditional-access-policy-common.md). +* Learn more about [Conditional Access policy components](../active-directory/conditional-access/concept-conditional-access-policies.md). |
container-registry | Manual Regional Move | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/manual-regional-move.md | Azure CLI * Exporting and using a Resource Manager template can help re-create many registry settings. You can edit the template to configure more settings, or update the target registry after creation. * Currently, Azure Container Registry doesn't support a registry move to a different Active Directory tenant. This limitation applies to both registries encrypted with a [customer-managed key](tutorial-enable-customer-managed-keys.md) and unencrypted registries. * If you are unable to move a registry as outlined in this article, create a new registry, manually recreate settings, and [import registry content in the target registry](#import-registry-content-in-target-registry). * You can find the steps to move registry resources to a new resource group in the same subscription, or to move resources to a [new subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md). ## Export template from source registry After you have successfully deployed the target registry, migrated content, and ## Next steps * Learn more about [importing container images](container-registry-import-images.md) to an Azure container registry from a public registry or another private registry. -* See the [Resource Manager template reference](/azure/templates/microsoft.containerregistry/registries) for Azure Container Registry. +* See the [Resource Manager template reference](/azure/templates/microsoft.containerregistry/registries) for Azure Container Registry. |
container-registry | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/security-controls-policy.md | description: Lists Azure Policy Regulatory Compliance controls available for Azu Previously updated : 10/10/2022 Last updated : 10/12/2022 |
cosmos-db | Integrations Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/integrations-overview.md | Read more about [how to choose the right compute service on Azure](/azure/archit ### Azure Cognitive Search Azure Cognitive Search is a fully managed cloud search service that provides auto-complete, geospatial search, filtering and faceting capabilities for a rich user experience.-Here's how you can [index data from the Azure Cosmos DB for MongoDB account](/azure/search/search-howto-index-cosmosdb-mongodb) to use with Azure Cognitive Search. +Here's how you can [index data from the Azure Cosmos DB for MongoDB account](../../search/search-howto-index-cosmosdb-mongodb.md) to use with Azure Cognitive Search. ## Improve database security Azure AD managed identities eliminate the need for developers to manage credentials. Learn about other key integrations: * [Monitor Azure Cosmos DB with Azure Monitor.](/azure/cosmos-db/monitor-cosmos-db?tabs=azure-diagnostics.md)-* [Set up analytics with Azure Synapse Link.](/azure/cosmos-db/configure-synapse-link) +* [Set up analytics with Azure Synapse Link.](../configure-synapse-link.md) |
cosmos-db | Modeling Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/modeling-data.md | Just as there's no single way to represent a piece of data on a screen, there's ## Next steps -* To learn more about Azure Cosmos DB, refer to the service's [documentation](/azure/cosmos-db/) page. +* To learn more about Azure Cosmos DB, refer to the service's [documentation](../index.yml) page. * To understand how to shard your data across multiple partitions, refer to [Partitioning Data in Azure Cosmos DB](../partitioning-overview.md). Data Modeling and Partitioning - a Real-World Example](how-to-model-partition-ex * Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md) - * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md) + * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md) |
cosmos-db | Powerbi Visualize | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/powerbi-visualize.md | To build a Power BI report/dashboard: ## Next steps * To learn more about Power BI, see [Get started with Power BI](https://powerbi.microsoft.com/documentation/powerbi-service-get-started/).-* To learn more about Azure Cosmos DB, see the [Azure Cosmos DB documentation landing page](/azure/cosmos-db/). +* To learn more about Azure Cosmos DB, see the [Azure Cosmos DB documentation landing page](../index.yml). |
cosmos-db | Quickstart Nodejs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-nodejs.md | Add the following code at the end of the `index.js` file to include the required ### Add variables for names -Add the following variables to manage unique database and container names and the [partition key (pk)](/azure/cosmos-db/partitioning-overview). +Add the following variables to manage unique database and container names and the [partition key (pk)](../partitioning-overview.md). :::code language="javascript" source="~/cosmos-db-sql-api-javascript-samples/001-quickstart/index.js" range="13-19"::: Touring-1000 Blue, 50 read In this quickstart, you learned how to create an Azure Cosmos DB SQL API account, create a database, and create a container using the JavaScript SDK. You can now dive deeper into the SDK to import more data, perform complex queries, and manage your Azure Cosmos DB SQL API resources. > [!div class="nextstepaction"]-> [Tutorial: Build a Node.js console app](sql-api-nodejs-get-started.md) +> [Tutorial: Build a Node.js console app](sql-api-nodejs-get-started.md) |
cosmos-db | Samples Go | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/samples-go.md | + + Title: API for NoSQL Go examples for Azure Cosmos DB +description: Find Go examples on GitHub for common tasks in Azure Cosmos DB, including CRUD operations. ++++ms.devlang: go + Last updated : 10/17/2022+++# Azure Cosmos DB Go examples ++> [!div class="op_single_selector"] +> * [.NET SDK Examples](samples-dotnet.md) +> * [Java V4 SDK Examples](samples-java.md) +> * [Spring Data V3 SDK Examples](samples-java-spring-data.md) +> * [Node.js Examples](samples-nodejs.md) +> * [Python Examples](samples-python.md) +> * [Go Examples](samples-go.md) +> * [Azure Code Sample Gallery](https://azure.microsoft.com/resources/samples/?sort=0&service=cosmos-db) ++Sample solutions that do CRUD operations and other common operations on Azure Cosmos DB resources are included in the [azure-sdk-for-go](https://github.com/Azure/azure-sdk-for-go) GitHub repository. This article provides: ++* Links to the tasks in each of the Go example project files. +* Links to the related API reference content. ++## Prerequisites ++- An Azure Cosmos DB Account. Your options are: + * Within an Azure active subscription: + * [Create an Azure free Account](https://azure.microsoft.com/free) or use your existing subscription + * [Visual Studio Monthly Credits](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers) + * [Azure Cosmos DB Free Tier](../free-tier.md) + * Without an Azure active subscription: + * [Try Azure Cosmos DB for free](../try-free.md), a test environment that lasts for 30 days. + * [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) +- [Go](https://go.dev/) installed on your computer, and a working knowledge of Go. +- [Visual Studio Code](https://code.visualstudio.com/). +- The [Go extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=golang.Go). +- [Git](https://www.git-scm.com/downloads). +- [Azure Cosmos DB for NoSQL SDK for Go](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/data/azcosmos) ++## Database examples ++The [cosmos_client.go](https://github.com/Azure/azure-sdk-for-go/blob/sdk/dat) conceptual article. ++| Task | API reference | +| | | +| [Create a database](https://github.com/Azure/azure-sdk-for-go/blob/sdk/data/azcosmos/v0.3.2/sdk/data/azcosmos/cosmos_client.go#L151) |Client.CreateDatabase | +| [Read a database by ID](https://github.com/Azure/azure-sdk-for-go/blob/sdk/data/azcosmos/v0.3.2/sdk/data/azcosmos/cosmos_client.go#L119) |Client.NewDatabase| +| [Delete a database](https://github.com/Azure/azure-sdk-for-go/blob/sdk/data/azcosmos/v0.3.2/sdk/data/azcosmos/cosmos_database.go#L155) |DatabaseClient.Delete| ++## Container examples ++The [cosmos_database.go](https://github.com/Azure/azure-sdk-for-go/blob/sdk/dat) conceptual article. ++| Task | API reference | +| | | +| [Create a container](https://github.com/Azure/azure-sdk-for-go/blob/sdk/data/azcosmos/v0.3.2/sdk/data/azcosmos/cosmos_database.go#L47) |DatabaseClient.CreateContainer | +| [Get a container by its ID](https://github.com/Azure/azure-sdk-for-go/blob/sdk/data/azcosmos/v0.3.2/sdk/data/azcosmos/cosmos_database.go#L35) |DatabaseClient.NewContainer | +| [Delete a container](https://github.com/Azure/azure-sdk-for-go/blob/sdk/data/azcosmos/v0.3.2/sdk/data/azcosmos/cosmos_container.go#L109) |ContainerClient.Delete | ++## Item examples ++The [cosmos_container.go](https://github.com/Azure/azure-sdk-for-go/blob/sdk/dat) conceptual article. 
++| Task | API reference | +| | | +| [Create an item in a container](https://github.com/Azure/azure-sdk-for-go/blob/sdk/data/azcosmos/v0.3.2/sdk/data/azcosmos/cosmos_container.go#L184) |ContainerClient.CreateItem | +| [Read an item by its ID](https://github.com/Azure/azure-sdk-for-go/blob/sdk/data/azcosmos/v0.3.2/sdk/data/azcosmos/cosmos_container.go#L325) |ContainerClient.ReadItem | +| [Query items](https://github.com/Azure/azure-sdk-for-go/blob/sdk/data/azcosmos/v0.3.2/sdk/data/azcosmos/cosmos_container.go#L410) |ContainerClient.NewQueryItemsPager | +| [Replace an item](https://github.com/Azure/azure-sdk-for-go/blob/sdk/data/azcosmos/v0.3.2/sdk/data/azcosmos/cosmos_container.go#L279) |ContainerClient.ReplaceItem | +| [Upsert an item](https://github.com/Azure/azure-sdk-for-go/blob/sdk/data/azcosmos/v0.3.2/sdk/data/azcosmos/cosmos_container.go#L229) |ContainerClient.UpsertItem | +| [Delete an item](https://github.com/Azure/azure-sdk-for-go/blob/sdk/data/azcosmos/v0.3.2/sdk/data/azcosmos/cosmos_container.go#L366) |ContainerClient.DeleteItem | +++## Next steps ++Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. +* If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md) +* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md) |
cosmos-db | Samples Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/samples-python.md | Title: API for NoSQL Python examples for Azure Cosmos DB description: Find Python examples on GitHub for common tasks in Azure Cosmos DB, including CRUD operations.-+++ ms.devlang: python Previously updated : 08/26/2021- Last updated : 10/18/2021 + # Azure Cosmos DB Python examples+ [!INCLUDE[NoSQL](../includes/appliesto-nosql.md)] > [!div class="op_single_selector"] -> * [.NET SDK Examples](samples-dotnet.md) -> * [Java V4 SDK Examples](samples-java.md) -> * [Spring Data V3 SDK Examples](samples-java-spring-data.md) -> * [Node.js Examples](samples-nodejs.md) -> * [Python Examples](samples-python.md) -> * [Azure Code Sample Gallery](https://azure.microsoft.com/resources/samples/?sort=0&service=cosmos-db) +> +> - [.NET SDK Examples](samples-dotnet.md) +> - [Java V4 SDK Examples](samples-java.md) +> - [Spring Data V3 SDK Examples](samples-java-spring-data.md) +> - [Node.js Examples](samples-nodejs.md) +> - [Python Examples](samples-python.md) +> - [Azure Code Sample Gallery](https://azure.microsoft.com/resources/samples/?sort=0&service=cosmos-db) +> -Sample solutions that do CRUD operations and other common operations on Azure Cosmos DB resources are included in the [azure-documentdb-python](https://github.com/Azure/azure-documentdb-python) GitHub repository. This article provides: +Sample solutions that do CRUD operations and other common operations on Azure Cosmos DB resources are included in the `main/sdk/cosmos` folder of the [azure/azure-sdk-for-python](https://github.com/azure/azure-sdk-for-python/tree/main/sdk/cosmos) GitHub repository. This article provides: -* Links to the tasks in each of the Python example project files. -* Links to the related API reference content. +- Links to the tasks in each of the Python example project files. +- Links to the related API reference content. ## Prerequisites -- An Azure Cosmos DB Account. You options are:- * Within an Azure active subscription: - * [Create an Azure free Account](https://azure.microsoft.com/free) or use your existing subscription - * [Visual Studio Monthly Credits](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers) - * [Azure Cosmos DB Free Tier](../free-tier.md) - * Without an Azure active subscription: - * [Try Azure Cosmos DB for free](../try-free.md), a tests environment that lasts for 30 days. - * [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) +- An Azure Cosmos DB Account. Your options are: + - Within an Azure active subscription: + - [Create an Azure free Account](https://azure.microsoft.com/free) or use your existing subscription + - [Visual Studio Monthly Credits](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers) + - [Azure Cosmos DB Free Tier](../free-tier.md) + - Without an Azure active subscription: + - [Try Azure Cosmos DB for free](../try-free.md), a test environment that lasts for 30 days. + - [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) - [Python 2.7 or 3.6+](https://www.python.org/downloads/), with the `python` executable in your `PATH`. - [Visual Studio Code](https://code.visualstudio.com/). - The [Python extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-python.python#overview).-- [Git](https://www.git-scm.com/downloads). 
- [Azure Cosmos DB for NoSQL SDK for Python](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/cosmos/azure-cosmos) ## Database examples The [index_management.py](https://github.com/Azure/azure-sdk-for-python/blob/mas ## Next steps Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md) -* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md) ++- If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md) +- If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md) |
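For orientation, here's a minimal sketch (not taken from the article itself) of the CRUD flow those Python sample files cover, using the `azure-cosmos` package; the endpoint, key, database, container, and item values are placeholders to replace with your own.

```python
# Minimal sketch of the CRUD flow the Python samples walk through.
# ENDPOINT/KEY and the database, container, and item names are placeholders.
from azure.cosmos import CosmosClient, PartitionKey

ENDPOINT = "https://<your-account>.documents.azure.com:443/"
KEY = "<your-account-key>"

client = CosmosClient(ENDPOINT, credential=KEY)

# Create (or reuse) a database and a partitioned container.
database = client.create_database_if_not_exists(id="cosmicworks")
container = database.create_container_if_not_exists(
    id="products",
    partition_key=PartitionKey(path="/categoryId"),
)

# Upsert an item, then read it back by id and partition key value.
item = {"id": "item-1", "categoryId": "bikes", "name": "Touring-1000 Blue"}
container.upsert_item(item)
read_back = container.read_item(item="item-1", partition_key="bikes")
print(read_back["name"])

# Query items within a single partition.
for doc in container.query_items(
    query="SELECT * FROM c WHERE c.categoryId = @cat",
    parameters=[{"name": "@cat", "value": "bikes"}],
    partition_key="bikes",
):
    print(doc["id"])
```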
cosmos-db | Quickstart App Stacks Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-app-stacks-java.md | This quickstart shows you how to build a Java app that connects to a cluster, an ## Prerequisites - An Azure account with an active subscription. If you don't have one, [create an account for free](https://azure.microsoft.com/free).-- A supported [Java Development Kit](/azure/developer/java/fundamentals/java-support-on-azure), version 8, which is included in [Azure Cloud Shell](/azure/cloud-shell/overview).+- A supported [Java Development Kit](/azure/developer/jav). - The [Apache Maven](https://maven.apache.org) build tool. - An Azure Cosmos DB for PostgreSQL cluster. To create a cluster, see [Create a cluster in the Azure portal](quickstart-create-portal.md). public class DemoApplication ## Next steps |
cosmos-db | Quickstart Connect Psql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-connect-psql.md | Last updated 09/28/2022 [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] -This quickstart shows you how to use the [psql](https://www.postgresql.org/docs/current/app-psql.html) connection string in [Azure Cloud Shell](/azure/cloud-shell/overview) to connect to an Azure Cosmos DB for PostgreSQL cluster. +This quickstart shows you how to use the [psql](https://www.postgresql.org/docs/current/app-psql.html) connection string in [Azure Cloud Shell](../../cloud-shell/overview.md) to connect to an Azure Cosmos DB for PostgreSQL cluster. ## Prerequisites Now that you've connected to the cluster, the next step is to create tables and shard them for horizontal scaling. > [!div class="nextstepaction"]-> [Create and distribute tables >](quickstart-distribute-tables.md) +> [Create and distribute tables >](quickstart-distribute-tables.md) |
cosmos-db | Tutorial Private Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/tutorial-private-access.md | them. ## Prerequisites - An Azure account with an active subscription. If you don't have one, [create an account for free](https://azure.microsoft.com/free).-- If you want to run the code locally, [Azure CLI](/cli/azure/install-azure-cli) installed. You can also run the code in [Azure Cloud Shell](/azure/cloud-shell/overview).+- If you want to run the code locally, [Azure CLI](/cli/azure/install-azure-cli) installed. You can also run the code in [Azure Cloud Shell](../../cloud-shell/overview.md). ## Create a virtual network az group delete --resource-group link-demo endpoints](../../private-link/private-endpoint-overview.md) * Learn about [virtual networks](../../virtual-network/concepts-and-best-practices.md)-* Learn about [private DNS zones](../../dns/private-dns-overview.md) +* Learn about [private DNS zones](../../dns/private-dns-overview.md) |
cosmos-db | Reserved Capacity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/reserved-capacity.md | When your reservation expires, your Azure Cosmos DB instances continue to run an You can cancel, exchange, or refund reservations with certain limitations. For more information, see [Self-service exchanges and refunds for Azure Reservations](../cost-management-billing/reservations/exchange-and-refund-azure-reservations.md). +## Exceeding reserved capacity ++When you reserve capacity for your Azure Cosmos DB resources, you are reserving [provisioned throughput](set-throughput.md). If the provisioned throughput is exceeded, requests beyond that provisioning will be rate-limited. For more information, see [provisioned throughput types](how-to-choose-offer.md#overview-of-provisioned-throughput-types). + ## Next steps The reservation discount is applied automatically to the Azure Cosmos DB resources that match the reservation scope and attributes. You can update the scope of the reservation through the Azure portal, PowerShell, Azure CLI, or the API. |
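To make the rate-limiting behavior described above concrete, here's a hedged sketch of detecting and backing off from HTTP 429 responses with the Python `azure-cosmos` SDK; the `container` handle is an assumed, pre-existing client, and in practice the SDK's built-in retry policy usually handles throttling for you.

```python
# Sketch: backing off when provisioned throughput is exhausted and the
# service returns HTTP 429. Assumes an existing azure-cosmos `container`
# client; the SDK already retries throttled requests internally, so
# explicit handling like this is rarely needed in production code.
import time

from azure.cosmos import exceptions


def upsert_with_backoff(container, item, max_attempts=5):
    for _ in range(max_attempts):
        try:
            return container.upsert_item(item)
        except exceptions.CosmosHttpResponseError as err:
            if err.status_code != 429:
                raise  # a real error, not throttling
            # Honor the server-suggested delay when the header is present.
            headers = getattr(err, "headers", None) or {}
            retry_ms = float(headers.get("x-ms-retry-after-ms", 1000))
            time.sleep(retry_ms / 1000.0)
    raise RuntimeError(f"still throttled after {max_attempts} attempts")
```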
cosmos-db | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Cosmos DB description: Lists Azure Policy Regulatory Compliance controls available for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/10/2022 Last updated : 10/12/2022 |
cosmos-db | How To Use C Plus | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-c-plus.md | Follow these links to learn more about Azure Storage and the API for Table in Az * [Introduction to the API for Table](introduction.md) * [List Azure Storage resources in C++](../../storage/common/storage-c-plus-plus-enumeration.md) * [Storage Client Library for C++ reference](https://azure.github.io/azure-storage-cpp)-* [Azure Storage documentation](/azure/storage/) +* [Azure Storage documentation](../../storage/index.yml) |
cost-management-billing | Understand Azure Data Explorer Reservation Charges | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-azure-data-explorer-reservation-charges.md | After you buy an Azure Data Explorer reserved capacity, the reservation discount A reservation discount is on a "*use-it-or-lose-it*" basis. So, if you don't have matching resources for any hour, then you lose a reservation quantity for that hour. You can't carry forward discounts for unused reserved hours. -When you shut down a resource, the reservation discount automatically applies to another matching resource in the specified scope. If no matching resources are found in the specified scope, then the reserved hours are *lost*. +When you shut down, stop, or suspend the Azure Data Explorer cluster, the applicable reservations automatically apply to other matching resources (compute and Azure Data Explorer markup) in the specified scope. If no matching resources are found in the specified scope, then the reserved hours are *lost*. ## Discount for other resources To learn more about Azure reservations, see the following articles: * [Manage Azure reservations](manage-reserved-vm-instance.md) * [Understand reservation usage for your pay-as-you-go subscription](understand-reserved-instance-usage.md) * [Understand reservation usage for your Enterprise enrollment](understand-reserved-instance-usage-ea.md)-* [Understand reservation usage for CSP subscriptions](/partner-center/azure-reservations) +* [Understand reservation usage for CSP subscriptions](/partner-center/azure-reservations) |
data-factory | Ci Cd Github Troubleshoot Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/ci-cd-github-troubleshoot-guide.md | If you are using the old default parameterization template, there is a new way to include global parameters: the default parameterization template should include all values from the global parameter list. #### Resolution-* Use updated [default parameterization template.](/azure/data-factory/continuous-integration-delivery-resource-manager-custom-parameters#default-parameterization-template) as one time migration to new method of including global parameters. This template references to all values in global parameter list. You also have to update the deployment task in the **release pipeline** if you are already overriding the template parameters there. +* Use the updated [default parameterization template](./continuous-integration-delivery-resource-manager-custom-parameters.md#default-parameterization-template) as a one-time migration to the new method of including global parameters. This template references all values in the global parameter list. You also have to update the deployment task in the **release pipeline** if you are already overriding the template parameters there. * Update the template parameter names in CI/CD pipeline if you are already overriding the template parameters (for global parameters). ### Error code: InvalidTemplate For more help with troubleshooting, try the following resources: * [Data Factory feature requests](/answers/topics/azure-data-factory.html) * [Azure videos](https://azure.microsoft.com/resources/videos/index/?sort=newest&services=data-factory) * [Stack overflow forum for Data Factory](https://stackoverflow.com/questions/tagged/azure-data-factory)-* [Twitter information about Data Factory](https://twitter.com/hashtag/DataFactory) +* [Twitter information about Data Factory](https://twitter.com/hashtag/DataFactory) |
data-factory | Concepts Change Data Capture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-change-data-capture.md | To learn more, see [Azure Data Factory overview](introduction.md) or [Azure Syna ## Overview -When you perform data integration and ETL processes in the cloud, your jobs can perform much better and be more effective when you only read the source data that has changed since the last time the pipeline ran, rather than always querying an entire dataset on each run. Executing pipelines that only read the latest changed data is available in many of ADF's source connectors by simply enabling a checkbox property inside the source transformation. Support for full-fidelity CDC, which includes row markers for inserts, upserts, deletes, and updates, as well as rules for resetting the ADF-managed checkpoint are available in several ADF connectors. To easily capture changes and deltas, ADF supports patterns and templates for managing incremental pipelines with user-controlled checkpoints as well, which you'll find in the table below. +When you perform data integration and ETL processes in the cloud, your jobs can perform much better and be more effective when you only read the source data that has changed since the last time the pipeline ran, rather than always querying an entire dataset on each run. ADF provides several ways for you to easily get only the delta data since the last run. 

### Native change data capture in mapping data flow 

-## CDC Connector support 

-| Connector | Full CDC | Incremental CDC | Incremental pipeline pattern | -| :-- | : | : | : | -| [ADLS Gen1](load-azure-data-lake-store.md) | | ✓ | | -| [ADLS Gen2](load-azure-data-lake-storage-gen2.md) | | ✓ | | -| [Azure Blob Storage](connector-azure-blob-storage.md) | | ✓ | | -| [Azure Cosmos DB (SQL API)](connector-azure-cosmos-db.md) | ✓ | ✓ | | -| [Azure Database for MySQL](connector-azure-database-for-mysql.md) | | ✓ | | -| [Azure Database for PostgreSQL](connector-azure-database-for-postgresql.md) | | ✓ | | -| [Azure SQL Database](connector-azure-sql-database.md) | ✓ | ✓ | [✓](tutorial-incremental-copy-portal.md) | -| [Azure SQL Managed Instance](connector-azure-sql-managed-instance.md) | ✓ | ✓ | [✓](tutorial-incremental-copy-change-data-capture-feature-portal.md) | -| [Azure SQL Server](connector-sql-server.md) | ✓ | ✓ | [✓](tutorial-incremental-copy-multiple-tables-portal.md) | -| [Common data model](format-common-data-model.md) | | ✓ | | -| [SAP CDC](connector-sap-change-data-capture.md) | ✓ | ✓ | ✓ | 

+The changed data, including inserted, updated and deleted rows, can be automatically detected and extracted by ADF mapping data flow from the source databases. No timestamp or ID columns are required to identify the changes since it uses the native change data capture technology in the databases. By simply chaining a source transform and a sink transform reference to a database dataset in a mapping data flow, you will see the changes that happened on the source database automatically applied to the target database, so you can easily synchronize data between two tables. You can also add any transformations in between for any business logic to process the delta data. 
+**Supported connectors** +- [SAP CDC](connector-sap-change-data-capture.md) +- [Azure SQL Database](connector-azure-sql-database.md) +- [Azure SQL Server](connector-sql-server.md) +- [Azure SQL Managed Instance](connector-azure-sql-managed-instance.md) +- [Azure Cosmos DB (SQL API)](connector-azure-cosmos-db.md) -ADF makes it super-simple to enable and use CDC. Many of the connectors listed above will enable a checkbox similar to the one shown below from the data flow source transformation. +### Auto incremental extraction in mapping data flow +New or updated rows and files can be automatically detected and extracted by ADF mapping data flow from the source stores. When you want to get delta data from the databases, an incremental column is required to identify the changes. When you want to load only new or updated files from a storage store, ADF mapping data flow simply works from the files' last modified time. ++**Supported connectors** +- [Azure Blob Storage](connector-azure-blob-storage.md) +- [ADLS Gen2](load-azure-data-lake-storage-gen2.md) +- [ADLS Gen1](load-azure-data-lake-store.md) +- [Azure SQL Database](connector-azure-sql-database.md) +- [Azure SQL Server](connector-sql-server.md) +- [Azure SQL Managed Instance](connector-azure-sql-managed-instance.md) +- [Azure Database for MySQL](connector-azure-database-for-mysql.md) +- [Azure Database for PostgreSQL](connector-azure-database-for-postgresql.md) +- [Common data model](format-common-data-model.md) ++### Customer managed delta data extraction in pipeline ++You can always build your own delta data extraction pipeline for all ADF supported data stores: use a lookup activity to get the watermark value stored in an external control table, a copy activity or mapping data flow activity to query the delta data against a timestamp or ID column, and a stored procedure activity to write the new watermark value back to your external control table for the next run (a minimal sketch of this watermark pattern appears after this section). When you want to load only new files from a storage store, you can either delete files every time after they have been moved to the destination successfully, or use the time-partitioned folder or file names or the last modified time to identify the new files. +++## Best Practices ++**Change data capture from databases:** ++- Native change data capture is always recommended as the simplest way to get change data. It also puts much less burden on your source database when ADF extracts the change data for further processing. +- If your database stores are not part of the ADF connector list with native change data capture support, we recommend that you check the auto incremental extraction option, where you only need to input an incremental column to capture the changes. ADF takes care of the rest, including creating a dynamic query for delta loading and managing the checkpoint for each activity run. +- Customer managed delta data extraction in pipeline covers all the ADF supported databases and gives you the flexibility to control everything yourself. ++**Change files capture from file based storages:** ++- When you want to load data from Azure Blob Storage, Azure Data Lake Storage Gen2 or Azure Data Lake Storage Gen1, mapping data flow lets you get only new or updated files with a single click. It is the simplest and recommended way to achieve delta load from these file-based storages in mapping data flow. 
+- You can get more [best practices](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/best-practices-of-how-to-use-adf-copy-activity-to-copy-new-files/ba-p/1532484). +++## Checkpoint ++When you enable the native change data capture or auto incremental extraction options in ADF mapping data flow, ADF helps you manage the checkpoint to make sure each activity run automatically reads only the source data that has changed since the last time the pipeline ran. By default, the checkpoint is coupled with your pipeline and activity name. If you change your pipeline name or activity name, the checkpoint is reset, which causes the next run to start from the beginning or to pick up changes from now on. If you do want to change the pipeline name or activity name but still keep the checkpoint so that changed data is captured from the last run automatically, use your own [Checkpoint key](control-flow-execute-data-flow-activity.md#checkpoint-key) in the data flow activity to achieve that. ++When you debug the pipeline, this feature works the same. The checkpoint is reset when you refresh your browser during the debug run. After you are satisfied with the pipeline result from the debug run, you can publish and trigger the pipeline. The first time you trigger your published pipeline, it automatically restarts from the beginning or gets changes from that point on. ++In the monitoring section, you can always rerun a pipeline. When you do, the changed data is always captured from the previous checkpoint of your selected pipeline run. ++## Tutorials ++The following tutorials show how to get started with change data capture in Azure Data Factory and Azure Synapse Analytics. ++- [SAP CDC tutorial in ADF](sap-change-data-capture-introduction-architecture.md#sap-cdc-capabilities) +- [Incrementally copy data from a source data store to a destination data store tutorials](tutorial-incremental-copy-overview.md) -The "Full CDC" and "Incremental CDC" features are available in both ADF and Synapse data flows and pipelines. In each of those options, ADF manages the checkpoint automatically for you. You can turn on the change data capture feature in the data flow source and you can also reset the checkpoint in the data flow activity. To reset the checkpoint for your CDC pipeline, go into the data flow activity in your pipeline and override the checkpoint key. Connectors in ADF that support "full CDC" also provide automatic tagging of rows as update, insert, delete. ## Next steps |
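As referenced in the customer-managed extraction section above, here's a minimal, self-contained sketch of that high-watermark pattern. `sqlite3` stands in for both the source store and the external control table; in ADF, the three numbered steps map to the Lookup, Copy (or Data Flow), and Stored Procedure activities respectively.

```python
# Self-contained sketch of the high-watermark incremental-load pattern.
# sqlite3 is only a stand-in so the example runs anywhere.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE source(id INTEGER, payload TEXT, modified_at TEXT);
    CREATE TABLE control(watermark TEXT);
    INSERT INTO control VALUES ('2022-10-01T00:00:00');
    INSERT INTO source VALUES
        (1, 'old row',   '2022-09-30T12:00:00'),
        (2, 'new row',   '2022-10-02T08:30:00'),
        (3, 'newer row', '2022-10-03T09:15:00');
""")

# 1) Lookup: read the last watermark from the control table.
(watermark,) = conn.execute("SELECT watermark FROM control").fetchone()

# 2) Copy: extract only rows changed since the watermark.
delta = conn.execute(
    "SELECT id, payload, modified_at FROM source WHERE modified_at > ?",
    (watermark,),
).fetchall()
print(delta)  # rows 2 and 3 only

# 3) Stored procedure: advance the watermark for the next run.
if delta:
    new_watermark = max(row[2] for row in delta)
    conn.execute("UPDATE control SET watermark = ?", (new_watermark,))
    conn.commit()
```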
data-factory | Copy Activity Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-monitoring.md | Copy activity execution details and performance characteristics are also returne | logPath | Path to the session log of skipped data in the blob storage. See [Fault tolerance](copy-activity-overview.md#fault-tolerance). | Text (string) | | executionDetails | More details on the stages the Copy activity goes through and the corresponding steps, durations, configurations, and so on. We don't recommend that you parse this section because it might change. To better understand how it helps you understand and troubleshoot copy performance, refer to [Monitor visually](#monitor-visually) section. | Array | | perfRecommendation | Copy performance tuning tips. See [Performance tuning tips](copy-activity-performance-troubleshooting.md#performance-tuning-tips) for details. | Array |-| billingReference | The billing consumption for the given run. Learn more from [Monitor consumption at activity-run level](plan-manage-costs.md#monitor-consumption-at-activity-run-level). | Object | +| billingReference | The billing consumption for the given run. Learn more from [Monitor consumption at activity-run level](plan-manage-costs.md#monitor-consumption-at-activity-run-level-in-azure-data-factory). | Object | | durationInQueue | Queueing duration in seconds before the copy activity starts to execute. | Object | **Example:** |
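Because the activity output described above is plain JSON, a short sketch of extracting the billing-related properties may help; note the `run_output` literal below is an abbreviated, hypothetical shape for illustration, not a verbatim service response.

```python
# Sketch: pulling billing-related fields out of a copy activity's JSON
# output. `run_output` is an abbreviated, hypothetical example of the
# output shape, not a verbatim service response.
import json

run_output = json.loads("""
{
  "dataRead": 107374182400,
  "copyDuration": 3600,
  "durationInQueue": {"integrationRuntimeQueue": 2},
  "billingReference": {
    "activityType": "DataMovement",
    "billableDuration": [
      {"meterType": "AzureIR", "duration": 0.333, "unit": "DIUHours"}
    ]
  }
}
""")

for meter in run_output["billingReference"]["billableDuration"]:
    print(meter["meterType"], meter["duration"], meter["unit"])
print("queued for",
      run_output["durationInQueue"]["integrationRuntimeQueue"], "seconds")
```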
data-factory | How To Schedule Azure Ssis Integration Runtime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-schedule-azure-ssis-integration-runtime.md | In this section, you will learn to create Azure Automation runbook that executes ### Create your Azure Automation account -If you do not have an Azure Automation account already, create one by following the instructions in this step. For detailed steps, see [Create an Azure Automation account](/azure/automation/quickstarts/create-azure-automation-account-portal) article. As part of this step, you create an **Azure Run As** account (a service principal in your Azure Active Directory) and assign it a **Contributor** role in your Azure subscription. Ensure that it is the same subscription that contains your ADF with Azure SSIS IR. Azure Automation will use this account to authenticate to Azure Resource Manager and operate on your resources. +If you do not have an Azure Automation account already, create one by following the instructions in this step. For detailed steps, see the [Create an Azure Automation account](../automation/quickstarts/create-azure-automation-account-portal.md) article. As part of this step, you create an **Azure Run As** account (a service principal in your Azure Active Directory) and assign it a **Contributor** role in your Azure subscription. Ensure that it is the same subscription that contains your ADF with Azure SSIS IR. Azure Automation will use this account to authenticate to Azure Resource Manager and operate on your resources. 1. Launch **Microsoft Edge** or **Google Chrome** web browser. Currently, ADF UI/app is only supported in Microsoft Edge and Google Chrome web browsers. 2. Sign in to [Azure portal](https://portal.azure.com/). See the following articles from SSIS documentation: - [Deploy, run, and monitor an SSIS package on Azure](/sql/integration-services/lift-shift/ssis-azure-deploy-run-monitor-tutorial) - [Connect to SSIS catalog on Azure](/sql/integration-services/lift-shift/ssis-azure-connect-to-catalog-database) - [Schedule package execution on Azure](/sql/integration-services/lift-shift/ssis-azure-schedule-packages)-- [Connect to on-premises data sources with Windows authentication](/sql/integration-services/lift-shift/ssis-azure-connect-with-windows-auth)+- [Connect to on-premises data sources with Windows authentication](/sql/integration-services/lift-shift/ssis-azure-connect-with-windows-auth) |
data-factory | Plan Manage Costs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/plan-manage-costs.md | Last updated 08/18/2022 This article describes how you plan for and manage costs for Azure Data Factory. -First, at the beginning of the ETL project, you use a combination of the Azure pricing and per-pipeline consumption and pricing calculators to help plan for Azure Data Factory costs before you add any resources for the service to estimate costs. Next, as you add Azure resources, review the estimated costs. After you've started using Azure Data Factory resources, use Cost Management features to set budgets and monitor costs. You can also review forecasted costs and identify spending trends to identify areas where you might want to act. Costs for Azure Data Factory are only a portion of the monthly costs in your Azure bill. Although this article explains how to plan for and manage costs for data factory, you're billed for all Azure services and resources used in your Azure subscription, including the third-party services. +First, at the beginning of the ETL project, you use a combination of the Azure pricing and per-pipeline consumption and pricing calculators to help plan for Azure Data Factory costs before you add any resources for the service to estimate costs. Next, as you add Azure resources, review the estimated costs. After you've started using Azure Data Factory resources, use Cost Management features to set budgets and monitor costs. You can also review forecasted costs and identify spending trends to identify areas where you might want to act. Costs for Azure Data Factory are only a portion of the monthly costs in your Azure bill. Note that this article only explains how to plan for and manage costs for data factory. You're billed for all Azure services and resources used in your Azure subscription, including the third-party services. ## Prerequisites Cost analysis in Cost Management supports most Azure account types, but not all ## Estimate costs before using Azure Data Factory -Use the [ADF pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=data-factory) to get an estimate of the cost of running your ETL workload in Azure Data Factory. To use the calculator, you have to input details such as number of activity runs, number of data integration unit hours, type of compute used for Data Flow, core count, instance count, execution duration, and so on. +Use the [ADF pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=data-factory) to get an estimate of the cost of running your ETL workload in Azure Data Factory. To use the calculator, you have to input details such as the number of activity runs, number of data integration unit hours, type of compute used for Data Flow, core count, instance count, execution duration, and so on. One of the commonly asked questions for the pricing calculator is what values should be used as inputs. During the proof-of-concept phase, you can conduct trial runs using sample datasets to understand the consumption for various ADF meters. Then based on the consumption for the sample dataset, you can project out the consumption for the full dataset and operationalization schedule. One of the commonly asked questions for the pricing calculator is what values sh For example, let's say you need to move 1 TB of data daily from AWS S3 to Azure Data Lake Gen2. 
You can perform a POC of moving 100 GB of data to measure the data ingestion throughput and understand the corresponding billing consumption. -Here is a sample copy activity run detail (your actual mileage will vary based on the shape of your specific dataset, network speeds, egress limits on S3 account, ingress limits on ADLS Gen2, and other factors). +Here's a sample copy activity run detail (your actual mileage will vary based on the shape of your specific dataset, network speeds, egress limits on S3 account, ingress limits on ADLS Gen2, and other factors). :::image type="content" source="media/plan-manage-costs/s3-copy-run-details.png" alt-text="S3 copy run"::: -By leveraging the [consumption monitoring at pipeline-run level](#monitor-consumption-at-pipeline-run-level), you can see the corresponding data movement meter consumption quantities: +By using the [consumption monitoring at pipeline-run level](#monitor-consumption-at-pipeline-run-level-in-azure-data-factory), you can see the corresponding data movement meter consumption quantities: :::image type="content" source="media/plan-manage-costs/s3-copy-pipeline-consumption.png" alt-text="S3 copy pipeline consumption"::: Now you can plug 30 activity runs and 380 DIU-hours into ADF pricing calculator ## Understand the full billing model for Azure Data Factory -Azure Data Factory runs on Azure infrastructure that accrues costs when you deploy new resources. It's important to understand that there could be other additional infrastructure costs that might accrue. +Azure Data Factory runs on Azure infrastructure that accrues costs when you deploy new resources. It's important to understand that other extra infrastructure costs might accrue. ### How you're charged for Azure Data Factory -Azure Data Factory is a serverless and elastic data integration service built for cloud scale. This means there is not a fixed-size compute that you need to plan for peak load; rather you specify how much resource to allocate on demand per operation, which allows you to design the ETL processes in a much more scalable manner. In addition, ADF is billed on a consumption-based plan, which means you only pay for what you use. +Azure Data Factory is a serverless and elastic data integration service built for cloud scale. There isn't a fixed-size compute that you need to plan for peak load; rather you specify how much resource to allocate on demand per operation, which allows you to design the ETL processes in a much more scalable manner. In addition, ADF is billed on a consumption-based plan, which means you only pay for what you use. When you create or use Azure Data Factory resources, you might get charged for the following meters: -- Orchestration Activity Runs - You are charged for it based on the number of activity runs orchestrate.-- Data Integration Unit (DIU) Hours – For copy activities run on Azure Integration Runtime, you are charged based on number of DIU used and execution duration.-- vCore Hours – for data flow execution and debugging, you are charged for based on compute type, number of vCores, and execution duration.+- Orchestration Activity Runs - You're charged based on the number of activity runs orchestrated. +- Data Integration Unit (DIU) Hours – For copy activities run on Azure Integration Runtime, you're charged based on the number of DIUs used and execution duration. +- vCore Hours – for data flow execution and debugging, you're charged based on compute type, number of vCores, and execution duration. 
At the end of your billing cycle, the charges for each meter are summed. Your bill or invoice shows a section for all Azure Data Factory costs. There's a separate line item for each meter. You can pay for Azure Data Factory charges with your Azure Prepayment credit. Ho Azure Data Factory costs can be monitored at the factory, pipeline-run and activity-run levels. -### Monitor costs at factory level +### Monitor costs at factory level with Cost Analysis -As you use Azure resources with Data Factory, you incur costs. Azure resource usage unit costs vary by time intervals (seconds, minutes, hours, and days) or by unit usage (bytes, megabytes, and so on.) As soon as Data Factory use starts, costs are incurred and you can see the costs in [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). +As you use Azure resources with Data Factory, you incur costs. Azure resource usage unit costs vary by time intervals (seconds, minutes, hours, and days) or by unit usage (bytes, megabytes, and so on.) As soon as Data Factory use starts, costs are incurred and you can see the costs in [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md). When you use cost analysis, you view Data Factory costs in graphs and tables for different time intervals. Some examples are by day, current and prior month, and year. You also view costs against budgets and forecasted costs. Switching to longer views over time can help you identify spending trends. And you see where overspending might have occurred. If you've created budgets, you can also easily see where they're exceeded. Here's an example showing costs for just Data Factory. In the preceding example, you see the current cost for the service. Costs by Azure regions (locations) and Data Factory costs by resource group are also shown. From here, you can explore costs on your own. -### Monitor consumption at pipeline-run level +### Monitor costs at pipeline level with Cost Analysis -Depending on the types of activities you have in your pipeline, how much data you are moving and transforming, and the complexity of the transformation, executing a pipeline will spin different billing meters in Azure Data Factory. +In certain cases, you may want a granular breakdown of the cost of operations within your factory, for instance, for charge-back purposes. By integrating with the Azure Billing [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md) platform, Data Factory can separate out billing charges for each pipeline. By **opting in** to Azure Data Factory detailed billing reporting for a factory, you can better understand how much each pipeline is costing you, within the aforementioned factory. -You can view the amount of consumption for different meters for individual pipeline runs in the Azure Data Factory user experience. To open the monitoring experience, select the **Monitor & Manage** tile in the data factory blade of the [Azure portal](https://portal.azure.com/). If you're already in the ADF UX, click on the **Monitor** icon on the left sidebar. The default monitoring view is list of pipeline runs. +You need to opt in for _each_ factory that you want detailed billing for. To turn on the per pipeline detailed billing feature: ++1. Go to the Azure Data Factory portal +1. Under the _Monitor_ tab, select _Factory setting_ in the _General_ section +1. Select _Showing billing report_ by pipeline +1. 
Publish the change +++> [!NOTE] +> The detailed pipeline billing setting is _not_ included in the exported ARM templates from your factory. That means [Continuous Integration and Delivery (CI/CD)](continuous-integration-delivery-improvements.md) will not overwrite billing behaviors for the factory. This allows you to set different billing behaviors for development, test, and production factories. ++Once the feature is enabled, each pipeline will have a separate entry in the billing report: it shows _exactly_ how much each pipeline costs, in the selected time interval. It allows you to identify spending trends, and notice overspending, if any occurred. +++Using the graphing tools of Cost Analysis, you get similar charts and trend lines as shown [above](#monitor-costs-at-factory-level-with-cost-analysis), but for individual pipelines. You also get the summary view by factory name, as the factory name is included in the billing report, allowing for proper filtering when necessary. ++> [!WARNING] +> By opting in to the per pipeline billing setting, there will be one entry for each pipeline in your factory. Please be particularly aware if you have an excessive number of pipelines in the factory, as it may significantly lengthen and complicate your billing report. ++#### Limitations ++The following are known limitations of the per pipeline billing feature. These billing meters aren't filed under the pipeline that incurs them, but instead under a fall-back line item for your factory. ++- Data Factory Operations charges, including Read/Write and Monitoring +- Charges for [Azure Data Factory SQL Server Integration Services (SSIS) nodes](tutorial-deploy-ssis-packages-azure.md) +- If you have [Time to Live (TTL)](concepts-integration-runtime-performance.md#time-to-live) configured for Azure Integration Runtime (Azure IR), Data Flow activities that run on this IR aren't filed under individual pipelines. ++### Monitor consumption at pipeline-run level in Azure Data Factory ++Depending on the types of activities you have in your pipeline, how much data you're moving and transforming, and the complexity of the transformation, executing a pipeline will spin different billing meters in Azure Data Factory. ++You can view the amount of consumption for different meters for individual pipeline runs in the Azure Data Factory user experience. To open the monitoring experience, select the **Monitor & Manage** tile in the data factory blade of the [Azure portal](https://portal.azure.com/). If you're already in the ADF UX, select the **Monitor** icon on the left sidebar. The default monitoring view is a list of pipeline runs. Clicking the **Consumption** button next to the pipeline name will display a pop-up window showing you the consumption for your pipeline run aggregated across all of the activities within the pipeline. Clicking the **Consumption** button next to the pipeline name will display a pop :::image type="content" source="media/plan-manage-costs/pipeline-consumption-details.png" alt-text="Pipeline consumption details"::: 
+The pipeline run consumption view shows you the amount consumed for each ADF meter for the specific pipeline run, but it doesn't show the actual price charged, because the amount billed to you is dependent on the type of Azure account you have and the type of currency used. To view the full list of supported account types, see [Understand Cost Management data](../cost-management-billing/costs/understand-cost-mgt-data.md). ++### Monitor consumption at activity-run level in Azure Data Factory + Once you understand the aggregated consumption at pipeline-run level, there are scenarios where you need to further drill down and identify which is the most costly activity within the pipeline. -To see the consumption at activity-run level, go to your data factory **Author & Monitor** UI. From the **Monitor** tab where you see a list of pipeline runs, click the **pipeline name** link to access the list of activity runs in the pipeline run. Click on the **Output** button next to the activity name and look for **billableDuration** property in the JSON output: +To see the consumption at activity-run level, go to your data factory **Author & Monitor** UI. From the **Monitor** tab where you see a list of pipeline runs, select the **pipeline name** link to access the list of activity runs in the pipeline run. Select the **Output** button next to the activity name and look for the **billableDuration** property in the JSON output: -Here is a sample out from a copy activity run: +Here's a sample output from a copy activity run: :::image type="content" source="media/plan-manage-costs/copy-output.png" alt-text="Copy output"::: -And here is a sample out from a Mapping Data Flow activity run: +And here's a sample output from a Mapping Data Flow activity run: :::image type="content" source="media/plan-manage-costs/dataflow-output.png" alt-text="Dataflow output"::: And here is a sample out from a Mapping Data Flow activity run: You can create [budgets](../cost-management-billing/costs/tutorial-acm-create-budgets.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to manage costs and create [alerts](../cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) that automatically notify stakeholders of spending anomalies and overspending risks. Alerts are based on spending compared to budget and cost thresholds. Budgets and alerts are created for Azure subscriptions and resource groups, so they're useful as part of an overall cost monitoring strategy. -Budgets can be created with filters for specific resources or services in Azure if you want more granularity present in your monitoring. Filters help ensure that you don't accidentally create new resources that cost you additional money. For more information about the filter options available when you create a budget, see [Group and filter options](../cost-management-billing/costs/group-filter.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). +Budgets can be created with filters for specific resources or services in Azure if you want more granularity present in your monitoring. Filters help ensure that you don't accidentally create new resources that cost you extra money. For more information about the filter options available when you create a budget, see [Group and filter options](../cost-management-billing/costs/group-filter.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). 
## Export cost data -You can also [export your cost data](../cost-management-billing/costs/tutorial-export-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to a storage account. This is helpful when you need or others to do additional data analysis for costs. For example, finance teams can analyze the data using Excel or Power BI. You can export your costs on a daily, weekly, or monthly schedule and set a custom date range. Exporting cost data is the recommended way to retrieve cost datasets. +You can also [export your cost data](../cost-management-billing/costs/tutorial-export-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to a storage account. This is helpful when you or others need to do further data analysis of costs. For example, finance teams can analyze the data using Excel or Power BI. You can export your costs on a daily, weekly, or monthly schedule and set a custom date range. Exporting cost data is the recommended way to retrieve cost datasets. ## Next steps |
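To illustrate the extrapolation workflow this article describes (a 100 GB POC scaled to 1 TB daily), here's a small sketch of the arithmetic; the meter quantities echo the article's S3-to-ADLS example, while the per-unit rates are placeholder assumptions, not published prices.

```python
# Sketch: extrapolating POC consumption to a monthly estimate.
# POC figures mirror the article's example (100 GB trial, 1 TB/day in
# production); the unit rates below are PLACEHOLDERS, not real prices --
# always confirm with the ADF pricing calculator for your region.
POC_GB, DAILY_GB, DAYS = 100, 1024, 30

poc_activity_runs = 1    # one copy activity run per pipeline run
poc_diu_hours = 1.2667   # hypothetical DIU-hours consumed by the POC

scale = DAILY_GB / POC_GB
monthly_runs = poc_activity_runs * DAYS
monthly_diu_hours = poc_diu_hours * scale * DAYS

ASSUMED_RATE_PER_1000_RUNS = 1.00  # placeholder $/1,000 activity runs
ASSUMED_RATE_PER_DIU_HOUR = 0.25   # placeholder $/DIU-hour

estimate = (monthly_runs / 1000 * ASSUMED_RATE_PER_1000_RUNS
            + monthly_diu_hours * ASSUMED_RATE_PER_DIU_HOUR)
print(f"{monthly_runs} runs, {monthly_diu_hours:.0f} DIU-hours "
      f"-> ~${estimate:.2f}/month (placeholder rates)")
```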
data-factory | Scenario Ssis Migration Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/scenario-ssis-migration-rules.md | Connection that contains host name may fail, typically because the Azure virtual You can use the following options for the SSIS Integration runtime to access these resources: -- [Join Azure-SSIS IR to a virtual network that connects to on-premises sources](/azure/data-factory/join-azure-ssis-integration-runtime-virtual-network)+- [Join Azure-SSIS IR to a virtual network that connects to on-premises sources](./join-azure-ssis-integration-runtime-virtual-network.md) - Migrate your data to Azure and use Azure resource endpoint. - Use Managed Identity authentication if moving to Azure resources.-- [Use self-hosted IR to connect on-premises sources](/azure/data-factory/self-hosted-integration-runtime-proxy-ssis).+- [Use self-hosted IR to connect on-premises sources](./self-hosted-integration-runtime-proxy-ssis.md). ### [1002]Connection with absolute or UNC path might not be accessible Recommendation You can use the following options for the SSIS Integration runtime to access these resources: -- [Change to %TEMP%](/azure/data-factory/ssis-azure-files-file-shares)-- [Migrate your files to Azure Files](/azure/data-factory/ssis-azure-files-file-shares)-- [Join Azure-SSIS IR to a virtual network that connects to on-premises sources](/azure/data-factory/join-azure-ssis-integration-runtime-virtual-network).-- [Use self-hosted IR to connect on-premises sources](/azure/data-factory/self-hosted-integration-runtime-proxy-ssis).+- [Change to %TEMP%](./ssis-azure-files-file-shares.md) +- [Migrate your files to Azure Files](./ssis-azure-files-file-shares.md) +- [Join Azure-SSIS IR to a virtual network that connects to on-premises sources](./join-azure-ssis-integration-runtime-virtual-network.md). +- [Use self-hosted IR to connect on-premises sources](./self-hosted-integration-runtime-proxy-ssis.md). ### [1003]Connection with Windows authentication may fail Azure-SSIS IR only includes built-in providers or drivers by default. Without cu Recommendation -[Customize Azure-SSIS integration runtime](/azure/data-factory/how-to-configure-azure-ssis-ir-custom-setup) to install non built-in provider or driver. +[Customize Azure-SSIS integration runtime](./how-to-configure-azure-ssis-ir-custom-setup.md) to install a non-built-in provider or driver. ### [1005]Analysis Services Connection Manager cannot use an account with MFA enabled Recommendation You can use the following methods to make Windows environment variables work in the SSIS Integration runtime: -- [Customize SSIS integration runtime setup](/azure/data-factory/how-to-configure-azure-ssis-ir-custom-setup) with Windows environment variables.+- [Customize SSIS integration runtime setup](./how-to-configure-azure-ssis-ir-custom-setup.md) with Windows environment variables. - [Use Package or Project Parameter](/sql/integration-services/integration-services-ssis-package-and-project-parameters). ### [1007]SQL Server Native Client (SNAC) OLE DB driver is deprecated The component is only supported in Azure SSIS integration runtime enterprise edition Recommendation -[Configure Azure SSIS integration runtime to enterprise edition](/azure/data-factory/how-to-configure-azure-ssis-ir-enterprise-edition). +[Configure Azure SSIS integration runtime to enterprise edition](./how-to-configure-azure-ssis-ir-enterprise-edition.md). 
### [2002]ORC and Parquet file format aren't by default enabled ORC and Parquet file format need JRE, which isn't by default installed in Azure Recommendation -Install compatible JRE by [customize setup for the Azure-SSIS integration runtime](/azure/data-factory/how-to-configure-azure-ssis-ir-custom-setup). +Install a compatible JRE by [customizing setup for the Azure-SSIS integration runtime](./how-to-configure-azure-ssis-ir-custom-setup.md). ### [2003]Third party component isn't by default enabled Recommendation - Contact the third party to get an SSIS Integration runtime compatible version. -- For in-house or open source component, [customize Azure-SSIS integration runtime](/azure/data-factory/how-to-configure-azure-ssis-ir-custom-setup) to install necessary SQL Server 2017 compatible components.+- For in-house or open-source components, [customize the Azure-SSIS integration runtime](./how-to-configure-azure-ssis-ir-custom-setup.md) to install the necessary SQL Server 2017 compatible components. ### [2004]Azure Blob source and destination is discovered Azure SSIS integration runtime is provisioned with built-in log providers by default Recommendation -[Customize Azure-SSIS integration runtime](/azure/data-factory/how-to-configure-azure-ssis-ir-custom-setup) to install non built-in provider or driver. +[Customize Azure-SSIS integration runtime](./how-to-configure-azure-ssis-ir-custom-setup.md) to install a non-built-in provider or driver. ### [3001]Absolute or UNC path is discovered in Execute Process Task Recommendation You can use below options for SSIS Integration runtime to launch your executable(s): -- [Migrate your executable(s) to Azure Files](/azure/data-factory/ssis-azure-files-file-shares).-- [Join Azure-SSIS IR to a virtual network](/azure/data-factory/join-azure-ssis-integration-runtime-virtual-network) that connects to on-premises sources.-- If necessary, [customize setup script to install your executable(s)](/azure/data-factory/how-to-configure-azure-ssis-ir-custom-setup) in advance when starting IR.+- [Migrate your executable(s) to Azure Files](./ssis-azure-files-file-shares.md). +- [Join Azure-SSIS IR to a virtual network](./join-azure-ssis-integration-runtime-virtual-network.md) that connects to on-premises sources. +- If necessary, [customize setup script to install your executable(s)](./how-to-configure-azure-ssis-ir-custom-setup.md) in advance when starting IR. ### [4001]Absolute or UNC configuration path is discovered in package configuration Recommendation You can use below options for SSIS Integration runtime to access these resources: -- [Migrate your files to Azure Files](/azure/data-factory/ssis-azure-files-file-shares)-- [Join Azure-SSIS IR to a virtual network that connects to on-premises sources](/azure/data-factory/join-azure-ssis-integration-runtime-virtual-network).-- [Use self-hosted IR to connect on-premises sources](/azure/data-factory/self-hosted-integration-runtime-proxy-ssis).+- [Migrate your files to Azure Files](./ssis-azure-files-file-shares.md) (see the credential sketch below) +- [Join Azure-SSIS IR to a virtual network that connects to on-premises sources](./join-azure-ssis-integration-runtime-virtual-network.md). +- [Use self-hosted IR to connect on-premises sources](./self-hosted-integration-runtime-proxy-ssis.md).
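For the Azure Files option in the list above, a package usually reaches the share through its UNC path once credentials are persisted on the integration runtime node, typically from a custom setup script running in a Windows command shell. A minimal sketch of that credential step follows; the storage account name, share path, and key are placeholders.

```console
REM Persist credentials for the Azure Files share so packages and configurations
REM can be loaded from a UNC path such as
REM \\mystorageaccount.file.core.windows.net\myshare\package.dtsConfig
cmdkey /add:mystorageaccount.file.core.windows.net /user:azure\mystorageaccount /pass:<storage-account-key>
```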
### [4002]Registry entry is discovered in package configuration You can use below options: Additional Information -[Access Control for Sensitive Data in Packages](/sql/integration-services/security/access-control-for-sensitive-data-in-packages) +[Access Control for Sensitive Data in Packages](/sql/integration-services/security/access-control-for-sensitive-data-in-packages) |
data-factory | Transform Data Using Machine Learning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-machine-learning.md | Last updated 09/22/2022 > [!NOTE] > Since Machine Learning Studio (classic) resources can no longer be created after 1 Dec, 2021, users are encouraged to use [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/) with the [Machine Learning Execute Pipeline activity](transform-data-machine-learning-service.md) rather than using the Batch Execution activity to execute Machine Learning Studio (classic) batches. -[ML Studio (classic)](/azure/machine-learning/) enables you to build, test, and deploy predictive analytics solutions. From a high-level point of view, it is done in three steps: +[ML Studio (classic)](../machine-learning/index.yml) enables you to build, test, and deploy predictive analytics solutions. From a high-level point of view, it is done in three steps: 1. **Create a training experiment**. You do this step by using the ML Studio (classic). ML Studio (classic) is a collaborative visual development environment that you use to train and test a predictive analytics model using training data. 2. **Convert it to a predictive experiment**. Once your model has been trained with existing data and you are ready to use it to score new data, you prepare and streamline your experiment for scoring. 3. **Deploy it as a web service**. You can publish your scoring experiment as an Azure web service. You can send data to your model via this web service end point and receive result predictions from the model. ### Using Machine Learning Studio (classic) with Azure Data Factory or Synapse Analytics-Azure Data Factory and Synapse Analytics enable you to easily create pipelines that use a published [Machine Learning Studio (classic)](/azure/machine-learning) web service for predictive analytics. Using the **Batch Execution Activity** in a pipeline, you can invoke Machine Learning Studio (classic) web service to make predictions on the data in batch. +Azure Data Factory and Synapse Analytics enable you to easily create pipelines that use a published [Machine Learning Studio (classic)](../machine-learning/index.yml) web service for predictive analytics. Using the **Batch Execution Activity** in a pipeline, you can invoke Machine Learning Studio (classic) web service to make predictions on the data in batch. Over time, the predictive models in the Machine Learning Studio (classic) scoring experiments need to be retrained using new input datasets. You can retrain a model from a pipeline by doing the following steps: See the following articles that explain how to transform data in other ways: * [Hadoop Streaming activity](transform-data-using-hadoop-streaming.md) * [Spark activity](transform-data-using-spark.md) * [.NET custom activity](transform-data-using-dotnet-custom-activity.md)-* [Stored procedure activity](transform-data-using-stored-procedure.md) +* [Stored procedure activity](transform-data-using-stored-procedure.md) |
data-factory | Data Factory Azure Ml Batch Execution Activity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-azure-ml-batch-execution-activity.md | Last updated 10/22/2021 > This article applies to version 1 of Data Factory. If you are using the current version of the Data Factory service, see [transform data using machine learning in Data Factory](../transform-data-using-machine-learning.md). ### Machine Learning Studio (classic)-[ML Studio (classic)](/azure/machine-learning/) enables you to build, test, and deploy predictive analytics solutions. From a high-level point of view, it is done in three steps: +[ML Studio (classic)](../../machine-learning/index.yml) enables you to build, test, and deploy predictive analytics solutions. From a high-level point of view, it is done in three steps: 1. **Create a training experiment**. You do this step by using ML Studio (classic). Studio (classic) is a collaborative visual development environment that you use to train and test a predictive analytics model using training data. 2. **Convert it to a predictive experiment**. Once your model has been trained with existing data and you are ready to use it to score new data, you prepare and streamline your experiment for scoring. You can also use [Data Factory Functions](data-factory-functions-variables.md) i [adf-build-1st-pipeline]: data-factory-build-your-first-pipeline.md -[azure-machine-learning]: https://azure.microsoft.com/services/machine-learning/ +[azure-machine-learning]: https://azure.microsoft.com/services/machine-learning/ |
data-factory | Data Factory Data Processing Using Batch | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-data-processing-using-batch.md | After you process data, you can consume it with online tools such as Power BI. H * [Azure and Power BI: Basic overview](https://powerbi.microsoft.com/documentation/powerbi-azure-and-power-bi/) ## References-* [Azure Data Factory](/azure/data-factory/) +* [Azure Data Factory](../index.yml) * [Introduction to the Data Factory service](data-factory-introduction.md) * [Get started with Data Factory](data-factory-build-your-first-pipeline.md) * [Use custom activities in a Data Factory pipeline](data-factory-use-custom-activities.md)-* [Azure Batch](/azure/batch/) +* [Azure Batch](../../batch/index.yml) * [Basics of Batch](/azure/azure-sql/database/sql-database-paas-overview) * [Overview of Batch features](../../batch/batch-service-workflow-features.md)) After you process data, you can consume it with online tools such as Power BI. H * [Get started with the Batch client library for .NET](../../batch/quick-run-dotnet.md) [batch-explorer]: https://github.com/Azure/azure-batch-samples/tree/master/CSharp/BatchExplorer-[batch-explorer-walkthrough]: /archive/blogs/windowshpc/azure-batch-explorer-sample-walkthrough +[batch-explorer-walkthrough]: /archive/blogs/windowshpc/azure-batch-explorer-sample-walkthrough |
data-lake-analytics | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/10/2022 Last updated : 10/12/2022 |
data-lake-store | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Storage Gen1 description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/10/2022 Last updated : 10/12/2022 |
databox-online | Azure Stack Edge Gpu Deploy Iot Edge Linux Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md | To deploy and run an IoT Edge module on your Ubuntu VM, see the steps in [Deploy To deploy Nvidia's DeepStream module, see [Deploy the Nvidia DeepStream module on Ubuntu VM on Azure Stack Edge Pro with GPU](azure-stack-edge-deploy-nvidia-deepstream-module.md). -To deploy NVIDIA DIGITS, see [Enable a GPU in a prefabricated NVIDIA module](/azure/iot-edge/configure-connect-verify-gpu?view=iotedge-2020-11&preserve-view=true#enable-a-gpu-in-a-prefabricated-nvidia-module). +To deploy NVIDIA DIGITS, see [Enable a GPU in a prefabricated NVIDIA module](../iot-edge/configure-connect-verify-gpu.md?preserve-view=true&view=iotedge-2020-11#enable-a-gpu-in-a-prefabricated-nvidia-module). |
databox-online | Azure Stack Edge Gpu Deploy Virtual Machine Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-portal.md | -> You will need to enable multifactor authentication for the user who manages the VMs and images that are deployed on your device from the cloud. The cloud operations will fail if the user doesn't have multifactor authentication enabled. For steps to enable multifactor authentication, see [Manage authentication methods for Azure AD Multi-Factor Authentication](/azure/active-directory/authentication/howto-mfa-userdevicesettings). +> You will need to enable multifactor authentication for the user who manages the VMs and images that are deployed on your device from the cloud. The cloud operations will fail if the user doesn't have multifactor authentication enabled. For steps to enable multifactor authentication, see [Manage authentication methods for Azure AD Multi-Factor Authentication](../active-directory/authentication/howto-mfa-userdevicesettings.md). ## VM deployment workflow Follow these steps to connect to a Windows VM. - [Deploy a GPU VM](azure-stack-edge-gpu-deploy-gpu-virtual-machine.md) - [Troubleshoot VM deployment](azure-stack-edge-gpu-troubleshoot-virtual-machine-provisioning.md) - [Monitor VM activity on your device](azure-stack-edge-gpu-monitor-virtual-machine-activity.md)-- [Monitor CPU and memory on a VM](azure-stack-edge-gpu-monitor-virtual-machine-metrics.md)-+- [Monitor CPU and memory on a VM](azure-stack-edge-gpu-monitor-virtual-machine-metrics.md) |
databox | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Data Box description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Box. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/10/2022 Last updated : 10/12/2022 |
ddos-protection | Ddos Protection Reference Architectures | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-reference-architectures.md | There are many ways to implement an N-tier architecture. The following diagrams ### PaaS web application -This reference architecture shows running an Azure App Service application in a single region. This architecture shows a set of proven practices for a web application that uses [Azure App Service](/azure/app-service/) and [Azure SQL Database](/azure/sql-database/). +This reference architecture shows an Azure App Service application running in a single region, along with a set of proven practices for a web application that uses [Azure App Service](../app-service/index.yml) and [Azure SQL Database](/azure/sql-database/). A standby region is set up for failover scenarios. Azure Traffic Manager routes incoming requests to Application Gateway in one of the regions. During normal operations, it routes requests to Application Gateway in the active region. If that region becomes unavailable, Traffic Manager fails over to Application Gateway in the standby region. For more information about hub-and-spoke topology, see [Hub-spoke network topolo ## Next steps -- Learn how to [create a DDoS protection plan](manage-ddos-protection.md).+- Learn how to [create a DDoS protection plan](manage-ddos-protection.md). |
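As a quick sketch of that next step, the plan can also be created with the Azure CLI; the resource group, plan, and virtual network names below are placeholders, and `--vnets` assumes the virtual network lives in the same resource group.

```console
# Create a DDoS protection plan and associate it with an existing virtual network
az network ddos-protection create \
    --resource-group MyResourceGroup \
    --name MyDdosProtectionPlan \
    --vnets MyVNet
```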
defender-for-cloud | Apply Security Baseline | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/apply-security-baseline.md | Title: Harden your Windows and Linux OS with Azure security baseline and Microsoft Defender for Cloud -description: Learn how Microsoft Defender for Cloud uses the guest configuration to compare your OS hardening with the guidance from Microsoft Cloud Security Benchmark +description: Learn how Microsoft Defender for Cloud uses the guest configuration to compare your OS hardening with the guidance from Microsoft cloud security benchmark Last updated 11/09/2021 To reduce a machine's attack surface and avoid known risks, it's important to configure the operating system (OS) as securely as possible. -The Microsoft Cloud Security Benchmark has guidance for OS hardening which has led to security baseline documents for [Windows](../governance/policy/samples/guest-configuration-baseline-windows.md) and [Linux](../governance/policy/samples/guest-configuration-baseline-linux.md). +The Microsoft cloud security benchmark has guidance for OS hardening which has led to security baseline documents for [Windows](../governance/policy/samples/guest-configuration-baseline-windows.md) and [Linux](../governance/policy/samples/guest-configuration-baseline-linux.md). Use the security recommendations described in this article to assess the machines in your environment and: Microsoft Defender for Cloud includes two recommendations that check whether the - For **Windows** machines, [Vulnerabilities in security configuration on your Windows machines should be remediated (powered by Guest Configuration)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/8c3d9ad0-3639-4686-9cd2-2b2ab2609bda) compares the configuration with the [Windows security baseline](../governance/policy/samples/guest-configuration-baseline-windows.md). - For **Linux** machines, [Vulnerabilities in security configuration on your Linux machines should be remediated (powered by Guest Configuration)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1f655fb7-63ca-4980-91a3-56dbc2b715c6) compares the configuration with the [Linux security baseline](../governance/policy/samples/guest-configuration-baseline-linux.md). -These recommendations use the guest configuration feature of Azure Policy to compare the OS configuration of a machine with the baseline defined in the [Microsoft Cloud Security Benchmark](/security/benchmark/azure/overview). +These recommendations use the guest configuration feature of Azure Policy to compare the OS configuration of a machine with the baseline defined in the [Microsoft cloud security benchmark](/security/benchmark/azure/overview). ## Compare machines in your subscriptions with the OS security baselines To learn more about these configuration settings, see: - [Windows security baseline](../governance/policy/samples/guest-configuration-baseline-windows.md) - [Linux security baseline](../governance/policy/samples/guest-configuration-baseline-linux.md)-- [Microsoft Cloud Security Benchmark](/security/benchmark/azure/overview)+- [Microsoft cloud security benchmark](/security/benchmark/azure/overview) |
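To see where machines stand against these two baseline recommendations at scale, one option is to query Azure Resource Graph. The sketch below assumes the `resource-graph` Azure CLI extension; the two assessment keys are the ones embedded in the recommendation links above.

```console
# List machines assessed against the Windows and Linux security baseline recommendations
az graph query -q "securityresources
| where type == 'microsoft.security/assessments'
| where name in ('8c3d9ad0-3639-4686-9cd2-2b2ab2609bda', '1f655fb7-63ca-4980-91a3-56dbc2b715c6')
| project resourceId = tostring(properties.resourceDetails.Id), status = tostring(properties.status.code)"
```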
defender-for-cloud | Attack Path Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/attack-path-reference.md | To learn about how to respond to these attack paths, see [Identify and remediate |--|--| | Internet exposed VM has high severity vulnerabilities | Virtual machine '\[MachineName]' is reachable from the internet and has high severity vulnerabilities \[RCE] | | Internet exposed VM has high severity vulnerabilities and high permission to a subscription | Virtual machine '\[MachineName]' is reachable from the internet, has high severity vulnerabilities \[RCE] and \[IdentityDescription] with \[PermissionType] permission to subscription '\[SubscriptionName]' |-| Internet exposed VM has high severity vulnerabilities and read permission to a data store with sensitive data | Virtual machine '\[MachineName]' is reachable from the internet, has high severity vulnerabilities \[RCE] and \[IdentityDescription] with read permission to \[DatabaseType] '\[DatabaseName]' containing sensitive data. For more details, you can learn how to [prioritize security actions by data sensitivity](/azure/defender-for-cloud/information-protection). | +| Internet exposed VM has high severity vulnerabilities and read permission to a data store with sensitive data | Virtual machine '\[MachineName]' is reachable from the internet, has high severity vulnerabilities \[RCE] and \[IdentityDescription] with read permission to \[DatabaseType] '\[DatabaseName]' containing sensitive data. For more details, you can learn how to [prioritize security actions by data sensitivity](./information-protection.md). | | Internet exposed VM has high severity vulnerabilities and read permission to a data store | Virtual machine '\[MachineName]' is reachable from the internet, has high severity vulnerabilities \[RCE] and \[IdentityDescription] with read permission to \[DatabaseType] '\[DatabaseName]'. | | Internet exposed VM has high severity vulnerabilities and read permission to a Key Vault | Virtual machine '\[MachineName]' is reachable from the internet, has high severity vulnerabilities \[RCE] and \[IdentityDescription] with read permission to Key Vault '\[KVName]' | | VM has high severity vulnerabilities and high permission to a subscription | Virtual machine '\[MachineName]' has high severity vulnerabilities \[RCE] and has high permission to subscription '\[SubscriptionName]' |-| VM has high severity vulnerabilities and read permission to a data store with sensitive data | Virtual machine '\[MachineName]' has high severity vulnerabilities \[RCE] and \[IdentityDescription] with read permission to \[DatabaseType] '\[DatabaseName]' containing sensitive data. For more details, you can learn how to [prioritize security actions by data sensitivity](/azure/defender-for-cloud/information-protection). | +| VM has high severity vulnerabilities and read permission to a data store with sensitive data | Virtual machine '\[MachineName]' has high severity vulnerabilities \[RCE] and \[IdentityDescription] with read permission to \[DatabaseType] '\[DatabaseName]' containing sensitive data. For more details, you can learn how to [prioritize security actions by data sensitivity](./information-protection.md). 
| | VM has high severity vulnerabilities and read permission to a Key Vault | Virtual machine '\[MachineName]' has high severity vulnerabilities \[RCE] and \[IdentityDescription] with read permission to Key Vault '\[KVName]' | | VM has high severity vulnerabilities and read permission to a data store | Virtual machine '\[MachineName]' has high severity vulnerabilities \[RCE] and \[IdentityDescription] with read permission to \[DatabaseType] '\[DatabaseName]' | To learn about how to respond to these attack paths, see [Identify and remediate | Internet exposed EC2 instance has high severity vulnerabilities and high permission to an account | AWS EC2 instance '\[EC2Name]' is reachable from the internet, has high severity vulnerabilities\[RCE] and has '\[permission]' permission to account '\[AccountName]' | | Internet exposed EC2 instance has high severity vulnerabilities and read permission to a DB | AWS EC2 instance '\[MachineName]' is reachable from the internet, has high severity vulnerabilities\[RCE] and has '\[permission]' permission to DB '\[DatabaseName]'| | Internet exposed EC2 instance has high severity vulnerabilities and read permission to S3 bucket | Option 1 <br> AWS EC2 instance '\[MachineName]' is reachable from the internet, has high severity vulnerabilities\[RCE] and has IAM role attached with '\[Rolepermission]' permission via IAM policy to S3 bucket '\[BucketName]' <br> <br> Option 2 <br> AWS EC2 instance '\[MachineName]' is reachable from the internet, has high severity vulnerabilities\[RCE] and has IAM role attached with '\[S3permission]' permission via bucket policy to S3 bucket '\[BucketName]' <br> <br> Option 3 <br> AWS EC2 instance '\[MachineName]' is reachable from the internet, has high severity vulnerabilities\[RCE] and has IAM role attached with '\[Rolepermission]' permission via IAM policy and '\[S3permission]' permission via bucket policy to S3 bucket '\[BucketName]'|-| Internet exposed EC2 instance has high severity vulnerabilities and read permission to a S3 bucket with sensitive data | Option 1 <br> AWS EC2 instance '\[MachineName]' is reachable from the internet, has high severity vulnerabilities\[RCE] and has IAM role attached with '\[Rolepermission]' permission via IAM policy to S3 bucket '\[BucketName]' containing sensitive data <br> <br> Option 2 <br> AWS EC2 instance '\[MachineName]' is reachable from the internet, has high severity vulnerabilities\[RCE] and has IAM role attached with '\[S3permission]' permission via bucket policy to S3 bucket '\[BucketName]' containing sensitive data <br> <br> Option 3 <br> AWS EC2 instance '\[MachineName]' is reachable from the internet, has high severity vulnerabilities\[RCE] and has IAM role attached with '\[Rolepermission]' permission via IAM policy and '\[S3permission] permission via bucket policy to S3 bucket '\[BucketName]' containing sensitive data <br><br> . For more details, you can learn how to [prioritize security actions by data sensitivity](/azure/defender-for-cloud/information-protection). 
| +| Internet exposed EC2 instance has high severity vulnerabilities and read permission to a S3 bucket with sensitive data | Option 1 <br> AWS EC2 instance '\[MachineName]' is reachable from the internet, has high severity vulnerabilities\[RCE] and has IAM role attached with '\[Rolepermission]' permission via IAM policy to S3 bucket '\[BucketName]' containing sensitive data <br> <br> Option 2 <br> AWS EC2 instance '\[MachineName]' is reachable from the internet, has high severity vulnerabilities\[RCE] and has IAM role attached with '\[S3permission]' permission via bucket policy to S3 bucket '\[BucketName]' containing sensitive data <br> <br> Option 3 <br> AWS EC2 instance '\[MachineName]' is reachable from the internet, has high severity vulnerabilities\[RCE] and has IAM role attached with '\[Rolepermission]' permission via IAM policy and '\[S3permission] permission via bucket policy to S3 bucket '\[BucketName]' containing sensitive data <br><br> . For more details, you can learn how to [prioritize security actions by data sensitivity](./information-protection.md). | | Internet exposed EC2 instance has high severity vulnerabilities and read permission to a KMS | Option 1 <br> AWS EC2 instance '\[MachineName]' is reachable from the internet, has high severity vulnerabilities\[RCE] and has IAM role attached with '\[Rolepermission]' permission via IAM policy to AWS Key Management Service (KMS) '\[KeyName]' <br> <br> Option 2 <br> AWS EC2 instance '\[MachineName]' is reachable from the internet, has vulnerabilities allowing remote code execution and has IAM role attached with '\[Keypermission]' permission via AWS Key Management Service (KMS) policy to key '\[KeyName]' <br> <br> Option 3 <br> AWS EC2 instance '\[MachineName]' is reachable from the internet, has vulnerabilities allowing remote code execution and has IAM role attached with '\[Rolepermission]' permission via IAM policy and '\[Keypermission] permission via AWS Key Management Service (KMS) policy to key '\[KeyName]' | | Internet exposed EC2 instance has high severity vulnerabilities | AWS EC2 instance '\[EC2Name]' is reachable from the internet and has high severity vulnerabilities\[RCE] | To learn about how to respond to these attack paths, see [Identify and remediate | Attack Path Display Name | Attack Path Description | |--|--|-| Internet exposed AWS S3 Bucket with sensitive data is publicly accessible | S3 bucket '\[BucketName]' with sensitive data is reachable from the internet and allows public read access without authorization required. For more details, you can learn how to [prioritize security actions by data sensitivity](/azure/defender-for-cloud/information-protection). | +| Internet exposed AWS S3 Bucket with sensitive data is publicly accessible | S3 bucket '\[BucketName]' with sensitive data is reachable from the internet and allows public read access without authorization required. For more details, you can learn how to [prioritize security actions by data sensitivity](./information-protection.md). | ### Azure containers To learn about how to respond to these attack paths, see [Identify and remediate | Insight | Description | Supported entities | |--|--|--| | Exposed to the internet | Indicates that a resource is exposed to the internet. Supports port filtering | Azure virtual machine, AWS EC2, Azure storage account, Azure SQL server, Azure Cosmos DB, AWS S3, Kubernetes pod. 
|-| Contains sensitive data | Indicates that a resource contains sensitive data based on Microsoft Purview scan and applicable only if Microsoft Purview is enabled. For more details, you can learn how to [prioritize security actions by data sensitivity](/azure/defender-for-cloud/information-protection). | Azure SQL Server, Azure Storage Account, AWS S3 bucket. | +| Contains sensitive data | Indicates that a resource contains sensitive data based on Microsoft Purview scan and applicable only if Microsoft Purview is enabled. For more details, you can learn how to [prioritize security actions by data sensitivity](./information-protection.md). | Azure SQL Server, Azure Storage Account, AWS S3 bucket. | | Has tags | List the resource tags of the cloud resource | All Azure and AWS resources. | | Installed software | List all software installed on the machine. This is applicable only for VMs that have Threat and vulnerability management integration with Defender for Cloud enabled and are connected to Defender for Cloud. | Azure virtual machine, AWS EC2 | | Allows public access | Indicates that a public read access is allowed to the data store with no authorization required | Azure storage account, AWS S3 bucket | To learn about how to respond to these attack paths, see ## Next steps For related information, see the following:-- [What are the Cloud Security Graph, Attack Path Analysis, and the Cloud Security Explorer?](concept-attack-path.md)+- [What are the cloud security graph, attack path analysis, and the cloud security explorer?](concept-attack-path.md) - [Identify and remediate attack paths](how-to-manage-attack-path.md)-- [Cloud Security Explorer](how-to-manage-cloud-security-explorer.md)+- [Cloud security explorer](how-to-manage-cloud-security-explorer.md) |
defender-for-cloud | Auto Deploy Azure Monitoring Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/auto-deploy-azure-monitoring-agent.md | Before you deploy AMA with Defender for Cloud, you must have the following prerequisites: - [Onboard your AWS connector](quickstart-onboard-aws.md) and auto provision Azure Arc. - [Onboard your GCP connector](quickstart-onboard-gcp.md) and auto provision Azure Arc. - Other clouds and on-premises machines- - [Install Azure Arc](/azure/azure-arc/servers/learn/quick-enable-hybrid-vm). + - [Install Azure Arc](../azure-arc/servers/learn/quick-enable-hybrid-vm.md). - Make sure the Defender plans that you want the Azure Monitor Agent to support are enabled: - [Enable Defender for Servers Plan 2 on Azure and on-premises VMs](enable-enhanced-security.md) - [Enable Defender plans on the subscriptions for your AWS VMs](quickstart-onboard-aws.md) To deploy the Azure Monitor Agent with Defender for Cloud: By default: - The Azure Monitor Agent is installed on all existing machines in the selected subscription, and on all new machines created in the subscription.- - The Log Analytics agent isn't uninstalled from machines that already have it installed. You can [leave the Log Analytics agent](#impact-of-running-with-both-the-log-analytics-and-azure-monitor-agents) on the machine, or you can manually [remove the Log Analytics agent](/azure/azure-monitor/agents/azure-monitor-agent-migration) if you don't require it for other protections. + - The Log Analytics agent isn't uninstalled from machines that already have it installed. You can [leave the Log Analytics agent](#impact-of-running-with-both-the-log-analytics-and-azure-monitor-agents) on the machine, or you can manually [remove the Log Analytics agent](../azure-monitor/agents/azure-monitor-agent-migration.md) if you don't require it for other protections. - The agent sends data to the default workspace for the subscription. You can also [configure a custom workspace](#configure-custom-destination-log-analytics-workspace) to send data to. - You can't enable [collection of additional security events](#additional-security-events-collection). You can run both the Log Analytics and Azure Monitor Agents on the same machine, When you enable Defender for Servers Plan 2, Defender for Cloud decides which agent to provision. In most cases, the default is the Log Analytics agent. -Learn more about [migrating to the Azure Monitor Agent](/azure/azure-monitor/agents/azure-monitor-agent-migration). +Learn more about [migrating to the Azure Monitor Agent](../azure-monitor/agents/azure-monitor-agent-migration.md). ## Custom configurations To configure a custom destination workspace for the Azure Monitor Agent: The Azure Monitor Agent requires Log analytics workspace solutions. These solutions are automatically installed when you auto-provision the Azure Monitor Agent with the default workspace. -The required [Log Analytics workspace solutions](/azure/azure-monitor/insights/solutions) for the data that you're collecting are: +The required [Log Analytics workspace solutions](../azure-monitor/insights/solutions.md) for the data that you're collecting are: - Security posture management (CSPM) - **SecurityCenterFree solution** - Defender for Servers Plan 2 - **Security solution** The Azure Monitor Agent requires additional extensions. The ASA extension, which When you auto-provision the Log Analytics agent in Defender for Cloud, you can choose to collect additional security events to the workspace. 
When you auto-provision the Azure Monitor agent in Defender for Cloud, the option to collect additional security events to the workspace isn't available. Defender for Cloud doesn't rely on these security events, but they can be helpful for investigations through Microsoft Sentinel. -If you want to collect security events when you auto-provision the Azure Monitor Agent, you can create a [Data Collection Rule](/azure/azure-monitor/essentials/data-collection-rule-overview) to collect the required events. +If you want to collect security events when you auto-provision the Azure Monitor Agent, you can create a [Data Collection Rule](../azure-monitor/essentials/data-collection-rule-overview.md) to collect the required events (a minimal CLI sketch is shown below). As with Log Analytics workspaces, Defender for Cloud users are eligible for [500-MB of free data](enhanced-security-features-overview.md#faqpricing-and-billing) daily on defined data types that include security events. Now that you've enabled the Azure Monitor Agent, check out the features that are supported: - [Endpoint protection assessment](endpoint-protection-recommendations-technical.md) - [Adaptive application controls](adaptive-application-controls.md) - [Fileless attack detection](defender-for-servers-introduction.md#plan-features)-- [File Integrity Monitoring](file-integrity-monitoring-enable-ama.md)+- [File Integrity Monitoring](file-integrity-monitoring-enable-ama.md) |
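For reference, the agent and a security-events data collection rule can also be wired up manually with the Azure CLI. This is a sketch, assuming the `monitor-control-service` extension and a prepared DCR definition file; every resource name and ID below is a placeholder.

```console
# Install the Azure Monitor Agent on an existing Windows VM
az vm extension set \
    --resource-group MyResourceGroup \
    --vm-name MyVm \
    --name AzureMonitorWindowsAgent \
    --publisher Microsoft.Azure.Monitor \
    --enable-auto-upgrade true

# Create a data collection rule from a JSON definition that lists the Windows
# security events to collect, then associate the rule with the VM
az monitor data-collection rule create \
    --resource-group MyResourceGroup \
    --location eastus \
    --name SecurityEventsDcr \
    --rule-file security-events-dcr.json

az monitor data-collection rule association create \
    --name SecurityEventsDcrAssociation \
    --rule-id "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/MyResourceGroup/providers/Microsoft.Insights/dataCollectionRules/SecurityEventsDcr" \
    --resource "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/MyResourceGroup/providers/Microsoft.Compute/virtualMachines/MyVm"
```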
defender-for-cloud | Concept Attack Path | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-attack-path.md | Title: What are the Cloud Security Graph, Attack Path Analysis, and the Cloud Security Explorer? + Title: What are the cloud security graph, attack path analysis, and the cloud security explorer? description: Learn how to prioritize remediation of cloud misconfigurations and vulnerabilities based on risk. -# What are the Cloud Security Graph, Attack Path Analysis, and the Cloud Security Explorer? +# What are the cloud security graph, attack path analysis, and the cloud security explorer? One of the biggest challenges that security teams face today is the number of security issues they face on a daily basis. There are numerous security issues that need to be resolved and never enough resources to address them all. Defender for Cloud's contextual security capabilities assist security teams to assess the risk behind each security issue, and identify the highest risk issues that need to be resolved soonest. Defender for Cloud assists security teams to reduce the risk of an impactful breach to their environment in the most effective way. -## What is Cloud Security Graph? -The Cloud Security Graph is a graph-based context engine that exists within Defender for Cloud. The Cloud Security Graph collects data from your multicloud environment and other data sources. For example, the cloud assets inventory, connections and lateral movement possibilities between resources, exposure to internet, permissions, network connections, vulnerabilities and more. The data collected is then used to build a graph representing your multicloud environment. +## What is cloud security graph? +The cloud security graph is a graph-based context engine that exists within Defender for Cloud. The cloud security graph collects data from your multicloud environment and other data sources. For example, the cloud assets inventory, connections and lateral movement possibilities between resources, exposure to internet, permissions, network connections, vulnerabilities and more. The data collected is then used to build a graph representing your multicloud environment. -Defender for Cloud then uses the generated graph to perform an Attack Path Analysis and find the issues with the highest risk that exist within your environment. You can also query the graph using the Cloud Security Explorer. +Defender for Cloud then uses the generated graph to perform an attack path analysis and find the issues with the highest risk that exist within your environment. You can also query the graph using the cloud security explorer. :::image type="content" source="media/concept-cloud-map/security-map.png" alt-text="Screenshot of a conceptualized graph that shows the complexity of security graphing." lightbox="media/concept-cloud-map/security-map.png"::: -## What is Attack Path Analysis? -Attack Path Analysis is a graph-based algorithm that scans the Cloud Security Graph. The scans expose exploitable paths that attackers may use to breach your environment to reach your high-impact assets. Attack Path Analysis exposes those attack paths and suggests recommendations as to how best remediate the issues that will break the attack path and prevent successful breach. +## What is attack path analysis? +Attack path analysis is a graph-based algorithm that scans the cloud security graph. The scans expose exploitable paths that attackers may use to breach your environment to reach your high-impact assets. 
Attack path analysis exposes those attack paths and suggests recommendations as to how best remediate the issues that will break the attack path and prevent successful breach. -By taking your environment's contextual information into account such as, internet exposure, permissions, lateral movement, and more. Attack Path Analysis identifies issues that may lead to a breach on your environment, and helps you to remediate the highest risk ones first. +By taking your environment's contextual information into account, such as internet exposure, permissions, lateral movement, and more, attack path analysis identifies issues that may lead to a breach on your environment, and helps you to remediate the highest risk ones first. :::image type="content" source="media/concept-cloud-map/attack-path.png" alt-text="Image that shows a sample attack path from attacker to your sensitive data."::: -Learn how to use [Attack Path Analysis](how-to-manage-attack-path.md). +Learn how to use [attack path analysis](how-to-manage-attack-path.md). -## What is Cloud Security Explorer? +## What is cloud security explorer? -Using the Cloud Security Explorer, you can proactively identify security risks in your multicloud environment by running graph-based queries on the Cloud Security Graph. Your security team can use the query builder to search for and locate risks, while taking your organization's specific contextual and conventional information into account. +Using the cloud security explorer, you can proactively identify security risks in your multicloud environment by running graph-based queries on the cloud security graph. Your security team can use the query builder to search for and locate risks, while taking your organization's specific contextual and conventional information into account. -Cloud Security Explorer provides you with the ability to perform proactive exploration features. You can search for security risks within your organization by running graph-based path-finding queries on top the contextual security data that is already provided by Defender for Cloud. Such as, cloud misconfigurations, vulnerabilities, resource context, lateral movement possibilities between resources and more. +Cloud security explorer provides proactive exploration capabilities. You can search for security risks within your organization by running graph-based path-finding queries on top of the contextual security data that is already provided by Defender for Cloud, such as cloud misconfigurations, vulnerabilities, resource context, lateral movement possibilities between resources, and more. -Learn how to use the [Cloud Security Explorer](how-to-manage-cloud-security-explorer.md), or check out the list of [insights and connections](attack-path-reference.md#insights-and-connections). +Learn how to use the [cloud security explorer](how-to-manage-cloud-security-explorer.md), or check out the list of [insights and connections](attack-path-reference.md#insights-and-connections). ## Next steps |
defender-for-cloud | Concept Cloud Security Posture Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md | Title: Overview of Cloud Security Posture Management (CSPM) description: Learn more about the new Defender CSPM plan and the other enhanced security features that can be enabled for your multicloud environment through the Defender Cloud Security Posture Management (CSPM) plan. Previously updated : 09/20/2022 Last updated : 10/18/2022 # Cloud Security Posture Management (CSPM) Defender for Cloud continually assesses your resources, subscriptions, and organ ## Defender CSPM plan options -The Defender CSPM plan comes with two options, foundational CSPM capabilities and Defender Cloud Security Posture Management (CSPM). When you deploy Defender for Cloud to your subscription and resources, you'll automatically gain the basic coverages offered by the CSPM plan. To gain access to the other capabilities provided by Defender CSPM, you'll need to [enable the Defender Cloud Security Posture Management (CSPM) plan](enable-enhanced-security.md) to your subscription and resources. +The Defender CSPM plan comes with two options, foundational CSPM capabilities and Defender Cloud Security Posture Management (CSPM). When you deploy Defender for Cloud to your subscription and resources, you'll automatically gain the basic coverage offered by the CSPM plan. To gain access to the other capabilities provided by Defender CSPM, you'll need to [enable the Defender Cloud Security Posture Management (CSPM) plan](enable-enhanced-security.md) on your subscription and resources. The following table summarizes what's included in each plan and their cloud availability. The following table summarizes what's included in each plan and their cloud avai | [Secure score](secure-score-access-and-track.md) | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises | | [Governance](#security-governance-and-regulatory-compliance) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises | | [Regulatory compliance](#security-governance-and-regulatory-compliance) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |-| [Cloud Security Explorer](#cloud-security-explorer) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS | -| [Attack Path Analysis](#attack-path-analysis) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS | +| [Cloud security explorer](#cloud-security-explorer) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS | +| [Attack path analysis](#attack-path-analysis) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS | | [Agentless scanning for machines](#agentless-scanning-for-machines) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS | > [!NOTE]-> If you have enabled Defender for DevOps, you will only gain Cloud Security Graph and Attack Path Analysis to the artifacts that arrive through those connectors. +> If you have enabled Defender for DevOps, you will only gain cloud security graph and attack path analysis to the artifacts that arrive through those connectors. 
> 
> To enable Governance for DevOps related recommendations, the Defender Cloud Security Posture Management (CSPM) plan needs to be enabled on the Azure subscription that hosts the DevOps connector. Defender for Cloud continuously assesses your hybrid cloud environment to analyze Learn more about [security and regulatory compliance in Defender for Cloud](concept-regulatory-compliance.md). -## Cloud Security Explorer +## Cloud security explorer -The Cloud Security Graph is a graph-based context engine that exists within Defender for Cloud. The Cloud Security Graph collects data from your multicloud environment and other data sources. For example, the cloud assets inventory, connections and lateral movement possibilities between resources, exposure to internet, permissions, network connections, vulnerabilities and more. The data collected is then used to build a graph representing your multicloud environment. +The cloud security graph is a graph-based context engine that exists within Defender for Cloud. The cloud security graph collects data from your multicloud environment and other data sources. For example, the cloud assets inventory, connections and lateral movement possibilities between resources, exposure to internet, permissions, network connections, vulnerabilities and more. The data collected is then used to build a graph representing your multicloud environment. -Defender for Cloud then uses the generated graph to perform an Attack Path Analysis and find the issues with the highest risk that exist within your environment. You can also query the graph using the Cloud Security Explorer. +Defender for Cloud then uses the generated graph to perform an attack path analysis and find the issues with the highest risk that exist within your environment. You can also query the graph using the cloud security explorer. -Learn more about [Cloud Security Explorer](concept-attack-path.md#what-is-cloud-security-explorer) +Learn more about [cloud security explorer](concept-attack-path.md#what-is-cloud-security-explorer) -## Attack Path Analysis +## Attack path analysis -Attack Path Analysis is a graph-based algorithm that scans the Cloud Security Graph. The scans expose exploitable paths that attackers may use to breach your environment to reach your high-impact assets. Attack Path Analysis exposes those attack paths and suggests recommendations as to how best remediate the issues that will break the attack path and prevent successful breach. +Attack path analysis is a graph-based algorithm that scans the cloud security graph. The scans: -By taking your environment's contextual information into account such as, internet exposure, permissions, lateral movement, and more. Attack Path Analysis identifies issues that may lead to a breach on your environment, and helps you to remediate the highest risk ones first. +- expose exploitable paths that attackers may use to breach your environment and reach your high-impact assets +- provide recommendations for ways to prevent successful breaches -Learn more about [Attack Path Analysis](concept-attack-path.md#what-is-attack-path-analysis) +By taking your environment's contextual information into account, such as internet exposure, permissions, lateral movement, and more, this analysis identifies issues that may lead to a breach on your environment, and helps you to remediate the highest risk ones first. ++Learn more about [attack path analysis](concept-attack-path.md#what-is-attack-path-analysis). ## Agentless scanning for machines |
defender-for-cloud | Concept Easm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-easm.md | You can also learn how to [deploy Defender for EASM](../external-attack-surface- ## Next step -[What are the Cloud Security Graph, Attack Path Analysis, and the Cloud Security Explorer?](concept-attack-path.md) +[What are the cloud security graph, attack path analysis, and the cloud security explorer?](concept-attack-path.md) |
defender-for-cloud | Concept Regulatory Compliance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-regulatory-compliance.md | Title: Regulatory compliance Microsoft Cloud Security Benchmark -description: Learn about the Microsoft Cloud Security Benchmark and the benefits it can bring to your compliance standards across your multicloud environments. -+ Title: Regulatory compliance Microsoft cloud security benchmark +description: Learn about the Microsoft cloud security benchmark and the benefits it can bring to your compliance standards across your multicloud environments. + Last updated 09/21/2022 -# Microsoft Cloud Security Benchmark in Defender for Cloud +# Microsoft cloud security benchmark in Defender for Cloud Microsoft Defender for Cloud streamlines the process for meeting regulatory compliance requirements, using the **regulatory compliance dashboard**. Defender for Cloud continuously assesses your hybrid cloud environment to analyze the risk factors according to the controls and best practices in the standards that you've applied to your subscriptions. The dashboard reflects the status of your compliance with these standards. -The [Microsoft Cloud Security Benchmark](/security/benchmark/azure/introduction) (MCSB) is automatically assigned to your subscriptions and accounts when you onboard Defender for Cloud. This benchmark builds on the cloud security principles defined by the Azure Security Benchmark and applies these principles with detailed technical implementation guidance for Azure, for other cloud providers (such as AWS and GCP), and for other Microsoft clouds. +The [Microsoft cloud security benchmark](/security/benchmark/azure/introduction) (MCSB) is automatically assigned to your subscriptions and accounts when you onboard Defender for Cloud. This benchmark builds on the cloud security principles defined by the Azure Security Benchmark and applies these principles with detailed technical implementation guidance for Azure, for other cloud providers (such as AWS and GCP), and for other Microsoft clouds. The compliance dashboard gives you a view of your overall compliance standing. Security for non-Azure platforms follows the same cloud-neutral security principles as Azure. Each control within the benchmark provides the same granularity and scope of technical guidance across Azure and other cloud resources. |
defender-for-cloud | Custom Security Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/custom-security-policies.md | Important concepts in Azure Policy: - An **assignment** is an application of an initiative or a policy to a specific scope (management group, subscription, etc.) -Defender for Cloud has a built-in initiative, [Microsoft Cloud Security Benchmark](/security/benchmark/azure/introduction), that includes all of its security policies. To assess Defender for Cloud's policies on your Azure resources, you should create an assignment on the management group, or subscription you want to assess. +Defender for Cloud has a built-in initiative, [Microsoft cloud security benchmark](/security/benchmark/azure/introduction), that includes all of its security policies. To assess Defender for Cloud's policies on your Azure resources, you should create an assignment on the management group, or subscription you want to assess. The built-in initiative has all of Defender for Cloud's policies enabled by default. You can choose to disable certain policies from the built-in initiative. For example, to apply all of Defender for Cloud's policies except **web application firewall**, change the value of the policy's effect parameter to **Disabled**. |
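As a sketch of the assignment concept, the Azure CLI can assign a built-in initiative to a subscription. The initiative GUID and subscription ID below are illustrative assumptions; confirm the definition name in your tenant with the first command before assigning.

```console
# Find the built-in benchmark initiative definition
az policy set-definition list \
    --query "[?contains(displayName, 'security benchmark')].{name:name, displayName:displayName}" -o table

# Assign it to a subscription (GUID and scope shown are placeholders)
az policy assignment create \
    --name "mcsb-assignment" \
    --display-name "Microsoft cloud security benchmark" \
    --policy-set-definition "1f3afdf9-d0c9-4c3d-847f-89da613e70a8" \
    --scope "/subscriptions/00000000-0000-0000-0000-000000000000"
```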
defender-for-cloud | Defender For Cloud Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-introduction.md | Defender for Cloud continually assesses your resources, subscriptions, and organ As soon as you open Defender for Cloud for the first time, Defender for Cloud: -- **Generates a secure score** for your subscriptions based on an assessment of your connected resources compared with the guidance in [Microsoft Cloud Security Benchmark](/security/benchmark/azure/overview). Use the score to understand your security posture, and the compliance dashboard to review your compliance with the built-in benchmark. When you've enabled the enhanced security features, you can customize the standards used to assess your compliance, and add other regulations (such as NIST and Azure CIS) or organization-specific security requirements. You can also apply recommendations, and score based on the AWS Foundational Security Best practices standards.+- **Generates a secure score** for your subscriptions based on an assessment of your connected resources compared with the guidance in [Microsoft cloud security benchmark](/security/benchmark/azure/overview). Use the score to understand your security posture, and the compliance dashboard to review your compliance with the built-in benchmark. When you've enabled the enhanced security features, you can customize the standards used to assess your compliance, and add other regulations (such as NIST and Azure CIS) or organization-specific security requirements. You can also apply recommendations, and score based on the AWS Foundational Security Best practices standards. You can also [learn more about secure score](secure-score-security-controls.md). - **Provides hardening recommendations** based on any identified security misconfigurations and weaknesses. Use these security recommendations to strengthen the security posture of your organization's Azure, hybrid, and multicloud resources. -- **Analyze and secure your attack paths** through the Cloud Security Graph, which is a graph-based context engine that exists within Defender for Cloud. The Cloud Security Graph collects data from your multicloud environment and other data sources. For example, the cloud assets inventory, connections and lateral movement possibilities between resources, exposure to internet, permissions, network connections, vulnerabilities and more. The data collected is then used to build a graph representing your multicloud environment. +- **Analyze and secure your attack paths** through the cloud security graph, which is a graph-based context engine that exists within Defender for Cloud. The cloud security graph collects data from your multicloud environment and other data sources. For example, the cloud assets inventory, connections and lateral movement possibilities between resources, exposure to internet, permissions, network connections, vulnerabilities and more. The data collected is then used to build a graph representing your multicloud environment. - Attack Path Analysis is a graph-based algorithm that scans the Cloud Security Graph. The scans expose exploitable paths that attackers may use to breach your environment to reach your high-impact assets. Attack Path Analysis exposes those attack paths and suggests recommendations as to how best remediate the issues that will break the attack path and prevent successful breach. + Attack path analysis is a graph-based algorithm that scans the cloud security graph. 
The scans expose exploitable paths that attackers may use to breach your environment to reach your high-impact assets. Attack path analysis exposes those attack paths and suggests recommendations as to how best remediate the issues that will break the attack path and prevent successful breach. - By taking your environment's contextual information into account such as, internet exposure, permissions, lateral movement, and more. Attack Path Analysis identifies issues that may lead to a breach on your environment, and helps you to remediate the highest risk ones first. + By taking your environment's contextual information into account, such as internet exposure, permissions, lateral movement, and more, attack path analysis identifies issues that may lead to a breach on your environment, and helps you to remediate the highest risk ones first. Learn more about [attack path analysis](concept-attack-path.md#what-is-attack-path-analysis). It's a security basic to know and make sure your workloads are secure, and it st Defender for Cloud continuously discovers new resources that are being deployed across your workloads and assesses whether they're configured according to security best practices. If not, they're flagged and you get a prioritized list of recommendations for what you need to fix. Recommendations help you reduce the attack surface across each of your resources. -The list of recommendations is enabled and supported by the Microsoft Cloud Security Benchmark. This Microsoft-authored benchmark, based on common compliance frameworks, began with Azure and now provides a set of guidelines for security and compliance best practices for multiple cloud environments. Learn more in [Microsoft Cloud Security Benchmark introduction](/security/benchmark/azure/introduction). +The list of recommendations is enabled and supported by the Microsoft cloud security benchmark. This Microsoft-authored benchmark, based on common compliance frameworks, began with Azure and now provides a set of guidelines for security and compliance best practices for multiple cloud environments. Learn more in [Microsoft cloud security benchmark introduction](/security/benchmark/azure/introduction). In this way, Defender for Cloud enables you not just to set security policies, but to *apply secure configuration standards across your resources*. The **Defender plans** of Microsoft Defender for Cloud offer comprehensive defen - [Microsoft Defender for Azure Cosmos DB](concept-defender-for-cosmos.md) - [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) - [Security governance and regulatory compliance](concept-cloud-security-posture-management.md#security-governance-and-regulatory-compliance)- - [Cloud Security Explorer](concept-cloud-security-posture-management.md#cloud-security-explorer) - - [Attack Path Analysis](concept-cloud-security-posture-management.md#attack-path-analysis) + - [Cloud security explorer](concept-cloud-security-posture-management.md#cloud-security-explorer) + - [Attack path analysis](concept-cloud-security-posture-management.md#attack-path-analysis) - [Agentless scanning for machines](concept-cloud-security-posture-management.md#agentless-scanning-for-machines) - [Defender for DevOps](defender-for-devops-introduction.md) |
defender-for-cloud | Enhanced Security Features Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enhanced-security-features-overview.md | Defender for Cloud offers many enhanced security features that can help protect - **Multicloud security** - Connect your accounts from Amazon Web Services (AWS) and Google Cloud Platform (GCP) to protect resources and workloads on those platforms with a range of Microsoft Defender for Cloud security features. - **Hybrid security** - Get a unified view of security across all of your on-premises and cloud workloads. Apply security policies and continuously assess the security of your hybrid cloud workloads to ensure compliance with security standards. Collect, search, and analyze security data from multiple sources, including firewalls and other partner solutions. - **Threat protection alerts** - Advanced behavioral analytics and the Microsoft Intelligent Security Graph provide an edge over evolving cyber-attacks. Built-in behavioral analytics and machine learning can identify attacks and zero-day exploits. Monitor networks, machines, data stores (SQL servers hosted inside and outside Azure, Azure SQL databases, Azure SQL Managed Instance, and Azure Storage) and cloud services for incoming attacks and post-breach activity. Streamline investigation with interactive tools and contextual threat intelligence.- - **Track compliance with a range of standards** - Defender for Cloud continuously assesses your hybrid cloud environment to analyze the risk factors according to the controls and best practices in [Microsoft Cloud Security Benchmark](/security/benchmark/azure/introduction). When you enable the enhanced security features, you can apply a range of other industry standards, regulatory standards, and benchmarks according to your organization's needs. Add standards and track your compliance with them from the [regulatory compliance dashboard](update-regulatory-compliance-packages.md). + - **Track compliance with a range of standards** - Defender for Cloud continuously assesses your hybrid cloud environment to analyze the risk factors according to the controls and best practices in [Microsoft cloud security benchmark](/security/benchmark/azure/introduction). When you enable the enhanced security features, you can apply a range of other industry standards, regulatory standards, and benchmarks according to your organization's needs. Add standards and track your compliance with them from the [regulatory compliance dashboard](update-regulatory-compliance-packages.md). - **Access and application controls** - Block malware and other unwanted applications by applying machine learning powered recommendations adapted to your specific workloads to create allowlists and blocklists. Reduce the network attack surface with just-in-time, controlled access to management ports on Azure VMs. Access and application control drastically reduce exposure to brute force and other network attacks. - **Container security features** - Benefit from vulnerability management and real-time threat protection on your containerized environments. Charges are based on the number of unique container images pushed to your connected registry. After an image has been scanned once, you won't be charged for it again unless it's modified and pushed once more.
- **Breadth threat protection for resources connected to Azure** - Cloud-native threat protection for the Azure services common to all of your resources: Azure Resource Manager, Azure DNS, Azure network layer, and Azure Key Vault. Defender for Cloud has unique visibility into the Azure management layer and the Azure DNS layer, and can therefore protect cloud resources that are connected to those layers. |
defender-for-cloud | Episode Eighteen | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-eighteen.md | + + Title: Defender for Azure Cosmos DB | Defender for Cloud in the Field ++description: Learn about Defender for Cloud integration with Azure Cosmos DB. + Last updated : 10/18/2022+++# Defender for Azure Cosmos DB | Defender for Cloud in the Field ++**Episode description**: In this episode of Defender for Cloud in the Field, Haim Bendanan joins Yuri Diogenes to talk about Defender for Azure Cosmos DB. Haim explains the rationale behind the use of this plan to protect Azure Cosmos DB databases, the different threat detections that are available with this plan, and the security recommendations that were added. Haim also demonstrates how Defender for Azure Cosmos DB detects a SQL injection attack. +<br> +<br> +<iframe src="https://aka.ms/docs/player?id=94238ff5-930e-48be-ad27-a2fff73e473f" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe> ++- [00:00](https://learn.microsoft.com/shows/mdc-in-the-field/defender-cosmos-db#time=00m00s) - Intro ++- [01:37](https://learn.microsoft.com/shows/mdc-in-the-field/defender-cosmos-db#time=01m37s) - Azure Cosmos DB main use case scenarios ++- [02:30](https://learn.microsoft.com/shows/mdc-in-the-field/defender-cosmos-db#time=02m30s) - Recommendations and alerts in Defender for Azure Cosmos DB ++- [04:30](https://learn.microsoft.com/shows/mdc-in-the-field/defender-cosmos-db#time=04m30s) - SQL Injection detection for Azure Cosmos DB ++- [06:15](https://learn.microsoft.com/shows/mdc-in-the-field/defender-cosmos-db#time=06m15s) - Key extraction detection for Azure Cosmos DB ++- [11:00](https://learn.microsoft.com/shows/mdc-in-the-field/defender-cosmos-db#time=11m00s) - Demonstration ++- [14:30](https://learn.microsoft.com/shows/mdc-in-the-field/defender-cosmos-db#time=14m30s) - Final considerations ++## Recommended resources ++Learn more about [Enable Microsoft Defender for Azure Cosmos DB](https://learn.microsoft.com/azure/defender-for-cloud/defender-for-databases-enable-cosmos-protections?tabs=azure-portal) ++- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqa0ZoTml2Qm9kZ2pjRzNMUXFqVUwyNl80YVNtd3xBQ3Jtc0trVm9QM2Z0NlpOeC1KSUE2UEd1cVJ5aHQ0MTN6WjJEYmNlOG9rWC1KZ1ZqaTNmcHdOOHMtWXRLSGhUTVBhQlhhYzlUc2xmTHZtaUpkd1c4LUQzLWt1YmRTbkVQVE5EcTJIM0Foc042SGdQZU5acVRJbw&q=https%3A%2F%2Faka.ms%2FSubscribeMicrosoftSecurity) ++- Follow us on social media: + [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F) + [Twitter](https://twitter.com/msftsecurity) ++- Join our [Tech Community](https://aka.ms/SecurityTechCommunity) ++- For more about [Microsoft Security](https://msft.it/6002T9HQY) ++## Next steps ++> [!div class="nextstepaction"] +> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md) |
defender-for-cloud | Episode Seventeen | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-seventeen.md | Learn more about [Entra Permission Management](other-threat-protections.md#entra ## Next steps > [!div class="nextstepaction"]-> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md) +> [New AWS Connector in Microsoft Defender for Cloud](episode-eighteen.md) |
defender-for-cloud | Exempt Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/exempt-resource.md | In such cases, you can create an exemption for a recommendation to: | Release state: | Preview<br>[!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)] | | Pricing: | This is a premium Azure Policy capability that's offered at no extra cost for customers with Microsoft Defender for Cloud's enhanced security features enabled. For other users, charges might apply in the future. | | Required roles and permissions: | **Owner** or **Resource Policy Contributor** to create an exemption<br>To create a rule, you need permissions to edit policies in Azure Policy.<br>Learn more in [Azure RBAC permissions in Azure Policy](../governance/policy/overview.md#azure-rbac-permissions-in-azure-policy). |-| Limitations: | Exemptions can be created only for recommendations included in Defender for Cloud's default initiative, [Microsoft Cloud Security Benchmark](/security/benchmark/azure/introduction), or any of the supplied regulatory standard initiatives. Recommendations that are generated from custom initiatives can't be exempted. Learn more about the relationships between [policies, initiatives, and recommendations](security-policy-concept.md). | +| Limitations: | Exemptions can be created only for recommendations included in Defender for Cloud's default initiative, [Microsoft cloud security benchmark](/security/benchmark/azure/introduction), or any of the supplied regulatory standard initiatives. Recommendations that are generated from custom initiatives can't be exempted. Learn more about the relationships between [policies, initiatives, and recommendations](security-policy-concept.md). | | Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet) | To fine-tune the security recommendations that Defender for Cloud makes for your - Mark **one or more resources** as "mitigated" or "risk accepted" for a specific recommendation. > [!NOTE]-> Exemptions can be created only for recommendations included in Defender for Cloud's default initiative, Microsoft Cloud Security Benchmark or any of the supplied regulatory standard initiatives. Recommendations that are generated from any custom initiatives assigned to your subscriptions cannot be exempted. Learn more about the relationships between [policies, initiatives, and recommendations](security-policy-concept.md). +> Exemptions can be created only for recommendations included in Defender for Cloud's default initiative, Microsoft cloud security benchmark or any of the supplied regulatory standard initiatives. Recommendations that are generated from any custom initiatives assigned to your subscriptions cannot be exempted. Learn more about the relationships between [policies, initiatives, and recommendations](security-policy-concept.md). > [!TIP] > You can also create exemptions using the API. For an example JSON, and an explanation of the relevant structures see [Azure Policy exemption structure](../governance/policy/concepts/exemption-structure.md). |
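For the API route mentioned in the tip above, a minimal sketch of an exemption body may help. Every ID and name below is a placeholder, and the definition reference ID is hypothetical; the `exemptionCategory` values `Waiver` (risk accepted) and `Mitigated` map to the two exemption reasons the article describes. See the linked Azure Policy exemption structure article for the authoritative schema:

```json
{
  "properties": {
    "policyAssignmentId": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Authorization/policyAssignments/SecurityCenterBuiltIn",
    "policyDefinitionReferenceIds": [
      "exampleRecommendationReferenceId"
    ],
    "exemptionCategory": "Waiver",
    "expiresOn": "2023-06-30T00:00:00Z",
    "displayName": "Risk accepted for this recommendation",
    "description": "Accepted by the security team; compensating controls are in place."
  }
}
```

You'd `PUT` this body to a `Microsoft.Authorization/policyExemptions/{name}` resource at the scope you want to exempt (subscription, resource group, or individual resource).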
defender-for-cloud | How To Manage Attack Path | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-manage-attack-path.md | Last updated 10/03/2022 Defender for Cloud's contextual security capabilities assist security teams in reducing the risk of impactful breaches. Defender for Cloud uses environment context to perform a risk assessment of your security issues. Defender for Cloud identifies the biggest security risk issues, while distinguishing them from less risky issues. -Attack Path Analysis helps you to address the security issues that pose immediate threats with the greatest potential of being exploited in your environment. Defender for Cloud analyzes which security issues are part of potential attack paths that attackers could use to breach your environment. It also highlights the security recommendations that need to be resolved in order to mitigate it. +Attack path analysis helps you to address the security issues that pose immediate threats with the greatest potential of being exploited in your environment. Defender for Cloud analyzes which security issues are part of potential attack paths that attackers could use to breach your environment. It also highlights the security recommendations that need to be resolved in order to mitigate them. You can check out the full list of [Attack path names and descriptions](attack-path-reference.md). While you are [investigating and remediating an attack path](#investigate-and-re ## Next steps -Learn how to [Build queries with Cloud Security Explorer](how-to-manage-cloud-security-explorer.md). +Learn how to [build queries with the cloud security explorer](how-to-manage-cloud-security-explorer.md). |
defender-for-cloud | How To Manage Cloud Security Explorer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-manage-cloud-security-explorer.md | Title: Build queries with Cloud Security Explorer + Title: Build queries with cloud security explorer -description: Learn how to build queries in Cloud Security Explorer to find vulnerabilities that exist on your multicloud environment. +description: Learn how to build queries in cloud security explorer to find vulnerabilities that exist on your multicloud environment. Last updated 10/03/2022 -# Cloud Security Explorer +# Cloud security explorer Defender for Cloud's contextual security capabilities assist security teams in reducing the risk of impactful breaches. Defender for Cloud uses environmental context to perform a risk assessment of your security issues, identifies the biggest security risks, and distinguishes them from less risky issues. -By using the Cloud Security Explorer, you can proactively identify security risks in your cloud environment by running graph-based queries on the Cloud Security Graph, which is Defender for Cloud's context engine. You can prioritize your security team's concerns, while taking your organization's specific context and conventions into account. +By using the cloud security explorer, you can proactively identify security risks in your cloud environment by running graph-based queries on the cloud security graph, which is Defender for Cloud's context engine. You can prioritize your security team's concerns, while taking your organization's specific context and conventions into account. -With the Cloud Security Explorer, you can query all of your security issues and environment context such as assets inventory, exposure to internet, permissions, lateral movement between resources and more. +With the cloud security explorer, you can query all of your security issues and environment context such as assets inventory, exposure to internet, permissions, lateral movement between resources and more. ## Availability With the Cloud Security Explorer, you can query all of your security issues and | Required roles and permissions: | - **Security Reader** <br> - **Security Admin** <br> - **Reader** <br> - **Contributor** <br> - **Owner** | | Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds (Azure, AWS) <br>:::image type="icon" source="./media/icons/no-icon.png"::: Commercial clouds (GCP) <br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet) | -## Build a query with the Cloud Security Explorer +## Build a query with the cloud security explorer -You can use the Cloud Security Explorer to build queries that can proactively hunt for security risks in your environments. +You can use the cloud security explorer to build queries that can proactively hunt for security risks in your environments. **To build a query**: You can use the Cloud Security Explorer to build queries that can proactively hu 1. Navigate to **Microsoft Defender for Cloud** > **Cloud Security Explorer**. - :::image type="content" source="media/concept-cloud-map/cloud-security-explorer.png" alt-text="Screenshot of the Cloud Security Explorer page." lightbox="media/concept-cloud-map/cloud-security-explorer.png"::: + :::image type="content" source="media/concept-cloud-map/cloud-security-explorer.png" alt-text="Screenshot of the cloud security explorer page."
lightbox="media/concept-cloud-map/cloud-security-explorer.png"::: 1. Select a resource from the drop-down menu. You can alter any template to search for specific results by changing the query ## Query options -The following information can be queried in the Cloud Security Explorer: +The following information can be queried in the cloud security explorer: - **Recommendations** - All Defender for Cloud security recommendations. |
defender-for-cloud | Iac Vulnerabilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/iac-vulnerabilities.md | Once you have set up the Microsoft Security DevOps GitHub action or Azure DevOps ```yml with:- categories: 'Iac" + categories: 'IaC' ``` > [!NOTE] Remote debugging requires inbound ports to be opened on an API app. These ports Enable FTPS enforcement for enhanced security. -**Recommendation**: To [enforce FTPS](/azure/app-service/deploy-ftp?tabs=portal#enforce-ftps), in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), add (or update) the *ftpsState* property, setting its value to `"FtpsOnly"` or `"Disabled"` if you don't need FTPS enabled. +**Recommendation**: To [enforce FTPS](../app-service/deploy-ftp.md?tabs=portal#enforce-ftps), in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), add (or update) the *ftpsState* property, setting its value to `"FtpsOnly"` or `"Disabled"` if you don't need FTPS enabled. **Severity level**: 1 Enable FTPS enforcement for enhanced security. API apps should require HTTPS to ensure connections are made to the expected server and data in transit is protected from network layer eavesdropping attacks. -**Recommendation**: To [use HTTPS to ensure, server/service authentication and protect data in transit from network layer eavesdropping attacks](/azure/app-service/configure-ssl-bindings#enforce-https), in the [Microsoft.Web/Sites resource properties](/azure/templates/microsoft.web/sites?tabs=json#siteproperties-object), add (or update) the *httpsOnly* property, setting its value to `true`. +**Recommendation**: To [use HTTPS to ensure server/service authentication and protect data in transit from network layer eavesdropping attacks](../app-service/configure-ssl-bindings.md#enforce-https), in the [Microsoft.Web/Sites resource properties](/azure/templates/microsoft.web/sites?tabs=json#siteproperties-object), add (or update) the *httpsOnly* property, setting its value to `true`. **Severity level**: 2 API apps should require HTTPS to ensure connections are made to the expected ser API apps should require the latest TLS version. -**Recommendation**: To [enforce the latest TLS version](/azure/app-service/configure-ssl-bindings#enforce-tls-versions), in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), add (or update) the *minTlsVersion* property, setting its value to `1.2`. +**Recommendation**: To [enforce the latest TLS version](../app-service/configure-ssl-bindings.md#enforce-tls-versions), in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), add (or update) the *minTlsVersion* property, setting its value to `1.2`. **Severity level**: 1 Cross-Origin Resource Sharing (CORS) should not allow all domains to access your For enhanced authentication security, use a managed identity. On Azure, managed identities eliminate the need for developers to have to manage credentials by providing an identity for the Azure resource in Azure AD and using it to obtain Azure Active Directory (Azure AD) tokens.
-**Recommendation**: To [use Managed Identity](/azure/app-service/overview-managed-identity?tabs=dotnet), in the [Microsoft.Web/sites resource managed identity property](/azure/templates/microsoft.web/sites?tabs=json#ManagedServiceIdentity), add (or update) the *type* property, setting its value to `"SystemAssigned"` or `"UserAssigned"` and providing any necessary identifiers for the identity if required. +**Recommendation**: To [use Managed Identity](../app-service/overview-managed-identity.md?tabs=dotnet), in the [Microsoft.Web/sites resource managed identity property](/azure/templates/microsoft.web/sites?tabs=json#ManagedServiceIdentity), add (or update) the *type* property, setting its value to `"SystemAssigned"` or `"UserAssigned"` and providing any necessary identifiers for the identity if required. **Severity level**: 2 Remote debugging requires inbound ports to be opened on a function app. These po Enable FTPS enforcement for enhanced security. -**Recommendation**: To [enforce FTPS](/azure/app-service/deploy-ftp?tabs=portal#enforce-ftps), in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), add (or update) the *ftpsState* property, setting its value to `"FtpsOnly"` or `"Disabled"` if you don't need FTPS enabled. +**Recommendation**: To [enforce FTPS](../app-service/deploy-ftp.md?tabs=portal#enforce-ftps), in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), add (or update) the *ftpsState* property, setting its value to `"FtpsOnly"` or `"Disabled"` if you don't need FTPS enabled. **Severity level**: 1 Enable FTPS enforcement for enhanced security. Function apps should require HTTPS to ensure connections are made to the expected server and data in transit is protected from network layer eavesdropping attacks. -**Recommendation**: To [use HTTPS to ensure, server/service authentication and protect data in transit from network layer eavesdropping attacks](/azure/app-service/configure-ssl-bindings#enforce-https), in the [Microsoft.Web/Sites resource properties](/azure/templates/microsoft.web/sites?tabs=json#siteproperties-object), add (or update) the *httpsOnly* property, setting its value to `true`. +**Recommendation**: To [use HTTPS to ensure server/service authentication and protect data in transit from network layer eavesdropping attacks](../app-service/configure-ssl-bindings.md#enforce-https), in the [Microsoft.Web/Sites resource properties](/azure/templates/microsoft.web/sites?tabs=json#siteproperties-object), add (or update) the *httpsOnly* property, setting its value to `true`. **Severity level**: 2 Function apps should require HTTPS to ensure connections are made to the expecte Function apps should require the latest TLS version. -**Recommendation**: To [enforce the latest TLS version](/azure/app-service/configure-ssl-bindings#enforce-tls-versions), in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), add (or update) the *minTlsVersion* property, setting its value to `1.2`. +**Recommendation**: To [enforce the latest TLS version](../app-service/configure-ssl-bindings.md#enforce-tls-versions), in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), add (or update) the *minTlsVersion* property, setting its value to `1.2`.
**Severity level**: 1 Cross-Origin Resource Sharing (CORS) should not allow all domains to access your For enhanced authentication security, use a managed identity. On Azure, managed identities eliminate the need for developers to have to manage credentials by providing an identity for the Azure resource in Azure AD and using it to obtain Azure Active Directory (Azure AD) tokens. -**Recommendation**: To [use Managed Identity](/azure/app-service/overview-managed-identity?tabs=dotnet), in the [Microsoft.Web/sites resource managed identity property](/azure/templates/microsoft.web/sites?tabs=json#ManagedServiceIdentity), add (or update) the *type* property, setting its value to `"SystemAssigned"` or `"UserAssigned"` and providing any necessary identifiers for the identity if required. +**Recommendation**: To [use Managed Identity](../app-service/overview-managed-identity.md?tabs=dotnet), in the [Microsoft.Web/sites resource managed identity property](/azure/templates/microsoft.web/sites?tabs=json#ManagedServiceIdentity), add (or update) the *type* property, setting its value to `"SystemAssigned"` or `"UserAssigned"` and providing any necessary identifiers for the identity if required. **Severity level**: 2 Remote debugging requires inbound ports to be opened on a web application. These Enable FTPS enforcement for enhanced security. -**Recommendation**: To [enforce FTPS](/azure/app-service/deploy-ftp?tabs=portal#enforce-ftps), in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), add (or update) the *ftpsState* property, setting its value to `"FtpsOnly"` or `"Disabled"` if you don't need FTPS enabled. +**Recommendation**: To [enforce FTPS](../app-service/deploy-ftp.md?tabs=portal#enforce-ftps), in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), add (or update) the *ftpsState* property, setting its value to `"FtpsOnly"` or `"Disabled"` if you don't need FTPS enabled. **Severity level**: 1 Enable FTPS enforcement for enhanced security. Web apps should require HTTPS to ensure connections are made to the expected server and data in transit is protected from network layer eavesdropping attacks. -**Recommendation**: To [use HTTPS to ensure server/service authentication and protect data in transit from network layer eavesdropping attacks](/azure/app-service/configure-ssl-bindings#enforce-https), in the [Microsoft.Web/Sites resource properties](/azure/templates/microsoft.web/sites?tabs=json#siteproperties-object), add (or update) the *httpsOnly* property, setting its value to `true`. +**Recommendation**: To [use HTTPS to ensure server/service authentication and protect data in transit from network layer eavesdropping attacks](../app-service/configure-ssl-bindings.md#enforce-https), in the [Microsoft.Web/Sites resource properties](/azure/templates/microsoft.web/sites?tabs=json#siteproperties-object), add (or update) the *httpsOnly* property, setting its value to `true`. **Severity level**: 2 Web apps should require HTTPS to ensure connections are made to the expected ser Web apps should require the latest TLS version. **Recommendation**: -To [enforce the latest TLS version](/azure/app-service/configure-ssl-bindings#enforce-tls-versions), in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), add (or update) the *minTlsVersion* property, setting its value to `1.2`. 
+To [enforce the latest TLS version](../app-service/configure-ssl-bindings.md#enforce-tls-versions), in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), add (or update) the *minTlsVersion* property, setting its value to `1.2`. **Severity level**: 1 Cross-Origin Resource Sharing (CORS) should not allow all domains to access your For enhanced authentication security, use a managed identity. On Azure, managed identities eliminate the need for developers to have to manage credentials by providing an identity for the Azure resource in Azure AD and using it to obtain Azure Active Directory (Azure AD) tokens. -**Recommendation**: To [use Managed Identity](/azure/app-service/overview-managed-identity?tabs=dotnet), in the [Microsoft.Web/sites resource managed identity property](/azure/templates/microsoft.web/sites?tabs=json#ManagedServiceIdentity), add (or update) the *type* property, setting its value to `"SystemAssigned"` or `"UserAssigned"` and providing any necessary identifiers for the identity if required. +**Recommendation**: To [use Managed Identity](../app-service/overview-managed-identity.md?tabs=dotnet), in the [Microsoft.Web/sites resource managed identity property](/azure/templates/microsoft.web/sites?tabs=json#ManagedServiceIdentity), add (or update) the *type* property, setting its value to `"SystemAssigned"` or `"UserAssigned"` and providing any necessary identifiers for the identity if required. **Severity level**: 2 For enhanced authentication security, use a managed identity. On Azure, managed Audit built-in roles such as 'Owner, Contributor, Reader' instead of custom RBAC roles, which are error prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling. -**Recommendation**: [Use built-in roles such as 'Owner, Contributer, Reader' instead of custom RBAC roles](/azure/role-based-access-control/built-in-roles) +**Recommendation**: [Use built-in roles such as 'Owner, Contributor, Reader' instead of custom RBAC roles](../role-based-access-control/built-in-roles.md) **Severity level**: 3 Audit built-in roles such as 'Owner, Contributor, Reader' instead of custom RBAC It is important to enable encryption of Automation account variable assets when storing sensitive data. This step can only be taken at creation time. If you have Automation Account Variables storing sensitive data that are not already encrypted, then you will need to delete them and recreate them as encrypted variables. To apply encryption of the Automation account variable assets, in Azure PowerShell - run [the following command](/powershell/module/az.automation/set-azautomationvariable?view=azps-5.4.0&viewFallbackFrom=azps-1.4.0): `Set-AzAutomationVariable -AutomationAccountName '{AutomationAccountName}' -Encrypted $true -Name '{VariableName}' -ResourceGroupName '{ResourceGroupName}' -Value '{Value}'` -**Recommendation**: [Enable encryption of Automation account variable assets](/azure/automation/shared-resources/variables?tabs=azure-powershell) +**Recommendation**: [Enable encryption of Automation account variable assets](../automation/shared-resources/variables.md?tabs=azure-powershell) **Severity level**: 1 Enable only connections via SSL to Redis Cache. Use of secure connections ensure To ensure that only applications from allowed networks, machines, or subnets can access your cluster, restrict access to your Kubernetes Service Management API server.
It is recommended to limit access to authorized IP ranges to ensure that only applications from allowed networks can access the cluster. -**Recommendation**: [Restrict access by defining authorized IP ranges](/azure/aks/api-server-authorized-ip-ranges) or [set up your API servers as private clusters](/azure/aks/private-clusters) +**Recommendation**: [Restrict access by defining authorized IP ranges](../aks/api-server-authorized-ip-ranges.md) or [set up your API servers as private clusters](../aks/private-clusters.md) **Severity level**: 1 To ensure that only applications from allowed networks, machines, or subnets can To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. To use Role-Based Access Control (RBAC), you must recreate your Kubernetes Service cluster and enable RBAC during the creation process. -**Recommendation**: [Enable RBAC in Kubernetes clusters](/azure/aks/operator-best-practices-identity#use-azure-rbac) +**Recommendation**: [Enable RBAC in Kubernetes clusters](../aks/operator-best-practices-identity.md#use-azure-rbac) **Severity level**: 1 To provide granular filtering on the actions that users can perform, use Role-Ba Upgrade your Kubernetes service cluster to a later Kubernetes version to protect against known vulnerabilities in your current Kubernetes version. [Vulnerability CVE-2019-9946](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9946) has been patched in Kubernetes versions 1.11.9+, 1.12.7+, 1.13.5+, and 1.14.0+. Running on older versions could mean you are not using the latest security classes. Usage of such old classes and types can make your application vulnerable. -**Recommendation**: To [upgrade Kubernetes service clusters](/azure/aks/upgrade-cluster), in the [Microsoft.ContainerService/managedClusters resource properties](/azure/templates/microsoft.containerservice/managedclusters?tabs=json#managedclusterproperties-object), update the *kubernetesVersion* property, setting its value to one of the following versions (making sure to specify the minor version number): 1.11.9+, 1.12.7+, 1.13.5+, or 1.14.0+. +**Recommendation**: To [upgrade Kubernetes service clusters](../aks/upgrade-cluster.md), in the [Microsoft.ContainerService/managedClusters resource properties](/azure/templates/microsoft.containerservice/managedclusters?tabs=json#managedclusterproperties-object), update the *kubernetesVersion* property, setting its value to one of the following versions (making sure to specify the minor version number): 1.11.9+, 1.12.7+, 1.13.5+, or 1.14.0+. **Severity level**: 1 Upgrade your Kubernetes service cluster to a later Kubernetes version to protect Service Fabric clusters should only use Azure Active Directory for client authentication. A Service Fabric cluster offers several entry points to its management functionality, including the web-based Service Fabric Explorer, Visual Studio and PowerShell. Access to the cluster must be controlled using AAD. -**Recommendation**: [Enable AAD client authentication on your Service Fabric clusters](/azure/service-fabric/service-fabric-cluster-creation-setup-aad) +**Recommendation**: [Enable AAD client authentication on your Service Fabric clusters](../service-fabric/service-fabric-cluster-creation-setup-aad.md) **Severity level**: 1 Learn more about [Defender for DevOps](defender-for-devops-introduction.md).
Learn how to [connect your GitHub](quickstart-onboard-github.md) to Defender for Cloud. -Learn how to [connect your Azure DevOps](quickstart-onboard-devops.md) to Defender for Cloud. +Learn how to [connect your Azure DevOps](quickstart-onboard-devops.md) to Defender for Cloud. |
defender-for-cloud | Integration Defender For Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md | For more information about migrating servers from Defender for Endpoint to Defen | Release state: | General availability (GA) | | Pricing: | Requires [Microsoft Defender for Servers Plan 1 or Plan 2](defender-for-servers-introduction.md#defender-for-servers-plans) | | Supported environments: | :::image type="icon" source="./medi) (formerly Windows Virtual Desktop), [Windows 10 Enterprise multi-session](../virtual-desktop/windows-10-multisession-faq.yml) (formerly Enterprise for Virtual Desktops)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure VMs running Windows 11 or Windows 10 (except if running Azure Virtual Desktop or Windows 10 Enterprise multi-session) |-| Required roles and permissions: | * To enable/disable the integration: **Security admin** or **Owner**<br>* To view Defender for Endpoint alerts in Defender for Cloud: **Security reader**, **Reader**, **Resource Group Contributor**, **Resource Group Owner**, **Security admin**, **Subscription owner**, or **Subscription Contributor** | +| Required roles and permissions: | - To enable/disable the integration: **Security admin** or **Owner**<br>- To view Defender for Endpoint alerts in Defender for Cloud: **Security reader**, **Reader**, **Resource Group Contributor**, **Resource Group Owner**, **Security admin**, **Subscription owner**, or **Subscription Contributor** | | Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government (Windows only)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet <br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts <br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected GCP projects | ## Benefits of integrating Microsoft Defender for Endpoint with Defender for Cloud If you enabled the integration, but still don't see the extension running on you ### What are the licensing requirements for Microsoft Defender for Endpoint?-Defender for Endpoint is included at no extra cost with **Microsoft Defender for Servers**. Alternatively, it can be purchased separately for 50 machines or more. +Licenses for Defender for Endpoint for servers are included with **Microsoft Defender for Servers**. Alternatively, you can [purchase licenses for Defender for Endpoint](https://www.microsoft.com/en-us/security/business/get-started/contact-us) for servers separately. ### Do I need to buy a separate anti-malware solution to protect my machines? No. With MDE integration in Defender for Servers, you'll also get malware protection on your machines. Full instructions for switching from a non-Microsoft endpoint solution are avail ### Which Microsoft Defender for Endpoint plan is supported in Defender for Servers? -Defender for Servers Plan 1 and Plan 2 provides the capabilities of [Microsoft Defender for Endpoint Plan 2](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint). --> +Defender for Servers Plan 1 and Plan 2 provide the capabilities of [Microsoft Defender for Endpoint Plan 2](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint). ## Next steps |
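If you'd rather script the enablement described above than use the portal toggle, the integration is controlled through a `Microsoft.Security/settings` resource. Treat the following as an unverified sketch: the setting name `WDATP`, the `DataExportSettings` kind, and the API version are my assumptions from the public Settings REST API, so confirm them against the current reference before relying on this. The body is sent with `PUT https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Security/settings/WDATP?api-version=2022-05-01`:

```json
{
  "kind": "DataExportSettings",
  "properties": {
    "enabled": true
  }
}
```

Setting `enabled` to `false` with the same call would turn the integration off, mirroring the portal's enable/disable switch.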
defender-for-cloud | Plan Multicloud Security Determine Business Needs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-determine-business-needs.md | Defender for Cloud provides a single management point for protecting Azure, on-p The diagram below shows the Defender for Cloud architecture. Defender for Cloud can: - Provide unified visibility and recommendations across multicloud environments. There's no need to switch between different portals to see the status of your resources.- Compare your resource configuration against industry standards, regulations, and benchmarks. [Learn more](/azure/defender-for-cloud/update-regulatory-compliance-packages) about standards.+- Compare your resource configuration against industry standards, regulations, and benchmarks. [Learn more](./update-regulatory-compliance-packages.md) about standards. - Help security analysts to triage alerts based on threats/suspicious activities. Workload protection capabilities can be applied to critical workloads for threat detection and advanced defenses. :::image type="content" source="media/planning-multicloud-security/architecture.png" alt-text="Diagram that shows multicloud architecture." lightbox="media/planning-multicloud-security/architecture.png"::: ## Next steps -In this article, you've learned how to determine your business needs when designing a multicloud security solution. Continue with the next step to [determine an adoption strategy](plan-multicloud-security-define-adoption-strategy.md). +In this article, you've learned how to determine your business needs when designing a multicloud security solution. Continue with the next step to [determine an adoption strategy](plan-multicloud-security-define-adoption-strategy.md). |
defender-for-cloud | Plan Multicloud Security Determine Data Residency Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-determine-data-residency-requirements.md | There are data considerations around agents and extensions used by Defender for Agents are used in the Defender for Servers plan as follows: -- Non-Azure public clouds connect to Azure by leveraging the [Azure Arc](/azure/azure-arc/servers/overview) service.-- The [Azure Connected Machine agent](/azure/azure-arc/servers/agent-overview) is installed on multicloud machines that onboard as Azure Arc machines. Defender for Cloud should be enabled in the subscription in which the Azure Arc machines are located.-- Defender for Cloud leverages the Connected Machine agent to install extensions (such as Microsoft Defender for Endpoint) that are needed for [Defender for Servers](/azure/defender-for-cloud/defender-for-servers-introduction) functionality.-- [Log analytics agent/Azure Monitor Agent (AMA)](/azure/azure-monitor/agents/agents-overview) is needed for some [Defender for Service Plan 2](/azure/defender-for-cloud/defender-for-servers-introduction) functionality.+- Non-Azure public clouds connect to Azure by leveraging the [Azure Arc](../azure-arc/servers/overview.md) service. +- The [Azure Connected Machine agent](../azure-arc/servers/agent-overview.md) is installed on multicloud machines that onboard as Azure Arc machines. Defender for Cloud should be enabled in the subscription in which the Azure Arc machines are located. +- Defender for Cloud leverages the Connected Machine agent to install extensions (such as Microsoft Defender for Endpoint) that are needed for [Defender for Servers](./defender-for-servers-introduction.md) functionality. +- [Log analytics agent/Azure Monitor Agent (AMA)](../azure-monitor/agents/agents-overview.md) is needed for some [Defender for Service Plan 2](./defender-for-servers-introduction.md) functionality. - The agents can be provisioned automatically by Defender for Cloud. - When you enable auto-provisioning, you specify where to store collected data. Either in the default Log Analytics workspace created by Defender for Cloud, or in any other workspace in your subscription. [Learn more](/azure/defender-for-cloud/enable-data-collection?tabs=autoprovision-feature).- - If you select to continuously export data, you can drill into and configure the types of events and alerts that are saved. [Learn more](/azure/defender-for-cloud/continuous-export?tabs=azure-portal). + - If you select to continuously export data, you can drill into and configure the types of events and alerts that are saved. [Learn more](./continuous-export.md?tabs=azure-portal). - Log Analytics workspace: - You define the Log Analytics workspace you use at the subscription level. It can be either a default workspace, or a custom-created workspace.- - There are [several reasons](/azure/azure-monitor/logs/workspace-design) to select the default workspace rather than the custom workspace. + - There are [several reasons](../azure-monitor/logs/workspace-design.md) to select the default workspace rather than the custom workspace. - The location of the default workspace depends on your Azure Arc machine region. [Learn more](https://learn.microsoft.com/azure/defender-for-cloud/faq-data-collection-agents#where-is-the-default-log-analytics-workspace-created-). - The location of the custom-created workspace is set by your organization.
[Learn more](https://learn.microsoft.com/azure/defender-for-cloud/faq-data-collection-agents#how-can-i-use-my-existing-log-analytics-workspace-) about using a custom workspace. ## Defender for Containers plan -[Defender for Containers](/azure/defender-for-cloud/defender-for-containers-introduction) protects your multicloud container deployments running in: +[Defender for Containers](./defender-for-containers-introduction.md) protects your multicloud container deployments running in: - **Azure Kubernetes Service (AKS)** - Microsoft's managed service for developing, deploying, and managing containerized applications. - **Amazon Elastic Kubernetes Service (EKS) in a connected AWS account** - Amazon's managed service for running Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes. - **Google Kubernetes Engine (GKE) in a connected GCP project** - GoogleΓÇÖs managed environment for deploying, managing, and scaling applications using GCP infrastructure.-- **Other Kubernetes distributions** - using [Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/overview), which allows you to attach and configure Kubernetes clusters running anywhere, including other public clouds and on-premises.+- **Other Kubernetes distributions** - using [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md), which allows you to attach and configure Kubernetes clusters running anywhere, including other public clouds and on-premises. Defender for Containers has both agent-based and agentless components. - **Agentless collection of Kubernetes audit log data**: [Amazon CloudWatch](https://aws.amazon.com/cloudwatch/) or GCP Cloud Logging enables and collects audit log data, and sends the collected information to Defender for Cloud for further analysis. Data storage is based on the EKS cluster AWS region, in accordance with GDPR - EU and US.-- **Agent-based Azure Arc-enabled Kubernetes**: Connects your EKS and GKE clusters to Azure using [Azure Arc agents](/azure/azure-arc/kubernetes/conceptual-agent-overview), so that theyΓÇÖre treated as Azure Arc resources.+- **Agent-based Azure Arc-enabled Kubernetes**: Connects your EKS and GKE clusters to Azure using [Azure Arc agents](../azure-arc/kubernetes/conceptual-agent-overview.md), so that theyΓÇÖre treated as Azure Arc resources. - **Microsoft Defender extension**: A DaemonSet that collects signals from hosts using eBPF technology, and provides runtime protection. The extension is registered with a Log Analytics workspace and used as a data pipeline. The audit log data isn't stored in the Log Analytics workspace. - **Azure Policy extension**: configuration information is collected by the Azure Policy add-on. - The Azure Policy add-on extends the open-source Gatekeeper v3 admission controller webhook for Open Policy Agent. Defender for Containers has both agent-based and agentless components. ## Defender for Databases plan -For the [Defender for Databases plan](/azure/defender-for-cloud/quickstart-enable-database-protections) in a multicloud scenario, you leverage Azure Arc to manage the multicloud SQL Server databases. The SQL Server instance is installed in a virtual or physical machine connected to Azure Arc. +For the [Defender for Databases plan](./quickstart-enable-database-protections.md) in a multicloud scenario, you leverage Azure Arc to manage the multicloud SQL Server databases. The SQL Server instance is installed in a virtual or physical machine connected to Azure Arc. 
-- The [Azure Connected Machine agent](/azure/azure-arc/servers/agent-overview) is installed on machines connected to Azure Arc.+- The [Azure Connected Machine agent](../azure-arc/servers/agent-overview.md) is installed on machines connected to Azure Arc. - The Defender for Databases plan should be enabled in the subscription in which the Azure Arc machines are located. - The Log Analytics agent for Microsoft Defender SQL Servers should be provisioned on the Azure Arc machines. It collects security-related configuration settings and event logs from machines. - Automatic SQL server discovery and registration needs to be set to On to allow SQL database discovery on the machines. When it comes to the actual AWS and GCP resources that are protected by Defender ## Next steps -In this article, you have learned how to determine your data residency requirements when designing a multicloud security solution. Continue with the next step to [determine compliance requirements](plan-multicloud-security-determine-compliance-requirements.md). +In this article, you have learned how to determine your data residency requirements when designing a multicloud security solution. Continue with the next step to [determine compliance requirements](plan-multicloud-security-determine-compliance-requirements.md). |
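Since the data residency discussion above hinges on agents deployed through Azure Arc, it may help to see what such an extension resource looks like in a template. This is a minimal sketch under stated assumptions: the machine name is a placeholder and the API version is my best guess for `Microsoft.HybridCompute/machines/extensions`, so verify both before deploying:

```json
{
  "type": "Microsoft.HybridCompute/machines/extensions",
  "apiVersion": "2022-03-10",
  "name": "my-arc-machine/AzureMonitorLinuxAgent",
  "location": "[resourceGroup().location]",
  "properties": {
    "publisher": "Microsoft.Azure.Monitor",
    "type": "AzureMonitorLinuxAgent",
    "autoUpgradeMinorVersion": true
  }
}
```

For Windows machines the extension type is `AzureMonitorWindowsAgent`. In practice, Defender for Cloud's auto-provisioning deploys the equivalent of this resource for you, which is usually the simpler path.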
defender-for-cloud | Plan Multicloud Security Determine Multicloud Dependencies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-determine-multicloud-dependencies.md | Defender for Cloud provides Cloud Security Posture Management (CSPM) features fo - After you onboard AWS and GCP, Defender for Cloud starts assessing your multicloud workloads against industry standards, and reports on your security posture. - CSPM features are agentless and don't rely on any other components except for successful onboarding of AWS/GCP connectors. - It's important to note that the Security Posture Management plan is turned on by default and can't be turned off.- Learn about the [IAM permissions](/azure/defender-for-cloud/quickstart-onboard-aws?pivots=env-settings) needed to discover AWS resources for CSPM.+- Learn about the [IAM permissions](./quickstart-onboard-aws.md?pivots=env-settings) needed to discover AWS resources for CSPM. ## CWPP In Defender for Cloud, you enable specific plans to get Cloud Workload Platform Protection (CWPP) features. Plans to protect multicloud resources include: -- [Defender for Servers](/azure/defender-for-cloud/defender-for-servers-introduction): Protect AWS/GCP Windows and Linux machines.-- [Defender for Containers](/azure/defender-for-cloud/defender-for-containers-introduction): Help secure your Kubernetes clusters with security recommendations and hardening, vulnerability assessments, and runtime protection.-- [Defender for SQL](/azure/defender-for-cloud/defender-for-sql-usage): Protect SQL databases running in AWS and GCP.+- [Defender for Servers](./defender-for-servers-introduction.md): Protect AWS/GCP Windows and Linux machines. +- [Defender for Containers](./defender-for-containers-introduction.md): Help secure your Kubernetes clusters with security recommendations and hardening, vulnerability assessments, and runtime protection. +- [Defender for SQL](./defender-for-sql-usage.md): Protect SQL databases running in AWS and GCP. ### What agent do I need?
-- **Defender for Endpoint capabilities**: The [Microsoft Defender for Endpoint](/azure/defender-for-cloud/integration-defender-for-endpoint?tabs=linux) agent provides comprehensive endpoint detection and response (EDR) capabilities.-- **Vulnerability assessment**: Using either the integrated [Qualys vulnerability scanner](/azure/defender-for-cloud/deploy-vulnerability-assessment-vm), or the [Microsoft threat and vulnerability management](/microsoft-365/security/defender-vulnerability-management/defender-vulnerability-management?view=o365-worldwide) solution.-- **Log Analytics agent/[Azure Monitor Agent](/azure/azure-monitor/agents/agents-overview) (AMA) (in preview)**: Collects security-related configuration information and event logs from machines.+To auto-provision the Azure Arc agent, the OS configuration agent on [GCP VM instances](./quickstart-onboard-gcp.md?pivots=env-settings) and the AWS Systems Manager (SSM) agent for [AWS EC2 instances](./quickstart-onboard-aws.md?pivots=env-settings) must be configured. [Learn more](../azure-arc/servers/agent-overview.md) about the agent. +- **Defender for Endpoint capabilities**: The [Microsoft Defender for Endpoint](./integration-defender-for-endpoint.md?tabs=linux) agent provides comprehensive endpoint detection and response (EDR) capabilities. +- **Vulnerability assessment**: Using either the integrated [Qualys vulnerability scanner](./deploy-vulnerability-assessment-vm.md), or the [Microsoft threat and vulnerability management](/microsoft-365/security/defender-vulnerability-management/defender-vulnerability-management?view=o365-worldwide) solution. +- **Log Analytics agent/[Azure Monitor Agent](../azure-monitor/agents/agents-overview.md) (AMA) (in preview)**: Collects security-related configuration information and event logs from machines. #### Check networking requirements -Machines must meet [network requirements](/azure/azure-arc/servers/network-requirements?tabs=azure-cloud) before onboarding the agents. Auto-provisioning is enabled by default. +Machines must meet [network requirements](../azure-arc/servers/network-requirements.md?tabs=azure-cloud) before onboarding the agents. Auto-provisioning is enabled by default. ### Defender for Containers Enabling Defender for Containers provides GKE and EKS clusters and underlying ho #### Review components-Defender for Containers -The required [components](/azure/defender-for-cloud/defender-for-containers-introduction) are as follows: +The required [components](./defender-for-containers-introduction.md) are as follows: - **Azure Arc Agent**: Connects your GKE and EKS clusters to Azure, and onboards the Defender Profile. - **Defender Profile**: Provides host-level runtime threat protection. To receive the full benefits of Defender for SQL on your multicloud workload, yo - **Azure Arc agent**: AWS and GCP machines connect to Azure using Azure Arc. The Azure Arc agent connects them. - The Azure Arc agent is needed to read security information on the host level and allow Defender for Cloud to deploy the agents/extensions required for complete protection.- - To auto-provision the Azure Arc agent, the OS configuration agent on [GCP VM instances](/azure/defender-for-cloud/quickstart-onboard-gcp?pivots=env-settings) and the AWS Systems Manager (SSM) agent for [AWS EC2 instances](/azure/defender-for-cloud/quickstart-onboard-aws?pivots=env-settings) must be configured. [Learn more](/azure/azure-arc/servers/agent-overview) about the agent. 
-- **Log Analytics agent/[Azure Monitor Agent](/azure/azure-monitor/agents/agents-overview) (AMA) (in preview)**: Collects security-related configuration information and event logs from machines+ - To auto-provision the Azure Arc agent, the OS configuration agent on [GCP VM instances](./quickstart-onboard-gcp.md?pivots=env-settings) and the AWS Systems Manager (SSM) agent for [AWS EC2 instances](./quickstart-onboard-aws.md?pivots=env-settings) must be configured. [Learn more](../azure-arc/servers/agent-overview.md) about the agent. +- **Log Analytics agent/[Azure Monitor Agent](../azure-monitor/agents/agents-overview.md) (AMA) (in preview)**: Collects security-related configuration information and event logs from machines - **Automatic SQL server discovery and registration**: Supports automatic discovery and registration of SQL servers ## Next steps -In this article, you have learned how to determine multicloud dependencies when designing a multicloud security solution. Continue with the next step to [automate connector deployment](plan-multicloud-security-automate-connector-deployment.md). +In this article, you have learned how to determine multicloud dependencies when designing a multicloud security solution. Continue with the next step to [automate connector deployment](plan-multicloud-security-automate-connector-deployment.md). |
defender-for-cloud | Plan Multicloud Security Determine Ownership Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-determine-ownership-requirements.md | Security leadership, most commonly under the CISO, should specify who's accoun - Although multicloud security might be divided across different areas of the business, teams should manage security across the multicloud estate. This is better than having different teams secure different cloud environments, for example, where one team manages Azure and another team manages AWS. Teams working across multicloud environments help to prevent sprawl within the organization. It also helps to ensure that security policies and compliance requirements are applied in every environment. - Often, teams that manage Defender for Cloud don't have privileges to remediate recommendations in workloads. For example, the Defender for Cloud team might not be able to remediate vulnerabilities in an AWS EC2 instance. The security team might be responsible for improving the security posture, but unable to fix the resulting security recommendations. To address this issue: - It's imperative to involve the AWS workload owners.- [Assigning owners with due dates](/azure/defender-for-cloud/governance-rules) and [defining governance rules](/azure/defender-for-cloud/governance-rules) creates accountability and transparency, as you drive processes to improve security posture. + [Assigning owners with due dates](./governance-rules.md) and [defining governance rules](./governance-rules.md) creates accountability and transparency, as you drive processes to improve security posture. - Depending on organizational models, we commonly see these options for central security teams operating with workload owners: - **Option 1: Centralized model.** Security controls are defined, deployed, and monitored by a central team. Security leadership, most commonly under the CISO, should specify who's accoun ## Next steps -In this article, you have learned how to determine ownership requirements when designing a multicloud security solution. Continue with the next step to [determine access control requirements](plan-multicloud-security-determine-access-control-requirements.md). +In this article, you have learned how to determine ownership requirements when designing a multicloud security solution. Continue with the next step to [determine access control requirements](plan-multicloud-security-determine-access-control-requirements.md). |
defender-for-cloud | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/policy-reference.md | definitions related to Microsoft Defender for Cloud. The following groupings of available: - The [initiatives](#microsoft-defender-for-cloud-initiatives) group lists the Azure Policy initiative definitions in the "Defender for Cloud" category.-- The [default initiative](#defender-for-clouds-default-initiative-microsoft-cloud-security-benchmark) group lists all the Azure Policy definitions that are part of Defender for Cloud's default initiative, [Microsoft Cloud Security Benchmark](/security/benchmark/azure/introduction). This Microsoft-authored, widely respected benchmark builds on controls from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/azure/) and the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/) with a focus on cloud-centric security.+- The [default initiative](#defender-for-clouds-default-initiative-microsoft-cloud-security-benchmark) group lists all the Azure Policy definitions that are part of Defender for Cloud's default initiative, [Microsoft cloud security benchmark](/security/benchmark/azure/introduction). This Microsoft-authored, widely respected benchmark builds on controls from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/azure/) and the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/) with a focus on cloud-centric security. - The [category](#microsoft-defender-for-cloud-category) group lists all the Azure Policy definitions in the "Defender for Cloud" category. For more information about security policies, see [Working with security policies](./tutorial-security-policy.md). For additional Azure Policy built-ins for other services, see [Azure Policy built-in definitions](../governance/policy/samples/built-in-policies.md). To learn about the built-in initiatives that are monitored by Defender for Cloud [!INCLUDE [azure-policy-reference-policyset-security-center](../../includes/policy/reference/bycat/policysets-security-center.md)] -## Defender for Cloud's default initiative (Microsoft Cloud Security Benchmark) +## Defender for Cloud's default initiative (Microsoft cloud security benchmark) To learn about the built-in policies that are monitored by Defender for Cloud, see the following table: |
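If you prefer to explore these groupings from the command line rather than the reference tables, the Azure CLI can query them directly. A minimal sketch; note that the built-in category name `Security Center` and the default initiative's well-known definition name (`1f3afdf9-d0c9-4c3d-847f-89da613e70a8`) are assumptions to verify in your tenant:

```console
# List built-in initiative definitions in the "Security Center" category
az policy set-definition list \
  --query "[?metadata.category=='Security Center'].{Name:name, DisplayName:displayName}" \
  --output table

# Show Defender for Cloud's default initiative (Microsoft cloud security benchmark)
az policy set-definition show --name 1f3afdf9-d0c9-4c3d-847f-89da613e70a8
```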
defender-for-cloud | Quickstart Onboard Aws | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md | The native cloud connector requires: > [!IMPORTANT] > To present the current status of your recommendations, the CSPM plan queries the AWS resource APIs several times a day. These read-only API calls incur no charges, but they *are* registered in CloudTrail if you've enabled a trail for read events. As explained in [the AWS documentation](https://aws.amazon.com/cloudtrail/pricing/), there are no additional charges for keeping one trail. If you're exporting the data out of AWS (for example, to an external SIEM), this increased volume of calls might also increase ingestion costs. In such cases, we recommend filtering out the read-only calls from the Defender for Cloud user or role ARN: `arn:aws:iam::[accountId]:role/CspmMonitorAws` (this is the default role name; confirm the role name configured in your account). -1. By default the **Servers** plan is set to **On**. This is necessary to extend Defender for server's coverage to your AWS EC2. Ensure you've fulfilled the [network requirements for Azure Arc](/azure/azure-arc/servers/network-requirements?tabs=azure-cloud). +1. By default, the **Servers** plan is set to **On**. This is necessary to extend Defender for Servers coverage to your AWS EC2 instances. Ensure you've fulfilled the [network requirements for Azure Arc](../azure-arc/servers/network-requirements.md?tabs=azure-cloud). - (Optional) Select **Configure** to edit the configuration as required. Connecting your AWS account is part of the multicloud experience available in Mi - [Security recommendations for AWS resources - a reference guide](recommendations-reference-aws.md). - [Connect your GCP projects to Microsoft Defender for Cloud](quickstart-onboard-gcp.md)-- [Troubleshoot your multicloud connectors](troubleshooting-guide.md#troubleshooting-the-native-multicloud-connector)+- [Troubleshoot your multicloud connectors](troubleshooting-guide.md#troubleshooting-the-native-multicloud-connector) |
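For the SIEM-export scenario called out in the note above, the filtering can happen client-side before ingestion. One possible approach is a jq filter over exported CloudTrail log files; this is a sketch, assuming the standard CloudTrail file layout (`{"Records": [...]}`) and the default `CspmMonitorAws` role name:

```console
# Drop CloudTrail records generated by the Defender for Cloud monitoring role
# before forwarding to the SIEM. events.json is a standard CloudTrail log file.
jq '.Records | map(select((.userIdentity.arn // "") | contains("CspmMonitorAws") | not))' \
  events.json > filtered.json
```

Equivalent server-side filtering can often be configured in the SIEM's own ingestion pipeline instead.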
defender-for-cloud | Quickstart Onboard Devops | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-devops.md | -To protect your ADO-based resources, you can connect your ADO organizations on the environment settings page. This page provides a simple onboarding experience (including auto discovery). +To protect your ADO-based resources, you can connect your ADO organizations on the environment settings page in Microsoft Defender for Cloud. This page provides a simple onboarding experience (including auto discovery). By connecting your Azure DevOps repositories to Defender for Cloud, you'll extend Defender for Cloud's enhanced security features to your ADO resources. These features include: -- **Defender for Cloud's CSPM features** - Assesses your Azure DevOps resources according to ADO-specific security recommendations. These recommendations are also included in your secure score. Resources will be assessed for compliance with built-in standards that are specific to DevOps. Defender for Cloud's [asset inventory page](asset-inventory.md) is a multicloud enabled feature that helps you manage your Azure DevOps resources alongside your Azure resources.+- **Defender for Cloud's Cloud Security Posture Management (CSPM) features** - Assesses your Azure DevOps resources according to ADO-specific security recommendations. You can also learn about all the [recommendations for DevOps](recommendations-reference.md) resources. Resources are assessed for compliance with built-in standards that are specific to DevOps. Defender for Cloud's [asset inventory page](asset-inventory.md) is a multicloud enabled feature that helps you manage your Azure DevOps resources alongside your Azure resources. -- **Microsoft Defender for DevOps** - Extends Defender for Cloud's threat detection capabilities and advanced defenses to your Azure DevOps resources.---You can view all of the [recommendations for DevOps](recommendations-reference.md) resources. +- **Defender for Cloud's Workload Protection features** - Extends Defender for Cloud's threat detection capabilities and advanced defenses to your Azure DevOps resources. ## Prerequisites - An Azure account with Defender for Cloud onboarded. If you don't already have an Azure account, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). + ## Availability | Aspect | Details | |--|--| | Release state: | Preview <br> The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. |-| Pricing: | The Defender for DevOps plan is free during the Preview. <br><br> After which it will be billed. Pricing to be determined at a later date. | -| Required roles and permissions: | **Contributor** on the relevant Azure subscription <br> **Security Admin Role** in Defender for Cloud <br> **Azure DevOps Organization Administrator** <br> Third-party applications can gain access using an OAuth, which must be set to `On` . [Learn more about Oath](/azure/devops/organizations/accounts/change-application-access-policies?view=azure-devops)| +| Pricing: | For pricing, see the Defender for Cloud [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/?v=17.23h#pricing).
| +| Required permissions: | **- Azure account:** with permissions to sign in to the Azure portal <br> **- Contributor:** on the Azure subscription where the connector will be created <br> **- Security Admin Role:** in Defender for Cloud <br> **- Organization Administrator:** in Azure DevOps <br> - In Azure DevOps, configure: Third-party applications gain access via OAuth, which must be set to `On`. [Learn more about OAuth](/azure/devops/organizations/accounts/change-application-access-policies?view=azure-devops)| +| Regions: | Central US | | Clouds: | :::image type="icon" source="media/quickstart-onboard-github/check-yes.png" border="false"::: Commercial clouds <br> :::image type="icon" source="media/quickstart-onboard-github/x-no.png" border="false"::: National (Azure Government, Azure China 21Vianet) | ## Connect your Azure DevOps organization You can view all of the [recommendations for DevOps](recommendations-reference.m 1. Select **Next: Authorize connection**. 1. Select **Authorize**.+ + > [!NOTE] + > The authorization automatically signs you in using the session from your browser tab. After you select **Authorize**, if you don't see the Azure DevOps organizations you expect to see, check whether you are logged in to Microsoft Defender for Cloud in one browser tab and logged in to Azure DevOps in another browser tab. 1. In the pop-up screen, read the list of permission requests, and select **Accept**. You can view all of the [recommendations for DevOps](recommendations-reference.m - Select your relevant project(s) from the drop-down menu. > [!NOTE]- > If you select your relevant project(s) from the drop down menu, you will also need select to auto discover repositories or select individual repositories. + > If you select your relevant project(s) from the drop-down menu, you will also need to choose whether to auto-discover repositories or select individual repositories. 1. Select **Next: Review and create**. |
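After the connector is created, you may want to confirm it from the command line. A sketch, assuming the connectors surface through the preview `Microsoft.SecurityDevOps` resource provider; verify the provider and type names in your subscription before relying on them:

```console
# List Azure DevOps connectors created by Defender for DevOps in this subscription
# (the resource type below is an assumption from the preview-era provider)
az resource list \
  --resource-type "Microsoft.SecurityDevOps/azureDevOpsConnectors" \
  --output table
```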
defender-for-cloud | Quickstart Onboard Gcp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md | To have full visibility to Microsoft Defender for Servers security content, ensu - **Manual installation** - You can manually connect your VM instances to Azure Arc for servers. Instances in projects with Defender for Servers plan enabled that are not connected to Arc will be surfaced by the recommendation "GCP VM instances should be connected to Azure Arc". Use the "Fix" option offered in this recommendation to install Azure Arc on the selected machines. -- Ensure you've fulfilled the [network requirements for Azure Arc](/azure/azure-arc/servers/network-requirements?tabs=azure-cloud).+- Ensure you've fulfilled the [network requirements for Azure Arc](../azure-arc/servers/network-requirements.md?tabs=azure-cloud). - Additional extensions should be enabled on the Arc-connected machines. - Microsoft Defender for Endpoint Connecting your GCP project is part of the multicloud experience available in Mi - [Connect your AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md) - [Google Cloud resource hierarchy](https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy) - Learn about the Google Cloud resource hierarchy in Google's online docs-- [Troubleshoot your multicloud connectors](troubleshooting-guide.md#troubleshooting-the-native-multicloud-connector)+- [Troubleshoot your multicloud connectors](troubleshooting-guide.md#troubleshooting-the-native-multicloud-connector) |
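For the manual installation option mentioned above, connecting a GCP VM instance to Azure Arc follows the standard Connected Machine onboarding flow. A minimal sketch, assuming the Connected Machine agent package is already installed on the instance and a service principal has been granted the Azure Connected Machine Onboarding role; all angle-bracketed values are placeholders:

```console
# Run on the GCP VM instance to connect it to Azure Arc
azcmagent connect \
  --service-principal-id "<app-id>" \
  --service-principal-secret "<secret>" \
  --tenant-id "<tenant-id>" \
  --subscription-id "<subscription-id>" \
  --resource-group "<resource-group>" \
  --location "<azure-region>"
```

Once connected, the instance appears as an Azure Arc-enabled server and drops off the "GCP VM instances should be connected to Azure Arc" recommendation.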
defender-for-cloud | Quickstart Onboard Github | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-github.md | -To protect your GitHub-based resources, you can connect your GitHub organizations on the environment settings page. This page provides a simple onboarding experience (including auto discovery). +To protect your GitHub-based resources, you can connect your GitHub organizations on the environment settings page in Microsoft Defender for Cloud. This page provides a simple onboarding experience (including auto discovery). By connecting your GitHub repositories to Defender for Cloud, you'll extend Defender for Cloud's enhanced security features to your GitHub resources. These features include: -- **Defender for Cloud's CSPM features** - Assesses your GitHub resources according to GitHub-specific security recommendations. These recommendations are also included in your secure score. Resources will be assessed for compliance with built-in standards that are specific to DevOps. Defender for Cloud's [asset inventory page](asset-inventory.md) is a multicloud enabled feature that helps you manage your GitHub resources alongside your Azure resources.+- **Defender for Cloud's Cloud Security Posture Management (CSPM) features** - Assesses your GitHub resources according to GitHub-specific security recommendations. You can also learn about all of the [recommendations for DevOps](recommendations-reference.md) resources. Resources are assessed for compliance with built-in standards that are specific to DevOps. Defender for Cloud's [asset inventory page](asset-inventory.md) is a multicloud enabled feature that helps you manage your GitHub resources alongside your Azure resources. -- **Microsoft Defender for DevOps** - Extends Defender for Cloud's threat detection capabilities and advanced defenses to your GitHub resources.--You can view all of the [recommendations for DevOps](recommendations-reference.md) resources. +- **Defender for Cloud's Cloud Workload Protection features** - Extends Defender for Cloud's threat detection capabilities and advanced defenses to your GitHub resources. ## Prerequisites - - An Azure account with Defender for Cloud onboarded. If you don't already have an Azure account, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). + ## Availability | Aspect | Details | |--|--| | Release state: | Preview <br> The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. |-| Required roles and permissions: | **Contributor** on the relevant Azure subscription <br> **Security Admin Role** in Defender for Cloud <br> **GitHub Organization Administrator** | +| Pricing: | For pricing, see the Defender for Cloud [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/?v=17.23h#pricing).
+| Required permissions: | **- Azure account:** with permissions to sign in to the Azure portal <br> **- Contributor:** on the Azure subscription where the connector will be created <br> **- Security Admin Role:** in Defender for Cloud <br> **- Organization Administrator:** in GitHub | +| Regions: | Central US | | Clouds: | :::image type="icon" source="media/quickstart-onboard-github/check-yes.png" border="false"::: Commercial clouds <br> :::image type="icon" source="media/quickstart-onboard-github/x-no.png" border="false"::: National (Azure Government, Azure China 21Vianet) | ## Connect your GitHub account You can view all of the [recommendations for DevOps](recommendations-reference.m 1. Enter a name, select your subscription, resource group, and region. -1. Select a **region**, **subscription**, and **resource group** from the drop-down menus. - > [!NOTE] > The subscription will be the location where Defender for DevOps will create and store the GitHub connection. You can view all of the [recommendations for DevOps](recommendations-reference.m 1. Select **Authorize** to grant your Azure subscription access to your GitHub repositories. Sign in, if necessary, with an account that has permissions to the repositories you want to protect. :::image type="content" source="media/quickstart-onboard-github/authorize.png" alt-text="Screenshot that shows where the authorize button is located on the screen."::: + > [!NOTE] + > The authorization automatically signs you in using the session from your browser tab. After you select **Authorize**, if you don't see the GitHub organizations you expect to see, check whether you are logged in to Microsoft Defender for Cloud in one browser tab and logged in to GitHub in another browser tab. 1. Select **Install**. When the process completes, the GitHub connector appears on your Environment set :::image type="content" source="media/quickstart-onboard-github/github-connector.png" alt-text="Screenshot showing the Environment settings page with the GitHub connector now connected." lightbox="media/quickstart-onboard-github/github-connector.png"::: -The Defender for DevOps service automatically discovers the repositories you select and analyzes them for any security issues. The Inventory page populates with your selected repositories, and the Recommendations page shows any security issues related to a selected repository. +The Defender for DevOps service automatically discovers the repositories you selected and analyzes them for any security issues. The Inventory page populates with your selected repositories, and the Recommendations page shows any security issues related to a selected repository. This can take up to three hours on average. ## Learn more -- You can learn more about [how Azure and GitHub integrate](https://docs.microsoft.com/azure/developer/github/).+- You can learn more about [how Azure and GitHub integrate](/azure/developer/github/). - Learn about [security hardening practices for GitHub Actions](https://docs.github.com/actions/security-guides/security-hardening-for-github-actions). Learn more about [Defender for DevOps](defender-for-devops-introduction.md). Learn how to [configure the MSDO GitHub action](github-action.md). -Learn how to [configure pull request annotations](tutorial-enable-pull-request-annotations.md) in Defender for Cloud. +Learn how to [configure pull request annotations](tutorial-enable-pull-request-annotations.md) in Defender for Cloud. |
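To double-check the organization-side half of the onboarding, you can list the GitHub Apps installed on your organization. A sketch using the GitHub CLI; it assumes you are authenticated as an organization admin, and `<org>` is a placeholder for your organization name:

```console
# List the slugs of GitHub Apps installed on the organization; after onboarding,
# the Defender for DevOps (Microsoft Security DevOps) app should appear here
gh api /orgs/<org>/installations --jq '.installations[].app_slug'
```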
defender-for-cloud | Regulatory Compliance Dashboard | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/regulatory-compliance-dashboard.md | Last updated 09/21/2022 Microsoft Defender for Cloud helps streamline the process for meeting regulatory compliance requirements, using the **regulatory compliance dashboard**. Defender for Cloud continuously assesses your hybrid cloud environment to analyze the risk factors according to the controls and best practices in th |