Updates from: 09/03/2021 03:05:55
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Troubleshoot Device Dsregcmd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/troubleshoot-device-dsregcmd.md
Title: Troubleshoot using the dsregcmd command - Azure Active Directory
-description: Using the output from dsregcmd to understand the state of devices in Azure AD
-
+ Title: Troubleshoot devices by using the dsregcmd command - Azure Active Directory
+description: This article covers how to use the output from the dsregcmd command to understand the state of devices in Azure AD.
-# Troubleshooting devices using the dsregcmd command
+# Troubleshoot devices by using the dsregcmd command
-The dsregcmd /status utility must be run as a domain user account.
+This article covers how to use the output from the `dsregcmd` command to understand the state of devices in Azure Active Directory (Azure AD). The `dsregcmd /status` utility must be run as a domain user account.
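As a quick orientation, here's one way to capture the full status output for review. This is a minimal sketch, run from a standard (non-elevated) PowerShell session under the domain user account; the output file path is just an illustrative choice.

```
# Run the status utility as the signed-in domain user.
dsregcmd /status

# Optionally capture the output to a file for later review or a support case.
dsregcmd /status > "$env:USERPROFILE\Desktop\dsregcmd-status.txt"
```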
## Device state
-This section lists the device join state parameters. The table below lists the criteria for the device to be in various join states.
+This section lists the device join state parameters. The criteria that are required for the device to be in various join states are listed in the following table:
| AzureAdJoined | EnterpriseJoined | DomainJoined | Device state |
| --- | --- | --- | --- |
| YES | NO | NO | Azure AD Joined |
| NO | NO | YES | Domain Joined |
| YES | NO | YES | Hybrid AD Joined |
| NO | YES | YES | On-premises DRS Joined |
+| | |
> [!NOTE]
-> Workplace Join (Azure AD registered) state is displayed in the "User State" section
+> The Workplace Joined (Azure AD registered) state is displayed in the ["User state"](#user-state) section.
-- **AzureAdJoined:** Set to "YES" if the device is Joined to Azure AD. "NO" otherwise.
-- **EnterpriseJoined:** Set to "YES" if the device is Joined to an on-premises DRS. A device cannot be both EnterpriseJoined and AzureAdJoined.
-- **DomainJoined:** Set to "YES" if the device is joined to a domain (AD).
-- **DomainName:** Set to the name of the domain if the device is joined to a domain.
+- **AzureAdJoined**: Set the state to *YES* if the device is joined to Azure AD. Otherwise, set the state to *NO*.
+- **EnterpriseJoined**: Set the state to *YES* if the device is joined to an on-premises Device Registration Service (DRS). A device can't be both EnterpriseJoined and AzureAdJoined.
+- **DomainJoined**: Set the state to *YES* if the device is joined to a domain (Active Directory).
+- **DomainName**: Set the state to the name of the domain if the device is joined to a domain.
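To spot-check only the join state flags described in this section, the status output can be filtered. A minimal PowerShell sketch; the patterns simply match the field names listed above.

```
# Show only the three join state flags from the status output.
dsregcmd /status | Select-String -Pattern 'AzureAdJoined','EnterpriseJoined','DomainJoined'
```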
### Sample device state output
This section lists the device join state parameters. The table below lists the c
## Device details
-Displayed only when the device is Azure AD joined or hybrid Azure AD joined (not Azure AD registered). This section lists device identifying details stored in Azure AD.
-
-- **DeviceId:** Unique ID of the device in the Azure AD tenant
-- **Thumbprint:** Thumbprint of the device certificate
-- **DeviceCertificateValidity:** Validity of the device certificate
-- **KeyContainerId:** - ContainerId of the device private key associated with the device certificate
-- **KeyProvider:** KeyProvider (Hardware/Software) used to store the device private key.
-- **TpmProtected:** "YES" if the device private key is stored in a Hardware TPM.
-
-> [!NOTE]
-> **DeviceAuthStatus** field was added in **Windows 10 May 2021 Update (version 21H1)**.
--- **DeviceAuthStatus:** Performs a check to determine device's health in Azure AD.
-"SUCCESS" if the device is present and Enabled in Azure AD.
-"FAILED. Device is either disabled or deleted" if the device is either disabled or deleted, [More Info](faq.yml#why-do-my-users-see-an-error-message-saying--your-organization-has-deleted-the-device--or--your-organization-has-disabled-the-device--on-their-windows-10-devices).
-"FAILED. ERROR" if the test was unable to run. This test requires network connectivity to Azure AD.
+The state is displayed only when the device is Azure AD-joined or hybrid Azure AD-joined (not Azure AD-registered). This section lists device-identifying details that are stored in Azure AD.
+
+- **DeviceId**: The unique ID of the device in the Azure AD tenant.
+- **Thumbprint**: The thumbprint of the device certificate.
+- **DeviceCertificateValidity**: The validity status of the device certificate.
+- **KeyContainerId**: The containerId of the device private key that's associated with the device certificate.
+- **KeyProvider**: The KeyProvider (Hardware/Software) that's used to store the device private key.
+- **TpmProtected**: The state is set to *YES* if the device private key is stored in a hardware Trusted Platform Module (TPM).
+- **DeviceAuthStatus**: Performs a check to determine the device's health in Azure AD. The health statuses are:
+ * *SUCCESS* if the device is present and enabled in Azure AD.
+ * *FAILED. Device is either disabled or deleted* if the device is either disabled or deleted. For more information about this issue, see [Azure Active Directory device management FAQ](faq.yml#why-do-my-users-see-an-error-message-saying--your-organization-has-deleted-the-device--or--your-organization-has-disabled-the-device--on-their-windows-10-devices).
+ * *FAILED. ERROR* if the test was unable to run. This test requires network connectivity to Azure AD.
+ > [!NOTE]
+ > The **DeviceAuthStatus** field was added in the Windows 10 May 2021 update (version 21H1).
### Sample device details output
Displayed only when the device is Azure AD joined or hybrid Azure AD joined (not
## Tenant details
-Displayed only when the device is Azure AD joined or hybrid Azure AD joined (not Azure AD registered). This section lists the common tenant details when a device is joined to Azure AD.
+The tenant details are displayed only when the device is Azure AD-joined or hybrid Azure AD-joined, not Azure AD-registered. This section lists the common tenant details that are displayed when a device is joined to Azure AD.
> [!NOTE]
-> If the MDM URLs in this section are empty, it indicates that the MDM was either not configured or current user is not in scope of MDM enrollment. Check the Mobility settings in Azure AD to review your MDM configuration.
+> If the mobile device management (MDM) URL fields in this section are empty, it indicates either that the MDM was not configured or that the current user isn't in scope of MDM enrollment. Check the Mobility settings in Azure AD to review your MDM configuration.
> [!NOTE]
-> Even if you see MDM URLs this does not mean that the device is managed by an MDM. The information is displayed if the tenant has MDM configuration for auto-enrollment even if the device itself is not managed.
+> Even if you see MDM URLs, this does not mean that the device is managed by an MDM. The information is displayed if the tenant has MDM configuration for auto-enrollment even if the device itself isn't managed.
### Sample tenant details output
Displayed only when the device is Azure AD joined or hybrid Azure AD joined (not
## User state
-This section lists the status of various attributes for the user currently logged into the device.
+This section lists the statuses of various attributes for users who are currently logged in to the device.
> [!NOTE]
-> The command must run in a user context to retrieve valid status.
-- **NgcSet:** Set to "YES" if a Windows Hello key is set for the current logged on user.
-- **NgcKeyId:** ID of the Windows Hello key if one is set for the current logged on user.
-- **CanReset:** Denotes if the Windows Hello key can be reset by the user.
-- **Possible values:** DestructiveOnly, NonDestructiveOnly, DestructiveAndNonDestructive, or Unknown if error.
-- **WorkplaceJoined:** Set to "YES" if Azure AD registered accounts have been added to the device in the current NTUSER context.
-- **WamDefaultSet:** Set to "YES" if a WAM default WebAccount is created for the logged in user. This field could display an error if dsregcmd /status is run from an elevated command prompt.
-- **WamDefaultAuthority:** Set to "organizations" for Azure AD.
-- **WamDefaultId:** Always "https://login.microsoft.com" for Azure AD.
-- **WamDefaultGUID:** The WAM provider's (Azure AD/Microsoft account) GUID for the default WAM WebAccount.
+> The command must run in a user context to retrieve a valid status.
+
+- **NgcSet**: Set the state to *YES* if a Windows Hello key is set for the current logged-in user.
+- **NgcKeyId**: The ID of the Windows Hello key if one is set for the current logged-in user.
+- **CanReset**: Denotes whether the Windows Hello key can be reset by the user.
+- **Possible values**: DestructiveOnly, NonDestructiveOnly, DestructiveAndNonDestructive, or Unknown if error.
+- **WorkplaceJoined**: Set the state to *YES* if Azure AD-registered accounts have been added to the device in the current NTUSER context.
+- **WamDefaultSet**: Set the state to *YES* if a Web Account Manager (WAM) default WebAccount is created for the logged-in user. This field could display an error if `dsregcmd /status` is run from an elevated command prompt.
+- **WamDefaultAuthority**: Set the state to *organizations* for Azure AD.
+- **WamDefaultId**: The value is always *https://login.microsoft.com* for Azure AD.
+- **WamDefaultGUID**: The WAM provider's (Azure AD/Microsoft account) GUID for the default WAM WebAccount.
### Sample user state output
This section lists the status of various attributes for the user currently logge
## SSO state
-This section can be ignored for Azure AD registered devices.
+You can ignore this section for Azure AD registered devices.
> [!NOTE]
-> The command must run in a user context to retrieve valid status for that user.
-- **AzureAdPrt:** Set to "YES" if a PRT is present on the device for the logged-on user.
-- **AzureAdPrtUpdateTime:** Set to the time in UTC when the PRT was last updated.
-- **AzureAdPrtExpiryTime:** Set to the time in UTC when the PRT is going to expire if it is not renewed.
-- **AzureAdPrtAuthority:** Azure AD authority URL
-- **EnterprisePrt:** Set to "YES" if the device has PRT from on-premises ADFS. For hybrid Azure AD joined devices the device could have PRT from both Azure AD and on-premises AD simultaneously. On-premises joined devices will only have an Enterprise PRT.
-- **EnterprisePrtUpdateTime:** Set to the time in UTC when the Enterprise PRT was last updated.
-- **EnterprisePrtExpiryTime:** Set to the time in UTC when the PRT is going to expire if it is not renewed.
-- **EnterprisePrtAuthority:** ADFS authority URL
-
->[!NOTE]
-> The following PRT diagnostics fields were added in **Windows 10 May 2021 Update (version 21H1)**
+> The command must run in a user context to retrieve that user's valid status.
+
+- **AzureAdPrt**: Set the state to *YES* if a Primary Refresh Token (PRT) is present on the device for the logged-in user.
+- **AzureAdPrtUpdateTime**: Set the state to the time, in Coordinated Universal Time (UTC), when the PRT was last updated.
+- **AzureAdPrtExpiryTime**: Set the state to the time, in UTC, when the PRT is going to expire if it isn't renewed.
+- **AzureAdPrtAuthority**: The Azure AD authority URL
+- **EnterprisePrt**: Set the state to *YES* if the device has a PRT from on-premises
+Active Directory Federation Services (AD FS). For hybrid Azure AD-joined devices, the device could have a PRT from both Azure AD and on-premises Active Directory simultaneously. On-premises joined devices will have only an Enterprise PRT.
+- **EnterprisePrtUpdateTime**: Set the state to the time, in UTC, when the Enterprise PRT was last updated.
+- **EnterprisePrtExpiryTime**: Set the state to the time, in UTC, when the PRT is going to expire if it isn't renewed.
+- **EnterprisePrtAuthority**: The AD FS authority URL
>[!NOTE]
-> Diagnostic info displayed under **AzureAdPrt** field are for AzureAD PRT acquisition/refresh and diagnostic info displayed under **EnterprisePrt** and for Enterprise PRT acquisition/refresh respectively.
+> The following PRT diagnostics fields were added in the Windows 10 May 2021 update (version 21H1).
>[!NOTE]
->Diagnostic is info is displayed only if the acquisition/refresh failure happened after the the last successful PRT update time (AzureAdPrtUpdateTime/EnterprisePrtUpdateTime).
->On a shared device this diagnostic info could be form a different user's logon attempt.
--- **AcquirePrtDiagnostics:** Set to "PRESENT" if acquire PRT diagnostic info is present in the logs.
-This field is skipped if no diagnostics info is available.
-- **Previous Prt Attempt:** Local time in UTC at which the failed PRT attempt occurred.
-- **Attempt Status:** Client error code returned (HRESULT).
-- **User Identity:** UPN of the user for whom the PRT attempt happened.
-- **Credential Type:** Credential used to acquire/refresh PRT. Common credential types are Password and NGC (Windows Hello).
-- **Correlation ID:** Correlation ID sent by the server for the failed PRT attempt.
-- **Endpoint URI:** Last endpoint accessed before the failure.
-- **HTTP Method:** HTTP method used to access the endpoint.
-- **HTTP Error:** WinHttp transport error code. WinHttp errors can be found [here](/windows/win32/winhttp/error-messages).
-- **HTTP Status:** HTTP status returned by the endpoint.
-- **Server Error Code:** Error code from server.
-- **Server Error Description:** Error message from server.
-- **RefreshPrtDiagnostics:** Set to "PRESENT" if acquire PRT diagnostic info is present in the logs.
-This field is skipped if no diagnostics info is available.
-The diagnostic info fields are same as **AcquirePrtDiagnostics**
+> * The diagnostics information that's displayed in the **AzureAdPrt** field is for Azure AD PRT acquisition or refresh, and the diagnostics information that's displayed in the **EnterprisePrt** field is for Enterprise PRT acquisition or refresh.
+> * The diagnostics information is displayed only if the acquisition or refresh failure happened after the last successful PRT update time (AzureAdPrtUpdateTime/EnterprisePrtUpdateTime).
> * On a shared device, this diagnostics information could be from a different user's login attempt.
+
+- **AcquirePrtDiagnostics**: Set the state to *PRESENT* if the acquired PRT diagnostics information is present in the logs.
+ This field is skipped if no diagnostics information is available.
+- **Previous Prt Attempt**: The local time, in UTC, at which the failed PRT attempt occurred.
+- **Attempt Status**: The client error code that's returned (HRESULT).
+- **User Identity**: The UPN of the user for whom the PRT attempt happened.
+- **Credential Type**: The credential that's used to acquire or refresh the PRT. Common credential types are Password and Next Generation Credential (NGC) (for Windows Hello).
+- **Correlation ID**: The correlation ID that's sent by the server for the failed PRT attempt.
+- **Endpoint URI**: The last endpoint accessed before the failure.
+- **HTTP Method**: The HTTP method that's used to access the endpoint.
+- **HTTP Error**: WinHttp transport error code. Get additional [network error codes](/windows/win32/winhttp/error-messages).
+- **HTTP Status**: The HTTP status that's returned by the endpoint.
+- **Server Error Code**: The error code from the server.
+- **Server Error Description**: The error message from the server.
+- **RefreshPrtDiagnostics**: Set the state to *PRESENT* if the acquired PRT diagnostics information is present in the logs.
+This field is skipped if no diagnostics information is available.
+The diagnostics information fields are the same as **AcquirePrtDiagnostics**.
### Sample SSO state output
The diagnostic info fields are same as **AcquirePrtDiagnostics**
+-+ ```
-## Diagnostic data
+## Diagnostics data
### Pre-join diagnostics
-This section is displayed only if the device is domain joined and is unable to hybrid Azure AD join.
+This diagnostics section is displayed only if the device is domain-joined and unable to hybrid Azure AD-join.
-This section performs various tests to help diagnose join failures. This section also includes the details of the previous (?). This information includes the error phase, the error code, the server request ID, server response http status, server response error message.
+This section performs various tests to help diagnose join failures. The information includes the error phase, the error code, the server request ID, the server response HTTP status, and the server response error message.
-- **User Context:** The context in which the diagnostics are run. Possible values: SYSTEM, UN-ELEVATED User, ELEVATED User.
+- **User Context**: The context in which the diagnostics are run. Possible values: SYSTEM, UN-ELEVATED User, ELEVATED User.
> [!NOTE]
- > Since the actual join is performed in SYSTEM context, running the diagnostics in SYSTEM context is closest to the actual join scenario. To run diagnostics in SYSTEM context, the dsregcmd /status command must be run from an elevated command prompt.
-
-- **Client Time:** The system time in UTC.
-- **AD Connectivity Test:** Test performs a connectivity test to the domain controller. Error in this test will likely result in Join errors in pre-check phase.
-- **AD Configuration Test:** Test reads and verifies whether the SCP object is configured properly in the on-premises AD forest. Errors in this test would likely result in Join errors in the discover phase with the error code 0x801c001d.
-- **DRS Discovery Test:** Test gets the DRS endpoints from discovery metadata endpoint and performs a user realm request. Errors in this test would likely result in Join errors in the discover phase.
-- **DRS Connectivity Test:** Test performs basic connectivity test to the DRS endpoint.
-- **Token acquisition Test:** Test tries to get an Azure AD authentication token if the user tenant is federated. Errors in this test would likely result in Join errors in the auth phase. If auth fails sync join will be attempted as fallback, unless fallback is explicitly disabled with the below registry key settings.
-
-```
-Keyname: Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\CDJ
-Value: FallbackToSyncJoin
-Type: REG_DWORD
-Value: 0x0 -> Disabled
-Value: 0x1 -> Enabled
-Default (No Key): Enabled
-```
-
-- **Fallback to Sync-Join:** Set to "Enabled" if the above registry key, to prevent the fallback to sync join with auth failures, is NOT present. This option is available from Windows 10 1803 and later.
-- **Previous Registration:** Time the previous Join attempt occurred. Only failed Join attempts are logged.
-- **Error Phase:** The stage of the join in which it was aborted. Possible values are pre-check, discover, auth, join.
-- **Client ErrorCode:** Client error code returned (HRESULT).
-- **Server ErrorCode:** Server error code if a request was sent to the server and server responded back with an error code.
-- **Server Message:** Server message returned along with the error code.
-- **Https Status:** Http status returned by the server.
-- **Request ID:** The client requestId sent to the server. Useful to correlate with server-side logs.
+ > Because the actual join is performed in SYSTEM context, running the diagnostics in SYSTEM context is closest to the actual join scenario. To run diagnostics in SYSTEM context, the `dsregcmd /status` command must be run from an elevated command prompt.
+
+- **Client Time**: The system time, in UTC.
+- **AD Connectivity Test**: This test performs a connectivity test to the domain controller. An error in this test will likely result in join errors in the pre-check phase.
+- **AD Configuration Test**: This test reads and verifies whether the service connection point (SCP) object is configured properly in the on-premises Active Directory forest. Errors in this test would likely result in join errors in the discover phase with the error code 0x801c001d.
+- **DRS Discovery Test**: This test gets the DRS endpoints from the discovery metadata endpoint and performs a user realm request. Errors in this test would likely result in join errors in the discover phase.
+- **DRS Connectivity Test**: This test performs a basic connectivity test to the DRS endpoint.
+- **Token Acquisition Test**: This test tries to get an Azure AD authentication token if the user tenant is federated. Errors in this test would likely result in join errors in the authentication phase. If authentication fails, sync-join will be attempted as fallback, unless fallback is explicitly disabled with the following registry key settings:
+
+ ```
+ Keyname: Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\CDJ
+ Value: FallbackToSyncJoin
+ Type: REG_DWORD
+ Value: 0x0 -> Disabled
+ Value: 0x1 -> Enabled
+ Default (No Key): Enabled
+ ```
+
+- **Fallback to Sync-Join**: Set the state to *Enabled* if the preceding registry key to prevent fallback to sync-join with authentication failures is *not* present. This option is available in Windows 10 version 1803 and later. (An example of configuring this key with reg.exe appears after this list.)
+- **Previous Registration**: The time when the previous join attempt occurred. Only failed join attempts are logged.
+- **Error Phase**: The stage of the join in which it was aborted. Possible values are *pre-check*, *discover*, *auth*, and *join*.
+- **Client ErrorCode**: The client error code that's returned (HRESULT).
+- **Server ErrorCode**: The server error code that's displayed if a request was sent to the server and the server responded with an error code.
+- **Server Message**: The server message that's returned along with the error code.
+- **Https Status**: The HTTP status that's returned by the server.
+- **Request ID**: The client requestId that's sent to the server. The request ID is useful to correlate with server-side logs.
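As referenced in the **Fallback to Sync-Join** entry, the fallback behavior is controlled by the FallbackToSyncJoin value shown in the earlier registry snippet. The following is a hedged example of setting and verifying that value with reg.exe from an elevated prompt; the value data is taken from that snippet, so verify it in a test environment before rolling it out.

```
# Disable the fallback to sync-join (0x0 = Disabled, per the registry settings shown earlier).
reg.exe add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\CDJ" /v FallbackToSyncJoin /t REG_DWORD /d 0 /f

# Confirm the current value. If the value is absent, the default (Enabled) applies.
reg.exe query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\CDJ" /v FallbackToSyncJoin
```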
### Sample pre-join diagnostics output
-The following example shows diagnostics test failing with a discovery error.
+The following example shows a diagnostics test failing with a discovery error.
``` +-+
-| Diagnostic Data |
+| Diagnostic Data |
+-+ Diagnostics Reference : www.microsoft.com/aadjerrors
The following example shows diagnostics test failing with a discovery error.
+-+ ```
-The following example shows diagnostics tests are passing but the registration attempt failed with a directory error, which is expected for sync join. Once the Azure AD Connect synchronization job completes, the device will be able to join.
+The following example shows that diagnostics tests are passing but the registration attempt failed with a directory error, which is expected for sync-join. After the Azure AD Connect synchronization job finishes, the device is able to join.
``` +-+
-| Diagnostic Data |
+| Diagnostic Data |
+-+ Diagnostics Reference : www.microsoft.com/aadjerrors
The following example shows diagnostics tests are passing but the registration a
Error Phase : join Client ErrorCode : 0x801c03f2 Server ErrorCode : DirectoryError
- Server Message : The device object by the given id (e92325d0-7ac4-4714-88a1-94ae875d5245) is not found.
+ Server Message : The device object by the given id (e92325d0-7ac4-4714-88a1-94ae875d5245) isn't found.
Https Status : 400 Request Id : 6bff0bd9-820b-484b-ab20-2a4f7b76c58e
The following example shows diagnostics tests are passing but the registration a
### Post-join diagnostics
-This section displays the output of sanity checks performed on a device joined to the cloud.
+This diagnostics section displays the output of sanity checks performed on a device that's joined to the cloud.
-- **AadRecoveryEnabled:** If "YES", the keys stored in the device are not usable and the device is marked for recovery. The next sign-in will trigger the recovery flow and re-register the device.
-- **KeySignTest:** If "PASSED" the device keys are in good health. If KeySignTest fails, the device will usually be marked for recovery. The next sign-in will trigger the recovery flow and re-register the device. For hybrid Azure AD joined devices the recovery is silent. While Azure AD joined or Azure AD registered, devices will prompt for user authentication to recover and re-register the device if necessary. **The KeySignTest requires elevated privileges.**
+- **AadRecoveryEnabled**: If the value is *YES*, the keys stored in the device aren't usable, and the device is marked for recovery. The next sign-in will trigger the recovery flow and re-register the device.
+- **KeySignTest**: If the value is *PASSED*, the device keys are in good health. If KeySignTest fails, the device is usually marked for recovery. The next sign-in will trigger the recovery flow and re-register the device. For hybrid Azure AD-joined devices, the recovery is silent. For Azure AD-joined or Azure AD-registered devices, the recovery prompts for user authentication to re-register the device, if necessary.
+ > [!NOTE]
+ > The KeySignTest requires elevated privileges.
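Because the KeySignTest needs elevation, one convenient pattern is to launch the status command from an elevated prompt. A minimal PowerShell sketch; the /k switch only keeps the window open so that the output can be read.

```
# Start an elevated Command Prompt and run the status utility so that KeySignTest can run.
Start-Process -FilePath cmd.exe -Verb RunAs -ArgumentList '/k dsregcmd /status'
```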
#### Sample post-join diagnostics output
This section displays the output of sanity checks performed on a device joined t
+-+ ```
-## NGC prerequisite check
+## NGC prerequisites check
-This section performs the prerequisite checks for the provisioning of Windows Hello for Business (WHFB).
+This diagnostics section performs the prerequisites check for setting up Windows Hello for Business (WHFB).
> [!NOTE]
-> You may not see NGC prerequisite check details in dsregcmd /status if the user already successfully configured WHFB.
-- **IsDeviceJoined:** Set to "YES" if the device is joined to Azure AD.
-- **IsUserAzureAD:** Set to "YES" if the logged in user is present in Azure AD.
-- **PolicyEnabled:** Set to "YES" if the WHFB policy is enabled on the device.
-- **PostLogonEnabled:** Set to "YES" if WHFB enrollment is triggered natively by the platform. If it's set to "NO", it indicates that Windows Hello for Business enrollment is triggered by a custom mechanism
-- **DeviceEligible:** Set to "YES" if the device meets the hardware requirement for enrolling with WHFB.
-- **SessionIsNotRemote:** Set to "YES" if the current user is logged in directly to the device and not remotely.
-- **CertEnrollment:** Specific to WHFB Certificate Trust deployment, indicating the certificate enrollment authority for WHFB. Set to "enrollment authority" if source of WHFB policy is Group Policy, "mobile device management" if source is MDM. "none" otherwise
-- **AdfsRefreshToken:** Specific to WHFB Certificate Trust deployment. Only present if CertEnrollment is "enrollment authority". Indicates if the device has an enterprise PRT for the user.
-- **AdfsRaIsReady:** Specific to WHFB Certificate Trust deployment. Only present if CertEnrollment is "enrollment authority". Set to "YES" if ADFS indicated in discovery metadata that it supports WHFB *and* if logon certificate template is available.
-- **LogonCertTemplateReady:** Specific to WHFB Certificate Trust deployment. Only present if CertEnrollment is "enrollment authority". Set to "YES" if state of logon certificate template is valid and helps troubleshoot ADFS RA.
-- **PreReqResult:** Provides result of all WHFB prerequisite evaluation. Set to "Will Provision" if WHFB enrollment would be launched as a post-logon task when user signs in next time.
-
-### Sample NGC prerequisite check output
+> You might not see NGC prerequisites check details in `dsregcmd /status` if the user has already configured WHFB successfully.
+
+- **IsDeviceJoined**: Set the state to *YES* if the device is joined to Azure AD.
+- **IsUserAzureAD**: Set the state to *YES* if the logged-in user is present in Azure AD.
+- **PolicyEnabled**: Set the state to *YES* if the WHFB policy is enabled on the device.
+- **PostLogonEnabled**: Set the state to *YES* if WHFB enrollment is triggered natively by the platform. If the state is set to *NO*, it indicates that Windows Hello for Business enrollment is triggered by a custom mechanism.
+- **DeviceEligible**: Set the state to *YES* if the device meets the hardware requirement for enrolling with WHFB.
+- **SessionIsNotRemote**: Set the state to *YES* if the current user is logged in directly to the device and not remotely.
+- **CertEnrollment**: This setting is specific to WHFB Certificate Trust deployment, indicating the certificate enrollment authority for WHFB. Set the state to *enrollment authority* if the source of the WHFB policy is Group Policy, or set it to *mobile device management* if the source is MDM. If neither source applies, set the state to *none*.
+- **AdfsRefreshToken**: This setting is specific to WHFB Certificate Trust deployment and present only if the CertEnrollment state is *enrollment authority*. The setting indicates whether the device has an enterprise PRT for the user.
+- **AdfsRaIsReady**: This setting is specific to WHFB Certificate Trust deployment and present only if the CertEnrollment state is *enrollment authority*. Set the state to *YES* if AD FS indicates in discovery metadata that it supports WHFB *and* the logon certificate template is available.
+- **LogonCertTemplateReady**: This setting is specific to WHFB Certificate Trust deployment and present only if the CertEnrollment state is *enrollment authority*. Set the state to *YES* if the state of the login certificate template is valid and helps troubleshoot the AD FS Registration Authority (RA).
+- **PreReqResult**: Provides the result of all WHFB prerequisites evaluation. Set the state to *Will Provision* if WHFB enrollment would be launched as a post-login task when the user signs in next time.
+
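For the DeviceEligible check described earlier, it can help to confirm the TPM state independently of dsregcmd. A sketch that assumes the built-in TrustedPlatformModule tooling is available on the device; run it from an elevated PowerShell session.

```
# Inspect the TPM (TpmPresent and TpmReady indicate basic health).
Get-Tpm

# Alternative command-line view of TPM details on recent Windows 10 builds.
tpmtool getdeviceinformation
```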
+### Sample NGC prerequisites check output
``` +-+
This section performs the prerequisite checks for the provisioning of Windows He
## Next steps

-- [The Microsoft Error Lookup Tool](/windows/win32/debug/system-error-code-lookup-tool)
+Go to the [Microsoft Error Lookup Tool](/windows/win32/debug/system-error-code-lookup-tool).
active-directory Troubleshoot Hybrid Join Windows Current https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/troubleshoot-hybrid-join-windows-current.md
Title: Troubleshooting hybrid Azure Active Directory joined devices
-description: Troubleshooting hybrid Azure Active Directory joined Windows 10 and Windows Server 2016 devices.
+ Title: Troubleshoot hybrid Azure Active Directory-joined devices
+description: This article helps you troubleshoot hybrid Azure Active Directory-joined Windows 10 and Windows Server 2016 devices.
-#Customer intent: As an IT admin, I want to fix issues with my hybrid Azure AD joined devices so that my users can use this feature.
+#Customer intent: As an IT admin, I want to fix issues with my hybrid Azure AD-joined devices so that my users can use this feature.
-# Troubleshooting hybrid Azure Active Directory joined devices
+# Troubleshoot hybrid Azure AD-joined devices
-The content of this article is applicable to devices running Windows 10 or Windows Server 2016.
+This article provides troubleshooting guidance to help you resolve potential issues with devices that are running Windows 10 or Windows Server 2016.
-For other Windows clients, see the article [Troubleshooting hybrid Azure Active Directory joined down-level devices](troubleshoot-hybrid-join-windows-legacy.md).
+Hybrid Azure Active Directory (Azure AD) join supports the Windows 10 November 2015 update and later.
-This article assumes that you have [configured hybrid Azure Active Directory joined devices](hybrid-azuread-join-plan.md) to support the following scenarios:
+To troubleshoot other Windows clients, see [Troubleshoot hybrid Azure AD-joined down-level devices](troubleshoot-hybrid-join-windows-legacy.md).
+
+This article assumes that you have [configured hybrid Azure AD-joined devices](hybrid-azuread-join-plan.md) to support the following scenarios:
- Device-based Conditional Access
-- [Enterprise roaming of settings](./enterprise-state-roaming-overview.md)
+- [Enterprise state roaming](./enterprise-state-roaming-overview.md)
- [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-identity-verification)
-This document provides troubleshooting guidance to resolve potential issues.
-
-For Windows 10 and Windows Server 2016, hybrid Azure Active Directory join supports the Windows 10 November 2015 Update and above.
## Troubleshoot join failures

### Step 1: Retrieve the join status
-**To retrieve the join status:**
-
-1. Open a command prompt as an administrator
-2. Type `dsregcmd /status`
+1. Open a Command Prompt window as an administrator.
+1. Type `dsregcmd /status`.
``` +-+
WamDefaultAuthority: organizations
### Step 2: Evaluate the join status
-Review the following fields and make sure that they have the expected values:
-
-#### DomainJoined : YES
-
-This field indicates whether the device is joined to an on-premises Active Directory or not. If the value is **NO**, the device cannot perform a hybrid Azure AD join.
-
-#### WorkplaceJoined : NO
-
-This field indicates whether the device is registered with Azure AD as a personal device (marked as *Workplace Joined*). This value should be **NO** for a domain-joined computer that is also hybrid Azure AD joined. If the value is **YES**, a work or school account was added prior to the completion of the hybrid Azure AD join. In this case, the account is ignored when using Windows 10 version 1607 or later.
+Review the fields in the following table, and make sure that they have the expected values:
-#### AzureAdJoined : YES
+| Field | Expected value | Description |
+| --- | --- | --- |
+| DomainJoined | YES | This field indicates whether the device is joined to an on-premises Active Directory. <br><br>If the value is *NO*, the device can't perform a hybrid Azure AD-join. |
+| WorkplaceJoined | NO | This field indicates whether the device is registered with Azure AD as a personal device (marked as *Workplace Joined*). This value should be *NO* for a domain-joined computer that's also hybrid Azure AD-joined. <br><br>If the value is *YES*, a work or school account was added prior to the completion of the hybrid Azure AD-join. In this case, the account is ignored when you're using Windows&nbsp;10 version 1607 or later. |
+| AzureAdJoined | YES | This field indicates whether the device is joined. The value will be *YES* if the device is either an Azure AD-joined device or a hybrid Azure AD-joined device. <br><br>If the value is *NO*, the join to Azure AD has not finished yet. |
+| | |
-This field indicates whether the device is joined. The value will be **YES** if the device is either an Azure AD joined device or a hybrid Azure AD joined device.
-If the value is **NO**, the join to Azure AD has not completed yet.
+Proceed to the next steps for further troubleshooting.
-Proceed to next steps for further troubleshooting.
+### Step 3: Find the phase in which the join failed, and the error code
-### Step 3: Find the phase in which join failed and the errorcode
+**For Windows&nbsp;10 version 1803 or later**
-#### Windows 10 1803 and above
+Look for the "Previous Registration" subsection in the "Diagnostic Data" section of the join status output. This section is displayed only if the device is domain-joined and unable to hybrid Azure AD-join.
-Look for 'Previous Registration' subsection in the 'Diagnostic Data' section of the join status output. This section is displayed only if the device is domain joined and is unable to hybrid Azure AD join.
-The 'Error Phase' field denotes the phase of the join failure while 'Client ErrorCode' denotes the error code of the Join operation.
+The "Error Phase" field denotes the phase of the join failure, and "Client ErrorCode" denotes the error code of the join operation.
``` +-+
The 'Error Phase' field denotes the phase of the join failure while 'Client Erro
Error Phase : join Client ErrorCode : 0x801c03f2 Server ErrorCode : DirectoryError
- Server Message : The device object by the given id (e92325d0-xxxx-xxxx-xxxx-94ae875d5245) is not found.
+ Server Message : The device object by the given id (e92325d0-xxxx-xxxx-xxxx-94ae875d5245) isn't found.
Https Status : 400 Request Id : 6bff0bd9-820b-484b-ab20-2a4f7b76c58e +-+ ```
-#### Older Windows 10 versions
+**For earlier Windows&nbsp;10 versions**
Use Event Viewer logs to locate the phase and error code for the join failures.
-1. Open the **User Device Registration** event logs in event viewer. Located under **Applications and Services Log** > **Microsoft** > **Windows** > **User Device Registration**
-2. Look for events with the following eventIDs 304, 305, 307.
+1. In Event Viewer, open the **User Device Registration** event logs. They're stored under **Applications and Services Log** > **Microsoft** > **Windows** > **User Device Registration**.
+1. Look for events with the following event IDs: 304, 305, and 307.
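Instead of browsing Event Viewer manually, the same events can be pulled with PowerShell. A hedged sketch: the log name `Microsoft-Windows-User Device Registration/Admin` is assumed to be the channel behind the Event Viewer path above, so verify it on your build.

```
# List recent device-registration events for the join-failure IDs mentioned above.
Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-User Device Registration/Admin'
    Id      = 304, 305, 307
} -MaxEvents 25 | Format-List TimeCreated, Id, Message
```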
-### Step 4: Check for possible causes and resolutions from the lists below
+### Step 4: Check for possible causes and resolutions
#### Pre-check phase

Possible reasons for failure:

-- Device has no line of sight to the Domain controller.
  - The device must be on the organization's internal network or on VPN with network line of sight to an on-premises Active Directory (AD) domain controller.
+- The device has no line of sight to the domain controller.
+  - The device must be on the organization's internal network or on a virtual private network with a network line of sight to an on-premises Active Directory domain controller.
#### Discover phase

Possible reasons for failure:

-- Service Connection Point (SCP) object misconfigured/unable to read SCP object from DC.
- - A valid SCP object is required in the AD forest, to which the device belongs, that points to a verified domain name in Azure AD.
- - Details can be found in the section [Configure a Service Connection Point](hybrid-azuread-join-federated-domains.md#configure-hybrid-azure-ad-join).
-- Failure to connect and fetch the discovery metadata from the discovery endpoint.
- - The device should be able to access `https://enterpriseregistration.windows.net`, in the SYSTEM context, to discover the registration and authorization endpoints.
+- The service connection point object is misconfigured or can't be read from the domain controller.
  - A valid service connection point object that points to a verified domain name in Azure AD is required in the Active Directory forest to which the device belongs.
+ - For more information, see the "Configure a service connection point" section of [Tutorial: Configure hybrid Azure Active Directory join for federated domains](hybrid-azuread-join-federated-domains.md#configure-hybrid-azure-ad-join).
+- Failure to connect to and fetch the discovery metadata from the discovery endpoint.
+ - The device should be able to access `https://enterpriseregistration.windows.net`, in the system context, to discover the registration and authorization endpoints.
  - If the on-premises environment requires an outbound proxy, the IT admin must ensure that the computer account of the device is able to discover and silently authenticate to the outbound proxy.
-- Failure to connect to user realm endpoint and perform realm discovery. (Windows 10 version 1809 and later only)
- - The device should be able to access `https://login.microsoftonline.com`, in the SYSTEM context, to perform realm discovery for the verified domain and determine the domain type (managed/federated).
- - If the on-premises environment requires an outbound proxy, the IT admin must ensure that the SYSTEM context on the device is able to discover and silently authenticate to the outbound proxy.
+- Failure to connect to the user realm endpoint and perform realm discovery (Windows&nbsp;10 version 1809 and later only).
+ - The device should be able to access `https://login.microsoftonline.com`, in the system context, to perform realm discovery for the verified domain and determine the domain type (managed or federated).
+ - If the on-premises environment requires an outbound proxy, the IT admin must ensure that the system context on the device is able to discover and silently authenticate to the outbound proxy.
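A quick way to sanity-check the connectivity requirements called out in this list is to probe the two endpoints over port 443. This is only a reachability sketch; it runs in the current user context, not the SYSTEM context that the join itself uses, so proxy behavior can differ.

```
# Verify basic HTTPS reachability of the discovery and realm endpoints.
Test-NetConnection -ComputerName enterpriseregistration.windows.net -Port 443
Test-NetConnection -ComputerName login.microsoftonline.com -Port 443
```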
**Common error codes:**

-- **DSREG_AUTOJOIN_ADCONFIG_READ_FAILED** (0x801c001d/-2145648611)
- - Reason: Unable to read the SCP object and get the Azure AD tenant information.
- - Resolution: Refer to the section [Configure a Service Connection Point](hybrid-azuread-join-federated-domains.md#configure-hybrid-azure-ad-join).
-- **DSREG_AUTOJOIN_DISC_FAILED** (0x801c0021/-2145648607)
- - Reason: Generic Discovery failure. Failed to get the discovery metadata from DRS.
- - Resolution: Find the suberror below to investigate further.
-- **DSREG_AUTOJOIN_DISC_WAIT_TIMEOUT** (0x801c001f/-2145648609)
- - Reason: Operation timed out while performing Discovery.
- - Resolution: Ensure that `https://enterpriseregistration.windows.net` is accessible in the SYSTEM context. For more information, see the section [Network connectivity requirements](hybrid-azuread-join-managed-domains.md#prerequisites).
-- **DSREG_AUTOJOIN_USERREALM_DISCOVERY_FAILED** (0x801c003d/-2145648579)
- - Reason: Generic Realm Discovery failure. Failed to determine domain type (managed/federated) from STS.
- - Resolution: Find the suberror below to investigate further.
+| Error code | Reason | Resolution |
+| --- | --- | --- |
+| **DSREG_AUTOJOIN_ADCONFIG_READ_FAILED** (0x801c001d/-2145648611) | Unable to read the service connection point (SCP) object and get the Azure AD tenant information. | Refer to the [Configure a service connection point](hybrid-azuread-join-federated-domains.md#configure-hybrid-azure-ad-join) section. |
+| **DSREG_AUTOJOIN_DISC_FAILED** (0x801c0021/-2145648607) | Generic discovery failure. Failed to get the discovery metadata from the Device Registration Service (DRS). | To investigate further, find the sub-error in the next sections. |
+| **DSREG_AUTOJOIN_DISC_WAIT_TIMEOUT** (0x801c001f/-2145648609) | Operation timed out while performing discovery. | Ensure that `https://enterpriseregistration.windows.net` is accessible in the system context. For more information, see the [Network connectivity requirements](hybrid-azuread-join-managed-domains.md#prerequisites) section. |
+| **DSREG_AUTOJOIN_USERREALM_DISCOVERY_FAILED** (0x801c003d/-2145648579) | Generic realm discovery failure. Failed to determine domain type (managed/federated) from STS. | To investigate further, find the sub-error in the next sections. |
+| | |
-**Common suberror codes:**
+**Common sub-error codes:**
-To find the suberror code for the discovery error code, use one of the following methods.
+To find the sub-error code for the discovery error code, use one of the following methods.
-##### Windows 10 1803 and above
+##### Windows&nbsp;10 version 1803 or later
-Look for 'DRS Discovery Test' in the 'Diagnostic Data' section of the join status output. This section is displayed only if the device is domain joined and is unable to hybrid Azure AD join.
+Look for "DRS Discovery Test" in the "Diagnostic Data" section of the join status output. This section is displayed only if the device is domain-joined and unable to hybrid Azure AD-join.
``` +-+
Look for 'DRS Discovery Test' in the 'Diagnostic Data' section of the join statu
+-+ ```
-##### Older Windows 10 versions
+##### Earlier Windows&nbsp;10 versions
-Use Event Viewer logs to locate the phase and errorcode for the join failures.
+Use Event Viewer logs to look for the phase and error code for the join failures.
-1. Open the **User Device Registration** event logs in event viewer. Located under **Applications and Services Log** > **Microsoft** > **Windows** > **User Device Registration**
-2. Look for events with the following eventIDs 201
+1. In Event Viewer, open the **User Device Registration** event logs. They're stored under **Applications and Services Log** > **Microsoft** > **Windows** > **User Device Registration**.
+1. Look for event ID 201.
-###### Network errors
+**Network errors**:
-- **WININET_E_CANNOT_CONNECT** (0x80072efd/-2147012867)
- - Reason: Connection with the server could not be established
- - Resolution: Ensure network connectivity to the required Microsoft resources. For more information, see [Network connectivity requirements](hybrid-azuread-join-managed-domains.md#prerequisites).
-- **WININET_E_TIMEOUT** (0x80072ee2/-2147012894)
- - Reason: General network timeout.
- - Resolution: Ensure network connectivity to the required Microsoft resources. For more information, see [Network connectivity requirements](hybrid-azuread-join-managed-domains.md#prerequisites).
-- **WININET_E_DECODING_FAILED** (0x80072f8f/-2147012721)
- - Reason: Network stack was unable to decode the response from the server.
- - Resolution: Ensure that network proxy is not interfering and modifying the server response.
+| Error code | Reason | Resolution |
+| --- | --- | --- |
+| **WININET_E_CANNOT_CONNECT** (0x80072efd/-2147012867) | Connection with the server couldn't be established. | Ensure network connectivity to the required Microsoft resources. For more information, see [Network connectivity requirements](hybrid-azuread-join-managed-domains.md#prerequisites). |
+| **WININET_E_TIMEOUT** (0x80072ee2/-2147012894) | General network timeout. | Ensure network connectivity to the required Microsoft resources. For more information, see [Network connectivity requirements](hybrid-azuread-join-managed-domains.md#prerequisites). |
+| **WININET_E_DECODING_FAILED** (0x80072f8f/-2147012721) | Network stack was unable to decode the response from the server. | Ensure that the network proxy isn't interfering and modifying the server response. |
+| | |
-###### HTTP errors
-- **DSREG_DISCOVERY_TENANT_NOT_FOUND** (0x801c003a/-2145648582)
- - Reason: SCP object configured with wrong tenant ID. Or no active subscriptions were found in the tenant.
- - Resolution: Ensure SCP object is configured with the correct Azure AD tenant ID and active subscriptions or present in the tenant.
-- **DSREG_SERVER_BUSY** (0x801c0025/-2145648603)
- - Reason: HTTP 503 from DRS server.
- - Resolution: Server is currently unavailable. future join attempts will likely succeed once server is back online.
+**HTTP errors**:
-###### Other errors
+| Error code | Reason | Resolution |
+| --- | --- | --- |
+| **DSREG_DISCOVERY_TENANT_NOT_FOUND** (0x801c003a/-2145648582) | The service connection point object is configured with the wrong tenant ID, or no active subscriptions were found in the tenant. | Ensure that the service connection point object is configured with the correct Azure AD tenant ID and that active subscriptions are present in the tenant. |
+| **DSREG_SERVER_BUSY** (0x801c0025/-2145648603) | HTTP 503 from DRS server. | The server is currently unavailable. Future join attempts will likely succeed after the server is back online. |
+| | |
++
+**Other errors**:
+
+| Error code | Reason | Resolution |
+| --- | --- | --- |
+| **E_INVALIDDATA** (0x8007000d/-2147024883) | The server response JSON couldn't be parsed, likely because the proxy is returning an HTTP 200 with an HTML authorization page. | If the on-premises environment requires an outbound proxy, the IT admin must ensure that the system context on the device is able to discover and silently authenticate to the outbound proxy. |
+| | |
-- **E_INVALIDDATA** (0x8007000d/-2147024883)
- - Reason: Server response JSON couldn't be parsed. Likely due to proxy returning HTTP 200 with an HTML auth page.
- - Resolution: If the on-premises environment requires an outbound proxy, the IT admin must ensure that the SYSTEM context on the device is able to discover and silently authenticate to the outbound proxy.
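Because this error is typically proxy related, it can help to confirm which proxy the machine-wide WinHTTP stack is using. This check shows only the WinHTTP proxy; per-user or PAC-based settings can differ.

```
# Show the machine-wide WinHTTP proxy configuration used in the SYSTEM context.
netsh winhttp show proxy
```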
#### Authentication phase
-Applicable only for federated domain accounts.
+This content applies only to federated domain accounts.
Reasons for failure:

-- Unable to get an Access token silently for DRS resource.
- - Windows 10 devices acquire auth token from the federation service using Integrated Windows Authentication to an active WS-Trust endpoint. Details: [Federation Service Configuration](hybrid-azuread-join-manual.md#set-up-issuance-of-claims)
+- Unable to get an access token silently for the DRS resource.
+ - Windows&nbsp;10 devices acquire the authentication token from the Federation Service by using Integrated Windows Authentication to an active WS-Trust endpoint. For more information, see [Federation Service configuration](hybrid-azuread-join-manual.md#set-up-issuance-of-claims).
-**Common error codes:**
+**Common error codes**:
+
+Use Event Viewer logs to locate the error code, sub-error code, server error code, and server error message.
+
+1. In Event Viewer, open the **User Device Registration** event logs. They're stored under **Applications and Services Log** > **Microsoft** > **Windows** > **User Device Registration**.
+1. Look for event ID 305.
-Use Event Viewer logs to locate the error code, suberror code, server error code, and server error message.
-
-1. Open the **User Device Registration** event logs in event viewer. Located under **Applications and Services Log** > **Microsoft** > **Windows** > **User Device Registration**
-2. Look for events with the following eventID 305
--
-##### Configuration errors
--- **ERROR_ADAL_PROTOCOL_NOT_SUPPORTED** (0xcaa90017/-894894057)
- - Reason: Authentication protocol is not WS-Trust.
- - Resolution: The on-premises identity provider must support WS-Trust
-- **ERROR_ADAL_FAILED_TO_PARSE_XML** (0xcaa9002c/-894894036)
- - Reason: On-premises federation service did not return an XML response.
- - Resolution: Ensure MEX endpoint is returning a valid XML. Ensure proxy is not interfering and returning non-xml responses.
-- **ERROR_ADAL_COULDNOT_DISCOVER_USERNAME_PASSWORD_ENDPOINT** (0xcaa90023/-894894045)
- - Reason: Could not discover endpoint for username/password authentication.
- - Resolution: Check the on-premises identity provider settings. Ensure that the WS-Trust endpoints are enabled and ensure the MEX response contains these correct endpoints.
-
-##### Network errors
--- **ERROR_ADAL_INTERNET_TIMEOUT** (0xcaa82ee2/-894947614)
- - Reason: General network timeout.
- - Resolution: Ensure that `https://login.microsoftonline.com` is accessible in the SYSTEM context. Ensure the on-premises identity provider is accessible in the SYSTEM context. For more information, see [Network connectivity requirements](hybrid-azuread-join-managed-domains.md#prerequisites).
-- **ERROR_ADAL_INTERNET_CONNECTION_ABORTED** (0xcaa82efe/-894947586)
- - Reason: Connection with the auth endpoint was aborted.
- - Resolution: Retry after sometime or try joining from an alternate stable network location.
-- **ERROR_ADAL_INTERNET_SECURE_FAILURE** (0xcaa82f8f/-894947441)
- - Reason: The Transport Layer Security (TLS), previously known as Secure Sockets Layer (SSL), certificate sent by the server could not be validated.
- - Resolution: Check the client time skew. Retry after sometime or try joining from an alternate stable network location.
-- **ERROR_ADAL_INTERNET_CANNOT_CONNECT** (0xcaa82efd/-894947587)
- - Reason: The attempt to connect to `https://login.microsoftonline.com` failed.
- - Resolution: Check network connection to `https://login.microsoftonline.com`.
-
-##### Other errors
--- **ERROR_ADAL_SERVER_ERROR_INVALID_GRANT** (0xcaa20003/-895352829)
- - Reason: SAML token from the on-premises identity provider was not accepted by Azure AD.
- - Resolution: Check the federation server settings. Look for the server error code in the authentication logs.
-- **ERROR_ADAL_WSTRUST_REQUEST_SECURITYTOKEN_FAILED** (0xcaa90014/-894894060)
- - Reason: Server WS-Trust response reported fault exception and it failed to get assertion
- - Resolution: Check the federation server settings. Look for the server error code in the authentication logs.
-- **ERROR_ADAL_WSTRUST_TOKEN_REQUEST_FAIL** (0xcaa90006/-894894074)
- - Reason: Received an error when trying to get access token from the token endpoint.
- - Resolution: Look for the underlying error in the ADAL log.
-- **ERROR_ADAL_OPERATION_PENDING** (0xcaa1002d/-895418323)
- - Reason: General ADAL failure
- - Resolution: Look for the suberror code or server error code from the authentication logs.
-
-#### Join Phase
+
+**Configuration errors**:
+
+| Error code | Reason | Resolution |
+| --- | --- | --- |
+| **ERROR_ADAL_PROTOCOL_NOT_SUPPORTED** (0xcaa90017/-894894057) | The Azure AD Authentication Library (ADAL) authentication protocol isn't WS-Trust. | The on-premises identity provider must support WS-Trust. |
+| **ERROR_ADAL_FAILED_TO_PARSE_XML** (0xcaa9002c/-894894036) | The on-premises Federation Service didn't return an XML response. | Ensure that the Metadata Exchange (MEX) endpoint is returning a valid XML. Ensure that the proxy isn't interfering and returning non-xml responses. |
+| **ERROR_ADAL_COULDNOT_DISCOVER_USERNAME_PASSWORD_ENDPOINT** (0xcaa90023/-894894045) | Couldn't discover an endpoint for username/password authentication. | Check the on-premises identity provider settings. Ensure that the WS-Trust endpoints are enabled and that the MEX response contains these correct endpoints. |
+| | |
++
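For the MEX-related configuration errors above, one way to confirm that the federation service returns XML is to request the MEX document directly. A hedged sketch: `sts.contoso.com` is a placeholder for your AD FS farm name, and `/adfs/services/trust/mex` is the conventional AD FS metadata path.

```
# Request the WS-Trust MEX document from the federation service (replace the host name).
$mex = Invoke-WebRequest -Uri 'https://sts.contoso.com/adfs/services/trust/mex' -UseBasicParsing

# A healthy response is XML; an HTML body often points to a proxy or sign-in page in the way.
$mex.Headers['Content-Type']
```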
+**Network errors**:
+
+| Error code | Reason | Resolution |
+| --- | --- | --- |
+| **ERROR_ADAL_INTERNET_TIMEOUT** (0xcaa82ee2/-894947614) | General network timeout. | Ensure that `https://login.microsoftonline.com` is accessible in the system context. Ensure that the on-premises identity provider is accessible in the system context. For more information, see [Network connectivity requirements](hybrid-azuread-join-managed-domains.md#prerequisites). |
+| **ERROR_ADAL_INTERNET_CONNECTION_ABORTED** (0xcaa82efe/-894947586) | Connection with the authorization endpoint was aborted. | Retry the join after a while, or try joining from another stable network location. |
+| **ERROR_ADAL_INTERNET_SECURE_FAILURE** (0xcaa82f8f/-894947441) | The Transport Layer Security (TLS) certificate (previously known as the Secure Sockets Layer [SSL] certificate) sent by the server couldn't be validated. | Check the client time skew. Retry the join after a while, or try joining from another stable network location. |
+| **ERROR_ADAL_INTERNET_CANNOT_CONNECT** (0xcaa82efd/-894947587) | The attempt to connect to `https://login.microsoftonline.com` failed. | Check the network connection to `https://login.microsoftonline.com`. |
+| | |
++
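For **ERROR_ADAL_INTERNET_SECURE_FAILURE** in the preceding table, checking the client time skew is the suggested first step. A minimal sketch that uses the built-in w32tm tool; `time.windows.com` is only an example reference server.

```
# Show the local time service status, including the offset from its configured time source.
w32tm /query /status

# Measure the current offset against an external reference server.
w32tm /stripchart /computer:time.windows.com /samples:3 /dataonly
```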
+**Other errors**:
+
+| Error code | Reason | Resolution |
+| --- | --- | --- |
+| **ERROR_ADAL_SERVER_ERROR_INVALID_GRANT** (0xcaa20003/-895352829) | The SAML token from the on-premises identity provider wasn't accepted by Azure AD. | Check the Federation Server settings. Look for the server error code in the authentication logs. |
+| **ERROR_ADAL_WSTRUST_REQUEST_SECURITYTOKEN_FAILED** (0xcaa90014/-894894060) | The Server WS-Trust response reported a fault exception, and it failed to get assertion. | Check the Federation Server settings. Look for the server error code in the authentication logs. |
+| **ERROR_ADAL_WSTRUST_TOKEN_REQUEST_FAIL** (0xcaa90006/-894894074) | Received an error when trying to get access token from the token endpoint. | Look for the underlying error in the ADAL log. |
+| **ERROR_ADAL_OPERATION_PENDING** (0xcaa1002d/-895418323) | General ADAL failure. | Look for the sub-error code or server error code from the authentication logs. |
+| | |
++
+#### Join phase
Reasons for failure:
-Find the registration type and look for the error code from the list below.
+Look for the registration type and error code from the following tables, depending on the Windows 10 version you're using.
-#### Windows 10 1803 and above
+#### Windows&nbsp;10 version 1803 or later
-Look for 'Previous Registration' subsection in the 'Diagnostic Data' section of the join status output. This section is displayed only if the device is domain joined and is unable to hybrid Azure AD join.
-'Registration Type' field denotes the type of join performed.
+Look for the "Previous Registration" subsection in the "Diagnostic Data" section of the join status output. This section is displayed only if the device is domain-joined and is unable to hybrid Azure AD-join.
+
+The "Registration Type" field denotes the type of join that's performed.
``` +-+
Look for 'Previous Registration' subsection in the 'Diagnostic Data' section of
+-+ ```
-#### Older Windows 10 versions
+#### Earlier Windows&nbsp;10 versions
+
+Use Event Viewer logs to locate the phase and error code for the join failures.
-Use Event Viewer logs to locate the phase and errorcode for the join failures.
+1. In Event Viewer, open the **User Device Registration** event logs. They're stored under **Applications and Services Log** > **Microsoft** > **Windows** > **User Device Registration**.
+1. Look for event ID 204.
-1. Open the **User Device Registration** event logs in event viewer. Located under **Applications and Services Log** > **Microsoft** > **Windows** > **User Device Registration**
-2. Look for events with the following eventIDs 204
+**HTTP errors returned from DRS server**:
-##### HTTP errors returned from DRS server
+| Error code | Reason | Resolution |
+| | | |
+| **DSREG_E_DIRECTORY_FAILURE** (0x801c03f2/-2145647630) | Received an error response from DRS with ErrorCode: "DirectoryError". | Refer to the server error code for possible reasons and resolutions. |
+| **DSREG_E_DEVICE_AUTHENTICATION_ERROR** (0x801c0002/-2145648638) | Received an error response from DRS with ErrorCode: "AuthenticationError" and ErrorSubCode is *not* "DeviceNotFound". | Refer to the server error code for possible reasons and resolutions. |
+| **DSREG_E_DEVICE_INTERNALSERVICE_ERROR** (0x801c0006/-2145648634) | Received an error response from DRS with ErrorCode: "DirectoryError". | Refer to the server error code for possible reasons and resolutions. |
+| | |
-- **DSREG_E_DIRECTORY_FAILURE** (0x801c03f2/-2145647630)
- - Reason: Received an error response from DRS with ErrorCode: "DirectoryError"
- - Resolution: Refer to the server error code for possible reasons and resolutions.
-- **DSREG_E_DEVICE_AUTHENTICATION_ERROR** (0x801c0002/-2145648638)
- - Reason: Received an error response from DRS with ErrorCode: "AuthenticationError" and ErrorSubCode is NOT "DeviceNotFound".
- - Resolution: Refer to the server error code for possible reasons and resolutions.
-- **DSREG_E_DEVICE_INTERNALSERVICE_ERROR** (0x801c0006/-2145648634)
- - Reason: Received an error response from DRS with ErrorCode: "DirectoryError"
- - Resolution: Refer to the server error code for possible reasons and resolutions.
-##### TPM errors
+**TPM errors**:
-- **NTE_BAD_KEYSET** (0x80090016/-2146893802)
- - Reason: TPM operation failed or was invalid
- - Resolution: Likely due to a bad sysprep image. Ensure the machine from which the sysprep image was created is not Azure AD joined, hybrid Azure AD joined, or Azure AD registered.
-- **TPM_E_PCP_INTERNAL_ERROR** (0x80290407/-2144795641)
- - Reason: Generic TPM error.
- - Resolution: Disable TPM on devices with this error. Windows 10 version 1809 and higher automatically detects TPM failures and completes hybrid Azure AD join without using the TPM.
-- **TPM_E_NOTFIPS** (0x80280036/-2144862154)
- - Reason: TPM in FIPS mode not currently supported.
- - Resolution: Disable TPM on devices with this error. Windows 1809 automatically detects TPM failures and completes hybrid Azure AD join without using the TPM.
-- **NTE_AUTHENTICATION_IGNORED** (0x80090031/-2146893775)
- - Reason: TPM locked out.
- - Resolution: Transient error. Wait for the cooldown period. Join attempt after some time should succeed. More Information can be found in the article [TPM fundamentals](/windows/security/information-protection/tpm/tpm-fundamentals#anti-hammering)
+| Error code | Reason | Resolution |
+| | | |
+| **NTE_BAD_KEYSET** (0x80090016/-2146893802) | The Trusted Platform Module (TPM) operation failed or was invalid. | The failure likely results from a bad sysprep image. Ensure that the machine from which the sysprep image was created isn't Azure AD-joined, hybrid Azure AD-joined, or Azure AD-registered. |
+| **TPM_E_PCP_INTERNAL_ERROR** (0x80290407/-2144795641) | Generic TPM error. | Disable TPM on devices with this error. Windows&nbsp;10 versions 1809 and later automatically detect TPM failures and complete hybrid Azure AD-join without using the TPM. |
+| **TPM_E_NOTFIPS** (0x80280036/-2144862154) | TPM in FIPS mode isn't currently supported. | Disable TPM on devices with this error. Windows 10 version 1809 automatically detects TPM failures and completes the hybrid Azure AD join without using the TPM. |
+| **NTE_AUTHENTICATION_IGNORED** (0x80090031/-2146893775) | TPM is locked out. | Transient error. Wait for the cool-down period. The join attempt should succeed after a while. For more information, see [TPM fundamentals](/windows/security/information-protection/tpm/tpm-fundamentals#anti-hammering). |
+| | |
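
Before disabling the TPM as these resolutions suggest, it can help to confirm the TPM's current state. The following is an optional, illustrative check that uses the built-in `Get-Tpm` cmdlet from an elevated PowerShell session:

```powershell
# Inspect the TPM state on the device (requires an elevated session)
# LockedOut maps to the NTE_AUTHENTICATION_IGNORED scenario described above
Get-Tpm | Format-List TpmPresent, TpmReady, LockedOut, ManufacturerVersion
```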
-##### Network Errors
-- **WININET_E_TIMEOUT** (0x80072ee2/-2147012894)
- - Reason: General network time out trying to register the device at DRS
- - Resolution: Check network connectivity to `https://enterpriseregistration.windows.net`.
-- **WININET_E_NAME_NOT_RESOLVED** (0x80072ee7/-2147012889)
- - Reason: The server name or address could not be resolved.
- - Resolution: Check network connectivity to `https://enterpriseregistration.windows.net`. Ensure DNS resolution for the hostname is accurate in the n/w and on the device.
-- **WININET_E_CONNECTION_ABORTED** (0x80072efe/-2147012866)
- - Reason: The connection with the server was terminated abnormally.
- - Resolution: Retry after sometime or try joining from an alternate stable network location.
+**Network errors**:
-##### Other Errors
+| Error code | Reason | Resolution |
+| | | |
+| **WININET_E_TIMEOUT** (0x80072ee2/-2147012894) | General network time-out trying to register the device at DRS. | Check network connectivity to `https://enterpriseregistration.windows.net`. |
+| **WININET_E_NAME_NOT_RESOLVED** (0x80072ee7/-2147012889) | The server name or address couldn't be resolved. | Check network connectivity to `https://enterpriseregistration.windows.net`. |
+| **WININET_E_CONNECTION_ABORTED** (0x80072efe/-2147012866) | The connection with the server was terminated abnormally. | Retry the join after a while, or try joining from another stable network location. |
+| | |
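
For the name-resolution and connectivity errors above, a quick, illustrative check from the affected device is:

```powershell
# Confirm that the DRS hostname resolves and that TCP 443 is reachable
Resolve-DnsName -Name enterpriseregistration.windows.net
Test-NetConnection -ComputerName enterpriseregistration.windows.net -Port 443
```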
-- **DSREG_AUTOJOIN_ADCONFIG_READ_FAILED** (0x801c001d/-2145648611)
- - Reason: EventID 220 is present in User Device Registration event logs. Windows cannot access the computer object in Active Directory. A Windows error code may be included in the event. For error codes ERROR_NO_SUCH_LOGON_SESSION (1312) and ERROR_NO_SUCH_USER (1317), these error codes are related to replication issues in on-premises AD.
- - Resolution: Troubleshoot replication issues in AD. Replication issues may be transient and may go way after a period of time.
-##### Federated join server Errors
+**Other errors**:
+
+| Error code | Reason | Resolution |
+| | | |
+| **DSREG_AUTOJOIN_ADCONFIG_READ_FAILED** (0x801c001d/-2145648611) | Event ID 220 is present in User Device Registration event logs. Windows can't access the computer object in Active Directory. A Windows error code might be included in the event. Error codes ERROR_NO_SUCH_LOGON_SESSION (1312) and ERROR_NO_SUCH_USER (1317) are related to replication issues in on-premises Active Directory. | Troubleshoot replication issues in Active Directory. These replication issues might be transient, and they might go away after a while. |
+| | |
++
+**Federated join server errors**:
| Server error code | Server error message | Possible reasons | Resolution | | | | | |
-| DirectoryError | Your request is throttled temporarily. Please try after 300 seconds. | Expected error. Possibly due to making multiple registration requests in quick succession. | Retry join after the cooldown period |
+| DirectoryError | Your request is throttled temporarily. Please try after 300 seconds. | This is an expected error, possibly because multiple registration requests were made in quick succession. | Retry the join after the cool-down period |
+| | |
-##### Sync join server Errors
+**Sync-join server errors**:
| Server error code | Server error message | Possible reasons | Resolution | | | | | |
-| DirectoryError | AADSTS90002: Tenant `UUID` not found. This error may happen if there are no active subscriptions for the tenant. Check with your subscription administrator. | Tenant ID in SCP object is incorrect | Ensure SCP object is configured with the correct Azure AD tenant ID and active subscriptions and present in the tenant. |
-| DirectoryError | The device object by the given ID is not found. | Expected error for sync join. The device object has not synced from AD to Azure AD | Wait for the Azure AD Connect sync to complete and the next join attempt after sync completion will resolve the issue |
-| AuthenticationError | The verification of the target computer's SID | The certificate on the Azure AD device doesn't match the certificate used to sign the blob during the sync join. This error typically means sync hasnΓÇÖt completed yet. | Wait for the Azure AD Connect sync to complete and the next join attempt after sync completion will resolve the issue |
+| DirectoryError | AADSTS90002: Tenant `UUID` not found. This error might happen if there are no active subscriptions for the tenant. Check with your subscription administrator. | The tenant ID in the service connection point object is incorrect. | Ensure that the service connection point object is configured with the correct Azure AD tenant ID and active subscriptions or that the service is present in the tenant. |
+| DirectoryError | The device object by the given ID is not found. | This is an expected error for sync-join. The device object hasn't yet synced from Active Directory to Azure AD. | Wait for the Azure AD Connect sync to finish. The next join attempt after sync completion will resolve the issue. |
+| AuthenticationError | The verification of the target computer's SID | The certificate on the Azure AD device doesn't match the certificate that's used to sign the blob during the sync-join. This error ordinarily means that sync hasn't finished yet. | Wait for the Azure AD Connect sync to finish. The next join attempt after sync completion will resolve the issue. |
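
Both sync-join errors above clear on their own after Azure AD Connect syncs the device object. If you don't want to wait for the next scheduled cycle, you can optionally trigger a delta sync on the Azure AD Connect server. This sketch assumes the ADSync module that's installed with Azure AD Connect:

```powershell
# Run on the Azure AD Connect server to start an immediate delta sync
Import-Module ADSync
Start-ADSyncSyncCycle -PolicyType Delta
```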
### Step 5: Collect logs and contact Microsoft Support
-Download the file Auth.zip from [https://cesdiagtools.blob.core.windows.net/windows/Auth.zip](https://cesdiagtools.blob.core.windows.net/windows/Auth.zip)
+1. [Download the *Auth.zip* file](https://cesdiagtools.blob.core.windows.net/windows/Auth.zip).
-1. Unzip the files to a folder such as c:\temp and change into the folder.
-1. From an elevated PowerShell session, run **.\start-auth.ps1 -v -accepteula**.
-1. Use Switch Account to toggle to another session with the problem user.
+1. Extract the files to a folder, such as *c:\temp*, and then go to the folder.
+1. From an elevated PowerShell session, run `.\start-auth.ps1 -v -accepteula`.
+1. Select **Switch Account** to toggle to another session with the problem user.
1. Reproduce the issue.
-1. Use Switch Account to toggle back to the admin session running the tracing.
-1. From the elevated PowerShell session, run **.\stop-auth.ps1**.
-1. Zip and send the folder **Authlogs** from the folder where the scripts were executed from.
+1. Select **Switch Account** to toggle back to the admin session that's running the tracing.
+1. From the elevated PowerShell session, run `.\stop-auth.ps1`.
+1. Zip (compress) and send the folder *Authlogs* from the folder where the scripts were executed.
-## Troubleshoot Post-Join Authentication issues
-
-### Step 1: Retrieve PRT status using dsregcmd /status
+## Troubleshoot post-join authentication issues
-**To retrieve the PRT status:**
+### Step 1: Retrieve the PRT status by using `dsregcmd /status`
-1. Open a command prompt.
+1. Open a Command Prompt window.
> [!NOTE]
- > To get PRT status the command prompt should be run in the context of the logged in user
+ > To get the Primary Refresh Token (PRT) status, open the Command Prompt window in the context of the logged-in user.
-2. Type dsregcmd /status
+1. Run `dsregcmd /status`.
-3. "SSO state" section provides the current PRT status.
+ The "SSO state" section provides the current PRT status.
-4. If the AzureAdPrt field is set to "NO", there was an error acquiring PRT from Azure AD.
+ If the AzureAdPrt field is set to *NO*, there was an error acquiring the PRT from Azure AD.
-5. If the AzureAdPrtUpdateTime is more than 4 hours, there is likely an issue refreshing PRT. Lock and unlock the device to force PRT refresh and check if the time got updated.
+1. If the AzureAdPrtUpdateTime is more than four hours old, there's likely an issue with refreshing the PRT. Lock and unlock the device to force the PRT refresh, and then check to see whether the time has been updated.
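
To pull just the PRT-related lines out of the full status output, you can filter it; for example, from the same user context:

```powershell
# Show only the PRT-related lines from the SSO state section of dsregcmd output
dsregcmd /status | Select-String -Pattern 'AzureAdPrt'
```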
### Step 2: Find the error code
-### From dsregcmd output
+**From the `dsregcmd` output**
> [!NOTE]
-> Available from **Windows 10 May 2021 Update (version 21H1)**.
+> The output is available from the Windows&nbsp;10 May 2021 update (version 21H1).
-"Attempt Status" field under AzureAdPrt Field will provide the status of previous PRT attempt along with other required debug information. For older Windows versions, this information needs to be extracted from AAD analytic and operational logs.
+The "Attempt Status" field under the "AzureAdPrt" field will provide the status of the previous PRT attempt, along with other required debug information. For earlier Windows versions, extract the information from the Azure AD analytics and operational logs.
Sample "Attempt Status" output (truncated):

```
Server Error Description : AADSTS50126: Error validating credentials due to invalid username or password.
```
-### From AAD Analytic and operational logs
+**From the Azure AD analytics and operational logs**
-Use Event Viewer to locate the log entries logged by AAD CloudAP plugin during PRT acquisition
+Use Event Viewer to look for the log entries that are logged by the Azure AD CloudAP plug-in during PRT acquisition.
-1. Open the AAD event logs in event viewer. Located under Application and Services Log > Microsoft > Windows > AAD
+1. In Event Viewer, open the Azure AD event logs. They're stored under **Applications and Services Log** > **Microsoft** > **Windows** > **AAD**.
> [!NOTE]
- > CloudAP plugin logs error events into the Operational logs while the info events are logged to the Analytic logs. Both Analytic and Operational log events are required to troubleshoot issues.
-
-2. Event 1006 in Analytic logs denotes the start of the PRT acquisition flow and Event 1007 in Analytic logs denotes the end of the PRT acquisition flow. All events in AAD logs (Analytic and Operational) between logged between the events 1006 and 1007 were logged as part of the PRT acquisition flow.
-
-3. Event 1007 logs the final error code.
--
-### Step 3: Follow additional troubleshooting, based on the found error code, from the list below
-
-**STATUS_LOGON_FAILURE** (-1073741715/ 0xc000006d)
-
-**STATUS_WRONG_PASSWORD** (-1073741718/ 0xc000006a)
-
-Reason(s):
-- Device is unable to connect to the AAD authentication service-- Received an error response (HTTP 400) from AAD authentication service or WS-Trust endpoint .
-> [!NOTE]
-> WS-Trust is required for federated authentication
-
-Resolution:
-- If the on-premises environment requires an outbound proxy, the IT admin must ensure that the computer account of the device is able to discover and silently authenticate to the outbound proxy.-- Events 1081 and 1088 (AAD operational logs) would contain the server error code and error description for errors originating from AAD authentication service and WS-Trust endpoint, respectively. Common server error codes and their resolutions are listed in the next section. First instance of Event 1022 (AAD analytic logs), preceding events 1081 or 1088, will contain the URL being accessed.---
-**STATUS_REQUEST_NOT_ACCEPTED** (-1073741616/ 0xc00000d0)
-
-Reason(s):
-- Received an error response (HTTP 400) from AAD authentication service or WS-Trust endpoint.
-> [!NOTE]
-> WS-Trust is required for federated authentication
-
-Resolution:
-- Events 1081 and 1088 (AAD operational logs) would contain the server error code and error description for errors originating from AAD authentication service and WS-Trust endpoint, respectively. Common server error codes and their resolutions are listed in the next section. First instance of Event 1022 (AAD analytic logs), preceding events 1081 or 1088, will contain the URL being accessed.---
-**STATUS_NETWORK_UNREACHABLE** (-1073741252/ 0xc000023c)
-
-**STATUS_BAD_NETWORK_PATH** (-1073741634/ 0xc00000be)
-
-**STATUS_UNEXPECTED_NETWORK_ERROR** (-1073741628/ 0xc00000c4)
-
-Reason(s):
-- Received an error response (HTTP > 400) from AAD authentication service or WS-Trust endpoint.
-> [!NOTE]
-> WS-Trust is required for federated authentication
-- Network connectivity issue to a required endpoint-
-Resolution:
-- For server errors, Events 1081 and 1088 (AAD operational logs) would contain the error code and error description from AAD authentication service and WS-Trust endpoint, respectively. Common server error codes and their resolutions are listed in the next section.-- For connectivity issues, Events 1022 (AAD analytic logs) and 1084 (AAD operational logs) will contain the URL being accessed and the sub-error code from network stack , respectively.--
-**STATUS_NO_SUCH_LOGON_SESSION** (-1073741729/ 0xc000005f)
-
-Reason(s):
-- User realm discovery failed as AAD authentication service was unable to find the userΓÇÖs domain-
-Resolution:
-- The domain of the userΓÇÖs UPN must be added as a custom domain in AAD. Event 1144 (AAD analytic logs) will contain the UPN provided.-- If the on-premises domain name is non-routable (jdoe@contoso.local), configure Alternate Login ID (AltID). References: [prerequisites](hybrid-azuread-join-plan.md) [configuring-alternate-login-id](/windows-server/identity/ad-fs/operations/configuring-alternate-login-id) ---
-**AAD_CLOUDAP_E_OAUTH_USERNAME_IS_MALFORMED** (-1073445812/ 0xc004844c)
-
-Reason(s):
-- UserΓÇÖs UPN is not in expected format.
-> [!NOTE]
-> - For Azure AD joined devices, the UPN is the text entered by the user in the LoginUI.
-> - For Hybrid Azure AD joined devices, the UPN is returned from the domain controller during the login process.
-
-Resolution:
-- UserΓÇÖs UPN should be in the Internet-style login name, based on the Internet standard [RFC 822](https://www.ietf.org/rfc/rfc0822.txt). Event 1144 (AAD analytic logs) will contain the UPN provided.-- For Hybrid joined devices, ensure the domain controller is configured to return the UPN in the correct format. whoami /upn should display the configured UPN in the domain controller.-- If the on-premises domain name is non-routable (jdoe@contoso.local), configure Alternate Login ID (AltID). References: [prerequisites](hybrid-azuread-join-plan.md) [configuring-alternate-login-id](/windows-server/identity/ad-fs/operations/configuring-alternate-login-id) --
+ > The CloudAP plug-in logs error events in the operational logs, and it logs the info events in the analytics logs. The analytics and operational log events are both required to troubleshoot issues.
-**AAD_CLOUDAP_E_OAUTH_USER_SID_IS_EMPTY** (-1073445822/ 0xc0048442)
+1. Event 1006 in the analytics logs denotes the start of the PRT acquisition flow, and event 1007 in the analytics logs denotes the end of the PRT acquisition flow. All events in the Azure AD logs (analytics and operational) that are logged between events 1006 and 1007 were logged as part of the PRT acquisition flow.
-Reason(s):
-- User SID missing in ID Token returned by AAD authentication service
+1. Event 1007 logs the final error code.
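
To pull these entries without browsing Event Viewer manually, you can query the operational channel from PowerShell. The channel name `Microsoft-Windows-AAD/Operational` is assumed here, and the analytics channel must be enabled separately in Event Viewer before it records events:

```powershell
# List recent CloudAP error events (1081 and 1088) from the Azure AD operational log
Get-WinEvent -FilterHashtable @{ LogName = 'Microsoft-Windows-AAD/Operational'; Id = 1081, 1088 } -MaxEvents 20 |
    Format-List TimeCreated, Id, Message
```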
-Resolution:
-- Ensure that network proxy is not interfering and modifying the server response. --
-**AAD_CLOUDAP_E_WSTRUST_SAML_TOKENS_ARE_EMPTY** (--1073445695/ 0xc00484c1)
+### Step 3: Troubleshoot further, based on the found error code
-Reason(s):
-- Received an error from WS-Trust endpoint.
-> [!NOTE]
-> WS-Trust is required for federated authentication
+| Error code | Reason | Resolution |
+| | | |
+| **STATUS_LOGON_FAILURE** (-1073741715/ 0xc000006d)<br>**STATUS_WRONG_PASSWORD** (-1073741718/ 0xc000006a) | <li>The device is unable to connect to the Azure AD authentication service.<li>Received an error response (HTTP 400) from the Azure AD authentication service or WS-Trust endpoint.<br>**Note**: WS-Trust is required for federated authentication. | <li>If the on-premises environment requires an outbound proxy, the IT admin must ensure that the computer account of the device is able to discover and silently authenticate to the outbound proxy.<li>Events 1081 and 1088 (Azure AD operational logs) would contain the server error code and error description for errors originating from the Azure AD authentication service and the WS-Trust endpoint, respectively. Common server error codes and their resolutions are listed in the next section. The first instance of event 1022 (Azure AD analytics logs), preceding events 1081 or 1088, will contain the URL that's being accessed. |
+| **STATUS_REQUEST_NOT_ACCEPTED** (-1073741616/ 0xc00000d0) | Received an error response (HTTP 400) from the Azure AD authentication service or WS-Trust endpoint.<br>**Note**: WS-Trust is required for federated authentication. | Events 1081 and 1088 (Azure AD operational logs) would contain the server error code and error description for errors originating from Azure AD authentication service and WS-Trust endpoint, respectively. Common server error codes and their resolutions are listed in the next section. The first instance of event 1022 (Azure AD analytics logs), preceding events 1081 or 1088, will contain the URL that's being accessed. |
+| **STATUS_NETWORK_UNREACHABLE** (-1073741252/ 0xc000023c)<br>**STATUS_BAD_NETWORK_PATH** (-1073741634/ 0xc00000be)<br>**STATUS_UNEXPECTED_NETWORK_ERROR** (-1073741628/ 0xc00000c4) | <li>Received an error response (HTTP > 400) from the Azure AD authentication service or WS-Trust endpoint.<br>**Note**: WS-Trust is required for federated authentication.<li>Network connectivity issue to a required endpoint. | <li>For server errors, events 1081 and 1088 (Azure AD operational logs) would contain the error code and error description from the Azure AD authentication service and the WS-Trust endpoint, respectively. Common server error codes and their resolutions are listed in the next section.<li>For connectivity issues, event 1022 (Azure AD analytics logs) will contain the URL that's being accessed, and event 1084 (Azure AD operational logs) will contain the sub-error code from the network stack. |
+| **STATUS_NO_SUCH_LOGON_SESSION** (-1073741729/ 0xc000005f) | User realm discovery failed because the Azure AD authentication service was unable to find the user's domain. | <li>The domain of the user's UPN must be added as a custom domain in Azure AD. Event 1144 (Azure AD analytics logs) will contain the UPN provided.<li>If the on-premises domain name is non-routable (jdoe@contoso.local), configure an Alternate Login ID (AltID). References: [Prerequisites](hybrid-azuread-join-plan.md); [Configure Alternate Login ID](/windows-server/identity/ad-fs/operations/configuring-alternate-login-id). |
+| **AAD_CLOUDAP_E_OAUTH_USERNAME_IS_MALFORMED** (-1073445812/ 0xc004844c) | The user's UPN isn't in the expected format.<br>**Notes**:<li>For Azure AD-joined devices, the UPN is the text that's entered by the user in the LoginUI. <li>For hybrid Azure AD-joined devices, the UPN is returned from the domain controller during the login process. | <li>The user's UPN should be in the internet-style login name, based on the internet standard [RFC 822](https://www.ietf.org/rfc/rfc0822.txt). Event 1144 (Azure AD analytics logs) will contain the UPN provided.<li>For hybrid-joined devices, ensure that the domain controller is configured to return the UPN in the correct format. In the domain controller, `whoami /upn` should display the configured UPN.<li>If the on-premises domain name is non-routable (jdoe@contoso.local), configure Alternate Login ID (AltID). References: [Prerequisites](hybrid-azuread-join-plan.md); [Configure Alternate Login ID](/windows-server/identity/ad-fs/operations/configuring-alternate-login-id). |
+| **AAD_CLOUDAP_E_OAUTH_USER_SID_IS_EMPTY** (-1073445822/ 0xc0048442) | The user SID is missing in the ID token that's returned by the Azure AD authentication service. | Ensure that the network proxy isn't interfering with and modifying the server response. |
+| **AAD_CLOUDAP_E_WSTRUST_SAML_TOKENS_ARE_EMPTY** (-1073445695/ 0xc00484c1) | Received an error from the WS-Trust endpoint.<br>**Note**: WS-Trust is required for federated authentication. | <li>Ensure that the network proxy isn't interfering with and modifying the WS-Trust response.<li>Event 1088 (Azure AD operational logs) would contain the server error code and error description from the WS-Trust endpoint. Common server error codes and their resolutions are listed in the next section. |
+| **AAD_CLOUDAP_E_HTTP_PASSWORD_URI_IS_EMPTY** (-1073445749/ 0xc004848b) | The MEX endpoint is incorrectly configured. The MEX response doesn't contain any password URLs. | <li>Ensure that the network proxy isn't interfering with and modifying the server response.<li>Fix the MEX configuration to return valid URLs in response. |
+| **WC_E_DTDPROHIBITED** (-1072894385/ 0xc00cee4f) | The XML response, from the WS-Trust endpoint, included a Document Type Definition (DTD). A DTD isn't expected in XML responses, and parsing the response will fail if a DTD is included.<br>**Note**: WS-Trust is required for federated authentication. | <li>Fix the configuration in the identity provider to avoid sending a DTD in the XML response.<li>Event 1022 (Azure AD analytics logs) will contain the URL that's being accessed that's returning an XML response with a DTD. |
+| | |
-Resolution:
-- Ensure that network proxy is not interfering and modifying the WS-Trust response.-- Event 1088 (AAD operational logs) would contain the server error code and error description from WS-Trust endpoint. Common server error codes and their resolutions are listed in the next section -
+**Common server error codes:**
-**AAD_CLOUDAP_E_HTTP_PASSWORD_URI_IS_EMPTY** (-1073445749/ 0xc004848b)
+| Error code | Reason | Resolution |
+| | | |
+| **AADSTS50155: Device authentication failed** | <li>Azure AD is unable to authenticate the device to issue a PRT.<li>Confirm that the device hasn't been deleted or disabled in the Azure portal. For more information about this issue, see [Azure Active Directory device management FAQ](faq.yml#why-do-my-users-see-an-error-message-saying--your-organization-has-deleted-the-device--or--your-organization-has-disabled-the-device--on-their-windows-10-devices). | Follow the instructions for this issue in [Azure Active Directory device management FAQ](faq.yml#i-disabled-or-deleted-my-device-in-the-azure-portal-or-by-using-windows-powershell--but-the-local-state-on-the-device-says-it-s-still-registered--what-should-i-do) to re-register the device based on the device join type. |
+| **AADSTS50034: The user account `Account` does not exist in the `tenant id` directory** | Azure AD is unable to find the user account in the tenant. | <li>Ensure that the user is typing the correct UPN.<li>Ensure that the on-premises user account is being synced with Azure AD.<li>Event 1144 (Azure AD analytics logs) will contain the UPN provided. |
+| **AADSTS50126: Error validating credentials due to invalid username or password.** | <li>The username and password entered by the user in the Windows LoginUI are incorrect.<li>If the tenant has password hash sync enabled, the device is hybrid-joined, and the user just changed the password, it's likely that the new password hasn't synced with Azure AD. | To acquire a fresh PRT with the new credentials, wait for the Azure AD password sync to finish. |
+| | |
-Reason:
-- MEX endpoint incorrectly configured. MEX response does not contain any password URLs-
-Resolution:
-- Ensure that network proxy is not interfering and modifying the server response-- Fix the MEX configuration to return valid URLs in response. ---
-**WC_E_DTDPROHIBITED** (-1072894385/ 0xc00cee4f)
-
-Reason:
-- XML response, from WS-TRUST endpoint, included a DTD. DTD is not expected in the XML responses and parsing the response will fail if DTD is included.
-> [!NOTE]
-> WS-Trust is required for federated authentication
-
-Resolution:
-- Fix configuration in the identity provider to avoid sending DTD in XML response . -- Event 1022 (AAD analytic logs) will contain the URL being accessed that is returning the XML response with DTD.--
-**Common Server Error codes:**
+**Common network error codes**:
-**AADSTS50155: Device authentication failed**
+| Error code | Reason | Resolution |
+| | | |
+| **ERROR_WINHTTP_TIMEOUT** (12002)<br>**ERROR_WINHTTP_NAME_NOT_RESOLVED** (12007)<br>**ERROR_WINHTTP_CANNOT_CONNECT** (12029)<br>**ERROR_WINHTTP_CONNECTION_ERROR** (12030) | Common general network-related issues. | <li>Events 1022 (Azure AD analytics logs) and 1084 (Azure AD operational logs) will contain the URL that's being accessed.<li>If the on-premises environment requires an outbound proxy, the IT admin must ensure that the computer account of the device is able to discover and silently authenticate to the outbound proxy.<br><br>Get additional [network error codes](/windows/win32/winhttp/error-messages). |
+| | |
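
For the outbound proxy guidance above, the machine-wide WinHTTP proxy configuration is often the relevant setting, because the PRT is acquired in the machine context rather than through a user's browser proxy. As a quick, illustrative check:

```powershell
# Show the machine-wide WinHTTP proxy that system components typically use for outbound calls
netsh winhttp show proxy
```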
-Reason:
-- AAD is unable to authenticate the device to issue a PRT-- Confirm the device has not been deleted or disabled in the Azure portal. [More Info](faq.yml#why-do-my-users-see-an-error-message-saying--your-organization-has-deleted-the-device--or--your-organization-has-disabled-the-device--on-their-windows-10-devices)
-Resolution :
-- Follow steps listed [here](faq.yml#i-disabled-or-deleted-my-device-in-the-azure-portal-or-by-using-windows-powershell--but-the-local-state-on-the-device-says-it-s-still-registered--what-should-i-do) to re-register the device based on the device join type.---
-**AADSTS50034: The user account `Account` does not exist in the `tenant id` directory**
-
-Reason:
-- AAD is unable to find the user account in the tenant.-
-Resolution:
-- Ensure the user is typing the correct UPN.-- Ensure the on-prem user account is being synced to AAD.-- Event 1144 (AAD analytic logs) will contain the UPN provided.---
-**AADSTS50126: Error validating credentials due to invalid username or password.**
-
-Reason:
-- Username and password entered by the user in the windows LoginUI are incorrect.-- If the tenant has Password Hash Sync enabled, the device is Hybrid Joined and the user just changed the password it is likely the new password hasnΓÇÖt synced to AAD. -
-Resolution:
-- Wait for the AAD sync to complete to acquire a fresh PRT with the new credentials. ---
-**Common Network Error codes:**
-
-**ERROR_WINHTTP_TIMEOUT** (12002)
-
-**ERROR_WINHTTP_NAME_NOT_RESOLVED** (12007)
-
-**ERROR_WINHTTP_CANNOT_CONNECT** (12029)
-
-**ERROR_WINHTTP_CONNECTION_ERROR** (12030)
-
-Reason:
-- Common general network related issues. -
-Resolution:
-- Events 1022 (AAD analytic logs) and 1084 (AAD operational logs) will contain the URL being accessed-- If the on-premises environment requires an outbound proxy, the IT admin must ensure that the computer account of the device is able to discover and silently authenticate to the outbound proxy-
-> [!NOTE]
-> Other network error codes located [here](/windows/win32/winhttp/error-messages).
---
-### Step 4: Collect logs ###
+### Step 4: Collect logs
**Regular logs**
-1. Go to https://aka.ms/icesdptool, which will automatically download a .cab file containing the Diagnostic tool.
-2. Run the tool and repro your scenario, once the repro is complete. Finish the process.
-3. For Fiddler traces accept the certificate requests that will pop up.
-4. The wizard will prompt you for a password to safeguard your trace files. Provide a password.
-5. Finally, open the folder where all the logs collected are stored. It is typically in a folder like
- %LOCALAPPDATA%\ElevatedDiagnostics\numbers
-7. Contact support with contents of latest.cab, which contains all the collected logs.
+1. Go to https://aka.ms/icesdptool to automatically download a *.cab* file containing the Diagnostic tool.
+1. Run the tool and repro your scenario.
+1. For Fiddler traces, accept the certificate requests that pop up.
+1. The wizard will prompt you for a password to safeguard your trace files. Provide a password.
+1. Finally, open the folder where all the collected logs are stored, such as *%LOCALAPPDATA%\ElevatedDiagnostics\numbers*.
+1. Contact Support with the contents of the latest *.cab* file, which contains all the collected logs.
**Network traces** > [!NOTE]
-> Collecting Network Traces: (it is important to NOT use Fiddler during repro)
+> When you're collecting network traces, it's important to *not* use Fiddler during repro.
-1. netsh trace start scenario=InternetClient_dbg capture=yes persistent=yes
-2. Lock and Unlock the device. For Hybrid joined devices wait a > minute to allow PRT acquisition task to complete.
-3. netsh trace stop
-4. Share nettrace.cab
+1. Run `netsh trace start scenario=internetClient_dbg capture=yes persistent=yes`.
+1. Lock and unlock the device. For hybrid-joined devices, wait a minute or more to allow the PRT acquisition task to finish.
+1. Run `netsh trace stop`.
+1. Share the *nettrace.cab* file with Support.
## Known issues-- Under Settings -> Accounts -> Access Work or School, Hybrid Azure AD joined devices may show two different accounts, one for Azure AD and one for on-premises AD, when connected to mobile hotspots or external WiFi networks. This is only a UI issue and does not have any impact on functionality.
+- If you're connected to a mobile hotspot or an external Wi-Fi network and you go to **Settings** > **Accounts** > **Access Work or School**, hybrid Azure AD-joined devices might show two different accounts, one for Azure AD and one for on-premises AD. This is a UI issue only and doesn't affect functionality.
## Next steps -- Continue [troubleshooting devices using the dsregcmd command](troubleshoot-device-dsregcmd.md)--- [The Microsoft Error Lookup Tool](/windows/win32/debug/system-error-code-lookup-tool)
+- [Troubleshoot devices by using the `dsregcmd` command](troubleshoot-device-dsregcmd.md).
+- Go to the [Microsoft Error Lookup Tool](/windows/win32/debug/system-error-code-lookup-tool).
active-directory Directory Delete Howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/directory-delete-howto.md
Previously updated : 07/14/2021 Last updated : 09/01/2021
active-directory Directory Overview User Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/directory-overview-user-model.md
Previously updated : 12/02/2020 Last updated : 09/01/2021
active-directory Directory Self Service Signup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/directory-self-service-signup.md
Previously updated : 05/19/2021 Last updated : 09/01/2021
active-directory Directory Service Limits Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/directory-service-limits-restrictions.md
Previously updated : 07/29/2021 Last updated : 09/01/2021
active-directory Domains Admin Takeover https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/domains-admin-takeover.md
Previously updated : 04/18/2021 Last updated : 09/01/2021
active-directory Domains Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/domains-manage.md
Previously updated : 07/30/2021 Last updated : 09/01/2021
active-directory Domains Verify Custom Subdomain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/domains-verify-custom-subdomain.md
Previously updated : 04/18/2021 Last updated : 09/01/2021
active-directory Groups Assign Sensitivity Labels https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/groups-assign-sensitivity-labels.md
Previously updated : 05/28/2021 Last updated : 09/01/2021
active-directory Groups Bulk Download Members https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/groups-bulk-download-members.md
Previously updated : 12/02/2020 Last updated : 09/01/2021
active-directory Groups Bulk Download https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/groups-bulk-download.md
Previously updated : 12/02/2020 Last updated : 09/01/2021
active-directory Groups Bulk Import Members https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/groups-bulk-import-members.md
Previously updated : 12/02/2020 Last updated : 09/02/2021
active-directory Groups Bulk Remove Members https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/groups-bulk-remove-members.md
Previously updated : 11/15/2020 Last updated : 09/02/2021
active-directory Groups Change Type https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/groups-change-type.md
Previously updated : 12/02/2020 Last updated : 09/02/2021
active-directory Groups Create Rule https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/groups-create-rule.md
Previously updated : 12/02/2020 Last updated : 09/02/2021
active-directory Groups Dynamic Rule Validation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/groups-dynamic-rule-validation.md
Previously updated : 12/02/2020 Last updated : 09/02/2021
active-directory Groups Dynamic Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/groups-dynamic-tutorial.md
Previously updated : 12/02/2020 Last updated : 09/02/2021
active-directory Groups Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/groups-lifecycle.md
Previously updated : 12/02/2020 Last updated : 09/02/2021
active-directory Groups Members Owners Search https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/groups-members-owners-search.md
Previously updated : 12/02/2020 Last updated : 09/02/2021
active-directory Groups Naming Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/groups-naming-policy.md
Previously updated : 08/06/2021 Last updated : 09/02/2021
active-directory Groups Quickstart Expiration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/groups-quickstart-expiration.md
Previously updated : 12/02/2020 Last updated : 09/02/2021
active-directory Customize Branding https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/customize-branding.md
Your custom branding won't immediately appear when your users go to sites such a
- **Square logo image, dark theme.** Same as the square logo image above. This logo image takes the place of the square logo image when used with a dark background, such as with Windows 10 Azure AD joined screens during the out-of-box experience (OOBE). If your logo looks good on white, dark blue, and black backgrounds, you don't need to add this image.
+ >[!IMPORTANT]
+ > Transparent logos are supported with the square logo image. However, the color palette used in the transparent logo could conflict with backgrounds (such as white, light gray, dark gray, and black backgrounds) used within Microsoft 365 apps and services that consume the square logo image. Solid color backgrounds might be needed to ensure that the square logo image is rendered correctly in all situations.
+
- **Show option to remain signed in.** You can choose to let your users remain signed in to Azure AD until explicitly signing out. If you choose **No**, this option is hidden, and users must sign in each time the browser is closed and reopened. This capability is only available on the default branding object and not on any language-specific object. To learn more about configuring and troubleshooting the option to remain signed in, see [Configure the 'Stay signed in?' prompt for Azure AD accounts](keep-me-signed-in.md)
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new.md
For more information on My Apps, read [Sign in and start apps from the My Apps p
**Service category:** MS Graph **Product capability:** Developer Experience
-Application authentication method policies in MS Graph which allow IT admins to enforce lifetime on application password secret credential or block the use of secrets altogether. Policies can be enforced for an entire tenant as a default configuration and it can be scoped to specific applications or service principals. [Learn more](graph/api/resources/policy-overview?view=graph-rest-beta).
+Application authentication method policies in MS Graph allow IT admins to enforce lifetimes on application password secret credentials or to block the use of secrets altogether. Policies can be enforced for an entire tenant as a default configuration, and they can be scoped to specific applications or service principals. [Learn more](/graph/api/resources/policy-overview?view=graph-rest-beta).
Microsoft Graph support for the Mobility (MDM/MAM) configuration in Azure AD is
**Service category:** User Access Management **Product capability:** Entitlement Management
-Azure AD entitlement management now supports the creation of custom questions in the access package request flow. This feature allows you to configure custom questions in the access package policy. These questions are shown to requestors who can input their answers as part of the access request process. These answers will be displayed to approvers, giving them helpful information that empowers them to make better decisions on the access request. [Learn more](../governance/entitlement-management-access-package-create.md#add-requestor-information-to-an-access-package).
+Azure AD entitlement management now supports the creation of custom questions in the access package request flow. This feature allows you to configure custom questions in the access package policy. These questions are shown to requestors who can input their answers as part of the access request process. These answers will be displayed to approvers, giving them helpful information that empowers them to make better decisions on the access request. [Learn more](../governance/entitlement-management-access-package-create.md).
active-directory Entitlement Management Catalog Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-catalog-create.md
description: Learn how to create a new container of resources and access package
documentationCenter: '' -+ editor: HANKI
na
ms.devlang: na Previously updated : 12/23/2020 Last updated : 8/31/2021
To include resources in an access package, the resources must exist in a catalog
1. Click a resource type: **Groups and Teams**, **Applications**, or **SharePoint sites**.
- If you don't see a resource that you want to add or you are unable to add a resource, make sure you have the required Azure AD directory role and entitlement management role. You might need to have someone with the required roles add the resource to your catalog. For more information, see [Required roles to add resources to a catalog](entitlement-management-delegate.md#required-roles-to-add-resources-to-a-catalog).
+ If you don't see a resource that you want to add or you're unable to add a resource, make sure you have the required Azure AD directory role and entitlement management role. You might need to have someone with the required roles add the resource to your catalog. For more information, see [Required roles to add resources to a catalog](entitlement-management-delegate.md#required-roles-to-add-resources-to-a-catalog).
1. Select one or more resources of the type that you would like to add to the catalog.
To include resources in an access package, the resources must exist in a catalog
These resources can now be included in access packages within the catalog.
+### Add resource attributes (Preview) in the catalog
+
+Attributes are required fields that requestors will be asked to answer before submitting their access request. Their answers for these attributes will be shown to approvers and also stamped on the user object in Azure Active Directory.
+
+> [!NOTE]
+>All attributes set up on a resource require an answer before a request for an access package containing that resource can be submitted. If requestors don't provide an answer, their request won't be processed.
+
+To require attributes for access requests, use the following steps:
+
+1. Click **Resources** in the left menu, and a list of resources in the catalog will appear.
+
+1. Click the ellipsis next to the resource to which you want to add attributes, and then select **Require attributes (Preview)**.
+
+ ![Add resources - select require attributes](./media/entitlement-management-catalog-create/resources-require-attributes.png)
+
+1. Select the attribute type:
+
+ 1. **Built-in**: includes Azure Active Directory user profile attributes.
+ 1. **Directory schema extension**: provides a way to store additional data in Azure Active Directory on user objects and other directory objects. This includes groups, tenant details, and service principals. Only extension attributes on user objects can be used to send out claims to applications.
+ 1. If you choose **Built-in**, you can choose an attribute from the dropdown list. If you choose **Directory schema extension**, you can enter the attribute name in the textbox.
+
+ > [!NOTE]
+ > The User.mobilePhone attribute can be updated only for non-administrator users. Learn more [here](/graph/permissions-reference#remarks-5).
+
+1. Select the Answer format in which you would like requestors to answer. Answer formats include: **short text**, **multiple choice**, and **long text**.
+
+1. If selecting multiple choice, click on the **Edit and localize** button to configure the answer options.
+ 1. After selecting Edit and localize, the **View/edit question** pane will open.
+ 1. In the **Answer values** boxes, type the response options that you want to give the requestor when they answer the question.
+ 1. Select the language for the response option. You can localize response options if you choose additional languages.
+ 1. Type in as many responses as you need, and then click **Save**.
+
+1. If you want the attribute value to be editable during direct assignments and self-service requests, select **Yes**.
+
+ > [!NOTE]
+ > ![Add resources - add attributes - make attributes editable](./media/entitlement-management-catalog-create/attributes-are-editable.png)
+ > - If you select **No** in the **Attribute value is editable** field and the attribute value **is empty**, users can enter the value of that attribute. After it's saved, the value will no longer be editable.
+ > - If you select **No** in the **Attribute value is editable** field and the attribute value **is not empty**, users won't be able to edit the pre-existing value, either during direct assignments or during self-service requests.
+
+ ![Add resources - add attributes - questions](./media/entitlement-management-catalog-create/add-attributes-questions.png)
+
+1. If you would like to add localization, click **Add localization**.
+
+ 1. Once in the **Add localizations for questions** pane, select the language code for the language in which you want to localize the question related to the selected attribute.
+ 1. In the language you configured, type the question in the **Localized Text** box.
+ 1. Once you've added all of the localizations needed, click **Save**.
+
+ ![Add resources - add attributes - localization](./media/entitlement-management-catalog-create/attributes-add-localization.png)
+
+1. Once all attribute information is completed on the **Require attributes (Preview)** page, click **Save**.
+ ### Add a Multi-geo SharePoint Site 1. If you have [Multi-Geo](/microsoft-365/enterprise/multi-geo-capabilities-in-onedrive-and-sharepoint-online-in-microsoft-365) enabled for SharePoint, select the environment you would like to select sites from.
You can also add a resource to a catalog using Microsoft Graph. A user in an ap
## Remove resources from a catalog
-You can remove resources from a catalog. A resource can only be removed from a catalog if it is not being used in any of the catalog's access packages.
+You can remove resources from a catalog. A resource can only be removed from a catalog if it isn't being used in any of the catalog's access packages.
**Prerequisite role:** See [Required roles to add resources to a catalog](entitlement-management-delegate.md#required-roles-to-add-resources-to-a-catalog)
You can edit the name and description for a catalog. Users see this information
## Delete a catalog
-You can delete a catalog, but only if it does not have any access packages.
+You can delete a catalog, but only if it doesn't have any access packages.
**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, or Catalog owner
You can also delete a catalog using Microsoft Graph. A user in an appropriate r
## Next steps -- [Delegate access governance to access package managers](entitlement-management-delegate-managers.md)
+- [Delegate access governance to access package managers](entitlement-management-delegate-managers.md)
active-directory Entitlement Management Request Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-request-access.md
na
ms.devlang: na Previously updated : 06/18/2020 Last updated : 08/31/2021
You may request access to an access package that requires business justification
![My Access portal - Request access - Fill out requestor information](./media/entitlement-management-request-access/my-access-requestor-information.png)
+> [!NOTE]
+> You may notice that some of the additional requestor information has pre-populated values. This generally occurs if your account already has attribute information set, either from a previous request or other process. These values can be editable or not depending on the settings of the policy selected.
+ ## Resubmit a request When you request access to an access package, your request might be denied or your request might expire if approvers don't respond in time. If you need access, you can try again and resubmit your request. The following procedure explains how to resubmit an access request:
If you submit an access request and the request is still in the **pending approv
## Next steps - [Approve or deny access requests](entitlement-management-request-approve.md)-- [Request process and email notifications](entitlement-management-process.md)
+- [Request process and email notifications](entitlement-management-process.md)
active-directory Migrate Applications From Okta To Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/migrate-applications-from-okta-to-azure-active-directory.md
++
+ Title: Tutorial to migrate your applications from Okta to Azure Active Directory
+
+description: Learn how to migrate your applications from Okta to Azure Active Directory
+++++++ Last updated : 09/01/2021++++
+# Tutorial: Migrate your applications from Okta to Azure Active Directory
+
+In this tutorial, learn how to migrate your applications from Okta to Azure Active Directory (Azure AD).
+
+## Create an inventory of current Okta applications
+
+When converting Okta applications to Azure AD, it's recommended to first document the current environment and application settings before migration.
+
+Okta offers an API that can be used to collect this information from a centralized location. To use the API, an API explorer tool such as [Postman](https://www.postman.com/) is required.
+
+Follow these steps to create an application inventory:
+
+1. Install the Postman app. Post-installation, generate an API token from the Okta admin console.
+
+2. Navigate to the API dashboard under the Security section, select **Tokens** > **Create Token**
+
+ ![image to show token creation](media/migrate-applications-from-okta-to-azure-active-directory/token-creation.png)
+
+3. Insert a token name and then select **Create Token**.
+
+ ![image to show token created](media/migrate-applications-from-okta-to-azure-active-directory/token-created.png)
+
+4. After selecting **Create Token**, record the value and save it, because it won't be accessible after you select **Ok, got it**.
+
+ ![image to show record created](media/migrate-applications-from-okta-to-azure-active-directory/record-created.png)
+
+5. Once you've recorded the API token, return to the Postman app, and select **Import** under the workspace.
+
+ ![image to show import api](media/migrate-applications-from-okta-to-azure-active-directory/import-api.png)
+
+6. In the Import page, select **link**, and use the following link to import the API:
+<https://developer.okta.com/docs/api/postman/example.oktapreview.com.environment>
+
+ ![image to show link to import](media/migrate-applications-from-okta-to-azure-active-directory/link-to-import.png)
+
+>[!NOTE]
+>Do not modify the link with your tenant values.
+
+7. Continue through the next menu by selecting **Import**.
+
+ ![image to show next import menu](media/migrate-applications-from-okta-to-azure-active-directory/next-import-menu.png)
+
+8. Once imported, change the Environment selection to **{yourOktaDomain}**
+
+ ![image to shows change environment](media/migrate-applications-from-okta-to-azure-active-directory/change-environment.png)
+
+9. After changing the Environment selection, edit your Okta environment by selecting the eye, followed by **Edit**.
+
+ ![image to shows edit environment](media/migrate-applications-from-okta-to-azure-active-directory/edit-environment.png)
+
+10. Update the URL and API key values in both the **Initial Value** and **Current Value** fields, change the name to reflect your environment, and then save the values.
+
+ ![image to shows update values for api](media/migrate-applications-from-okta-to-azure-active-directory/update-values-for-api.png)
+
+11. After saving the API key, [load the apps API into Postman](https://app.getpostman.com/run-collection/377eaf77fdbeaedced17).
+
+12. Once the API has been loaded into Postman, select the **Apps** dropdown, followed by the **Get List Apps** and then select **Send**.
+
+Now you can print all the applications in your Okta tenant to a JSON format.
+
+![image to shows list of applications](media/migrate-applications-from-okta-to-azure-active-directory/list-of-applications.png)
+
+It's recommended to copy and convert this JSON list to CSV by using a public converter such as <https://konklone.io/json/>, or by using PowerShell with [ConvertFrom-Json](https://docs.microsoft.com/powershell/module/microsoft.powershell.utility/convertfrom-json?view=powershell-7.1) and [ConvertTo-Csv](https://docs.microsoft.com/powershell/module/microsoft.powershell.utility/convertto-csv?view=powershell-7.1).
+
+After you download the CSV file, the applications in your Okta tenant are recorded for future reference.
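
As an alternative to exporting from Postman and converting by hand, a minimal PowerShell sketch can call the same Okta apps API and write the CSV directly. This is illustrative only: the Okta domain and API token are placeholders for your own values, and the sketch ignores API pagination (the apps endpoint returns results in pages).

```powershell
# Placeholders: replace with your Okta domain and the API token created earlier
$oktaDomain = 'https://yourOktaDomain.okta.com'
$apiToken   = '<your-api-token>'

# Okta API tokens are passed in the SSWS authorization scheme
$headers = @{ Authorization = "SSWS $apiToken"; Accept = 'application/json' }

# Retrieve the application list (first page only in this sketch)
$apps = Invoke-RestMethod -Uri "$oktaDomain/api/v1/apps" -Headers $headers

# Keep a few identifying fields and save the inventory as CSV
$apps | Select-Object id, name, label, status |
    Export-Csv -Path .\okta-apps.csv -NoTypeInformation
```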
+
+## Migrate a SAML application to Azure AD
+
+To migrate a SAML 2.0 application to Azure AD, first configure the application in your Azure AD tenant for application access. In this example, we'll be converting a Salesforce instance. Follow [this tutorial](https://docs.microsoft.com/azure/active-directory/saas-apps/salesforce-tutorial) to onboard the applications.
+
+To complete the migration process, repeat configuration steps for all applications discovered in the Okta tenant.
+
+1. Navigate to [Azure AD portal](https://aad.portal.azure.com), and select **Azure Active Directory** > **Enterprise applications** > **New application**.
+
+ ![image to shows list of new applications](media/migrate-applications-from-okta-to-azure-active-directory/list-of-new-applications.png)
+
+2. Salesforce is available in the Azure AD gallery. Search for salesforce, select the application and then select **Create**.
+
+ ![image to shows salesforce application](media/migrate-applications-from-okta-to-azure-active-directory/salesforce-application.png)
+
+3. Once the application has been created, navigate to the **Single sign-on** (SSO) tab and select **SAML**.
+
+ ![image to shows SAML application](media/migrate-applications-from-okta-to-azure-active-directory/saml-application.png)
+
+4. After selecting SAML, download the **Federation Metadata XML and Certificate (Raw)** for import into Salesforce.
+
+ ![image to shows download federation metadata](media/migrate-applications-from-okta-to-azure-active-directory/federation-metadata.png)
+
+5. After the XML file has been captured, navigate to the Salesforce admin console, and then select **Identity** > **Single Sign-On Settings** > **New from Metadata File**.
+
+   ![Screenshot that shows the Salesforce admin console.](media/migrate-applications-from-okta-to-azure-active-directory/salesforce-admin-console.png)
+
+6. Upload the XML file that you downloaded from the Azure AD portal, and then select **Create**.
+
+   ![Screenshot that shows uploading the XML file.](media/migrate-applications-from-okta-to-azure-active-directory/upload-xml-file.png)
+
+7. Upload the certificate downloaded from Azure, and then select **Save** in the next menu to create the SAML provider in Salesforce.
+
+   ![Screenshot that shows creating the SAML provider.](media/migrate-applications-from-okta-to-azure-active-directory/create-saml-provider.png)
+
+8. Record the following values for use in Azure: **Entity ID**, **Login URL**, and **Logout URL**. Then select the option to **Download metadata**.
+
+   ![Screenshot that shows recording the values for Azure.](media/migrate-applications-from-okta-to-azure-active-directory/record-values-for-azure.png)
+
+9. Return to the Azure AD enterprise application and, in the SAML SSO settings, select **Upload metadata file** to upload the Salesforce metadata into the Azure AD portal. Confirm that the imported values match the recorded values before saving.
+
+   ![Screenshot that shows uploading the metadata file in Azure AD.](media/migrate-applications-from-okta-to-azure-active-directory/upload-metadata-file.png)
+
+10. After the SSO configuration has been saved, return to the Salesforce administration console and select **Company Settings** > **My Domain**. Navigate to **Authentication Configuration** and select **Edit**.
+
+   ![Screenshot that shows editing the company settings.](media/migrate-applications-from-okta-to-azure-active-directory/edit-company-settings.png)
+
+11. Select the new SAML provider configured in the previous steps as an available sign-in option, and then select **Save**.
+
+   ![Screenshot that shows saving the SAML provider option.](media/migrate-applications-from-okta-to-azure-active-directory/save-saml-provider.png)
+
+12. Return to the enterprise application in Azure AD, select **Users and groups**, and then add test users.
+
+   ![Screenshot that shows adding a test user.](media/migrate-applications-from-okta-to-azure-active-directory/add-test-user.png)
+
+13. To test the configuration, sign in as one of the test users, go to <https://aka.ms/myapps>, and select the **Salesforce** tile.
+
+   ![Screenshot that shows signing in as a test user.](media/migrate-applications-from-okta-to-azure-active-directory/test-user-sign-in.png)
+
+14. After selecting the Salesforce tile, select the newly configured identity provider (IdP) to sign in.
+
+   ![Screenshot that shows selecting the new identity provider.](media/migrate-applications-from-okta-to-azure-active-directory/new-identity-provider.png)
+
+15. If everything has been configured correctly, the user will land on the Salesforce home page. If there are any issues, follow the [debugging guide](https://docs.microsoft.com/azure/active-directory/manage-apps/debug-saml-sso-issues).
+
+16. After testing the SSO connection from Azure, return to the enterprise application and assign the remaining users to the Salesforce application with the correct roles.
+
+>[!NOTE]
+>After adding the remaining users to the Azure AD application, it's recommended to have users test the connection and ensure there are no issues with access before moving on to the next step.
+
+17. Once users have confirmed there are no issues with signing in, return to the Salesforce administration console and select **Company Settings** > **My Domain**.
+
+18. Navigate to the **Authentication Configuration**, select **Edit**, and deselect Okta as an authentication service.
+
+   ![Screenshot that shows deselecting Okta as an authentication service.](media/migrate-applications-from-okta-to-azure-active-directory/deselect-okta.png)
+
+Salesforce has now been successfully configured for SSO with Azure AD. Steps to clean up the Okta portal will be included later in this document.
+
+## Migrate an OIDC/OAuth 2.0 application to Azure AD
+
+First configure the application in your Azure AD tenant for application access. In this example, we'll be converting a custom OIDC app.
+
+To complete the migration process, repeat configuration steps for all applications discovered in the Okta tenant.
+
+1. Navigate to [Azure AD Portal](https://aad.portal.azure.com), and select **Azure Active Directory** > **Enterprise applications**. Under the **All applications** menu, select **New application**.
+
+2. Select **Create your own application**. On the menu that appears, give the OIDC app a name, select the option for **Register an application you're working on to integrate with Azure AD**, and then select **Create**.
+
+   ![Screenshot that shows creating a new OIDC application.](media/migrate-applications-from-okta-to-azure-active-directory/new-oidc-application.png)
+
+3. On the next page, you'll be presented with a choice about the tenancy of your application registration. For details, see [this article](https://docs.microsoft.com/azure/active-directory/develop/single-and-multi-tenant-apps).
+
+   In this example, we select **Accounts in any organizational directory (Any Azure AD directory - Multitenant)**, and then select **Register**.
+
+   ![Screenshot that shows selecting the multitenant option.](media/migrate-applications-from-okta-to-azure-active-directory/multitenant-azure-ad-directory.png)
+
+4. After registering the application, navigate to the **App registrations** page under **Azure Active Directory**, and open the newly created registration.
+
+   Depending on the [application scenario](https://docs.microsoft.com/azure/active-directory/develop/authentication-flows-app-scenarios), various configuration actions might be needed. Because most scenarios require an app client secret, we'll cover that example.
+
+5. On the **Overview** page, record the Application (client) ID for use in your application later.
+
+   ![Screenshot that shows the application (client) ID.](media/migrate-applications-from-okta-to-azure-active-directory/application-client-id.png)
+
+6. After recording the application ID, select **Certificates & secrets** on the left menu. Select **New client secret**, give the secret a name, and set its expiration accordingly.
+
+   ![Screenshot that shows creating a new client secret.](media/migrate-applications-from-okta-to-azure-active-directory/new-client-secret.png)
+
+7. Record the value and ID of the secret before leaving this page.
+
+>[!NOTE]
+>You won't be able to view this information later. If the secret is lost, you'll have to regenerate it.
+
+8. After recording the information from the steps above, select **API Permissions** on the left, and grant the application access to the OIDC stack.
+
+9. Select **Add a permission**, followed by **Microsoft Graph** and **Delegated permissions**.
+
+10. From the **OpenId permissions** section, add **email**, **openid**, and **profile**, and then select **Add permissions**.
+
+   ![Screenshot that shows adding OpenID permissions.](media/migrate-applications-from-okta-to-azure-active-directory/add-openid-permission.png)
+
+11. After adding the permissions, to improve user experience and suppress user consent prompts, select the **Grant admin consent for Tenant Domain Name** option and wait for the **Granted** status to appear.
+
+   ![Screenshot that shows granting admin consent.](media/migrate-applications-from-okta-to-azure-active-directory/grant-admin-consent.png)
+
+12. If your application has a redirect URI or reply URL, navigate to the **Authentication** tab and select **Add a platform** > **Web**. Enter the appropriate URL, select **Access tokens** and **ID tokens** at the bottom, and then select **Configure**.
+
+   ![Screenshot that shows configuring tokens.](media/migrate-applications-from-okta-to-azure-active-directory/configure-tokens.png)
+
+   If necessary, under **Advanced settings** in the **Authentication** menu, set **Allow public client flows** to **Yes**.
+
+   ![Screenshot that shows allowing public client flows.](media/migrate-applications-from-okta-to-azure-active-directory/allow-client-flows.png)
+
+13. Return to your OIDC-configured application and import the application (client) ID and client secret into your application before testing. Configure your application to use the client ID, secret, and scopes recorded above, as sketched below.
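+
+For reference, the following hedged sketch shows how the recorded values line up with the OIDC endpoints your application will use. The tenant ID is a placeholder; the discovery document retrieved here is what most OIDC client libraries consume, together with the client ID, client secret, and the openid, profile, and email scopes configured earlier.
+
+```powershell
+# Placeholder - replace with your directory (tenant) ID from the Overview page.
+$tenantId = "00000000-0000-0000-0000-000000000000"
+
+# Retrieve the OIDC discovery document that your application (or its OIDC library) will use.
+$discovery = Invoke-RestMethod -Uri "https://login.microsoftonline.com/$tenantId/v2.0/.well-known/openid-configuration"
+
+# Endpoints to configure in your app, together with the client ID, client secret,
+# and the openid, profile, and email scopes granted in the previous steps.
+$discovery.authorization_endpoint
+$discovery.token_endpoint
+```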
+
+## Migrate a custom authorization server to Azure AD
+
+Okta authorization servers map one-to-one to application registrations that [expose an API](https://docs.microsoft.com/azure/active-directory/develop/quickstart-configure-app-expose-web-apis#add-a-scope).
+
+The default Okta authorization server should be mapped to Microsoft Graph scopes or permissions.
+
+![Screenshot that shows the default Okta authorization server.](media/migrate-applications-from-okta-to-azure-active-directory/default-okta-authorization.png)
+
+## Next steps
+
+- [Migrate Okta federation to Azure AD](migrate-okta-federation-to-azure-active-directory.md)
+
+- [Migrate Okta sync provisioning to Azure AD Connect based synchronization](migrate-okta-sync-provisioning-to-azure-active-directory.md)
+
+- [Migrate Okta sign on policies to Azure AD Conditional Access](migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access.md)
active-directory Migrate Okta Federation To Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/migrate-okta-federation-to-azure-active-directory.md
+
+ Title: Tutorial to migrate Okta federation to Azure AD-managed authentication
+
+description: Learn how to migrate your Okta federated applications to Azure AD-managed authentication
+++++++ Last updated : 09/01/2021++++
+# Tutorial: Migrate Okta federation to Azure Active Directory managed authentication
+
+In this tutorial, learn how to migrate your existing Office 365 tenants that are federated with Okta for single sign-on (SSO) to Azure Active Directory (Azure AD) managed authentication.
+
+You can migrate federation to Azure AD in a staged manner to ensure the desired authentication experience for users, and to test reverse federation access back to any remaining Okta SSO applications.
+
+## Prerequisites
+
+- An Office 365 tenant federated to Okta for SSO
+- An Azure AD Connect server or Azure AD Connect cloud provisioning agents configured for user provisioning to Azure AD
+
+## Step 1 - Configure Azure AD Connect for authentication
+
+Customers who have federated their Office 365 domains with Okta might not currently have a valid authentication method configured in Azure AD. Before you migrate to managed authentication, validate Azure AD Connect and configure it with one of the following options to allow user sign-in.
+
+Review the following methods to determine which is best suited for your environment:
+
+- **Password hash synchronization** - [Password hash synchronization](https://docs.microsoft.com/azure/active-directory/hybrid/whatis-phs) is an extension to the directory synchronization feature implemented by Azure AD Connect server or Cloud provisioning agents. You can use this feature to sign into Azure AD services like Microsoft 365. You sign in to the service by using the same password you use to sign in to your on-premises Active Directory instance.
+
+- **Pass-through authentication** - Azure AD [Pass-through authentication](https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-pta) allows users to sign in to both on-premises and cloud-based applications using the same passwords. When users sign in using Azure AD, this feature validates users' passwords directly against the on-premises Active Directory via the Pass-through Authentication agent.
+
+- **Seamless SSO** - [Azure AD Seamless SSO](https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-sso) automatically signs in users when they are on their corporate desktops
+that are connected to your corporate network. Seamless SSO provides your users with easy access to your cloud-based applications without needing any other on-premises components.
+
+Seamless SSO can be deployed alongside either password hash synchronization or pass-through authentication to create a seamless authentication experience for users in Azure AD.
+
+Ensure that you deploy all necessary prerequisites of seamless SSO to your end users by following the [deployment guide](https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-sso-quick-start#step-1-check-the-prerequisites).
+
+For our example, we'll be configuring Password hash sync and Seamless SSO.
+
+### Configure Azure AD Connect for password hash synchronization and seamless SSO
+
+Follow these steps to configure Azure AD Connect for Password hash synchronization:
+
+1. On your Azure AD Connect server, launch the **Azure AD Connect** app from the start menu or desktop icon and select **Configure**.
+
+ ![image shows the Azure AD icon and configure button](media/migrate-okta-federation-to-azure-active-directory/configure-azure-ad.png)
+
+2. Select **Change user sign-in** > **Next**.
+
+ ![image shows the change user sign-in screen](media/migrate-okta-federation-to-azure-active-directory/change-user-signin.png)
+
+3. Enter Global Administrator credentials on the next page.
+
+ ![image shows enter global admin credentials](media/migrate-okta-federation-to-azure-active-directory/global-admin-credentials.png)
+
+4. The server is currently configured for federation with Okta. Change the selection to **Password Hash Synchronization**, and select the **Enable single sign-on** checkbox.
+
+5. After updating the selection, select **Next**.
+
+Follow these steps to enable Seamless SSO:
+
+1. Enter the domain administrator credentials for the local on-premises directory, and then select **Next**.
+
+ ![image shows enter domain admin credentials](media/migrate-okta-federation-to-azure-active-directory/domain-admin-credentials.png)
+
+2. On the final page, select **Configure** to update the Azure AD Connect Server.
+
+ ![image shows update the azure ad connect server](media/migrate-okta-federation-to-azure-active-directory/update-azure-ad-connect-server.png)
+
+3. Ignore the warning for hybrid Azure AD join for now. The **Device options** need to be reconfigured after you disable federation from Okta.
+
+ ![image shows reconfigure device options](media/migrate-okta-federation-to-azure-active-directory/reconfigure-device-options.png)
+
+## Step 2 - Configure staged rollout features
+
+[Staged rollout of cloud authentication](https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-staged-rollout) is a feature of Azure AD that you can use to test de-federating users before de-federating an entire domain. Before deployment, review the [prerequisites](https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-staged-rollout#prerequisites).
+
+After enabling Password Hash Sync and Seamless SSO on the Azure AD Connect server, follow these steps to configure staged rollout.
+
+1. Go to the [Azure portal](https://portal.azure.com/#home) and select **View** or **Manage Azure Active Directory**.
+
+ ![image shows the azure portal](media/migrate-okta-federation-to-azure-active-directory/azure-portal.png)
+
+2. In the Azure Active directory menu, select **Azure AD Connect** and confirm that Password Hash Sync is showing as enabled in the tenant.
+
+3. After confirming, select **Enable staged rollout for managed user sign-in**
+
+ ![image shows enable staged rollout](media/migrate-okta-federation-to-azure-active-directory/enable-staged-rollout.png)
+
+4. Your **Password Hash Sync** setting might have changed to **On** after the server was configured. If it isn't enabled, enable it now. You'll notice that **Seamless single sign-on** is set to **Off**. If you attempt to enable it in the menu, you'll get an error because it's already enabled for users in the tenant.
+
+5. After enabling Password Hash Sync, select **Manage Groups**.
+
+ ![image shows enable password hash sync](media/migrate-okta-federation-to-azure-active-directory/password-hash-sync.png)
+
+Follow the instructions to add a group to the password hash sync rollout. In the following example, the security group starts with 10 members.
+
+![image shows example of security group](media/migrate-okta-federation-to-azure-active-directory/example-security-group.png)
+
+After adding in the group, wait about 30 minutes for the feature to take effect in your tenant. When the feature has taken effect, your users will no longer be redirected to Okta when they attempt to access Office 365 services.
+
+The staged rollout feature has some unsupported scenarios:
+
+- Legacy authentication such as POP3 and SMTP aren't supported.
+
+- If you have configured hybrid Azure AD join for use with Okta, all of the hybrid Azure AD join flows will still go to Okta until the domain has been de-federated. A sign-on policy should remain in Okta to allow legacy authentication for hybrid Azure AD join Windows clients.
+
+## Step 3 - Create Okta app in Azure AD
+
+Users who are converted to managed authentication might still need to access applications in Okta. To allow those users easy access to their applications, you can configure an Azure AD application registration that links to the Okta home page.
+
+1. To configure the enterprise application registration for Okta, go to the [Azure portal](https://portal.azure.com/#home). Select **View** on **Manage Azure Active Directory**.
+
+2. Next, select **Enterprise Applications** from the menu under the Manage section.
+
+ ![image shows enterprise applications](media/migrate-okta-federation-to-azure-active-directory/enterprise-application.png)
+
+3. In the **All Applications** menu, select **New Application**.
+
+ ![image shows new applications](media/migrate-okta-federation-to-azure-active-directory/new-application.png)
+
+4. Select **Create your own application**. On the menu that appears, give the Okta app a name, select the option for **Register an application you're working on to integrate with Azure AD**, and then select **Create**.
+
+ ![image shows register an application](media/migrate-okta-federation-to-azure-active-directory/register-application.png)
+
+5. After registering the application, change the supported account types to **Accounts in any organizational directory (Any Azure AD directory - Multitenant)**, and then select **Register**.
+
+ ![image shows register an application and change the application account](media/migrate-okta-federation-to-azure-active-directory/register-change-application.png)
+
+6. After adding registration, go back to the Azure AD menu, and select **App Registrations** and then open the newly created registration.
+
+ ![image shows app registration](media/migrate-okta-federation-to-azure-active-directory/app-registration.png)
+
+7. After opening the application, record your tenant ID and application ID.
+
+ >[!Note]
+   >You'll need the tenant ID and application ID later to configure the identity provider in Okta.
+
+ ![image shows record tenant id and application id](media/migrate-okta-federation-to-azure-active-directory/record-ids.png)
+
+8. Select **Certificates & Secrets** on the left menu. Select **New Client Secret** and give it a generic name and set its expiration time.
+
+9. Record the value and ID of the secret before leaving this page.
+
+ >[!NOTE]
+   >You won't be able to view the secret value later. If it's lost, you'll have to regenerate a secret.
+
+ ![image shows record secrets](media/migrate-okta-federation-to-azure-active-directory/record-secrets.png)
+
+10. Select **API Permissions** on the left menu, and grant the application access to the OpenID Connect (OIDC) stack.
+
+11. Select **Add a permission** > **Microsoft Graph** > **Delegated permissions**.
+
+ ![image shows delegated permissions](media/migrate-okta-federation-to-azure-active-directory/delegated-permissions.png)
+
+12. From the **OpenId permissions** section, add **email**, **openid**, and **profile**, and then select **Add permissions**.
+
+ ![image shows add permissions](media/migrate-okta-federation-to-azure-active-directory/add-permissions.png)
+
+13. Select the **Grant admin consent for Tenant Domain Name** option and wait for the Granted status to appear.
+
+ ![image shows grant consent](media/migrate-okta-federation-to-azure-active-directory/grant-consent.png)
+
+14. After the permissions have been consented, add the home page URL for your users' application under **Branding**.
+
+ ![image shows adding branding](media/migrate-okta-federation-to-azure-active-directory/add-branding.png)
+
+15. After configuring the application, go to the Okta administration portal to configure Microsoft as an identity provider. Select **Security** > **Identity Providers**, add a new identity provider, and select the default **Microsoft** option.
+
+ ![image shows configure idp](media/migrate-okta-federation-to-azure-active-directory/configure-idp.png)
+
+16. On the Identity Provider page, copy your application ID to the Client ID field, and the client secret to the Client Secret field.
+
+17. Select **Show Advanced Settings**. By default, this configuration ties the user principal name (UPN) in Okta to the UPN in Azure AD for reverse federation access.
+
+ >[!IMPORTANT]
+ >If your UPNs do not match in Okta and Azure AD, select a common attribute between users to match against.
+
+18. Finalize your selection for auto provisioning. By default, if a user doesn't match to Okta, it will attempt to provision them in Azure AD. If you have migrated Provisioning away from Okta, select the **Redirect to Okta Sign-in page** option.
+
+ ![image shows redirect okta sign-in](media/migrate-okta-federation-to-azure-active-directory/redirect-okta.png)
+
+ After creating the IDP, extra configuration is needed to send users to the correct IDP.
+
+19. Select **Routing Rules** from the Identity Providers menu, and then select **Add Routing Rule** using one of the available attributes in the Okta profile.
+
+20. Configure the policy as shown to direct sign-ins from all devices and IPs to Azure AD.
+
+ In the example, our attribute **Division** is unused on all our Okta profiles, which makes it an easy candidate to use for IDP routing.
+
+ ![image shows division for idp routing](media/migrate-okta-federation-to-azure-active-directory/division-idp-routing.png)
+
+21. After adding the routing rule, record the Redirect URI, and add it to the **Application Registration**.
+
+ ![image shows application registration](media/migrate-okta-federation-to-azure-active-directory/application-registration.png)
+
+22. Navigate back to your application registration, select the **Authentication** tab, and then select **Add a platform** > **Web**.
+
+ ![image shows add platform](media/migrate-okta-federation-to-azure-active-directory/add-platform.png)
+
+23. Add the redirect URI from the IDP in Okta, and then select **Access tokens** and **ID tokens**.
+
+ ![image shows okta access and id tokens](media/migrate-okta-federation-to-azure-active-directory/access-id-tokens.png)
+
+24. Select **Directory** > **People** from the admin console. Select your first test user, and their profile.
+
+25. While editing the profile, add **ToAzureAD** to match the example, and then select **Save**.
+
+ ![image shows profile editing](media/migrate-okta-federation-to-azure-active-directory/profile-editing.png)
+
+26. After saving the user attributes, attempt to sign in as the modified user to the [Microsoft 365 portal](https://portal.office.com). You'll notice the sign-in loops if your user isn't part of the managed authentication pilot. To have the user exit the loop, add them to the managed authentication experience.
+
+## Step 4 - Test Okta app access on pilot members
+
+After configuring the Okta app in Azure AD and the Identity Provider in the Okta portal, you must assign the application to users.
+
+1. Navigate to Azure portal, select **Azure Active Directory** > **Enterprise Applications**.
+
+2. Select the App registration created earlier, navigate to **Users and Groups**. Add the group that correlates with the Managed Authentication pilot.
+
+>[!NOTE]
+>Users and groups can be added only from the **Enterprise applications** selection. You can't add users from the **App registrations** menu.
+
+ ![image shows adding a group](media/migrate-okta-federation-to-azure-active-directory/add-group.png)
+
+3. After about 15 minutes, sign in as one of the managed authentication pilot users and go to [My Apps](https://myapplications.microsoft.com).
+
+ ![image shows access myapplications](media/migrate-okta-federation-to-azure-active-directory/my-applications.png)
+
+4. After authenticating, the user will see an **Okta Application Access** tile that links back to the Okta home page.
+
+## Step 5 - Test managed authentication on pilot members
+
+After configuring the Okta reverse-federation app, have your users conduct full testing of the managed authentication experience. It's recommended to set up company branding to help your users recognize the tenant they're signing in to. For more information, see the [guidance for customizing company branding](https://docs.microsoft.com/azure/active-directory/fundamentals/customize-branding).
+
+>[!IMPORTANT]
+>Determine any additional Conditional Access policies that might be needed before you de-federate the domains as a whole from Okta. To secure your environment before the full cutoff, see [Migrate Okta sign on policies to Azure AD Conditional Access](migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access.md).
+
+## Step 6 - Remove federation for Office 365 domains
+
+Once your organization is comfortable with the managed authentication experience, it's time to de-federate your domain from Okta. To accomplish this, connect to MSOnline PowerShell and run the following commands. If you don't already have the MSOnline PowerShell module, install it first by running `Install-Module MSOnline`.
+
+```PowerShell
+# Connect to the tenant and convert the federated domain to managed authentication.
+Import-Module MSOnline
+Connect-MsolService
+Set-MsolDomainAuthentication -DomainName yourdomain.com -Authentication Managed
+```
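+
+Optionally, you can confirm the change from the same MSOnline session. This is a minimal sketch, assuming the same domain name placeholder used above.
+
+```PowerShell
+# Check that the domain now reports managed authentication.
+Get-MsolDomain -DomainName yourdomain.com | Select-Object Name, Status, Authentication
+```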
+
+After setting the domain to Managed authentication, you've
+successfully de-federated your Office 365 Tenant from Okta, while
+maintaining user access to the Okta homepage.
+
+## Next steps
+
+- [Migrate Okta sync provisioning to Azure AD Connect based synchronization](migrate-okta-sync-provisioning-to-azure-active-directory.md)
+
+- [Migrate Okta sign on policies to Azure AD Conditional Access](migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access.md)
+
+- [Migrate applications from Okta to Azure AD](migrate-applications-from-okta-to-azure-active-directory.md)
active-directory Migrate Okta Sign On Policies To Azure Active Directory Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access.md
++
+ Title: Tutorial to migrate Okta sign on policies to Azure Active Directory Conditional Access
+
+description: Learn how to migrate Okta sign on policies to Azure Active Directory Conditional Access
+++++++ Last updated : 09/01/2021++++
+# Tutorial: Migrate Okta sign on policies to Azure Active Directory Conditional Access
+
+In this tutorial, learn how organizations can migrate from global or application-level sign-on policies in Okta to Azure Active Directory (AD) Conditional Access (CA) policies to secure user access in Azure AD and connected applications.
+
+This tutorial assumes you have an Office 365 tenant federated to Okta for sign-on and multifactor authentication (MFA). You should also have Azure AD Connect server or Azure AD Connect cloud provisioning agents configured for user provisioning to Azure AD.
+
+## Prerequisites
+
+When switching from Okta sign-on to Azure AD CA, it's important to understand licensing requirements. Azure AD CA requires that users have an Azure AD Premium P1 license assigned before they register for Azure AD Multi-Factor Authentication.
+
+Before you do any of the steps for hybrid Azure AD join, you'll need an enterprise administrator credential in the on-premises forest to configure the Service Connection Point (SCP) record.
+
+## Step 1 - Catalog current Okta sign on policies
+
+To complete a successful transition to CA, the existing Okta sign on policies should be evaluated to determine use cases and requirements that will be transitioned to Azure AD.
+
+1. Check the global sign-on policies by navigating to **Security**, selecting **Authentication**, and then **Sign On**.
+
+ ![image shows global sign on policies](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/global-sign-on-policies.png)
+
+ In this example, our global sign-on policy is enforcing MFA on all sessions outside of our configured network zones.
+
+   ![Image shows the global sign-on policies enforcing MFA.](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/global-sign-on-policies-enforce-mfa.png)
+
+2. Next, navigate to **Applications**, and check the application-level sign-on policies. Select **Applications** from the submenu, and then select your Office 365 connected instance from the **Active apps list**.
+
+3. Finally, select **Sign On** and scroll to the bottom of the page.
+
+In the following example, our Office 365 application sign-on policy has four separate rules.
+
+- **Enforce MFA for mobile sessions** - Requires MFA from every modern authentication or browser session on iOS or Android.
+
+- **Allow trusted Windows devices** - Prevents your trusted Okta devices from being prompted for additional verification or factors.
+
+- **Require MFA from untrusted Windows devices** - Requires MFA from every modern authentication or browser session on untrusted Windows devices.
+
+- **Block legacy authentication** - Prevents any legacy authentication clients from connecting to the service.
+
+ ![image shows o365 sign on rules](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/sign-on-rules.png)
+
+## Step 2 - Configure condition prerequisites
+
+Azure AD CA policies can be configured to match Okta's conditions for most scenarios without additional configuration.
+
+In some scenarios, you may need additional setup before you configure the CA policies. The two known scenarios at the time of writing this article are:
+
+- **Okta network locations to named locations in Azure AD** - Follow [this article](https://docs.microsoft.com/azure/active-directory/conditional-access/location-condition#named-locations) to configure named locations in Azure AD.
+
+- **Okta device trust to device-based CA** - CA offers two possible options when evaluating a user's device.
+
+  - [Hybrid Azure AD join](#hybrid-azure-ad-join-configuration) - A feature enabled within the Azure AD Connect server that synchronizes current Windows devices, such as Windows 10, Windows Server 2016, and Windows Server 2019, to Azure AD.
+
+ - [Enroll the device into Microsoft Endpoint Manager](#configure-device-compliance) and assign a compliance policy.
+
+### Hybrid Azure AD join configuration
+
+You can enable hybrid Azure AD join on your Azure AD Connect server by running the configuration wizard. After configuration, you'll need to take steps to automatically enroll devices.
+
+>[!NOTE]
+>Hybrid Azure AD join isn't supported with the Azure AD Connect cloud provisioning agents.
+
+1. Follow these [instructions](https://docs.microsoft.com/azure/active-directory/devices/hybrid-azuread-join-managed-domains#configure-hybrid-azure-ad-join) to enable Hybrid Azure AD join.
+
+2. On the SCP configuration page, select the **Authentication Service** drop-down. Choose your Okta federation provider URL followed by **Add**. Enter your on-premises enterprise administrator credentials then select **Next**.
+
+ ![image shows scp configuration](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/scp-configuration.png)
+
+3. If you have blocked legacy authentication on Windows clients in either the global or app level sign on policy, make a rule to allow the hybrid Azure AD join process to finish.
+
+4. You can either allow the entire legacy authentication stack through for all Windows clients or contact Okta support to enable their custom client string on your existing app policies.
+
+### Configure device compliance
+
+While hybrid Azure AD join is a direct replacement for Okta device trust on Windows, CA policies can also evaluate device compliance for devices that are fully enrolled in Microsoft Endpoint Manager.
+
+- **Compliance overview** - Refer to [device compliance policies in Microsoft Intune](https://docs.microsoft.com/mem/intune/protect/device-compliance-get-started).
+
+- **Device compliance** - Create [policies in Microsoft Intune](https://docs.microsoft.com/mem/intune/protect/create-compliance-policy).
+
+- **Windows enrollment** - If you've opted to deploy hybrid Azure AD join, an additional group policy can be deployed to complete the [auto-enrollment process of these devices into Microsoft Intune](https://docs.microsoft.com/windows/client-management/mdm/enroll-a-windows-10-device-automatically-using-group-policy).
+
+- **iOS/iPadOS enrollment** - Before enrolling an iOS device, [additional configurations](https://docs.microsoft.com/mem/intune/enrollment/ios-enroll) must be made in the Endpoint Management Console.
+
+- **Android enrollment** - Before enrolling an Android device, [additional configurations](https://docs.microsoft.com/mem/intune/enrollment/android-enroll) must be made in the Endpoint Management Console.
+
+## Step 3 - Configure Azure AD Multi-Factor Authentication tenant settings
+
+Before converting to CA, confirm the base Azure AD Multi-Factor Authentication
+tenant settings for your organization.
+
+1. Navigate to the [Azure portal](https://portal.azure.com) and sign in with a global administrator account.
+
+2. Select **Azure Active Directory** > **Users** > **Multi-Factor Authentication**. This selection takes you to the legacy Azure AD Multi-Factor Authentication portal.
+
+ ![image shows legacy Azure AD Multi-Factor Authentication portal](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/legacy-azure-ad-portal.png)
+
+   Alternatively, you can go to <https://aka.ms/mfaportal>.
+
+3. From the legacy Azure AD Multi-Factor Authentication menu, change the status view through **Enabled** and **Enforced** to confirm that no users are enabled for legacy MFA. If your tenant has users in these views, you must disable them in the legacy menu. Only then will CA policies take effect on their accounts. For one way to check for these users from PowerShell, see the sketch after this list.
+
+ ![image shows disable user in legacy Azure AD Multi-Factor Authentication portal](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/disable-user-legacy-azure-ad-portal.png)
+
+   The **Enforced** view should also be empty.
+
+ ![image shows enforced field is empty in legacy Azure AD Multi-Factor Authentication portal](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/enforced-empty-legacy-azure-ad-portal.png)
+
+4. After confirming that no users are configured for legacy MFA, select the **Service settings** option. Change the **App passwords** selection to **Do not allow users to create app passwords to sign in to non-browser apps**.
+
+5. Ensure that the **Skip multi-factor authentication for requests from federated users on my intranet** and **Allow users to remember multi-factor authentication on devices they trust (between one to 365 days)** checkboxes are cleared, and then select **Save**.
+
+ >[!NOTE]
+ >See [best practices for configuring MFA prompt settings](https://aka.ms/mfaprompts).
+
+ ![image shows uncheck fields in legacy Azure AD Multi-Factor Authentication portal](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/uncheck-fields-legacy-azure-ad-portal.png)
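+
+If you prefer to check for per-user (legacy) MFA state from PowerShell instead of paging through the portal views, the following is a minimal sketch that uses the MSOnline module. The property names come from the `Get-MsolUser` output; an empty result means no users are enabled or enforced for legacy MFA.
+
+```PowerShell
+# List users that still have a per-user (legacy) MFA state set.
+Connect-MsolService
+Get-MsolUser -All |
+    Where-Object { $_.StrongAuthenticationRequirements.Count -gt 0 } |
+    Select-Object UserPrincipalName, @{ Name = 'LegacyMfaState'; Expression = { $_.StrongAuthenticationRequirements.State } }
+```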
+
+## Step 4 - Configure CA policies
+
+After you've configured the prerequisites and established the base settings, it's time to build the first CA policy.
+
+1. To configure CA policies in Azure AD, navigate to the [Azure portal](https://portal.azure.com). Select **View** on Manage Azure Active Directory.
+
+2. When you configure CA policies, keep in mind the [best practices for deploying and designing CA](https://docs.microsoft.com/azure/active-directory/conditional-access/plan-conditional-access#understand-conditional-access-policy-components).
+
+3. To mimic global sign-on MFA policy from Okta, [create a policy](https://docs.microsoft.com/azure/active-directory/conditional-access/howto-conditional-access-policy-all-users-mfa).
+
+4. Create a [device trust based CA rule](https://docs.microsoft.com/azure/active-directory/conditional-access/require-managed-devices).
+
+5. This policy, like any other in this tutorial, can be targeted to a specific application, a test group of users, or both.
+
+ ![image shows testing user](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/test-user.png)
+
+ ![image shows success in testing user](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/success-test-user.png)
+
+6. After you've configured the location-based policy and the device trust policy, it's time to configure the equivalent [**Block legacy authentication**](https://docs.microsoft.com/azure/active-directory/conditional-access/howto-conditional-access-policy-block-legacy) policy.
+
+With these three CA policies, the original Okta sign-on policy experience has been replicated in Azure AD. The next steps are to enroll users in Azure AD Multi-Factor Authentication and to test the policies.
+
+## Step 5 - Enroll pilot members in Azure AD Multi-Factor Authentication
+
+Once the CA policies have been configured, users will
+need to register for Azure MFA methods. Users can be required to register through several different methods.
+
+1. For individual registration, you can direct users to
+<https://aka.ms/mfasetup> to manually enter the registration information.
+
+2. Users can go to <https://aka.ms/mysecurityinfo> to enter information or manage their form of MFA registration.
+
+See [this guide](https://docs.microsoft.com/azure/active-directory/authentication/howto-registration-mfa-sspr-combined) to fully understand the MFA registration process.
+
+If you navigate to <https://aka.ms/mfasetup> after signing in with Okta MFA, you're instructed to register for MFA with Azure AD.
+
+>[!NOTE]
+>If registration already happened in the past for that user, they'll be taken to the **My security info** page after satisfying the MFA prompt.
+
+See the [end-user documentation for MFA enrollment](https://docs.microsoft.com/azure/active-directory/user-help/security-info-setup-signin).
+
+## Step 6 - Enable CA policies
+
+1. To roll out testing, change the policies created in the earlier examples to **Enabled test user login**.
+
+ ![image shows enable test user](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/enable-test-user.png)
+
+2. On the next sign-in to Office 365, the test user John Smith is prompted to sign in with Okta MFA, and Azure AD Multi-Factor Authentication.
+
+ ![image shows sign-in through okta](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/sign-in-through-okta.png)
+
+3. Complete the MFA verification through Okta.
+
+ ![image shows mfa verification through okta](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/mfa-verification-through-okta.png)
+
+4. After the user completes the Okta MFA prompt, the user is prompted for CA. Ensure that the policies have been configured appropriately and that the sign-in falls within the conditions that trigger MFA.
+
+ ![image shows mfa verification through okta prompted for CA](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/mfa-verification-through-okta-prompted-ca.png)
+
+## Step 7 - Cutover from sign on to CA policies
+
+After conducting thorough testing on the pilot members to ensure that CA takes effect as expected, the remaining organization members can be added to the CA policies after they've completed registration.
+
+To avoid double-prompting between Azure MFA and Okta MFA, you should opt out from Okta MFA by modifying sign-on policies.
+
+The final migration step to CA can be done in a staged or cut-over fashion.
+
+1. Navigate to the Okta admin console, select **Security**, followed by **Authentication**, and then navigate to the **Sign On Policy**.
+
+>[!NOTE]
+>Global policies should be set to inactive only if all applications from Okta are protected by their own application sign on policies.
+
+2. Set the **Enforce MFA** policy to **Inactive**, or assign the policy to a new group that doesn't include your Azure AD users.
+
+ ![image shows mfa policy to inactive](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/mfa-policy-inactive.png)
+
+3. On the application-level sign-on policy, update the policies to inactive by selecting the **Disable Rule** option. You can also assign the policy to a new group that doesn't include the Azure AD users.
+
+4. Ensure there is at least one application level sign-on policy that is enabled for the application that allows access without MFA.
+
+ ![image shows application access without mfa](media/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access/application-access-without-mfa.png)
+
+5. After disabling the Okta sign on policies, or excluding the migrated Azure AD users from the enforcement groups, the users should be prompted **only** for CA on their next sign-in.
+
+## Next steps
+
+- [Migrate applications from Okta to Azure AD](migrate-applications-from-okta-to-azure-active-directory.md)
+
+- [Migrate Okta federation to Azure AD](migrate-okta-federation-to-azure-active-directory.md)
+
+- [Migrate Okta sync provisioning to Azure AD Connect based synchronization](migrate-okta-sync-provisioning-to-azure-active-directory.md)
active-directory Migrate Okta Sync Provisioning To Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/migrate-okta-sync-provisioning-to-azure-active-directory.md
++
+ Title: Tutorial to migrate Okta sync provisioning to Azure AD Connect based synchronization
+
+description: Learn how to migrate your Okta sync provisioning to Azure AD Connect based synchronization
+++++++ Last updated : 09/01/2021+++++
+# Tutorial: Migrate Okta sync provisioning to Azure Active Directory Connect based synchronization
+
+This article guides organizations that currently provision users from Okta to Azure Active Directory (Azure AD) through migrating either User sync or Universal sync to Azure AD Connect. This migration enables further provisioning into Azure AD and Office 365.
+
+Migrating synchronization platforms isn't a small change. Each step of the process mentioned in this article should be validated against your own environment before you remove the Azure AD Connect from staging mode or enable the Azure AD cloud provisioning agent.
+
+## Prerequisites
+
+When switching from Okta provisioning to Azure AD, customers have two choices: Azure AD Connect server or Azure AD cloud provisioning. It's recommended to read the full [comparison article from Microsoft](https://docs.microsoft.com/azure/active-directory/cloud-sync/what-is-cloud-sync#comparison-between-azure-ad-connect-and-cloud-provisioning) to understand the differences between the two products.
+
+Azure AD cloud provisioning will be the most familiar migration path for Okta customers who use Universal sync or User sync. The cloud provisioning agents are lightweight and can be installed on or near domain controllers, like the Okta directory sync agents. It isn't recommended to install them on the same server.
+
+Azure AD Connect server should be chosen if your organization needs to take advantage of any of the following technologies when synchronizing users.
+
+- Device synchronization - hybrid Azure AD join or Windows Hello for Business
+
+- Pass-through authentication
+
+- Support for more than 150,000 objects
+
+- Support for writeback
+
+>[!NOTE]
+>All prerequisites should be taken into consideration when installing Azure AD Connect or Azure AD cloud provisioning. Refer to [this article](https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-install-prerequisites) to learn more before installation.
+
+## Step 1 - Confirm ImmutableID attribute synchronized by Okta
+
+ImmutableID is the core attribute used to tie synchronized objects to their on-premises counterparts. Okta takes the Active Directory objectGUID of an on-premises object and converts it to a Base64-encoded string. Then, by default, it stamps that string to the ImmutableID field in Azure AD.
+
+You can connect to Azure AD PowerShell and examine the current ImmutableID value. If you've never used the Azure AD PowerShell module, run an
+`Install-Module AzureAD` in an administrative PowerShell session before you run the following commands.
+
+```Powershell
+Import-module AzureAD
+Connect-AzureAD
+```
+
+In case you already have the module, you may receive a warning to update to the latest version if it is out of date.
+
+After the module is installed, import it, and follow these steps to connect to the Azure AD service:
+
+1. Enter your global administrator credentials in the modern authentication window.
+
+ ![image shows import module](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/import-module.png)
+
+2. After connecting to the tenant, verify what your ImmutableID values are set to. The example shown uses the Okta default of objectGUID to ImmutableID. To check a single user from PowerShell, see the sketch after this list.
+
+ ![image shows Okta defaults of objectGUID to ImmutableID](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/okta-default-objectid.png)
+
+3. There are several ways to manually confirm the objectGUID-to-Base64 conversion on-premises. For individual validation, use this example:
+
+    ```PowerShell
+    # Get the objectGUID of an on-premises user.
+    Get-ADUser onpremupn | fl objectguid
+
+    # Convert that GUID to the Base64 string that Okta stamps to ImmutableID.
+    $objectguid = 'your-guid-here-1010'
+    [system.convert]::ToBase64String(([GUID]$objectguid).ToByteArray())
+    ```
+
+    ![Image shows how to manually convert an objectGUID to an ImmutableID value.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/manual-objectguid.png)
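+
+As referenced in step 2 above, you can also check a single synchronized user's ImmutableID value directly from the Azure AD PowerShell session. This is a minimal sketch; the UPN is a placeholder.
+
+```powershell
+# Look up one synchronized user and show the ImmutableID value that Okta stamped.
+Get-AzureADUser -ObjectId 'user@yourdomain.com' | Select-Object UserPrincipalName, ImmutableId
+```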
+
+## Step 2 - Mass validation methods for objectGUID
+
+Before cutting over to Azure AD Connect, it's critical to validate that the ImmutableID's in Azure AD are going to exactly match their on-premises values.
+
+The following example gets **all** on-premises AD users and exports a list of their objectGUID and ImmutableID values, already calculated, to a CSV file.
+
+1. Run these commands in PowerShell on a domain controller on-premises.
+
+    ```PowerShell
+    Get-ADUser -Filter * -Properties objectGUID |
+        Select-Object UserPrincipalName, Name, objectGUID, @{
+            Name = 'ImmutableID'
+            Expression = { [system.convert]::ToBase64String(([GUID]$_.objectGUID).ToByteArray()) }
+        } |
+        Export-Csv C:\Temp\OnPremIDs.csv
+    ```
+
+ ![image shows domain controller on-premises commands](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/domain-controller.png)
+
+2. Run these commands in an Azure AD PowerShell session to gather the already synchronized values:
+
+    ```powershell
+    Get-AzureADUser -All $true |
+        Where-Object { $_.DirSyncEnabled -like "true" } |
+        Select-Object UserPrincipalName, @{
+            Name = 'objectGUID'
+            Expression = { [GUID][System.Convert]::FromBase64String($_.ImmutableID) }
+        }, ImmutableID |
+        Export-Csv C:\temp\AzureADSyncedIDS.csv
+    ```
+
+ ![image shows azure ad powershell session](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/azure-ad-powershell.png)
+
+    Once you have both exports, confirm that the ImmutableID value for each user matches. One way to compare the two CSV files is shown in the sketch after the following note.
+
+ >[!IMPORTANT]
+    >If your ImmutableIDs in the cloud don't match the objectGUID values, you've modified the defaults for Okta sync. You've likely chosen another attribute to determine the ImmutableID values. Before moving on to the next section, it's critical to identify which source attribute is populating the ImmutableID values. Ensure that you update the attribute that Okta is syncing before you disable Okta sync.
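+
+One way to compare the two exports is sketched below. It assumes the CSV paths from the earlier examples and reports any user whose ImmutableID differs between on-premises AD and Azure AD.
+
+```powershell
+# Compare the on-premises and Azure AD exports by UPN and ImmutableID.
+$onPrem = Import-Csv 'C:\Temp\OnPremIDs.csv'
+$cloud  = Import-Csv 'C:\temp\AzureADSyncedIDS.csv'
+
+Compare-Object -ReferenceObject $onPrem -DifferenceObject $cloud -Property UserPrincipalName, ImmutableID |
+    Sort-Object UserPrincipalName
+```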
+
+## Step 3 - Install Azure AD Connect in staging mode
+
+Once you've prepared your list of source and destination targets, it's time to install the Azure AD Connect server. If you've opted to use Azure AD Connect cloud provisioning, skip this section.
+
+1. Continue with [downloading and installing Azure AD Connect](https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-install-custom) to your chosen server.
+
+2. On the **Identifying users** page, under **Select how users should be identified with Azure AD**, select the option **Choose a specific attribute**. Then, select **mS-DS-ConsistencyGUID** if you haven't modified the Okta defaults.
+
+ >[!WARNING]
+ >This is the most critical step before selecting **next**
+ on this page. Ensure that the attribute you're selecting for source anchor is what **currently** populates your existing Azure AD users. If you select the wrong attribute, you must uninstall and reinstall Azure AD Connect to reselect this option.
+
+ ![image shows consistency guid](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/consistency-guid.png)
+
+3. On the **Configure** page, make sure to select the checkbox for **Enable staging mode** followed by **Install**.
+
+ ![image shows enable staging mode](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/enable-staging-mode.png)
+
+4. After the configuration is complete, select **Exit**.
+
+Before exiting the staging mode, it's important to verify that the ImmutableID's have matched properly.
+
+1. Open the Synchronization service as an **Administrator**.
+
+ ![image shows opening sync service](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/open-sync-service.png)
+
+2. First check that the Full Synchronization to the domain.onmicrosoft.com connector space has users displaying under the **Connectors with Flow Updates** tab.
+
+ ![image shows connector with flow update](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/connector-flow-update.png)
+
+3. Next, verify there are no deletions pending in the export. Select the **Connectors** tab and then highlight the domain.onmicrosoft.com connector space. Then, select **Search Connector Space**.
+
+ ![image shows search connector space](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/search-connector-space.png)
+
+4. In the Connector Space search, select the Scope dropdown and select **Pending Export**.
+
+ ![image shows pending export](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/pending-export.png)
+
+5. Select **Delete**, and then select **Search**. If all objects have matched properly, there should be zero matching records for deletes. Record any objects pending deletion and their on-premises values.
+
+ ![image shows deleted matching records](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/delete-matching-records.png)
+
+6. Next, clear **Delete**, select **Add and Modify**, and then select **Search**. You should see update functions for all users currently being synchronized to Azure AD via Okta. Add any new objects that Okta isn't currently syncing but that exist in the organizational unit (OU) structure selected during the Azure AD Connect installation.
+
+ ![image shows add new object](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/add-new-object.png)
+
+7. Double-click an update to see what Azure AD Connect will communicate to Azure AD.
+
+8. If there are any add functions for a user who already exists in Azure AD, their on-premises account isn't matching their cloud account, and Azure AD Connect has determined it will create a new object. Record any unexpected adds, and make sure to correct the ImmutableID value in Azure AD before exiting staging mode.
+
+ In this example, Okta had been stamping the Mail attribute to the user's account, even though the on-premises value wasn't properly filled in. When Azure AD Connect takes over John Smith's account, the Mail attribute is deleted from his object.
+
+ Verify that your updates still include all attributes expected in Azure AD. If multiple attributes are being deleted, you may need to manually populate these on-premises AD values before removing staging mode.
+
+ ![image shows populate on-premises ad values](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/on-premises-ad-values.png)
+
+ >[!NOTE]
+ >Before you continue to the next step, ensure all user attributes are syncing properly and are showing in the **Pending Export** tab as expected. If they're deleted, make sure their ImmutableID's match and the User is in one of the selected OUs for synchronization.
+
+## Step 4 - Install Azure AD cloud sync agents
+
+Once you've prepared your list of source and destination targets, it's time to [install and configure Azure AD cloud sync agents](https://docs.microsoft.com/azure/active-directory/cloud-sync/tutorial-single-forest). If you've opted to use Azure AD Connect server, skip this section.
+
+## Step 5 - Disable Okta provisioning to Azure AD
+
+Once the Azure AD Connect install has been verified and your pending exports are in order, it's time to disable Okta provisioning to Azure AD.
+
+1. Navigate to your Okta portal and select **Applications**, followed by the Okta app used to provision users to Azure AD. Open the **Provisioning** tab and the **Integration** section.
+
+ ![image shows integration section in Okta](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/integration-section.png)
+
+2. Select **Edit**, clear the **Enable API integration** option, and then select **Save**.
+
+ ![image shows edit enable api integration in Okta](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/edit-api-integration.png)
+
+ >[!NOTE]
+ >If you have multiple Office 365 apps handling provisioning to Azure AD, ensure that all are switched off.
+
+## Step 6 - Disable staging mode in Azure AD Connect
+
+After disabling Okta Provisioning, the Azure AD Connect server is ready to begin synchronizing objects. If you have chosen to go with Azure AD cloud sync agents, skip this section.
+
+1. Run the installation wizard from the desktop again, and select **Configure**.
+
+ ![image shows azure AD connect server](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/azure-ad-connect-server.png)
+
+2. Select **Configure Staging Mode** followed by **Next** and enter your global administrator credentials.
+
+ ![image shows configure staging mode](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/configure-staging-mode.png)
+
+3. Clear **Enable staging mode**, and then select **Next**.
+
+ ![image shows uncheck enable staging mode](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/uncheck-enable-staging-mode.png)
+
+4. Select **Configure** to continue.
+
+ ![image shows ready to configure](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/ready-to-configure.png)
+
+5. After the configuration completes, open the **Synchronization Service** as an administrator. View the Export on the domain.onmicrosoft.com connector. Verify all adds, updates, and deletes are done as expected.
+
+ ![image shows verify sync service](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/verify-sync-service.png)
+
+You've now successfully migrated to Azure AD Connect server-based provisioning. You can update and expand the feature set of Azure AD Connect by rerunning the installation wizard.
+
+## Step 7 - Enable Cloud sync agents
+
+After you disable Okta provisioning, the Azure AD cloud sync agent is ready to begin synchronizing objects. Return to the [Azure AD portal](https://aad.portal.azure.com/).
+
+1. Modify the **Configuration profile** to **Enabled**.
+
+2. After enabling, return to the provisioning menu and select **Logs**.
+
+3. Evaluate that the provisioning connector has properly updated in place objects. The cloud sync agents are non-destructive. They'll fail their updates if a match didn't occur properly.
+
+4. If a user is mismatched, make the necessary updates to bind the ImmutableID values, and then restart the cloud provisioning sync.
+
+## Next steps
+
+- [Migrate applications from Okta to Azure AD](migrate-applications-from-okta-to-azure-active-directory.md)
+
+- [Migrate Okta federation to Azure AD managed authentication](migrate-okta-federation-to-azure-active-directory.md)
+
+- [Migrate Okta sign on policies to Azure AD Conditional Access](migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access.md)
active-directory Migration Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/migration-resources.md
Resources to help you migrate application access and authentication to Azure Act
| [Deployment plan: Migrating from AD FS to pass-through authentication](https://aka.ms/ADFSTOPTADPDownload)|Azure AD pass-through authentication helps users sign in to both on-premises and cloud-based applications by using the same password. This feature provides your users with a better experience since they have one less password to remember. It also reduces IT helpdesk costs because users are less likely to forget how to sign in when they only need to remember one password. When people sign in using Azure AD, this feature validates users' passwords directly against your on-premises Active Directory.| | [Deployment plan: Enabling Single Sign-on to a SaaS app with Azure AD](https://aka.ms/SSODPDownload) | Single sign-on (SSO) helps you access all the apps and resources you need to do business, while signing in only once, using a single user account. For example, after a user has signed in, the user can move from Microsoft Office, to SalesForce, to Box without authenticating (for example, typing a password) a second time. | [Deployment plan: Extending apps to Azure AD with Application Proxy](https://aka.ms/AppProxyDPDownload)| Providing access from employee laptops and other devices to on-premises applications has traditionally involved virtual private networks (VPNs) or demilitarized zones (DMZs). Not only are these solutions complex and hard to make secure, but they are costly to set up and manage. Azure AD Application Proxy makes it easier to access on-premises applications. |
-| [Deployment plans](../fundamentals/active-directory-deployment-plans.md) | Find more deployment plans for deploying features such as multi-Factor authentication, Conditional Access, user provisioning, seamless SSO, self-service password reset, and more! |
+| [Deployment plans](../fundamentals/active-directory-deployment-plans.md) | Find more deployment plans for deploying features such as multi-factor authentication, Conditional Access, user provisioning, seamless SSO, self-service password reset, and more! |
| [Migrating apps from Symantec SiteMinder to Azure AD](https://azure.microsoft.com/mediahandler/files/resourcefiles/migrating-applications-from-symantec-siteminder-to-azure-active-directory/Migrating-applications-from-Symantec-SiteMinder-to-Azure-Active-Directory.pdf) | Get step by step guidance on application migration and integration options with an example, that walks you through migrating applications from Symantec SiteMinder to Azure AD. |
+| [Migrating apps from Okta to Azure AD](migrate-applications-from-okta-to-azure-active-directory.md) | Get step-by-step guidance on application migration from Okta to Azure AD. |
+| [Migrating Okta federation to Azure AD managed authentication](migrate-okta-federation-to-azure-active-directory.md) | Learn how to migrate Office 365 tenants that are federated with Okta to Azure AD managed authentication for single sign-on. |
+| [Migrating Okta sync provisioning to Azure AD Connect based synchronization](migrate-okta-sync-provisioning-to-azure-active-directory.md) | Get step-by-step guidance for organizations that currently use Okta to provision users to Azure AD and want to migrate either User Sync or Universal Sync to Azure AD Connect. |
+| [Migrating Okta sign on policies to Azure AD Conditional Access](migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access.md) | Get step-by-step guidance on migrating from global or application-level sign-on policies in Okta to Azure AD Conditional Access policies to secure user access in Azure AD and connected applications. |
+
active-directory Createweb Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/createweb-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Create!Webフロー | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Create!Webフロー.
++++++++ Last updated : 08/31/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Create!Webフロー
+
+In this tutorial, you'll learn how to integrate Create!Webフロー with Azure Active Directory (Azure AD). When you integrate Create!Webフロー with Azure AD, you can:
+
+* Control in Azure AD who has access to Create!Webフロー.
+* Enable your users to be automatically signed-in to Create!Webフロー with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Create!Webフロー single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Create!Webフロー supports **SP and IDP** initiated SSO.
+
+## Add Create!Webフロー from the gallery
+
+To configure the integration of Create!Webフロー into Azure AD, you need to add Create!Webフロー from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Create!Webフロー** in the search box.
+1. Select **Create!Webフロー** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Create!Webフロー
+
+Configure and test Azure AD SSO with Create!Webフロー using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Create!Webフロー.
+
+To configure and test Azure AD SSO with Create!Webフロー, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Create!Webフロー SSO](#configure-createwebフロー-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Create!Webフロー test user](#create-createwebフロー-test-user)** - to have a counterpart of B.Simon in Create!Webフロー that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Create!Webフロー** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
+
+ a. In the **Identifier** text box, type a URL using one of the following patterns:
+
+ | **Identifier** |
+ |--|
+ | `https://<user-hostname>/XFV20` |
+ | `https://<user-Environment>/XFV20` |
+
+ b. In the **Reply URL** text box, type a URL using one of the following patterns:
+
+ | **Reply URL** |
+ ||
+ | `https://<user-hostname>/XFV20/LoginSaml.do` |
+ | `https://<user-Environment>/XFV20/LoginSaml.do` |
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<user-hostname>:8443/XFV20`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Create!Webフロー Client support team](mailto:solution-cwf@iftc.co.jp) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
+
+1. On the **Set up Create!Webフロー** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Create!Webフロー.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Create!Webフロー**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Create!Webフロー SSO
+
+To configure single sign-on on the **Create!Webフロー** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [Create!Webフロー support team](mailto:solution-cwf@iftc.co.jp). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create Create!Webフロー test user
+
+In this section, you create a user called Britta Simon in Create!Webフロー. Work with [Create!Webフロー support team](mailto:solution-cwf@iftc.co.jp) to add the users in the Create!Webフロー platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This will redirect to the Create!Webフロー sign-on URL, where you can initiate the login flow.
+
+* Go to Create!Webフロー Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Create!Webフロー application for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Create!Webフロー tile in My Apps, if it's configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if it's configured in IDP mode you should be automatically signed in to the Create!Webフロー application for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Create!Webフロー, you can enforce session control, which protects against exfiltration and infiltration of your organization’s sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
aks Csi Secrets Store Driver https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/csi-secrets-store-driver.md
The Secrets Store CSI Driver for Kubernetes allows for the integration of Azure
- Before you start, install the latest version of the [Azure CLI](/cli/azure/install-azure-cli-windows) and the *aks-preview* extension.
+### Supported Kubernetes versions
+
+The minimum recommended Kubernetes version for this feature is 1.18.
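+
+If you're not sure which version your cluster is running, you can check it with the Azure CLI. This is a minimal sketch that assumes placeholder resource group and cluster names:
+
+```azurecli-interactive
+# Show the Kubernetes version of an existing cluster (replace the placeholder names)
+az aks show --resource-group <resource-group-name> --name <cluster-name> --query kubernetesVersion --output tsv
+```
+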
+ ## Features - Mount secrets, keys, and/or certs to a pod using a CSI volume
aks Csi Storage Drivers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/csi-storage-drivers.md
Title: Enable Container Storage Interface (CSI) drivers on Azure Kubernetes Serv
description: Learn how to enable the Container Storage Interface (CSI) drivers for Azure disks and Azure Files in an Azure Kubernetes Service (AKS) cluster. Previously updated : 08/27/2020 Last updated : 08/31/2021
The CSI storage driver support on AKS allows you to natively use:
> [!IMPORTANT] > Starting in Kubernetes version 1.21, Kubernetes will use CSI drivers only and by default. These drivers are the future of storage support in Kubernetes.
->
+>
+> Please remove manually installed open-source Azure disk and Azure file CSI drivers before upgrading to AKS 1.21.
+>
> *In-tree drivers* refers to the current storage drivers that are part of the core Kubernetes code versus the new CSI drivers, which are plug-ins. ## Limitations
$ echo $(kubectl get CSINode <NODE NAME> -o jsonpath="{.spec.drivers[1].allocata
[az-extension-update]: /cli/azure/extension#az_extension_update [az-feature-register]: /cli/azure/feature#az_feature_register [az-feature-list]: /cli/azure/feature#az_feature_list
-[az-provider-register]: /cli/azure/provider#az_provider_register
+[az-provider-register]: /cli/azure/provider#az_provider_register
aks Open Service Mesh About https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/open-service-mesh-about.md
+
+ Title: Open Service Mesh (Preview)
+description: Open Service Mesh (OSM) in Azure Kubernetes Service (AKS)
++ Last updated : 3/12/2021++++
+# Open Service Mesh AKS add-on (preview)
+
+[Open Service Mesh (OSM)](https://docs.openservicemesh.io/) is a lightweight, extensible, cloud-native service mesh that allows users to uniformly manage, secure, and get out-of-the-box observability features for highly dynamic microservice environments.
+
+OSM runs an Envoy-based control plane on Kubernetes, can be configured with [SMI](https://smi-spec.io/) APIs, and works by injecting an Envoy proxy as a sidecar container next to each instance of your application. The Envoy proxy contains and executes rules around access control policies, implements routing configuration, and captures metrics. The control plane continually configures proxies to ensure policies and routing rules are up to date and ensures proxies are healthy.
+
+The OSM project was originated by Microsoft and has since been donated to and is governed by the [Cloud Native Computing Foundation (CNCF)](https://www.cncf.io/). The OSM open-source project will continue to be a community-led collaboration around features and functionality, and contributions to the project are welcomed and encouraged. See the [Contributor Ladder](https://github.com/openservicemesh/osm/blob/main/CONTRIBUTOR_LADDER.md) guide to learn how you can get involved.
++
+## Capabilities and features
+
+OSM provides the following set of capabilities and features to provide a cloud native service mesh for your Azure Kubernetes Service (AKS) clusters:
+
+- OSM has been integrated into the AKS service to provide a fully supported and managed service mesh experience with the convenience of the AKS feature add-on
+
+- Secure service to service communication by enabling mTLS
+
+- Easily onboard applications onto the mesh by enabling automatic sidecar injection of Envoy proxy
+
+- Easy and transparent configuration of traffic shifting for deployments
+
+- Ability to define and execute fine-grained access control policies for services
+
+- Observability and insights into application metrics for debugging and monitoring services
+
+- Integration with external certificate management services/solutions with a pluggable interface
+
+## Scenarios
+
+OSM can assist your AKS deployments with the following scenarios:
+
+- Provide encrypted communications between service endpoints deployed in the cluster
+
+- Traffic authorization of both HTTP/HTTPS and TCP traffic in the mesh
+
+- Configuration of weighted traffic controls between two or more services for A/B or canary deployments (see the sketch after this list)
+
+- Collection and viewing of KPIs from application traffic
+
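+For example, weighted traffic shifting is typically expressed with an SMI `TrafficSplit` resource. The following is a minimal sketch only; the service names, namespace, and weights are hypothetical and aren't part of the add-on's default configuration:
+
+```azurecli-interactive
+kubectl apply -f - <<EOF
+apiVersion: split.smi-spec.io/v1alpha2
+kind: TrafficSplit
+metadata:
+  name: bookstore-split
+  namespace: bookstore
+spec:
+  # Root service that clients address, in <service>.<namespace> form
+  service: bookstore.bookstore
+  backends:
+  - service: bookstore-v1
+    weight: 75
+  - service: bookstore-v2
+    weight: 25
+EOF
+```
+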
+## OSM service quotas and limits (preview)
+
+OSM preview limitations for service quotas and limits can be found on the AKS [Quotas and regional limits page](./quotas-skus-regions.md).
+
+<!-- LINKS - internal -->
+
+[kubernetes-service]: concepts-network.md#services
+[az-feature-register]: /cli/azure/feature?view=azure-cli-latest&preserve-view=true#az_feature_register
+[az-feature-list]: /cli/azure/feature?view=azure-cli-latest&preserve-view=true#az_feature_list
+[az-provider-register]: /cli/azure/provider?view=azure-cli-latest&preserve-view=true#az_provider_register
aks Open Service Mesh Azure Application Gateway Ingress https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/open-service-mesh-azure-application-gateway-ingress.md
+
+ Title: Using Azure Application Gateway Ingress
+description: How to use Azure Application Gateway Ingress with Open Service Mesh
++ Last updated : 8/26/2021++++
+# Deploy an application managed by Open Service Mesh (OSM) using Azure Application Gateway ingress AKS add-on
+
+In this tutorial, you will:
+
+> [!div class="checklist"]
+>
+> - View the current OSM cluster configuration
+> - Create the namespace(s) for OSM to manage deployed applications in the namespace(s)
+> - Onboard the namespaces to be managed by OSM
+> - Deploy the sample application
+> - Verify the application running inside the AKS cluster
+> - Create an Azure Application Gateway to be used as the ingress controller for the application
+> - Expose a service via the Azure Application Gateway ingress to the internet
+
+## Before you begin
+
+The steps detailed in this walkthrough assume that you've previously enabled the OSM AKS add-on for your AKS cluster. If not, review the article [Deploy the OSM AKS add-on](./open-service-mesh-deploy-add-on.md) before proceeding. Also, your AKS cluster needs to run Kubernetes version `1.19` or later, have Kubernetes RBAC enabled, and have an established `kubectl` connection with the cluster. (If you need help with any of these items, see the [AKS quickstart](./kubernetes-walkthrough.md).) You must also have the AKS OSM add-on installed.
+
+You must have the following resources installed:
+
+- The Azure CLI, version 2.20.0 or later
+- The `aks-preview` extension version 0.5.5 or later
+- OSM version v0.8.0 or later
+- JSON processor "jq" version 1.6+
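+
+If you want to confirm the tooling versions before you continue, the following commands are one way to check them (a sketch; output formats vary by version):
+
+```azurecli-interactive
+az version
+az extension list --query "[?name=='aks-preview'].version" --output tsv
+osm version
+jq --version
+```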
++
+## View and verify the current OSM cluster configuration
+
+Once the OSM add-on for AKS has been enabled on the AKS cluster, you can view the current configuration parameters in the osm-mesh-config resource. Run the following command to view the properties:
+
+```azurecli-interactive
+kubectl get meshconfig osm-mesh-config -n kube-system -o yaml
+```
+
+Output shows the current OSM MeshConfig for the cluster.
+
+```Output
+apiVersion: config.openservicemesh.io/v1alpha1
+kind: MeshConfig
+metadata:
+ creationTimestamp: "0000-00-00A00:00:00A"
+ generation: 1
+ name: osm-mesh-config
+ namespace: kube-system
+ resourceVersion: "2494"
+ uid: 6c4d67f3-c241-4aeb-bf4f-b029b08faa31
+spec:
+ certificate:
+ serviceCertValidityDuration: 24h
+ featureFlags:
+ enableEgressPolicy: true
+ enableMulticlusterMode: false
+ enableWASMStats: true
+ observability:
+ enableDebugServer: true
+ osmLogLevel: info
+ tracing:
+ address: jaeger.osm-system.svc.cluster.local
+ enable: false
+ endpoint: /api/v2/spans
+ port: 9411
+ sidecar:
+ configResyncInterval: 0s
+ enablePrivilegedInitContainer: false
+ envoyImage: mcr.microsoft.com/oss/envoyproxy/envoy:v1.18.3
+ initContainerImage: mcr.microsoft.com/oss/openservicemesh/init:v0.9.1
+ logLevel: error
+ maxDataPlaneConnections: 0
+ resources: {}
+ traffic:
+ enableEgress: true
+ enablePermissiveTrafficPolicyMode: true
+ inboundExternalAuthorization:
+ enable: false
+ failureModeAllow: false
+ statPrefix: inboundExtAuthz
+ timeout: 1s
+ useHTTPSIngress: false
+```
+
+Notice the **enablePermissiveTrafficPolicyMode** is configured to **true**. Permissive traffic policy mode in OSM is a mode where the [SMI](https://smi-spec.io/) traffic policy enforcement is bypassed. In this mode, OSM automatically discovers services that are a part of the service mesh and programs traffic policy rules on each Envoy proxy sidecar to be able to communicate with these services.
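+
+If you only want to check that single setting rather than reading the whole resource, a jsonpath query like the following should return `true` (a quick sketch against the same resource shown above):
+
+```azurecli-interactive
+kubectl get meshconfig osm-mesh-config -n kube-system -o jsonpath='{.spec.traffic.enablePermissiveTrafficPolicyMode}'
+```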
+
+## Create namespaces for the application
+
+In this tutorial we will be using the OSM bookstore application that has the following application components:
+
+- bookbuyer
+- bookthief
+- bookstore
+- bookwarehouse
+
+Create namespaces for each of these application components.
+
+```azurecli-interactive
+for i in bookstore bookbuyer bookthief bookwarehouse; do kubectl create ns $i; done
+```
+
+You should see the following output:
+
+```Output
+namespace/bookstore created
+namespace/bookbuyer created
+namespace/bookthief created
+namespace/bookwarehouse created
+```
+
+## Onboard the namespaces to be managed by OSM
+
+When you add the namespaces to the OSM mesh, the OSM controller can automatically inject the Envoy sidecar proxy containers into your application pods. Run the following command to onboard the OSM bookstore application namespaces.
+
+```azurecli-interactive
+osm namespace add bookstore bookbuyer bookthief bookwarehouse
+```
+
+You should see the following output:
+
+```Output
+Namespace [bookstore] successfully added to mesh [osm]
+Namespace [bookbuyer] successfully added to mesh [osm]
+Namespace [bookthief] successfully added to mesh [osm]
+Namespace [bookwarehouse] successfully added to mesh [osm]
+```
+
+## Deploy the Bookstore application
+
+```azurecli-interactive
+kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v0.9/docs/example/manifests/apps/bookbuyer.yaml
+```
+
+```azurecli-interactive
+kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v0.9/docs/example/manifests/apps/bookthief.yaml
+```
+
+```azurecli-interactive
+kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v0.9/docs/example/manifests/apps/bookstore.yaml
+```
+
+```azurecli-interactive
+kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v0.9/docs/example/manifests/apps/bookwarehouse.yaml
+```
+
+All of the deployment outputs are summarized below.
+
+```Output
+serviceaccount/bookbuyer created
+service/bookbuyer created
+deployment.apps/bookbuyer created
+
+serviceaccount/bookthief created
+service/bookthief created
+deployment.apps/bookthief created
+
+service/bookstore created
+serviceaccount/bookstore created
+deployment.apps/bookstore created
+
+serviceaccount/bookwarehouse created
+service/bookwarehouse created
+deployment.apps/bookwarehouse created
+```
+
+## Update the Bookbuyer Service
+
+Update the bookbuyer service to the correct inbound port configuration with the following service manifest.
+
+```azurecli-interactive
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Service
+metadata:
+ name: bookbuyer
+ namespace: bookbuyer
+ labels:
+ app: bookbuyer
+spec:
+ ports:
+ - port: 14001
+ name: inbound-port
+ selector:
+ app: bookbuyer
+EOF
+```
+
+## Verify the Bookstore application
+
+As of now we have deployed the bookstore multi-container application, but it is only accessible from within the AKS cluster. Later we will add the Azure Application Gateway ingress controller to expose the application outside the AKS cluster. To verify that the application is running inside the cluster, we will use a port forward to view the bookbuyer component UI.
+
+First let's get the bookbuyer pod's name
+
+```azurecli-interactive
+kubectl get pod -n bookbuyer
+```
+
+You should see output similar to the following. Your bookbuyer pod will have a unique name appended.
+
+```Output
+NAME READY STATUS RESTARTS AGE
+bookbuyer-7676c7fcfb-mtnrz 2/2 Running 0 7m8s
+```
+
+Once we have the pod's name, we can now use the port-forward command to set up a tunnel from our local system to the application inside the AKS cluster. Run the following command to set up the port forward for the local system port 8080. Again use your specific bookbuyer pod name.
+
+```azurecli-interactive
+kubectl port-forward bookbuyer-7676c7fcfb-mtnrz -n bookbuyer 8080:14001
+```
+
+You should see output similar to this.
+
+```Output
+Forwarding from 127.0.0.1:8080 -> 14001
+Forwarding from [::1]:8080 -> 14001
+```
+
+While the port forwarding session is in place, navigate to the following url from a browser `http://localhost:8080`. You should now be able to see the bookbuyer application UI in the browser similar to the image below.
+
+![OSM bookbuyer app for App Gateway UI image](./media/aks-osm-addon/osm-agic-bookbuyer-img.png)
+
+## Create an Azure Application Gateway to expose the bookbuyer application
+
+> [!NOTE]
+> The following directions will create a new instance of the Azure Application Gateway to be used for ingress. If you have an existing Azure Application Gateway you wish to use, skip to the section for enabling the Application Gateway Ingress Controller add-on.
+
+### Deploy a new Application Gateway
+
+> [!NOTE]
+> We are referencing existing documentation for enabling the Application Gateway Ingress Controller add-on for an existing AKS cluster. Some modifications have been made to suit the OSM materials. More detailed documentation on the subject can be found [here](../application-gateway/tutorial-ingress-controller-add-on-existing.md).
+
+You'll now deploy a new Application Gateway to simulate having an existing Application Gateway that you want to use to load balance traffic to your AKS cluster, _myCluster_. The name of the Application Gateway will be _myApplicationGateway_. You first need to create a public IP resource named _myPublicIp_ and a new virtual network called _myVnet_ with address space 11.0.0.0/8, containing a subnet called _mySubnet_ with address space 11.1.0.0/16. You then deploy your Application Gateway in _mySubnet_ using _myPublicIp_.
+
+When using an AKS cluster and Application Gateway in separate virtual networks, the address spaces of the two virtual networks must not overlap. The default address space that an AKS cluster deploys in is 10.0.0.0/8, so we set the Application Gateway virtual network address prefix to 11.0.0.0/8.
+
+```azurecli-interactive
+az group create --name myResourceGroup --location eastus2
+az network public-ip create -n myPublicIp -g MyResourceGroup --allocation-method Static --sku Standard
+az network vnet create -n myVnet -g myResourceGroup --address-prefix 11.0.0.0/8 --subnet-name mySubnet --subnet-prefix 11.1.0.0/16
+az network application-gateway create -n myApplicationGateway -l eastus2 -g myResourceGroup --sku Standard_v2 --public-ip-address myPublicIp --vnet-name myVnet --subnet mySubnet
+```
+
+> [!NOTE]
+> Application Gateway Ingress Controller (AGIC) add-on **only** supports Application Gateway v2 SKUs (Standard and WAF), and **not** the Application Gateway v1 SKUs.
+
+### Enable the AGIC add-on for an existing AKS cluster through Azure CLI
+
+If you'd like to continue using Azure CLI, you can continue to enable the AGIC add-on in the AKS cluster you created, _myCluster_, and specify the AGIC add-on to use the existing Application Gateway you created, _myApplicationGateway_.
+
+```azurecli-interactive
+appgwId=$(az network application-gateway show -n myApplicationGateway -g myResourceGroup -o tsv --query "id")
+az aks enable-addons -n myCluster -g myResourceGroup -a ingress-appgw --appgw-id $appgwId
+```
+
+You can verify the Azure Application Gateway AKS add-on has been enabled by the following command.
+
+```azurecli-interactive
+az aks list -g myResourceGroup -o json | jq -r .[].addonProfiles.ingressApplicationGateway.enabled
+```
+
+This command should show the output as `true`.
+
+### Peer the two virtual networks together
+
+Since we deployed the AKS cluster in its own virtual network and the Application Gateway in another virtual network, you'll need to peer the two virtual networks together in order for traffic to flow from the Application Gateway to the pods in the cluster. Peering the two virtual networks requires running the Azure CLI command two separate times, to ensure that the connection is bi-directional. The first command will create a peering connection from the Application Gateway virtual network to the AKS virtual network; the second command will create a peering connection in the other direction.
+
+```azurecli-interactive
+nodeResourceGroup=$(az aks show -n myCluster -g myResourceGroup -o tsv --query "nodeResourceGroup")
+aksVnetName=$(az network vnet list -g $nodeResourceGroup -o tsv --query "[0].name")
+
+aksVnetId=$(az network vnet show -n $aksVnetName -g $nodeResourceGroup -o tsv --query "id")
+az network vnet peering create -n AppGWtoAKSVnetPeering -g myResourceGroup --vnet-name myVnet --remote-vnet $aksVnetId --allow-vnet-access
+
+appGWVnetId=$(az network vnet show -n myVnet -g myResourceGroup -o tsv --query "id")
+az network vnet peering create -n AKStoAppGWVnetPeering -g $nodeResourceGroup --vnet-name $aksVnetName --remote-vnet $appGWVnetId --allow-vnet-access
+```
+
+## Expose the bookbuyer service to the internet
+
+Apply the following ingress manifest to the AKS cluster to expose the bookbuyer service to the internet via the Azure Application Gateway.
+
+```azurecli-interactive
+kubectl apply -f - <<EOF
+
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+  name: bookbuyer-ingress
+  namespace: bookbuyer
+  annotations:
+    kubernetes.io/ingress.class: azure/application-gateway
+spec:
+  rules:
+    - host: bookbuyer.contoso.com
+      http:
+        paths:
+          - path: /
+            pathType: Prefix
+            backend:
+              service:
+                name: bookbuyer
+                port:
+                  number: 14001
+  defaultBackend:
+    service:
+      name: bookbuyer
+      port:
+        number: 14001
+EOF
+```
+
+You should see the following output:
+
+```Output
+ingress.networking.k8s.io/bookbuyer-ingress created
+```
+
+Since the host name in the ingress manifest is a pseudo name used for testing, the DNS name will not be available on the internet. We can alternatively use the curl program and pass the host name header to the Azure Application Gateway public IP address, and receive a 200 code that successfully connects us to the bookbuyer service.
+
+```azurecli-interactive
+appGWPIP=$(az network public-ip show -g MyResourceGroup -n myPublicIp -o tsv --query "ipAddress")
+curl -H 'Host: bookbuyer.contoso.com' http://$appGWPIP/
+```
+
+You should see the following output
+
+```Output
+<!doctype html>
+<html itemscope="" itemtype="http://schema.org/WebPage" lang="en">
+ <head>
+ <meta content="Bookbuyer" name="description">
+ <meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
+ <title>Bookbuyer</title>
+ <style>
+ #navbar {
+ width: 100%;
+ height: 50px;
+ display: table;
+ border-spacing: 0;
+ white-space: nowrap;
+ line-height: normal;
+ background-color: #0078D4;
+ background-position: left top;
+ background-repeat-x: repeat;
+ background-image: none;
+ color: white;
+ font: 2.2em "Fira Sans", sans-serif;
+ }
+ #main {
+ padding: 10pt 10pt 10pt 10pt;
+ font: 1.8em "Fira Sans", sans-serif;
+ }
+ li {
+ padding: 10pt 10pt 10pt 10pt;
+ font: 1.2em "Consolas", sans-serif;
+ }
+ </style>
+ <script>
+ setTimeout(function(){window.location.reload(1);}, 1500);
+ </script>
+ </head>
+ <body bgcolor="#fff">
+ <div id="navbar">
+ &#128214; Bookbuyer
+ </div>
+ <div id="main">
+ <ul>
+ <li>Total books bought: <strong>5969</strong>
+ <ul>
+ <li>from bookstore V1: <strong>277</strong>
+ <li>from bookstore V2: <strong>5692</strong>
+ </ul>
+ </li>
+ </ul>
+ </div>
+
+ <br/><br/><br/><br/>
+ <br/><br/><br/><br/>
+ <br/><br/><br/><br/>
+
+ Current Time: <strong>Fri, 26 Mar 2021 16:34:30 UTC</strong>
+ </body>
+</html>
+```
+
+## Troubleshooting
+
+- [AGIC Troubleshooting Documentation](../application-gateway/ingress-controller-troubleshoot.md)
+- [Additional troubleshooting tools are available on AGIC's GitHub repo](https://github.com/Azure/application-gateway-kubernetes-ingress/blob/master/docs/troubleshootings/troubleshooting-installing-a-simple-application.md)
aks Open Service Mesh Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/open-service-mesh-azure-monitor.md
+
+ Title: Using Azure Monitor and Application Insights
+description: How to use Azure Monitor and Application Insights with Open Service Mesh
++ Last updated : 8/26/2021++++
+# Open Service Mesh (OSM) Monitoring and Observability using Azure Monitor and Applications Insights
+
+Both Azure Monitor and Azure Application Insights assist with maximizing the availability and performance of your applications and services. This is done by delivering a comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments.
++
+The OSM AKS add-on will have deep integrations into both of these Azure services and provide a seamless Azure experience for viewing and responding to critical KPIs provided by OSM metrics. For more information on how to enable and configure these services for the OSM AKS add-on, visit the [Azure Monitor for OSM](https://aka.ms/azmon/osmpreview) page.
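+
+As a starting point, surfacing cluster and OSM metrics in Azure Monitor typically assumes that Container insights is enabled on the cluster. The following is a minimal sketch with placeholder names; the OSM-specific monitoring configuration itself is covered on the page linked above:
+
+```azurecli-interactive
+# Enable Container insights (the monitoring add-on) on the AKS cluster
+az aks enable-addons --addons monitoring --name <my-osm-aks-cluster-name> --resource-group <my-osm-aks-cluster-rg>
+```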
aks Open Service Mesh Binary https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/open-service-mesh-binary.md
+
+ Title: Download the OSM client Library
+description: Download the Open Service Mesh (OSM) client library
++ Last updated : 8/26/2021++
+zone_pivot_groups: client-operating-system
++
+# Download the Open Service Mesh (OSM) client library
+This article will discuss how to download the OSM client library to be used to operate and configure the OSM add-on for AKS.
+++++++++++
+> [!WARNING]
+> Do not attempt to install OSM from the binary by using `osm install`. This will result in an installation of OSM that is not integrated as an add-on for AKS.
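+
+For reference, downloading the client binary on Linux typically looks like the following sketch. The release asset naming and version shown here are assumptions, so verify them against the OSM releases page and match the version to your add-on:
+
+```azurecli-interactive
+# Assumed release asset layout; adjust the version and platform as needed
+OSM_VERSION=v0.9.1
+curl -sL "https://github.com/openservicemesh/osm/releases/download/${OSM_VERSION}/osm-${OSM_VERSION}-linux-amd64.tar.gz" | tar -xz
+sudo mv ./linux-amd64/osm /usr/local/bin/osm
+osm version
+```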
aks Open Service Mesh Deploy Add On https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/open-service-mesh-deploy-add-on.md
+
+ Title: Deploy Open Service Mesh (Preview)
+description: Deploy Open Service Mesh on Azure Kubernetes Service (AKS)
++ Last updated : 8/26/2021++++
+# Deploy the Open Service Mesh AKS add-on (Preview)
+This article will discuss how to deploy the OSM add-on to AKS.
++
+## Prerequisites
+
+- The Azure CLI, version 2.20.0 or later
+- The `aks-preview` extension version 0.5.5 or later
+- OSM version v0.9.1 or later
+- JSON processor "jq" version 1.6+
+
+## Install the aks-preview extension
+
+You will need the *aks-preview* Azure CLI extension version 0.5.24 or greater. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
+
+```azurecli-interactive
+# Install the aks-preview extension
+az extension add --name aks-preview
+
+# Update the extension to make sure you have the latest version installed
+az extension update --name aks-preview
+```
+
+## Register the `AKS-OpenServiceMesh` preview feature
+
+To create an AKS cluster that can use the Open Service Mesh add-on, you must enable the `AKS-OpenServiceMesh` feature flag on your subscription.
+
+Register the `AKS-OpenServiceMesh` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "AKS-OpenServiceMesh"
+```
+
+It takes a few minutes for the status to show _Registered_. Verify the registration status by using the [az feature list][az-feature-list] command:
+
+```azurecli-interactive
+az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AKS-OpenServiceMesh')].{Name:name,State:properties.state}"
+```
+
+When ready, refresh the registration of the _Microsoft.ContainerService_ resource provider by using the [az provider register][az-provider-register] command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
+
+## Install Open Service Mesh (OSM) Azure Kubernetes Service (AKS) add-on for a new AKS cluster
+
+For a new AKS cluster deployment scenario, you will start with a brand new deployment of an AKS cluster enabling the OSM add-on at the cluster create operation.
+
+### Create a resource group
+
+In Azure, you allocate related resources to a resource group. Create a resource group by using [az group create](/cli/azure/group#az_group_create). The following example creates a resource group with the name you specify in the Azure location (region) that you specify:
+
+```azurecli-interactive
+az group create --name <my-osm-aks-cluster-rg> --location <azure-region>
+```
+
+### Deploy an AKS cluster with the OSM add-on enabled
+
+You'll now deploy a new AKS cluster with the OSM add-on enabled.
+
+> [!NOTE]
+> Please be aware the following AKS deployment command utilizes OS ephemeral disks for an example AKS deployment. You can find more information here about [Ephemeral OS disks for AKS](./cluster-configuration.md#ephemeral-os)
+
+```azurecli-interactive
+az aks create -n <my-osm-aks-cluster-name> -g <my-osm-aks-cluster-rg> --node-osdisk-type Ephemeral --node-osdisk-size 30 --network-plugin azure --enable-managed-identity -a open-service-mesh
+```
+
+#### Get AKS Cluster Access Credentials
+
+Get access credentials for the new managed Kubernetes cluster.
+
+```azurecli-interactive
+az aks get-credentials -n <my-osm-aks-cluster-name> -g <my-osm-aks-cluster-rg>
+```
+
+## Enable Open Service Mesh (OSM) Azure Kubernetes Service (AKS) add-on for an existing AKS cluster
+
+For an existing AKS cluster scenario, you will enable the OSM add-on to an existing AKS cluster that has already been deployed.
+
+### Enable the OSM add-on to existing AKS cluster
+
+To enable the AKS OSM add-on, you will need to run the `az aks enable-addons --addons` command passing the parameter `open-service-mesh`
+
+```azurecli-interactive
+az aks enable-addons --addons open-service-mesh -g <my-osm-aks-cluster-rg> -n <my-osm-aks-cluster-name>
+```
+
+You should see output similar to the output shown below to confirm the AKS OSM add-on has been installed.
+
+```json
+{- Finished ..
+ "aadProfile": null,
+ "addonProfiles": {
+ "KubeDashboard": {
+ "config": null,
+ "enabled": false,
+ "identity": null
+ },
+ "openServiceMesh": {
+ "config": {},
+ "enabled": true,
+ "identity": {
+...
+```
+
+## Validate the AKS OSM add-on installation
+
+There are several commands you can run to check that all of the components of the AKS OSM add-on are enabled and running:
+
+First we can query the add-on profiles of the cluster to check the enabled state of the add-ons installed. The following command should return "true".
+
+```azurecli-interactive
+az aks list -g <my-osm-aks-cluster-rg> -o json | jq -r '.[].addonProfiles.openServiceMesh.enabled'
+```
+
+The following `kubectl` commands will report the status of the osm-controller.
+
+```azurecli-interactive
+kubectl get deployments -n kube-system --selector app=osm-controller
+kubectl get pods -n kube-system --selector app=osm-controller
+kubectl get services -n kube-system --selector app=osm-controller
+```
+
+## Accessing the AKS OSM add-on configuration
+
+Currently you can access and configure the OSM controller configuration via the OSM MeshConfig resource. To view the OSM controller configuration settings via the CLI, use the `kubectl get` command as shown below.
+
+```azurecli-interactive
+kubectl get meshconfig osm-mesh-config -n kube-system -o yaml
+```
+
+Output of the MeshConfig should look like the following:
+
+```Output
+apiVersion: config.openservicemesh.io/v1alpha1
+kind: MeshConfig
+metadata:
+ creationTimestamp: "0000-00-00A00:00:00A"
+ generation: 1
+ name: osm-mesh-config
+ namespace: kube-system
+ resourceVersion: "2494"
+ uid: 6c4d67f3-c241-4aeb-bf4f-b029b08faa31
+spec:
+ certificate:
+ serviceCertValidityDuration: 24h
+ featureFlags:
+ enableEgressPolicy: true
+ enableMulticlusterMode: false
+ enableWASMStats: true
+ observability:
+ enableDebugServer: true
+ osmLogLevel: info
+ tracing:
+ address: jaeger.osm-system.svc.cluster.local
+ enable: false
+ endpoint: /api/v2/spans
+ port: 9411
+ sidecar:
+ configResyncInterval: 0s
+ enablePrivilegedInitContainer: false
+ envoyImage: mcr.microsoft.com/oss/envoyproxy/envoy:v1.18.3
+ initContainerImage: mcr.microsoft.com/oss/openservicemesh/init:v0.9.1
+ logLevel: error
+ maxDataPlaneConnections: 0
+ resources: {}
+ traffic:
+ enableEgress: true
+ enablePermissiveTrafficPolicyMode: true
+ inboundExternalAuthorization:
+ enable: false
+ failureModeAllow: false
+ statPrefix: inboundExtAuthz
+ timeout: 1s
+ useHTTPSIngress: false
+```
+
+Notice the **enablePermissiveTrafficPolicyMode** is configured to **true**. Permissive traffic policy mode in OSM is a mode where the [SMI](https://smi-spec.io/) traffic policy enforcement is bypassed. In this mode, OSM automatically discovers services that are a part of the service mesh and programs traffic policy rules on each Envoy proxy sidecar to be able to communicate with these services. For more detailed information about permissive traffic mode, please visit and read the [Permissive Traffic Policy Mode](https://docs.openservicemesh.io/docs/guides/traffic_management/permissive_mode/) article.
+
+> [!WARNING]
+> Before proceeding, please verify that your permissive traffic policy mode is set to `true`. If it isn't, change it to **true** by using the following command:
+
+```azurecli-interactive
+kubectl patch meshconfig osm-mesh-config -n kube-system -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":true}}}' --type=merge
+```
aks Open Service Mesh Deploy Existing Application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/open-service-mesh-deploy-existing-application.md
+
+ Title: Manage an existing application with Open Service Mesh
+description: How to manage an existing application with Open Service Mesh
++ Last updated : 8/26/2021++++
+# Manage an existing application with the Open Service Mesh (OSM) Azure Kubernetes Service (AKS) add-on
+
+## Before you begin
+
+The steps detailed in this walkthrough assume that you've previously enabled the OSM AKS add-on for your AKS cluster. If not, review the article [Deploy the OSM AKS add-on](./open-service-mesh-deploy-add-on.md) before proceeding. Also, your AKS cluster needs to run Kubernetes version `1.19` or later, have Kubernetes RBAC enabled, and have an established `kubectl` connection with the cluster. (If you need help with any of these items, see the [AKS quickstart](./kubernetes-walkthrough.md).) You must also have the AKS OSM add-on installed.
+
+You must have the following resources installed:
+
+- The Azure CLI, version 2.20.0 or later
+- The `aks-preview` extension version 0.5.5 or later
+- OSM version v0.8.0 or later
+- JSON processor "jq" version 1.6+
++
+## Verify the Open Service Mesh (OSM) Permissive Traffic Mode Policy
+
+The OSM Permissive Traffic Policy mode is a mode where the [SMI](https://smi-spec.io/) traffic policy enforcement is bypassed. In this mode, OSM automatically discovers services that are a part of the service mesh and programs traffic policy rules on each Envoy proxy sidecar to be able to communicate with these services.
+
+To verify the current permissive traffic mode of OSM for your cluster, run the following command:
+
+```azurecli-interactive
+kubectl get meshconfig osm-mesh-config -n kube-system -o yaml
+```
+
+Output of the OSM MeshConfig should look like the following:
+
+```Output
+apiVersion: config.openservicemesh.io/v1alpha1
+kind: MeshConfig
+metadata:
+ creationTimestamp: "0000-00-00A00:00:00A"
+ generation: 1
+ name: osm-mesh-config
+ namespace: kube-system
+ resourceVersion: "2494"
+ uid: 6c4d67f3-c241-4aeb-bf4f-b029b08faa31
+spec:
+ certificate:
+ serviceCertValidityDuration: 24h
+ featureFlags:
+ enableEgressPolicy: true
+ enableMulticlusterMode: false
+ enableWASMStats: true
+ observability:
+ enableDebugServer: true
+ osmLogLevel: info
+ tracing:
+ address: jaeger.osm-system.svc.cluster.local
+ enable: false
+ endpoint: /api/v2/spans
+ port: 9411
+ sidecar:
+ configResyncInterval: 0s
+ enablePrivilegedInitContainer: false
+ envoyImage: mcr.microsoft.com/oss/envoyproxy/envoy:v1.18.3
+ initContainerImage: mcr.microsoft.com/oss/openservicemesh/init:v0.9.1
+ logLevel: error
+ maxDataPlaneConnections: 0
+ resources: {}
+ traffic:
+ enableEgress: true
+ enablePermissiveTrafficPolicyMode: true
+ inboundExternalAuthorization:
+ enable: false
+ failureModeAllow: false
+ statPrefix: inboundExtAuthz
+ timeout: 1s
+ useHTTPSIngress: false
+```
+
+If **enablePermissiveTrafficPolicyMode** is configured to **true**, you can safely onboard your namespaces without any disruption to your service-to-service communications. If **enablePermissiveTrafficPolicyMode** is configured to **false**, you'll need to ensure that you have the correct [SMI](https://smi-spec.io/) traffic access policy manifests deployed and that you have a service account representing each service deployed in the namespace. For more detailed information about permissive traffic mode, please visit and read the [Permissive Traffic Policy Mode](https://docs.openservicemesh.io/docs/guides/traffic_management/permissive_mode/) article.
+
+## Onboard existing deployed applications with Open Service Mesh (OSM) Permissive Traffic Policy configured as True
+
+The first thing we'll do is add the deployed application namespaces to OSM to manage. The example below onboards the **bookbuyer** namespace, which is used throughout the rest of this walkthrough, to OSM.
+
+```azurecli-interactive
+osm namespace add bookbuyer
+```
+
+You should see the following output:
+
+```Output
+Namespace [bookbuyer] successfully added to mesh [osm]
+```
+
+Next we will take a look at the current pod deployment in the namespace. Run the following command to view the pods in the designated namespace.
+
+```azurecli-interactive
+kubectl get pod -n bookbuyer
+```
+
+You should see the following similar output:
+
+```Output
+NAME READY STATUS RESTARTS AGE
+bookbuyer-78666dcff8-wh6wl 1/1 Running 0 43s
+```
+
+Notice the **READY** column showing **1/1**, meaning that the application pod has only one container. Next we will need to restart your application deployments so that OSM can inject the Envoy sidecar proxy container alongside your application pod. Let's get a list of deployments in the namespace.
+
+```azurecli-interactive
+kubectl get deployment -n bookbuyer
+```
+
+You should see the following output:
+
+```Output
+NAME READY UP-TO-DATE AVAILABLE AGE
+bookbuyer 1/1 1 1 23h
+```
+
+Now we will restart the deployment to inject the Envoy sidecar proxy container with your application pod. Run the following command.
+
+```azurecli-interactive
+kubectl rollout restart deployment bookbuyer -n bookbuyer
+```
+
+You should see the following output:
+
+```Output
+deployment.apps/bookbuyer restarted
+```
+
+If we take a look at the pods in the namespace again:
+
+```azurecli-interactive
+kubectl get pod -n bookbuyer
+```
+
+You'll notice that the **READY** column is now showing **2/2** containers ready for your pod. The second container is the Envoy sidecar proxy.
+
+```Output
+NAME READY STATUS RESTARTS AGE
+bookbuyer-84446dd5bd-j4tlr 2/2 Running 0 3m30s
+```
+
+We can further inspect the pod to view the Envoy proxy by running the describe command to view the configuration.
+
+```azurecli-interactive
+kubectl describe pod bookbuyer-84446dd5bd-j4tlr -n bookbuyer
+```
+
+```Output
+Containers:
+ bookbuyer:
+ Container ID: containerd://b7503b866f915711002292ea53970bd994e788e33fb718f1c4f8f12cd4a88198
+ Image: openservicemesh/bookbuyer:v0.8.0
+ Image ID: docker.io/openservicemesh/bookbuyer@sha256:813874bd2dc9c5a259b9657995348cf0822b905e29c4e86f21fdefa0ef21dcee
+ Port: <none>
+ Host Port: <none>
+ Command:
+ /bookbuyer
+ State: Running
+ Started: Tue, 23 Mar 2021 10:52:53 -0400
+ Ready: True
+ Restart Count: 0
+ Environment:
+ BOOKSTORE_NAMESPACE: bookstore
+ BOOKSTORE_SVC: bookstore
+ Mounts:
+ /var/run/secrets/kubernetes.io/serviceaccount from bookbuyer-token-zft2r (ro)
+ envoy:
+ Container ID: containerd://f5f1cb5db8d5304e23cc984eb08146ea162a3e82d4262c4472c28d5579c25e10
+ Image: envoyproxy/envoy-alpine:v1.17.1
+ Image ID: docker.io/envoyproxy/envoy-alpine@sha256:511e76b9b73fccd98af2fbfb75c34833343d1999469229fdfb191abd2bbe3dfb
+ Ports: 15000/TCP, 15003/TCP, 15010/TCP
+ Host Ports: 0/TCP, 0/TCP, 0/TCP
+```
+
+Verify your application is still functional after the Envoy sidecar proxy injection.
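+
+For example, one quick check is to port forward to the pod again and load the UI, assuming the bookbuyer sample serves on port 14001 as in the earlier walkthroughs (a sketch; substitute your own pod name):
+
+```azurecli-interactive
+kubectl port-forward bookbuyer-84446dd5bd-j4tlr -n bookbuyer 8080:14001
+```
+
+While the port forward is running, browse to `http://localhost:8080` and confirm that the bookbuyer UI still loads.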
+
+## Onboard existing deployed applications with Open Service Mesh (OSM) Permissive Traffic Policy configured as False
+
+When the OSM configuration for the permissive traffic policy is set to `false`, OSM will require explicit [SMI](https://smi-spec.io/) traffic access policies deployed for service-to-service communication to happen within your cluster. Currently, OSM also uses Kubernetes service accounts as part of authorizing service-to-service communications. To ensure your existing deployed applications will communicate when managed by the OSM mesh, we need to verify the existence of a service account to use, update the application deployment with the service account information, and apply the [SMI](https://smi-spec.io/) traffic access policies.
+
+### Verify Kubernetes Service Accounts
+
+Verify whether you have a Kubernetes service account in the namespace your application is deployed to.
+
+```azurecli-interactive
+kubectl get serviceaccounts -n bookbuyer
+```
+
+In the following output, there is a service account named `bookbuyer` in the bookbuyer namespace.
+
+```Output
+NAME SECRETS AGE
+bookbuyer 1 25h
+default 1 25h
+```
+
+If you do not have a service account listed other than the default account, you will need to create one for your application. Use the following command as an example to create a service account in the application's deployed namespace.
+
+```azurecli-interactive
+kubectl create serviceaccount myserviceaccount -n bookbuyer
+```
+
+```Output
+serviceaccount/myserviceaccount created
+```
+
+### View your application's current deployment specification
+
+If you had to create a service account from the earlier section, chances are your application deployment is not configured with a specific `serviceAccountName` in the deployment spec. We can view your application's deployment spec with the following commands:
+
+```azurecli-interactive
+kubectl get deployment -n bookbuyer
+```
+
+The deployments in the namespace are listed in the output.
+
+```Output
+NAME READY UP-TO-DATE AVAILABLE AGE
+bookbuyer 1/1 1 1 25h
+```
+
+We will now describe the deployment as a check to see if there is a service account listed in the Pod Template section.
+
+```azurecli-interactive
+kubectl describe deployment bookbuyer -n bookbuyer
+```
+
+In this particular deployment you can see that there is a service account associated with the deployment listed under the Pod Template section. This deployment is using the service account bookbuyer. If you do not see the **Service Account:** property, your deployment is not configured to use a service account.
+
+```Output
+Pod Template:
+ Labels: app=bookbuyer
+ version=v1
+ Annotations: kubectl.kubernetes.io/restartedAt: 2021-03-23T10:52:49-04:00
+ Service Account: bookbuyer
+ Containers:
+ bookbuyer:
+ Image: openservicemesh/bookbuyer:v0.8.0
+
+```
+
+There are several techniques to update your deployment to add a Kubernetes service account. Review the Kubernetes documentation on [Updating a Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment) inline, or [Configure Service Accounts for Pods](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/). Once you have updated your deployment spec with the service account, redeploy it (`kubectl apply -f your-deployment.yaml`) to the cluster. Alternatively, you can patch the service account onto an existing deployment, as shown in the sketch below.
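+
+The following is a minimal sketch that uses `kubectl patch` to set the service account on the bookbuyer deployment from this walkthrough; substitute your own deployment, namespace, and service account names:
+
+```azurecli-interactive
+# Patch the pod template with the service account; this triggers a new rollout
+kubectl patch deployment bookbuyer -n bookbuyer -p '{"spec":{"template":{"spec":{"serviceAccountName":"bookbuyer"}}}}'
+```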
+
+### Deploy the necessary Service Mesh Interface (SMI) Policies
+
+The last step to allowing authorized traffic to flow in the mesh is to deploy the necessary [SMI](https://smi-spec.io/) traffic access policies for your application. The amount of configuration you can achieve with [SMI](https://smi-spec.io/) traffic access policies is beyond the scope of this walkthrough, but we will detail some of the common components of the specification and show how to configure both a simple TrafficTarget and HTTPRouteGroup policy to enable service-to-service communication for your application.
+
+The [SMI](https://smi-spec.io/) [**Traffic Access Control**](https://github.com/servicemeshinterface/smi-spec/blob/main/apis/traffic-access/v1alpha3/traffic-access.md#traffic-access-control) specification allows users to define the access control policy for their applications. We will focus on the **TrafficTarget** and **HTTPRouteGroup** API resources.
+
+The TrafficTarget resource consists of three main configuration settings: destination, rules, and sources. An example TrafficTarget is shown below.
+
+```yaml
+apiVersion: access.smi-spec.io/v1alpha3
+kind: TrafficTarget
+metadata:
+ name: bookbuyer-access-bookstore-v1
+ namespace: bookstore
+spec:
+ destination:
+ kind: ServiceAccount
+ name: bookstore
+ namespace: bookstore
+ rules:
+ - kind: HTTPRouteGroup
+ name: bookstore-service-routes
+ matches:
+ - buy-a-book
+ - books-bought
+ sources:
+ - kind: ServiceAccount
+ name: bookbuyer
+ namespace: bookbuyer
+```
+
+In the above TrafficTarget spec, the `destination` denotes the service account that is configured for the destination service. Remember, the service account that was added to the deployment earlier will be used to authorize access to the deployment it is attached to. The `rules` section, in this particular example, defines the type of HTTP traffic that is allowed over the connection. You can configure fine-grained regex patterns for the HTTP headers to be specific about what traffic is allowed via HTTP. The `sources` section is the service originating communications. This spec reads: bookbuyer needs to communicate to the bookstore.
+
+The HTTPRouteGroup resource consists of one or an array of matches of HTTP header information and is a requirement for the TrafficTarget spec. In the example below, you can see that the HTTPRouteGroup is authorizing three HTTP actions, two GET and one POST.
+
+```yaml
+apiVersion: specs.smi-spec.io/v1alpha4
+kind: HTTPRouteGroup
+metadata:
+ name: bookstore-service-routes
+ namespace: bookstore
+spec:
+ matches:
+ - name: books-bought
+ pathRegex: /books-bought
+ methods:
+ - GET
+ headers:
+ - "user-agent": ".*-http-client/*.*"
+ - "client-app": "bookbuyer"
+ - name: buy-a-book
+ pathRegex: ".*a-book.*new"
+ methods:
+ - GET
+ - name: update-books-bought
+ pathRegex: /update-books-bought
+ methods:
+ - POST
+```
+
+If you're not familiar with the type of HTTP traffic your front-end application makes to other tiers of the application, you can create the equivalent of an allow-all rule by using the following spec for HTTPRouteGroup, because the TrafficTarget spec requires a rule.
+
+```yaml
+apiVersion: specs.smi-spec.io/v1alpha4
+kind: HTTPRouteGroup
+metadata:
+ name: allow-all
+ namespace: yournamespace
+spec:
+ matches:
+ - name: allow-all
+ pathRegex: '.*'
+ methods: ["GET","PUT","POST","DELETE","PATCH"]
+```
+
+Once you have configured your TrafficTarget and HTTPRouteGroup specs, you can put them together as one YAML manifest and deploy it. Below is the bookstore example configuration.
+
+```Bookstore Example TrafficTarget and HTTPRouteGroup configuration
+kubectl apply -f - <<EOF
+
+apiVersion: access.smi-spec.io/v1alpha3
+kind: TrafficTarget
+metadata:
+ name: bookbuyer-access-bookstore-v1
+ namespace: bookstore
+spec:
+ destination:
+ kind: ServiceAccount
+ name: bookstore
+ namespace: bookstore
+ rules:
+ - kind: HTTPRouteGroup
+ name: bookstore-service-routes
+ matches:
+ - buy-a-book
+ - books-bought
+ sources:
+ - kind: ServiceAccount
+ name: bookbuyer
+ namespace: bookbuyer
+---
+apiVersion: specs.smi-spec.io/v1alpha4
+kind: HTTPRouteGroup
+metadata:
+ name: bookstore-service-routes
+ namespace: bookstore
+spec:
+ matches:
+ - name: books-bought
+ pathRegex: /books-bought
+ methods:
+ - GET
+ headers:
+ - "user-agent": ".*-http-client/*.*"
+ - "client-app": "bookbuyer"
+ - name: buy-a-book
+ pathRegex: ".*a-book.*new"
+ methods:
+ - GET
+ - name: update-books-bought
+ pathRegex: /update-books-bought
+ methods:
+ - POST
+EOF
+```
+
+Visit the [SMI](https://smi-spec.io/) site for more detailed information on the specification.
+
+## Manage the application's namespace with OSM
+
+Next, we will configure OSM to manage the namespace and restart the deployments so that the Envoy sidecar proxy is injected alongside the application.
+
+Run the following command to configure the `azure-vote` namespace to be managed by OSM.
+
+```azurecli-interactive
+osm namespace add azure-vote
+```
+
+```Output
+Namespace [azure-vote] successfully added to mesh [osm]
+```
+
+Next, restart both the `azure-vote-front` and `azure-vote-back` deployments with the following commands.
+
+```azurecli-interactive
+kubectl rollout restart deployment azure-vote-front -n azure-vote
+kubectl rollout restart deployment azure-vote-back -n azure-vote
+```
+
+```Output
+deployment.apps/azure-vote-front restarted
+deployment.apps/azure-vote-back restarted
+```
+
+If we view the pods for the `azure-vote` namespace, we will see the **READY** column for both `azure-vote-front` and `azure-vote-back` as 2/2, meaning the Envoy sidecar proxy has been injected alongside the application.
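+
+For example, a quick check of the pods (a sketch; the pod name suffixes in your cluster will differ) could look like:
+
+```azurecli-interactive
+kubectl get pods -n azure-vote
+```
+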
aks Open Service Mesh Deploy New Application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/open-service-mesh-deploy-new-application.md
+
+ Title: Manage a new application with Open Service Mesh
+description: How to manage a new application with Open Service Mesh
+ Last updated: 8/26/2021
+# Manage a new application with the Open Service Mesh (OSM) Azure Kubernetes Service (AKS) add-on
+
+## Before you begin
+
+The steps detailed in this walkthrough assume that you've created an AKS cluster (Kubernetes `1.19+`, with Kubernetes RBAC enabled), have established a `kubectl` connection with the cluster (if you need help with any of these items, see the [AKS quickstart](./kubernetes-walkthrough.md)), and have installed the AKS OSM add-on.
+
+You must have the following resources installed:
+
+- The Azure CLI, version 2.20.0 or later
+- The `aks-preview` extension version 0.5.5 or later
+- OSM version v0.8.0 or later
+- JSON processor "jq" version 1.6+
+
+## Create namespaces for the application
+
+In this walkthrough, we will be using the OSM bookstore application, which has the following application components:
+
+- `bookbuyer`
+- `bookthief`
+- `bookstore`
+- `bookwarehouse`
+
+Create namespaces for each of these application components.
+
+```azurecli-interactive
+for i in bookstore bookbuyer bookthief bookwarehouse; do kubectl create ns $i; done
+```
+
+You should see the following output:
+
+```Output
+namespace/bookstore created
+namespace/bookbuyer created
+namespace/bookthief created
+namespace/bookwarehouse created
+```
+
+## Onboard the namespaces to be managed by OSM
+
+Adding the namespaces to the OSM mesh allows the OSM controller to automatically inject the Envoy sidecar proxy containers alongside your application. Run the following command to onboard the OSM bookstore application namespaces.
+
+```azurecli-interactive
+osm namespace add bookstore bookbuyer bookthief bookwarehouse
+```
+
+You should see the following output:
+
+```Output
+Namespace [bookstore] successfully added to mesh [osm]
+Namespace [bookbuyer] successfully added to mesh [osm]
+Namespace [bookthief] successfully added to mesh [osm]
+Namespace [bookwarehouse] successfully added to mesh [osm]
+```
+
+## Deploy the Bookstore application to the AKS cluster
+
+```azurecli-interactive
+kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v0.9/docs/example/manifests/apps/bookbuyer.yaml
+```
+
+```azurecli-interactive
+kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v0.9/docs/example/manifests/apps/bookthief.yaml
+```
+
+```azurecli-interactive
+kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v0.9/docs/example/manifests/apps/bookstore.yaml
+```
+
+```azurecli-interactive
+kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v0.9/docs/example/manifests/apps/bookwarehouse.yaml
+```
+
+All of the deployment outputs are summarized below.
+
+```Output
+serviceaccount/bookbuyer created
+service/bookbuyer created
+deployment.apps/bookbuyer created
+
+serviceaccount/bookthief created
+service/bookthief created
+deployment.apps/bookthief created
+
+service/bookstore created
+serviceaccount/bookstore created
+deployment.apps/bookstore created
+
+serviceaccount/bookwarehouse created
+service/bookwarehouse created
+deployment.apps/bookwarehouse created
+```
+
+## Checkpoint: What got installed?
+
+The Bookstore application is an example multi-tiered application that works well for testing service mesh functionality. The application consists of four services: `bookbuyer`, `bookthief`, `bookstore`, and `bookwarehouse`. Both the `bookbuyer` and `bookthief` services communicate with the `bookstore` service to retrieve books. The `bookstore` service retrieves books from the `bookwarehouse` service to supply the `bookbuyer` and `bookthief`. This simple multi-tiered application works well for showing how a service mesh can be used to protect and authorize communications between the application's services. As we continue through the walkthrough, we will be enabling and disabling Service Mesh Interface (SMI) policies to both allow and disallow the services to communicate via OSM. Below is an architecture diagram of what got installed for the bookstore application.
+
+![OSM bookbuyer app architecture](./media/aks-osm-addon/osm-bookstore-app-arch.png)
+
+## Verify the Bookstore application running inside the AKS cluster
+
+As of now, we have deployed the `bookstore` multi-container application, but it is only accessible from within the AKS cluster. Later tutorials will assist you in exposing the application outside the cluster via an ingress controller. For now, we will be utilizing port forwarding to access the `bookbuyer` application inside the AKS cluster to verify it is buying books from the `bookstore` service.
+
+To verify that the application is running inside the cluster, we will use a port forward to view both the `bookbuyer` and `bookthief` components UI.
+
+First, let's get the `bookbuyer` pod's name:
+
+```azurecli-interactive
+kubectl get pod -n bookbuyer
+```
+
+You should see output similar to the following. Your `bookbuyer` pod will have a unique name appended.
+
+```Output
+NAME READY STATUS RESTARTS AGE
+bookbuyer-7676c7fcfb-mtnrz 2/2 Running 0 7m8s
+```
+
+Once we have the pod's name, we can use the port-forward command to set up a tunnel from our local system to the application inside the AKS cluster. Run the following command to set up the port forward for local system port 8080. Again, use your own bookbuyer pod name.
+
+> [!NOTE]
+> For all port forwarding commands it is best to use an additional terminal so that you can continue to work through this walkthrough and not disconnect the tunnel. It is also best that you establish the port forward tunnel outside of the Azure Cloud Shell.
+
+```Bash
+kubectl port-forward bookbuyer-7676c7fcfb-mtnrz -n bookbuyer 8080:14001
+```
+
+You should see output similar to the following:
+
+```Output
+Forwarding from 127.0.0.1:8080 -> 14001
+Forwarding from [::1]:8080 -> 14001
+```
+
+While the port forwarding session is in place, navigate to `http://localhost:8080` from a browser. You should now be able to see the `bookbuyer` application UI in the browser, similar to the image below.
+
+![OSM bookbuyer app UI image](./media/aks-osm-addon/osm-bookbuyer-service-ui.png)
+
+You will also notice that the total books bought number continues to increment for the `bookstore` v1 service. The `bookstore` v2 service has not been deployed yet. We will deploy the `bookstore` v2 service when we demonstrate the SMI traffic split policies.
+
+You can also check the same for the `bookthief` service.
+
+```azurecli-interactive
+kubectl get pod -n bookthief
+```
+
+You should see output similar to the following. Your `bookthief` pod will have a unique name appended.
+
+```Output
+NAME READY STATUS RESTARTS AGE
+bookthief-59549fb69c-cr8vl 2/2 Running 0 15m54s
+```
+
+Port forward to the `bookthief` pod.
+
+```Bash
+kubectl port-forward bookthief-59549fb69c-cr8vl -n bookthief 8080:14001
+```
+
+Navigate to `http://localhost:8080` from a browser. You should see that the `bookthief` is currently stealing books from the `bookstore` service! Later on, we will implement a traffic policy to stop the `bookthief`.
+
+![OSM bookthief app UI image](./media/aks-osm-addon/osm-bookthief-service-ui.png)
+
+## Disable OSM Permissive Traffic Mode for the mesh
+
+We will now disable the permissive traffic mode policy. With it disabled, OSM needs explicit [SMI](https://smi-spec.io/) policies deployed to the cluster to allow communications in the mesh for each service. To disable permissive traffic mode, run the following command to update the OSM MeshConfig resource property, changing the value from `true` to `false`.
+
+```azurecli-interactive
+kubectl patch meshconfig osm-mesh-config -n kube-system -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":false}}}' --type=merge
+```
+
+You should see output similar to the following.
+
+```Output
+meshconfig.config.openservicemesh.io/osm-mesh-config patched
+```
+
+To verify that permissive traffic mode has been disabled, port forward back into either the `bookbuyer` or `bookthief` pod to view its UI in the browser and check whether the books bought or books stolen counter has stopped incrementing. Be sure to refresh the browser. If the incrementing has stopped, the policy was applied correctly. You have successfully stopped the `bookthief` from stealing books, but the `bookbuyer` can no longer purchase books from the `bookstore`, and the `bookstore` can no longer retrieve books from the `bookwarehouse`. Next, we will implement [SMI](https://smi-spec.io/) policies to allow only the services in the mesh that you'd like to communicate to do so. For more detailed information about permissive traffic mode, please visit and read the [Permissive Traffic Policy Mode](https://docs.openservicemesh.io/docs/guides/traffic_management/permissive_mode/) article.
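+
+For example, to re-check the `bookbuyer` UI, you can re-establish the port forward (a sketch; substitute your own pod name from the earlier output):
+
+```Bash
+kubectl port-forward bookbuyer-7676c7fcfb-mtnrz -n bookbuyer 8080:14001
+```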
+
+## Apply Service Mesh Interface (SMI) traffic access policies
+
+Now that we have disabled all communications in the mesh, let's allow our `bookbuyer` service to communicate with our `bookstore` service for purchasing books, and allow our `bookstore` service to communicate with our `bookwarehouse` service to retrieve books to sell.
+
+Deploy the following [SMI](https://smi-spec.io/) policies.
+
+```azurecli-interactive
+kubectl apply -f - <<EOF
+
+apiVersion: access.smi-spec.io/v1alpha3
+kind: TrafficTarget
+metadata:
+ name: bookbuyer-access-bookstore
+ namespace: bookstore
+spec:
+ destination:
+ kind: ServiceAccount
+ name: bookstore
+ namespace: bookstore
+ rules:
+ - kind: HTTPRouteGroup
+ name: bookstore-service-routes
+ matches:
+ - buy-a-book
+ - books-bought
+ sources:
+ - kind: ServiceAccount
+ name: bookbuyer
+ namespace: bookbuyer
+---
+apiVersion: specs.smi-spec.io/v1alpha4
+kind: HTTPRouteGroup
+metadata:
+ name: bookstore-service-routes
+ namespace: bookstore
+spec:
+ matches:
+ - name: books-bought
+ pathRegex: /books-bought
+ methods:
+ - GET
+ headers:
+ - "user-agent": ".*-http-client/*.*"
+ - "client-app": "bookbuyer"
+ - name: buy-a-book
+ pathRegex: ".*a-book.*new"
+ methods:
+ - GET
+ - name: update-books-bought
+ pathRegex: /update-books-bought
+ methods:
+ - POST
+---
+kind: TrafficTarget
+apiVersion: access.smi-spec.io/v1alpha3
+metadata:
+ name: bookstore-access-bookwarehouse
+ namespace: bookwarehouse
+spec:
+ destination:
+ kind: ServiceAccount
+ name: bookwarehouse
+ namespace: bookwarehouse
+ rules:
+ - kind: HTTPRouteGroup
+ name: bookwarehouse-service-routes
+ matches:
+ - restock-books
+ sources:
+ - kind: ServiceAccount
+ name: bookstore
+ namespace: bookstore
+ - kind: ServiceAccount
+ name: bookstore-v2
+ namespace: bookstore
+---
+apiVersion: specs.smi-spec.io/v1alpha4
+kind: HTTPRouteGroup
+metadata:
+ name: bookwarehouse-service-routes
+ namespace: bookwarehouse
+spec:
+ matches:
+ - name: restock-books
+ methods:
+ - POST
+ headers:
+ - host: bookwarehouse.bookwarehouse
+EOF
+```
+
+You should see output similar to the following.
+
+```Output
+traffictarget.access.smi-spec.io/bookbuyer-access-bookstore-v1 created
+httproutegroup.specs.smi-spec.io/bookstore-service-routes created
+traffictarget.access.smi-spec.io/bookstore-access-bookwarehouse created
+httproutegroup.specs.smi-spec.io/bookwarehouse-service-routes created
+```
+
+You can now set up a port forwarding session to either the `bookbuyer` or `bookstore` pods and see that both the books bought and books sold metrics are incrementing again. You can also do the same for the `bookthief` pod to verify that it is still unable to steal books.
+
+## Apply Service Mesh Interface (SMI) traffic split policies
+
+For our final demonstration, we will create an [SMI](https://smi-spec.io/) traffic split policy to configure the weight of communications from one service to multiple services as a backend. The traffic split functionality allows you to progressively move connections from one service over to another by weighting the traffic on a scale of 0 to 100.
+
+The below graphic is a diagram of the [SMI](https://smi-spec.io/) Traffic Split policy to be deployed. We will deploy another `Bookstore` application as version 2 and then split the incoming traffic from the `bookbuyer`, weighting 25% of the traffic to the `bookstore` v1 service and 75% to the `bookstore` v2 service.
+
+![OSM bookbuyer traffic split diagram](./media/aks-osm-addon/osm-bookbuyer-traffic-split-diagram.png)
+
+Deploy the `bookstore` v2 service.
+
+```azurecli-interactive
+kubectl apply -f - <<EOF
+
+apiVersion: v1
+kind: Service
+metadata:
+ name: bookstore-v2
+ namespace: bookstore
+ labels:
+ app: bookstore-v2
+spec:
+ ports:
+ - port: 14001
+ name: bookstore-port
+ selector:
+ app: bookstore-v2
+---
+# Deploy bookstore-v2 Service Account
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: bookstore-v2
+ namespace: bookstore
+---
+# Deploy bookstore-v2 Deployment
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: bookstore-v2
+ namespace: bookstore
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: bookstore-v2
+ template:
+ metadata:
+ labels:
+ app: bookstore-v2
+ spec:
+ serviceAccountName: bookstore-v2
+ containers:
+ - name: bookstore
+ image: openservicemesh/bookstore:v0.8.0
+ imagePullPolicy: Always
+ ports:
+ - containerPort: 14001
+ name: web
+ command: ["/bookstore"]
+ args: ["--path", "./", "--port", "14001"]
+ env:
+ - name: BOOKWAREHOUSE_NAMESPACE
+ value: bookwarehouse
+ - name: IDENTITY
+ value: bookstore-v2
+---
+kind: TrafficTarget
+apiVersion: access.smi-spec.io/v1alpha3
+metadata:
+ name: bookbuyer-access-bookstore-v2
+ namespace: bookstore
+spec:
+ destination:
+ kind: ServiceAccount
+ name: bookstore-v2
+ namespace: bookstore
+ rules:
+ - kind: HTTPRouteGroup
+ name: bookstore-service-routes
+ matches:
+ - buy-a-book
+ - books-bought
+ sources:
+ - kind: ServiceAccount
+ name: bookbuyer
+ namespace: bookbuyer
+EOF
+```
+
+You should see the following output.
+
+```Output
+service/bookstore-v2 configured
+serviceaccount/bookstore-v2 created
+deployment.apps/bookstore-v2 created
+traffictarget.access.smi-spec.io/bookstore-v2 created
+```
+
+Now deploy the traffic split policy to split the `bookbuyer` traffic between the `bookstore` v1 and v2 services.
+
+```azurecli-interactive
+kubectl apply -f - <<EOF
+apiVersion: split.smi-spec.io/v1alpha2
+kind: TrafficSplit
+metadata:
+ name: bookstore-split
+ namespace: bookstore
+spec:
+ service: bookstore.bookstore
+ backends:
+ - service: bookstore
+ weight: 25
+ - service: bookstore-v2
+ weight: 75
+EOF
+```
+
+You should see the following output.
+
+```Output
+trafficsplit.split.smi-spec.io/bookstore-split created
+```
+
+Set up a port forward tunnel to the `bookbuyer` pod, and you should now see books being purchased from the `bookstore` v2 service as well. If you continue to watch the purchase count, you should notice it incrementing faster through the `bookstore` v2 service.
+
+![OSM bookbuyer books bought UI](./media/aks-osm-addon/osm-bookbuyer-traffic-split-ui.png)
aks Open Service Mesh Disable Add On https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/open-service-mesh-disable-add-on.md
+
+ Title: Disable OSM
+description: Disable Open Service Mesh
+ Last updated: 8/26/2021
+# Disable Open Service Mesh (OSM) add-on for your AKS cluster
+
+To disable the OSM add-on, run the following command:
+
+```azurecli-interactive
+az aks disable-addons -n <AKS-cluster-name> -g <AKS-resource-group-name> -a open-service-mesh
+```
aks Open Service Mesh Nginx Ingress https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/open-service-mesh-nginx-ingress.md
+
+ Title: Using NGINX Ingress
+description: How to use NGINX Ingress with Open Service Mesh
+ Last updated: 8/26/2021
+# Deploy an application managed by Open Service Mesh (OSM) with NGINX ingress
+
+Open Service Mesh (OSM) is a lightweight, extensible, Cloud Native service mesh, allowing users to uniformly manage, secure, and get out-of-the-box observability features for highly dynamic microservice environments.
+
+In this tutorial, you will:
+
+> [!div class="checklist"]
+>
+> - View the current OSM cluster configuration
+> - Create the namespace(s) for OSM to manage deployed applications in the namespace(s)
+> - Onboard the namespaces to be managed by OSM
+> - Deploy the sample application
+> - Verify the application running inside the AKS cluster
+> - Create an NGINX ingress controller for the application
+> - Expose a service to the internet via the NGINX ingress controller
+
+## Before you begin
+
+The steps detailed in this article assume that you've created an AKS cluster (Kubernetes `1.19+`, with Kubernetes RBAC enabled), have established a `kubectl` connection with the cluster (if you need help with any of these items, see the [AKS quickstart](./kubernetes-walkthrough.md)), and have installed the AKS OSM add-on.
+
+You must have the following resources installed:
+
+- The Azure CLI, version 2.20.0 or later
+- The `aks-preview` extension version 0.5.5 or later
+- OSM version v0.8.0 or later
+- JSON processor "jq" version 1.6+
+
+### View and verify the current OSM cluster configuration
+
+Once the OSM add-on for AKS has been enabled on the AKS cluster, you can view the current configuration parameters in the `osm-mesh-config` resource. Run the following command to view the properties:
+
+```azurecli-interactive
+kubectl get meshconfig osm-mesh-config -n kube-system -o yaml
+```
+
+Output shows the current OSM configuration for the cluster.
+
+```
+apiVersion: config.openservicemesh.io/v1alpha1
+kind: MeshConfig
+metadata:
+ creationTimestamp: "0000-00-00A00:00:00A"
+ generation: 1
+ name: osm-mesh-config
+ namespace: kube-system
+ resourceVersion: "2494"
+ uid: 6c4d67f3-c241-4aeb-bf4f-b029b08faa31
+spec:
+ certificate:
+ serviceCertValidityDuration: 24h
+ featureFlags:
+ enableEgressPolicy: true
+ enableMulticlusterMode: false
+ enableWASMStats: true
+ observability:
+ enableDebugServer: true
+ osmLogLevel: info
+ tracing:
+ address: jaeger.osm-system.svc.cluster.local
+ enable: false
+ endpoint: /api/v2/spans
+ port: 9411
+ sidecar:
+ configResyncInterval: 0s
+ enablePrivilegedInitContainer: false
+ envoyImage: mcr.microsoft.com/oss/envoyproxy/envoy:v1.18.3
+ initContainerImage: mcr.microsoft.com/oss/openservicemesh/init:v0.9.1
+ logLevel: error
+ maxDataPlaneConnections: 0
+ resources: {}
+ traffic:
+ enableEgress: true
+ enablePermissiveTrafficPolicyMode: true
+ inboundExternalAuthorization:
+ enable: false
+ failureModeAllow: false
+ statPrefix: inboundExtAuthz
+ timeout: 1s
+ useHTTPSIngress: false
+```
+
+Notice that **enablePermissiveTrafficPolicyMode** is configured to **true**. Permissive traffic policy mode in OSM is a mode where [SMI](https://smi-spec.io/) traffic policy enforcement is bypassed. In this mode, OSM automatically discovers services that are a part of the service mesh and programs traffic policy rules on each Envoy proxy sidecar so that it can communicate with these services. For more detailed information about permissive traffic mode, please visit and read the [Permissive Traffic Policy Mode](https://docs.openservicemesh.io/docs/guides/traffic_management/permissive_mode/) article.
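+
+If you only want to check this one setting, a quick query against the field shown in the output above (a sketch using kubectl's JSONPath support) might look like:
+
+```azurecli-interactive
+kubectl get meshconfig osm-mesh-config -n kube-system -o jsonpath='{.spec.traffic.enablePermissiveTrafficPolicyMode}'
+```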
+
+## Create namespaces for the application
+
+In this tutorial we will be using the OSM `bookstore` application that has the following application components:
+
+- `bookbuyer`
+- `bookthief`
+- `bookstore`
+- `bookwarehouse`
+
+Create namespaces for each of these application components.
+
+```azurecli-interactive
+for i in bookstore bookbuyer bookthief bookwarehouse; do kubectl create ns $i; done
+```
+
+You should see the following output:
+
+```Output
+namespace/bookstore created
+namespace/bookbuyer created
+namespace/bookthief created
+namespace/bookwarehouse created
+```
+
+## Onboard the namespaces to be managed by OSM
+
+Adding the namespaces to the OSM mesh allows the OSM controller to automatically inject the Envoy sidecar proxy containers alongside your application. Run the following command to onboard the OSM `bookstore` application namespaces.
+
+```azurecli-interactive
+osm namespace add bookstore bookbuyer bookthief bookwarehouse
+```
+
+You should see the following output:
+
+```Output
+Namespace [bookstore] successfully added to mesh [osm]
+Namespace [bookbuyer] successfully added to mesh [osm]
+Namespace [bookthief] successfully added to mesh [osm]
+Namespace [bookwarehouse] successfully added to mesh [osm]
+```
+
+## Deploy the Bookstore application to the AKS cluster
+
+```azurecli-interactive
+kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v0.8/docs/example/manifests/apps/bookbuyer.yaml
+```
+
+```azurecli-interactive
+kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v0.8/docs/example/manifests/apps/bookthief.yaml
+```
+
+```azurecli-interactive
+kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v0.8/docs/example/manifests/apps/bookstore.yaml
+```
+
+```azurecli-interactive
+kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v0.8/docs/example/manifests/apps/bookwarehouse.yaml
+```
+
+All of the deployment outputs are summarized below.
+
+```Output
+serviceaccount/bookbuyer created
+service/bookbuyer created
+deployment.apps/bookbuyer created
+
+serviceaccount/bookthief created
+service/bookthief created
+deployment.apps/bookthief created
+
+service/bookstore created
+serviceaccount/bookstore created
+deployment.apps/bookstore created
+
+serviceaccount/bookwarehouse created
+service/bookwarehouse created
+deployment.apps/bookwarehouse created
+```
+
+## Update the Bookbuyer Service
+
+Update the `bookbuyer` service to the correct inbound port configuration with the following service manifest.
+
+```azurecli-interactive
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Service
+metadata:
+ name: bookbuyer
+ namespace: bookbuyer
+ labels:
+ app: bookbuyer
+spec:
+ ports:
+ - port: 14001
+ name: inbound-port
+ selector:
+ app: bookbuyer
+EOF
+```
+
+## Verify the Bookstore application running inside the AKS cluster
+
+As of now, we have deployed the `bookstore` multi-container application, but it is only accessible from within the AKS cluster. Later we will add the NGINX ingress controller to expose the application outside the AKS cluster. To verify that the application is running inside the cluster, we will use a port forward to view the `bookbuyer` component UI.
+
+First, let's get the `bookbuyer` pod's name:
+
+```azurecli-interactive
+kubectl get pod -n bookbuyer
+```
+
+You should see output similar to the following. Your `bookbuyer` pod will have a unique name appended.
+
+```Output
+NAME READY STATUS RESTARTS AGE
+bookbuyer-7676c7fcfb-mtnrz 2/2 Running 0 7m8s
+```
+
+Once we have the pod's name, we can use the port-forward command to set up a tunnel from our local system to the application inside the AKS cluster. Run the following command to set up the port forward for local system port 8080. Again, use your own bookbuyer pod name.
+
+```azurecli-interactive
+kubectl port-forward bookbuyer-7676c7fcfb-mtnrz -n bookbuyer 8080:14001
+```
+
+You should see output similar to the following:
+
+```Output
+Forwarding from 127.0.0.1:8080 -> 14001
+Forwarding from [::1]:8080 -> 14001
+```
+
+While the port forwarding session is in place, navigate to `http://localhost:8080` from a browser. You should now be able to see the `bookbuyer` application UI in the browser, similar to the image below.
+
+![OSM bookbuyer app for NGINX UI image](./media/aks-osm-addon/osm-agic-bookbuyer-img.png)
+
+## Create an NGINX ingress controller in Azure Kubernetes Service (AKS)
+
+An ingress controller is a piece of software that provides reverse proxy, configurable traffic routing, and TLS termination for Kubernetes services. Kubernetes ingress resources are used to configure the ingress rules and routes for individual Kubernetes services. Using an ingress controller and ingress rules, a single IP address can be used to route traffic to multiple services in a Kubernetes cluster.
+
+We will utilize the ingress controller to expose the application managed by OSM to the internet. To create the ingress controller, use Helm to install nginx-ingress. For added redundancy, you can deploy multiple replicas of the NGINX ingress controller by using the `--set controller.replicaCount` parameter (the example below deploys a single replica). To fully benefit from running replicas of the ingress controller, make sure there's more than one node in your AKS cluster.
+
+The ingress controller will be scheduled on a Linux node. Windows Server nodes shouldn't run the ingress controller. A node selector is specified using the `--set nodeSelector` parameter to tell the Kubernetes scheduler to run the NGINX ingress controller on a Linux-based node.
+
+> [!TIP]
+> The following example creates a Kubernetes namespace for the ingress resources named _ingress-basic_. Specify a namespace for your own environment as needed.
+
+```azurecli-interactive
+# Create a namespace for your ingress resources
+kubectl create namespace ingress-basic
+
+# Add the ingress-nginx repository
+helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
+
+# Update the helm repo(s)
+helm repo update
+
+# Use Helm to deploy an NGINX ingress controller in the ingress-basic namespace
+helm install nginx-ingress ingress-nginx/ingress-nginx \
+ --namespace ingress-basic \
+ --set controller.replicaCount=1 \
+ --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux
+```
+
+A Kubernetes load balancer service is created for the NGINX ingress controller. A dynamic public IP address is assigned, as shown in the following example output:
+
+```Output
+$ kubectl --namespace ingress-basic get services -o wide -w nginx-ingress-ingress-nginx-controller
+
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
+nginx-ingress-ingress-nginx-controller LoadBalancer 10.0.74.133 EXTERNAL_IP 80:32486/TCP,443:30953/TCP 44s app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx
+```
+
+No ingress rules have been created yet, so the NGINX ingress controller's default 404 page is displayed if you browse to the public IP address. Ingress rules are configured in the following steps.
+
+## Expose the bookbuyer service to the internet
+
+```azurecli-interactive
+kubectl apply -f - <<EOF
+
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+ name: bookbuyer-ingress
+ namespace: bookbuyer
+ annotations:
+ kubernetes.io/ingress.class: nginx
+
+spec:
+
+ rules:
+ - host: bookbuyer.contoso.com
+ http:
+ paths:
+ - path: /
+ backend:
+ serviceName: bookbuyer
+ servicePort: 14001
+
+ backend:
+ serviceName: bookbuyer
+ servicePort: 14001
+EOF
+```
+
+You should see the following output:
+
+```Output
+Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
+ingress.extensions/bookbuyer-ingress created
+```
+
+## View the NGINX logs
+
+```azurecli-interactive
+POD=$(kubectl get pods -n ingress-basic | grep 'nginx-ingress' | awk '{print $1}')
+
+kubectl logs $POD -n ingress-basic -f
+```
+
+The output shows the NGINX ingress controller status when the ingress rule has been applied successfully:
+
+```Output
+I0321 <date> 6 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-basic", Name:"nginx-ingress-ingress-nginx-controller-54cf6c8bf4-jdvrw", UID:"3ebbe5e5-50ef-481d-954d-4b82a499ebe1", APIVersion:"v1", ResourceVersion:"3272", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
+I0321 <date> 6 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"bookbuyer", Name:"bookbuyer-ingress", UID:"e1018efc-8116-493c-9999-294b4566819e", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"5460", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
+I0321 <date> 6 controller.go:146] "Configuration changes detected, backend reload required"
+I0321 <date> 6 controller.go:163] "Backend successfully reloaded"
+I0321 <date> 6 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-basic", Name:"nginx-ingress-ingress-nginx-controller-54cf6c8bf4-jdvrw", UID:"3ebbe5e5-50ef-481d-954d-4b82a499ebe1", APIVersion:"v1", ResourceVersion:"3272", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
+```
+
+## View the NGINX services and bookbuyer service externally
+
+```azurecli-interactive
+kubectl get services -n ingress-basic
+```
+
+```Output
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+nginx-ingress-ingress-nginx-controller LoadBalancer 10.0.100.23 20.193.1.74 80:31742/TCP,443:32683/TCP 4m15s
+nginx-ingress-ingress-nginx-controller-admission ClusterIP 10.0.163.98 <none> 443/TCP 4m15s
+```
+
+Since the host name in the ingress manifest is a pseudo name used for testing, the DNS name will not be available on the internet. We can alternatively use the curl program, pass the host name header to the NGINX public IP address, and receive a 200 code successfully connecting us to the `bookbuyer` service.
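+
+For example, one scripted way to do this is to capture the controller's external IP into a shell variable before calling curl (a sketch; the service and namespace names match the earlier output, and the JSONPath query assumes a standard load balancer assignment):
+
+```azurecli-interactive
+# Grab the external IP assigned to the NGINX ingress controller service
+EXTERNAL_IP=$(kubectl get service nginx-ingress-ingress-nginx-controller -n ingress-basic -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+curl -H 'Host: bookbuyer.contoso.com' http://$EXTERNAL_IP/
+```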
+
+```azurecli-interactive
+curl -H 'Host: bookbuyer.contoso.com' http://EXTERNAL-IP/
+```
+
+You should see the following output:
+
+```Output
+<!doctype html>
+<html itemscope="" itemtype="http://schema.org/WebPage" lang="en">
+ <head>
+ <meta content="Bookbuyer" name="description">
+ <meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
+ <title>Bookbuyer</title>
+ <style>
+ #navbar {
+ width: 100%;
+ height: 50px;
+ display: table;
+ border-spacing: 0;
+ white-space: nowrap;
+ line-height: normal;
+ background-color: #0078D4;
+ background-position: left top;
+ background-repeat-x: repeat;
+ background-image: none;
+ color: white;
+ font: 2.2em "Fira Sans", sans-serif;
+ }
+ #main {
+ padding: 10pt 10pt 10pt 10pt;
+ font: 1.8em "Fira Sans", sans-serif;
+ }
+ li {
+ padding: 10pt 10pt 10pt 10pt;
+ font: 1.2em "Consolas", sans-serif;
+ }
+ </style>
+ <script>
+ setTimeout(function(){window.location.reload(1);}, 1500);
+ </script>
+ </head>
+ <body bgcolor="#fff">
+ <div id="navbar">
+ &#128214; Bookbuyer
+ </div>
+ <div id="main">
+ <ul>
+ <li>Total books bought: <strong>1833</strong>
+ <ul>
+ <li>from bookstore V1: <strong>277</strong>
+ <li>from bookstore V2: <strong>1556</strong>
+ </ul>
+ </li>
+ </ul>
+ </div>
+
+ <br/><br/><br/><br/>
+ <br/><br/><br/><br/>
+ <br/><br/><br/><br/>
+
+ Current Time: <strong>Fri, 26 Mar 2021 15:02:53 UTC</strong>
+ </body>
+</html>
+```
aks Open Service Mesh Open Source Observability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/open-service-mesh-open-source-observability.md
+
+ Title: OSM OSS Observability
+description: How to configure open-source observability for Open Service Mesh
+ Last updated: 8/26/2021
+# Manually deploy Prometheus, Grafana, and Jaeger to view Open Service Mesh (OSM) metrics for observability
+
+> [!WARNING]
+> The installations of Prometheus, Grafana, and Jaeger are provided as general guidance to show how these tools can be utilized to view OSM metric data. The installation guidance is not to be utilized for a production setup. Please refer to each tool's documentation on how best to suit their installations to your needs. Most notable will be the lack of persistent storage, meaning that all data is lost once a Prometheus, Grafana, and/or Jaeger pod is terminated.
+
+Open Service Mesh (OSM) generates detailed metrics related to all traffic within the mesh. These metrics provide insights into the behavior of applications in the mesh, helping users to troubleshoot, maintain, and analyze their applications.
+
+As of today OSM collects metrics directly from the sidecar proxies (Envoy). OSM provides rich metrics for incoming and outgoing traffic for all services in the mesh. With these metrics, the user can get information about the overall volume of traffic, errors within traffic and the response time for requests.
+
+OSM uses Prometheus to gather and store consistent traffic metrics and statistics for all applications running in the mesh. Prometheus is an open-source monitoring and alerting toolkit, which is commonly used on (but not limited to) Kubernetes and Service Mesh environments.
+
+Each application that is part of the mesh runs in a Pod that contains an Envoy sidecar that exposes metrics (proxy metrics) in the Prometheus format. Furthermore, every Pod that is a part of the mesh has Prometheus annotations, which makes it possible for the Prometheus server to scrape the application dynamically. This mechanism automatically enables scraping of metrics whenever a new namespace/pod/service is added to the mesh.
+
+OSM metrics can be viewed with Grafana, which is an open-source visualization and analytics software. It allows you to query, visualize, alert on, and explore your metrics.
+
+In this tutorial, you will:
+
+> [!div class="checklist"]
+>
+> - Create and deploy a Prometheus instance
+> - Configure OSM to allow Prometheus scraping
+> - Update the Prometheus `Configmap`
+> - Create and deploy a Grafana instance
+> - Configure Grafana with the Prometheus datasource
+> - Import OSM dashboard for Grafana
+> - Create and deploy a Jaeger instance
+> - Configure Jaeger tracing for OSM
+
+## Before you begin
+
+You must have the following resources installed:
+
+- The Azure CLI, version 2.20.0 or later
+- The `aks-preview` extension version 0.5.5 or later
+- OSM version v0.8.0 or later
+- JSON processor "jq" version 1.6+
+
+## Deploy and configure a Prometheus instance for OSM
+
+We will use Helm to deploy the Prometheus instance. Run the following commands to install Prometheus via Helm:
+
+```azurecli-interactive
+helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
+helm repo update
+helm install stable prometheus-community/prometheus
+```
+
+You should see output similar to the following if the installation was successful. Make note of the Prometheus server port and cluster DNS name. This information will be used later to configure Prometheus as a data source for Grafana.
+
+```Output
+NAME: stable
+LAST DEPLOYED: Fri Mar 26 13:34:51 2021
+NAMESPACE: default
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+NOTES:
+The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:
+stable-prometheus-server.default.svc.cluster.local
++
+Get the Prometheus server URL by running these commands in the same shell:
+ export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
+ kubectl --namespace default port-forward $POD_NAME 9090
++
+The Prometheus alertmanager can be accessed via port 80 on the following DNS name from within your cluster:
+stable-prometheus-alertmanager.default.svc.cluster.local
++
+Get the Alertmanager URL by running these commands in the same shell:
+ export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}")
+ kubectl --namespace default port-forward $POD_NAME 9093
+#################################################################################
+###### WARNING: Pod Security Policy has been moved to a global property. #####
+###### use .Values.podSecurityPolicy.enabled with pod-based #####
+###### annotations #####
+###### (e.g. .Values.nodeExporter.podSecurityPolicy.annotations) #####
+#################################################################################
++
+The Prometheus PushGateway can be accessed via port 9091 on the following DNS name from within your cluster:
+stable-prometheus-pushgateway.default.svc.cluster.local
++
+Get the PushGateway URL by running these commands in the same shell:
+ export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=pushgateway" -o jsonpath="{.items[0].metadata.name}")
+ kubectl --namespace default port-forward $POD_NAME 9091
+
+For more information on running Prometheus, visit:
+https://prometheus.io/
+```
+
+### Configure OSM to allow Prometheus scraping
+
+To ensure that the OSM components are configured for Prometheus scraping, we'll want to check the **prometheus_scraping** setting in the osm-config ConfigMap. View the configuration with the following command:
+
+```azurecli-interactive
+kubectl get configmap -n kube-system osm-config -o json | jq '.data.prometheus_scraping'
+```
+
+The output of the previous command should return `true` if OSM is configured for Prometheus scraping. If the returned value is `false`, we will need to update the configuration to be `true`. Run the following command to turn **on** OSM Prometheus scraping:
+
+```azurecli-interactive
+kubectl patch configmap -n kube-system osm-config --type merge --patch '{"data":{"prometheus_scraping":"true"}}'
+```
+
+You should see the following output.
+
+```Output
+configmap/osm-config patched
+```
+
+### Update the Prometheus Configmap
+
+The default installation of Prometheus will contain two Kubernetes `configmaps`. You can view the list of Prometheus `configmaps` with the following command.
+
+```azurecli-interactive
+kubectl get configmap | grep prometheus
+```
+
+```Output
+stable-prometheus-alertmanager 1 4h34m
+stable-prometheus-server 5 4h34m
+```
+
+We will need to replace the prometheus.yml configuration located in the **stable-prometheus-server** `configmap` with the following OSM configuration. There are several file-editing techniques to accomplish this task. A simple and safe way is to export the `configmap`, create a copy of it for backup, and then edit it with an editor such as Visual Studio Code.
+
+> [!NOTE]
+> If you do not have Visual Studio Code installed, you can download and install it [here](https://code.visualstudio.com/Download).
+
+First, export the **stable-prometheus-server** configmap and then make a copy for backup.
+
+```azurecli-interactive
+kubectl get configmap stable-prometheus-server -o yaml > cm-stable-prometheus-server.yml
+cp cm-stable-prometheus-server.yml cm-stable-prometheus-server.yml.copy
+```
+
+Next, open the file in Visual Studio Code to edit it.
+
+```azurecli-interactive
+code cm-stable-prometheus-server.yml
+```
+
+Once you have the `configmap` opened in the Visual Studio Code editor, replace the prometheus.yml file with the OSM configuration below and save the file.
+
+> [!WARNING]
+> It is extremely important that you keep the indentation structure of the YAML file. Any changes to the YAML file structure could prevent the configmap from being re-applied.
+
+```OSM Prometheus Configmap Configuration
+prometheus.yml: |
+ global:
+ scrape_interval: 10s
+ scrape_timeout: 10s
+ evaluation_interval: 1m
+
+ scrape_configs:
+ - job_name: 'kubernetes-apiservers'
+ kubernetes_sd_configs:
+ - role: endpoints
+ scheme: https
+ tls_config:
+ ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
+ # TODO need to remove this when the CA and SAN match
+ insecure_skip_verify: true
+ bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
+ metric_relabel_configs:
+ - source_labels: [__name__]
+ regex: '(apiserver_watch_events_total|apiserver_admission_webhook_rejection_count)'
+ action: keep
+ relabel_configs:
+ - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
+ action: keep
+ regex: default;kubernetes;https
+
+ - job_name: 'kubernetes-nodes'
+ scheme: https
+ tls_config:
+ ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
+ bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
+ kubernetes_sd_configs:
+ - role: node
+ relabel_configs:
+ - action: labelmap
+ regex: __meta_kubernetes_node_label_(.+)
+ - target_label: __address__
+ replacement: kubernetes.default.svc:443
+ - source_labels: [__meta_kubernetes_node_name]
+ regex: (.+)
+ target_label: __metrics_path__
+ replacement: /api/v1/nodes/${1}/proxy/metrics
+
+ - job_name: 'kubernetes-pods'
+ kubernetes_sd_configs:
+ - role: pod
+ metric_relabel_configs:
+ - source_labels: [__name__]
+ regex: '(envoy_server_live|envoy_cluster_upstream_rq_xx|envoy_cluster_upstream_cx_active|envoy_cluster_upstream_cx_tx_bytes_total|envoy_cluster_upstream_cx_rx_bytes_total|envoy_cluster_upstream_cx_destroy_remote_with_active_rq|envoy_cluster_upstream_cx_connect_timeout|envoy_cluster_upstream_cx_destroy_local_with_active_rq|envoy_cluster_upstream_rq_pending_failure_eject|envoy_cluster_upstream_rq_pending_overflow|envoy_cluster_upstream_rq_timeout|envoy_cluster_upstream_rq_rx_reset|^osm.*)'
+ action: keep
+ relabel_configs:
+ - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
+ action: keep
+ regex: true
+ - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
+ action: replace
+ target_label: __metrics_path__
+ regex: (.+)
+ - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
+ action: replace
+ regex: ([^:]+)(?::\d+)?;(\d+)
+ replacement: $1:$2
+ target_label: __address__
+ - source_labels: [__meta_kubernetes_namespace]
+ action: replace
+ target_label: source_namespace
+ - source_labels: [__meta_kubernetes_pod_name]
+ action: replace
+ target_label: source_pod_name
+ - regex: '(__meta_kubernetes_pod_label_app)'
+ action: labelmap
+ replacement: source_service
+ - regex: '(__meta_kubernetes_pod_label_osm_envoy_uid|__meta_kubernetes_pod_label_pod_template_hash|__meta_kubernetes_pod_label_version)'
+ action: drop
+ # for non-ReplicaSets (DaemonSet, StatefulSet)
+ # __meta_kubernetes_pod_controller_kind=DaemonSet
+ # __meta_kubernetes_pod_controller_name=foo
+ # =>
+ # workload_kind=DaemonSet
+ # workload_name=foo
+ - source_labels: [__meta_kubernetes_pod_controller_kind]
+ action: replace
+ target_label: source_workload_kind
+ - source_labels: [__meta_kubernetes_pod_controller_name]
+ action: replace
+ target_label: source_workload_name
+ # for ReplicaSets
+ # __meta_kubernetes_pod_controller_kind=ReplicaSet
+ # __meta_kubernetes_pod_controller_name=foo-bar-123
+ # =>
+ # workload_kind=Deployment
+ # workload_name=foo-bar
+      # deployment=foo
+ - source_labels: [__meta_kubernetes_pod_controller_kind]
+ action: replace
+ regex: ^ReplicaSet$
+ target_label: source_workload_kind
+ replacement: Deployment
+ - source_labels:
+ - __meta_kubernetes_pod_controller_kind
+ - __meta_kubernetes_pod_controller_name
+ action: replace
+ regex: ^ReplicaSet;(.*)-[^-]+$
+ target_label: source_workload_name
+
+ - job_name: 'smi-metrics'
+ kubernetes_sd_configs:
+ - role: pod
+ relabel_configs:
+ - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
+ action: keep
+ regex: true
+ - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
+ action: replace
+ target_label: __metrics_path__
+ regex: (.+)
+ - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
+ action: replace
+ regex: ([^:]+)(?::\d+)?;(\d+)
+ replacement: $1:$2
+ target_label: __address__
+ metric_relabel_configs:
+ - source_labels: [__name__]
+ regex: 'envoy_.*osm_request_(total|duration_ms_(bucket|count|sum))'
+ action: keep
+ - source_labels: [__name__]
+ action: replace
+ regex: envoy_response_code_(\d{3})_source_namespace_.*_source_kind_.*_source_name_.*_source_pod_.*_destination_namespace_.*_destination_kind_.*_destination_name_.*_destination_pod_.*_osm_request_total
+ target_label: response_code
+ - source_labels: [__name__]
+ action: replace
+ regex: envoy_response_code_\d{3}_source_namespace_(.*)_source_kind_.*_source_name_.*_source_pod_.*_destination_namespace_.*_destination_kind_.*_destination_name_.*_destination_pod_.*_osm_request_total
+ target_label: source_namespace
+ - source_labels: [__name__]
+ action: replace
+ regex: envoy_response_code_\d{3}_source_namespace_.*_source_kind_(.*)_source_name_.*_source_pod_.*_destination_namespace_.*_destination_kind_.*_destination_name_.*_destination_pod_.*_osm_request_total
+ target_label: source_kind
+ - source_labels: [__name__]
+ action: replace
+ regex: envoy_response_code_\d{3}_source_namespace_.*_source_kind_.*_source_name_(.*)_source_pod_.*_destination_namespace_.*_destination_kind_.*_destination_name_.*_destination_pod_.*_osm_request_total
+ target_label: source_name
+ - source_labels: [__name__]
+ action: replace
+ regex: envoy_response_code_\d{3}_source_namespace_.*_source_kind_.*_source_name_.*_source_pod_(.*)_destination_namespace_.*_destination_kind_.*_destination_name_.*_destination_pod_.*_osm_request_total
+ target_label: source_pod
+ - source_labels: [__name__]
+ action: replace
+ regex: envoy_response_code_\d{3}_source_namespace_.*_source_kind_.*_source_name_.*_source_pod_.*_destination_namespace_(.*)_destination_kind_.*_destination_name_.*_destination_pod_.*_osm_request_total
+ target_label: destination_namespace
+ - source_labels: [__name__]
+ action: replace
+ regex: envoy_response_code_\d{3}_source_namespace_.*_source_kind_.*_source_name_.*_source_pod_.*_destination_namespace_.*_destination_kind_(.*)_destination_name_.*_destination_pod_.*_osm_request_total
+ target_label: destination_kind
+ - source_labels: [__name__]
+ action: replace
+ regex: envoy_response_code_\d{3}_source_namespace_.*_source_kind_.*_source_name_.*_source_pod_.*_destination_namespace_.*_destination_kind_.*_destination_name_(.*)_destination_pod_.*_osm_request_total
+ target_label: destination_name
+ - source_labels: [__name__]
+ action: replace
+ regex: envoy_response_code_\d{3}_source_namespace_.*_source_kind_.*_source_name_.*_source_pod_.*_destination_namespace_.*_destination_kind_.*_destination_name_.*_destination_pod_(.*)_osm_request_total
+ target_label: destination_pod
+ - source_labels: [__name__]
+ action: replace
+ regex: .*(osm_request_total)
+ target_label: __name__
+
+ - source_labels: [__name__]
+ action: replace
+ regex: envoy_source_namespace_(.*)_source_kind_.*_source_name_.*_source_pod_.*_destination_namespace_.*_destination_kind_.*_destination_name_.*_destination_pod_.*_osm_request_duration_ms_(bucket|sum|count)
+ target_label: source_namespace
+ - source_labels: [__name__]
+ action: replace
+ regex: envoy_source_namespace_.*_source_kind_(.*)_source_name_.*_source_pod_.*_destination_namespace_.*_destination_kind_.*_destination_name_.*_destination_pod_.*_osm_request_duration_ms_(bucket|sum|count)
+ target_label: source_kind
+ - source_labels: [__name__]
+ action: replace
+ regex: envoy_source_namespace_.*_source_kind_.*_source_name_(.*)_source_pod_.*_destination_namespace_.*_destination_kind_.*_destination_name_.*_destination_pod_.*_osm_request_duration_ms_(bucket|sum|count)
+ target_label: source_name
+ - source_labels: [__name__]
+ action: replace
+ regex: envoy_source_namespace_.*_source_kind_.*_source_name_.*_source_pod_(.*)_destination_namespace_.*_destination_kind_.*_destination_name_.*_destination_pod_.*_osm_request_duration_ms_(bucket|sum|count)
+ target_label: source_pod
+ - source_labels: [__name__]
+ action: replace
+ regex: envoy_source_namespace_.*_source_kind_.*_source_name_.*_source_pod_.*_destination_namespace_(.*)_destination_kind_.*_destination_name_.*_destination_pod_.*_osm_request_duration_ms_(bucket|sum|count)
+ target_label: destination_namespace
+ - source_labels: [__name__]
+ action: replace
+ regex: envoy_source_namespace_.*_source_kind_.*_source_name_.*_source_pod_.*_destination_namespace_.*_destination_kind_(.*)_destination_name_.*_destination_pod_.*_osm_request_duration_ms_(bucket|sum|count)
+ target_label: destination_kind
+ - source_labels: [__name__]
+ action: replace
+ regex: envoy_source_namespace_.*_source_kind_.*_source_name_.*_source_pod_.*_destination_namespace_.*_destination_kind_.*_destination_name_(.*)_destination_pod_.*_osm_request_duration_ms_(bucket|sum|count)
+ target_label: destination_name
+ - source_labels: [__name__]
+ action: replace
+ regex: envoy_source_namespace_.*_source_kind_.*_source_name_.*_source_pod_.*_destination_namespace_.*_destination_kind_.*_destination_name_.*_destination_pod_(.*)_osm_request_duration_ms_(bucket|sum|count)
+ target_label: destination_pod
+ - source_labels: [__name__]
+ action: replace
+ regex: .*(osm_request_duration_ms_(bucket|sum|count))
+ target_label: __name__
+
+ - job_name: 'kubernetes-cadvisor'
+ scheme: https
+ tls_config:
+ ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
+ bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
+ kubernetes_sd_configs:
+ - role: node
+ metric_relabel_configs:
+ - source_labels: [__name__]
+ regex: '(container_cpu_usage_seconds_total|container_memory_rss)'
+ action: keep
+ relabel_configs:
+ - action: labelmap
+ regex: __meta_kubernetes_node_label_(.+)
+ - target_label: __address__
+ replacement: kubernetes.default.svc:443
+ - source_labels: [__meta_kubernetes_node_name]
+ regex: (.+)
+ target_label: __metrics_path__
+ replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
+```
+
+Apply the updated `configmap` yaml file with the following command.
+
+```azurecli-interactive
+kubectl apply -f cm-stable-prometheus-server.yml
+```
+
+```Output
+configmap/stable-prometheus-server configured
+```
+
+> [!NOTE]
+> You may receive a message about a missing Kubernetes annotation. This can be ignored for now.
+
+### Verify Prometheus is configured to scrape the OSM mesh and API endpoints
+
+To verify that Prometheus is correctly configured to scrape the OSM mesh and API endpoints, we will port forward to the Prometheus pod and view the target configuration. Run the following commands.
+
+```azurecli-interactive
+PROM_POD_NAME=$(kubectl get pods -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
+kubectl --namespace <promNamespace> port-forward $PROM_POD_NAME 9090
+```
+
+Open a browser to `http://localhost:9090/targets`.
+
+If you scroll down, you should see the state of all the SMI metric endpoints as **UP**, as well as the other OSM metrics defined, as pictured below.
+
+![OSM Prometheus Target Metrics UI image](./media/aks-osm-addon/osm-prometheus-smi-metrics-target-scrape.png)
+
+## Deploy and configure a Grafana Instance for OSM
+
+We will use Helm to deploy the Grafana instance. Run the following commands to install Grafana via Helm:
+
+```azurecli-interactive
+helm repo add grafana https://grafana.github.io/helm-charts
+helm repo update
+helm install osm-grafana grafana/grafana
+```
+
+Next we'll retrieve the default Grafana password to log into the Grafana site.
+
+```azurecli-interactive
+kubectl get secret --namespace default osm-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
+```
+
+Make note of the Grafana password.
+
+Next, we will retrieve the Grafana pod name so that we can port forward to the Grafana dashboard and log in.
+
+```azurecli-interactive
+GRAF_POD_NAME=$(kubectl get pods -l "app.kubernetes.io/name=grafana" -o jsonpath="{.items[0].metadata.name}")
+kubectl port-forward $GRAF_POD_NAME 3000
+```
+
+Open a browser to `http://localhost:3000`.
+
+At the login screen pictured below, enter **admin** as the username and use the Grafana password captured earlier.
+
+![OSM Grafana Login Page UI image](./media/aks-osm-addon/osm-grafana-ui-login.png)
+
+### Configure the Grafana Prometheus data source
+
+Once you have successfully logged into Grafana, the next step is to add Prometheus as a data source for Grafana. To do so, navigate to the configuration icon on the left menu and select Data Sources as shown below.
+
+![OSM Grafana Datasources Page UI image](./media/aks-osm-addon/osm-grafana-ui-datasources.png)
+
+Click the **Add data source** button and select Prometheus under time series databases.
+
+![OSM Grafana Datasources Selection Page UI image](./media/aks-osm-addon/osm-grafana-ui-datasources-select-prometheus.png)
+
+On the **Configure your Prometheus data source below** page, enter the Kubernetes cluster FQDN for the Prometheus service for the HTTP URL setting. The default FQDN should be `stable-prometheus-server.default.svc.cluster.local`. Once you have entered that Prometheus service endpoint, scroll to the bottom of the page and select **Save & Test**. You should receive a green checkbox indicating the data source is working.
+
+### Importing OSM Dashboards
+
+OSM dashboards are available through both:
+
+- [Our repository](https://github.com/grafana/grafana), importable as JSON blobs through the web admin portal
+- [Online at Grafana.com](https://grafana.com/grafana/dashboards/14145)
+
+To import a dashboard, look for the `+` sign on the left menu and select `import`.
+You can directly import dashboards by their ID on `Grafana.com`. For example, our `OSM Mesh Details` dashboard uses ID `14145`; you can use the ID directly on the form and select `import`:
+
+![OSM Grafana Dashboard Import Page UI image](./media/aks-osm-addon/osm-grafana-dashboard-import.png)
+
+As soon as you select import, it will bring you automatically to your imported dashboard.
+
+![OSM Grafana Dashboard Mesh Details Page UI image](./media/aks-osm-addon/osm-grafana-mesh-dashboard-details.png)
+
+## Deploy and configure a Jaeger Operator on Kubernetes for OSM
+
+[Jaeger](https://www.jaegertracing.io/) is an open-source tracing system used for monitoring and troubleshooting distributed systems. It can be deployed with OSM as a new instance or you may bring your own instance. The following instructions deploy a new instance of Jaeger to the `jaeger` namespace on the AKS cluster.
+
+### Deploy Jaeger to the AKS cluster
+
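+The manifest below assumes a `jaeger` namespace already exists in the cluster. If it does not, create it first:
+
+```azurecli-interactive
+kubectl create namespace jaeger
+```
+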
+Apply the following manifest to install Jaeger:
+
+```azurecli-interactive
+kubectl apply -f - <<EOF
+
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: jaeger
+ namespace: jaeger
+ labels:
+ app: jaeger
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: jaeger
+ template:
+ metadata:
+ labels:
+ app: jaeger
+ spec:
+ containers:
+ - name: jaeger
+ image: jaegertracing/all-in-one
+ args:
+ - --collector.zipkin.host-port=9411
+ imagePullPolicy: IfNotPresent
+ ports:
+ - containerPort: 9411
+ resources:
+ limits:
+ cpu: 500m
+ memory: 512M
+ requests:
+ cpu: 100m
+ memory: 256M
+---
+kind: Service
+apiVersion: v1
+metadata:
+ name: jaeger
+ namespace: jaeger
+ labels:
+ app: jaeger
+spec:
+ selector:
+ app: jaeger
+ ports:
+ - protocol: TCP
+ # Service port and target port are the same
+ port: 9411
+ type: ClusterIP
+EOF
+```
+
+```Output
+deployment.apps/jaeger created
+service/jaeger created
+```
+
+### Enable Tracing for the OSM add-on
+
+Next, enable tracing for the OSM add-on by running the following command:
+
+```azurecli-interactive
+kubectl patch meshconfig osm-mesh-config -n kube-system -p '{"spec":{"observability":{"tracing":{"enable":true}}}}' --type=merge
+```
+
+```Output
+meshconfig.config.openservicemesh.io/osm-mesh-config patched
+```
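+
+OSM's MeshConfig tracing settings also include `address`, `port`, and `endpoint` fields (shown in the troubleshooting section later in this document), and the default tracing address points at `jaeger.osm-system.svc.cluster.local`. Because the Jaeger instance above was deployed to the `jaeger` namespace, you may also need to point tracing at that service. The following is a hedged sketch using those same fields; verify the field names and defaults against your OSM version:
+
+```azurecli-interactive
+kubectl patch meshconfig osm-mesh-config -n kube-system -p '{"spec":{"observability":{"tracing":{"address":"jaeger.jaeger.svc.cluster.local","port":9411,"endpoint":"/api/v2/spans"}}}}' --type=merge
+```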
+
+### View the Jaeger UI with port forwarding
+
+Jaeger's UI is running on port 16686. To view the web UI, you can use kubectl port-forward:
+
+```azurecli-interactive
+JAEGER_POD=$(kubectl get pods -n jaeger --no-headers --selector app=jaeger | awk 'NR==1{print $1}')
+kubectl port-forward -n jaeger $JAEGER_POD 16686:16686
+```
+
+Then open `http://localhost:16686/` in a browser.
+
+In the browser, you should see a Service dropdown, which allows you to select from the various applications deployed by the bookstore demo. Select a service to view all spans from it. For example, if you select `bookbuyer` with a Lookback of one hour, you can see its interactions with bookstore-v1 and bookstore-v2 sorted by time.
+
+![OSM Jaeger Tracing Page UI image](./media/aks-osm-addon/osm-jaeger-trace-view-ui.png)
+
+Select any item to view it in further detail. Select multiple items to compare traces. For example, you can compare the `bookbuyer`'s interactions with bookstore and bookstore-v2 at a particular moment in time.
+
+You can also select the System Architecture tab to view a graph of how the various applications have been interacting/communicating. This provides an idea of how traffic is flowing between the applications.
+
+![OSM Jaeger System Architecture UI image](./media/aks-osm-addon/osm-jaeger-sys-arc-view-ui.png)
aks Open Service Mesh Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/open-service-mesh-troubleshoot.md
+
+ Title: Troubleshooting Open Service Mesh
+description: How to troubleshoot Open Service Mesh
++ Last updated : 8/26/2021++++
+# Open Service Mesh (OSM) AKS add-on Troubleshooting Guides
+
+When you deploy the OSM AKS add-on, you may experience problems associated with the configuration of the service mesh. The following guide helps you troubleshoot errors and resolve common problems.
++
+## Verifying and Troubleshooting OSM components
+
+### Check OSM Controller Deployment
+
+```azurecli-interactive
+kubectl get deployment -n kube-system --selector app=osm-controller
+```
+
+A healthy OSM Controller would look like this:
+
+```Output
+NAME READY UP-TO-DATE AVAILABLE AGE
+osm-controller 1/1 1 1 59m
+```
+
+### Check the OSM Controller Pod
+
+```azurecli-interactive
+kubectl get pods -n kube-system --selector app=osm-controller
+```
+
+A healthy OSM Pod would look like this:
+
+```Output
+NAME READY STATUS RESTARTS AGE
+osm-controller-b5bd66db-wglzl 0/1 Evicted 0 61m
+osm-controller-b5bd66db-wvl9w 1/1 Running 0 31m
+```
+
+Even though one controller pod was evicted at some point, there is another that is READY 1/1 and Running with 0 restarts. If the READY column shows anything other than 1/1, the service mesh is in a broken state.
+A READY value of 0/1 indicates the control plane container is crashing and you need to collect its logs (see the Get OSM Controller Logs from Azure Support Center section). A READY value with a number higher than 1 after the "/" indicates that sidecars are installed; the OSM Controller will most likely not work with sidecars attached to it.
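+
+The paragraph above refers to collecting OSM Controller logs. A quick way to pull recent logs directly with `kubectl` is shown below as a sketch; it is not the Azure support flow mentioned above:
+
+```azurecli-interactive
+kubectl logs -n kube-system --selector app=osm-controller --tail=100
+```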
+
+### Check OSM Controller Service
+
+```azurecli-interactive
+kubectl get service -n kube-system osm-controller
+```
+
+A healthy OSM Controller service would look like this:
+
+```Output
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+osm-controller ClusterIP 10.0.31.254 <none> 15128/TCP,9092/TCP 67m
+```
+
+> [!NOTE]
+> The CLUSTER-IP would be different. The service NAME and PORT(S) must be the same as the example above.
+
+### Check OSM Controller Endpoints
+
+```azurecli-interactive
+kubectl get endpoints -n kube-system osm-controller
+```
+
+A healthy OSM Controller endpoint(s) would look like this:
+
+```Output
+NAME ENDPOINTS AGE
+osm-controller 10.240.1.115:9092,10.240.1.115:15128 69m
+```
+
+### Check OSM Injector Deployment
+
+```azurecli-interactive
+kubectl get deployment -n kube-system --selector app=osm-injector
+```
+
+A healthy OSM Injector deployment would look like this:
+
+```Output
+NAME           READY   UP-TO-DATE   AVAILABLE   AGE
+osm-injector   1/1     1            1           73m
+```
+
+### Check OSM Injector Pod
+
+```azurecli-interactive
+kubectl get pod -n kube-system --selector app=osm-injector
+```
+
+A healthy OSM Injector pod would look like this:
+
+```Output
+NAME READY STATUS RESTARTS AGE
+osm-injector-5986c57765-vlsdk 1/1 Running 0 73m
+```
+
+### Check OSM Injector Service
+
+```azurecli-interactive
+kubectl get service -n kube-system osm-injector
+```
+
+A healthy OSM Injector service would look like this:
+
+```Output
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+osm-injector ClusterIP 10.0.39.54 <none> 9090/TCP 75m
+```
+
+### Check OSM Injector Endpoints
+
+```azurecli-interactive
+kubectl get endpoints -n kube-system osm-injector
+```
+
+A healthy OSM Injector endpoint would look like this:
+
+```Output
+NAME ENDPOINTS AGE
+osm-injector 10.240.1.172:9090 75m
+```
+
+### Check Validating and Mutating webhooks
+
+```azurecli-interactive
+kubectl get ValidatingWebhookConfiguration --selector app=osm-controller
+```
+
+A healthy OSM Validating Webhook would look like this:
+
+```Output
+NAME WEBHOOKS AGE
+aks-osm-webhook-osm 1 81m
+```
+
+```azurecli-interactive
+kubectl get MutatingWebhookConfiguration --selector app=osm-injector
+```
+
+A healthy OSM Mutating Webhook would look like this:
+
+```Output
+NAME WEBHOOKS AGE
+aks-osm-webhook-osm 1 102m
+```
+
+### Check for the service and the CA bundle of the Validating webhook
+
+```azurecli-interactive
+kubectl get ValidatingWebhookConfiguration aks-osm-webhook-osm -o json | jq '.webhooks[0].clientConfig.service'
+```
+
+A well configured Validating Webhook Configuration would look exactly like this:
+
+```json
+{
+ "name": "osm-config-validator",
+ "namespace": "kube-system",
+ "path": "/validate-webhook",
+ "port": 9093
+}
+```
+
+### Check for the service and the CA bundle of the Mutating webhook
+
+```azurecli-interactive
+kubectl get MutatingWebhookConfiguration aks-osm-webhook-osm -o json | jq '.webhooks[0].clientConfig.service'
+```
+
+A well configured Mutating Webhook Configuration would look exactly like this:
+
+```json
+{
+ "name": "osm-injector",
+ "namespace": "kube-system",
+ "path": "/mutate-pod-creation",
+ "port": 9090
+}
+```
+
+### Check whether OSM Controller has given the Validating (or Mutating) Webhook a CA Bundle
+
+> [!NOTE]
+> As of v0.8.2, it's important to know that the AKS resource provider installs the Validating Webhook, the AKS Reconciler ensures it exists, but the OSM Controller is the one that fills in the CA Bundle.
+
+```azurecli-interactive
+kubectl get ValidatingWebhookConfiguration aks-osm-webhook-osm -o json | jq -r '.webhooks[0].clientConfig.caBundle' | wc -c
+```
+
+```azurecli-interactive
+kubectl get MutatingWebhookConfiguration aks-osm-webhook-osm -o json | jq -r '.webhooks[0].clientConfig.caBundle' | wc -c
+```
+
+```Example Output
+1845
+```
+
+The command returns the size of the CA Bundle in bytes. If the result is empty, 0, or a number under 1000, the CA Bundle is not correctly provisioned. Without a correct CA Bundle, the Validating Webhook errors out and prevents you from making changes to the osm-config ConfigMap in the kube-system namespace.
+
+A sample error when the CA Bundle is incorrect:
+
+- An attempt to change the osm-config ConfigMap:
+
+ ```azurecli-interactive
+ kubectl patch ConfigMap osm-config -n kube-system --type merge --patch '{"data":{"config_resync_interval":"2m"}}'
+ ```
+
+- Error:
+
+ ```
+ Error from server (InternalError): Internal error occurred: failed calling webhook "osm-config-webhook.k8s.io": Post https://osm-config-validator.kube-system.svc:9093/validate-webhook?timeout=30s: x509: certificate signed by unknown authority
+ ```
+
+Workarounds for when the **Validating** Webhook Configuration has a bad certificate:
+
+- Option 1 - Restart the OSM Controller. On start, it overwrites the CA Bundle of both the Mutating and Validating webhooks.
+
+ ```azurecli-interactive
+ kubectl rollout restart deployment -n kube-system osm-controller
+ ```
+
+- Option 2 - Delete the Validating Webhook - removing the Validating Webhook means mutations of the `osm-config` ConfigMap are no longer validated, so any patch will go through. The AKS Reconciler will eventually ensure the Validating Webhook exists and will recreate it. The OSM Controller may have to be restarted to quickly rewrite the CA Bundle.
+
+ ```azurecli-interactive
+ kubectl delete ValidatingWebhookConfiguration aks-osm-webhook-osm
+ ```
+
+- Option 3 - Delete and patch: The following command deletes the Validating Webhook, allowing any value to be set, and immediately tries to apply a patch. The AKS Reconciler will most likely not have enough time to reconcile and restore the Validating Webhook, giving us the opportunity to apply a change as a last resort:
+
+ ```azurecli-interactive
+ kubectl delete ValidatingWebhookConfiguration aks-osm-webhook-osm; kubectl patch ConfigMap osm-config -n kube-system --type merge --patch '{"data":{"config_resync_interval":"15s"}}'
+ ```
+
+### Check the `osm-mesh-config` resource
+
+Check for the existence:
+
+```azurecli-interactive
+kubectl get meshconfig osm-mesh-config -n kube-system
+```
+
+Check the content of the OSM MeshConfig:
+
+```azurecli-interactive
+kubectl get meshconfig osm-mesh-config -n kube-system -o yaml
+```
+
+```yaml
+apiVersion: config.openservicemesh.io/v1alpha1
+kind: MeshConfig
+metadata:
+ creationTimestamp: "0000-00-00A00:00:00A"
+ generation: 1
+ name: osm-mesh-config
+ namespace: kube-system
+ resourceVersion: "2494"
+ uid: 6c4d67f3-c241-4aeb-bf4f-b029b08faa31
+spec:
+ certificate:
+ serviceCertValidityDuration: 24h
+ featureFlags:
+ enableEgressPolicy: true
+ enableMulticlusterMode: false
+ enableWASMStats: true
+ observability:
+ enableDebugServer: true
+ osmLogLevel: info
+ tracing:
+ address: jaeger.osm-system.svc.cluster.local
+ enable: false
+ endpoint: /api/v2/spans
+ port: 9411
+ sidecar:
+ configResyncInterval: 0s
+ enablePrivilegedInitContainer: false
+ envoyImage: mcr.microsoft.com/oss/envoyproxy/envoy:v1.18.3
+ initContainerImage: mcr.microsoft.com/oss/openservicemesh/init:v0.9.1
+ logLevel: error
+ maxDataPlaneConnections: 0
+ resources: {}
+ traffic:
+ enableEgress: true
+ enablePermissiveTrafficPolicyMode: true
+ inboundExternalAuthorization:
+ enable: false
+ failureModeAllow: false
+ statPrefix: inboundExtAuthz
+ timeout: 1s
+ useHTTPSIngress: false
+```
+
+`osm-mesh-config` resource values:
+
+| Key | Type | Default Value | Kubectl Patch Command Examples |
+|--|--|--|--|
+| spec.traffic.enableEgress | bool | `false` | `kubectl patch meshconfig osm-mesh-config -n osm-system -p '{"spec":{"traffic":{"enableEgress":true}}}' --type=merge` |
+| spec.traffic.enablePermissiveTrafficPolicyMode | bool | `false` | `kubectl patch meshconfig osm-mesh-config -n osm-system -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":true}}}' --type=merge` |
+| spec.traffic.useHTTPSIngress | bool | `false` | `kubectl patch meshconfig osm-mesh-config -n osm-system -p '{"spec":{"traffic":{"useHTTPSIngress":true}}}' --type=merge` |
+| spec.traffic.outboundPortExclusionList | array | `[]` | `kubectl patch meshconfig osm-mesh-config -n osm-system -p '{"spec":{"traffic":{"outboundPortExclusionList":[6379,8080]}}}' --type=merge` |
+| spec.traffic.outboundIPRangeExclusionList | array | `[]` | `kubectl patch meshconfig osm-mesh-config -n osm-system -p '{"spec":{"traffic":{"outboundIPRangeExclusionList":["10.0.0.0/32","1.1.1.1/24"]}}}' --type=merge` |
+| spec.certificate.serviceCertValidityDuration | string | `"24h"` | `kubectl patch meshconfig osm-mesh-config -n osm-system -p '{"spec":{"certificate":{"serviceCertValidityDuration":"24h"}}}' --type=merge` |
+| spec.observability.enableDebugServer | bool | `false` | `kubectl patch meshconfig osm-mesh-config -n osm-system -p '{"spec":{"observability":{"enableDebugServer":true}}}' --type=merge` |
+| spec.observability.tracing.enable | bool | `false` | `kubectl patch meshconfig osm-mesh-config -n osm-system -p '{"spec":{"observability":{"tracing":{"enable":true}}}}' --type=merge` |
+| spec.observability.tracing.address | string | `"jaeger.osm-system.svc.cluster.local"` | `kubectl patch meshconfig osm-mesh-config -n osm-system -p '{"spec":{"observability":{"tracing":{"address":"jaeger.osm-system.svc.cluster.local"}}}}' --type=merge` |
+| spec.observability.tracing.endpoint | string | `"/api/v2/spans"` | `kubectl patch meshconfig osm-mesh-config -n osm-system -p '{"spec":{"observability":{"tracing":{"endpoint":"/api/v2/spans"}}}}' --type=merge` |
+| spec.observability.tracing.port | int | `9411`| `kubectl patch meshconfig osm-mesh-config -n osm-system -p '{"spec":{"observability":{"tracing":{"port":9411}}}}' --type=merge` |
+| spec.sidecar.enablePrivilegedInitContainer | bool | `false` | `kubectl patch meshconfig osm-mesh-config -n osm-system -p '{"spec":{"sidecar":{"enablePrivilegedInitContainer":true}}}' --type=merge` |
+| spec.sidecar.logLevel | string | `"error"` | `kubectl patch meshconfig osm-mesh-config -n osm-system -p '{"spec":{"sidecar":{"logLevel":"error"}}}' --type=merge` |
+| spec.sidecar.maxDataPlaneConnections | int | `0` | `kubectl patch meshconfig osm-mesh-config -n osm-system -p '{"spec":{"sidecar":{"maxDataPlaneConnections":0}}}' --type=merge` |
+| spec.sidecar.envoyImage | string | `"envoyproxy/envoy-alpine:v1.17.2"` | `kubectl patch meshconfig osm-mesh-config -n osm-system -p '{"spec":{"sidecar":{"envoyImage":"envoyproxy/envoy-alpine:v1.17.2"}}}' --type=merge` |
+| spec.sidecar.initContainerImage | string | `"openservicemesh/init:v0.9.2"` | `kubectl patch meshconfig osm-mesh-config -n osm-system -p '{"spec":{"sidecar":{"initContainerImage":"openservicemesh/init:v0.9.2"}}}' --type=merge` |
+| spec.sidecar.configResyncInterval | string | `"0s"` | `kubectl patch meshconfig osm-mesh-config -n osm-system -p '{"spec":{"sidecar":{"configResyncInterval":"30s"}}}' --type=merge` |
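+
+After patching a value, you can read the field back to confirm the change took effect. The following is a sketch using `enableEgress` as an example; the namespace matches the MeshConfig output shown earlier, so adjust it if your mesh runs in `osm-system`:
+
+```azurecli-interactive
+kubectl get meshconfig osm-mesh-config -n kube-system -o jsonpath='{.spec.traffic.enableEgress}'
+```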
+
+### Check Namespaces
+
+> [!NOTE]
+> The kube-system namespace will never participate in a service mesh and will never be labeled and/or annotated with the key/values below.
+
+We use the `osm namespace add` command to join namespaces to a given service mesh.
+When a Kubernetes namespace is part of the mesh (or for it to become part of the mesh), the following must be true:
+
+View the annotations with
+
+```azurecli-interactive
+kubectl get namespace bookbuyer -o json | jq '.metadata.annotations'
+```
+
+The following annotation must be present:
+
+```Output
+{
+ "openservicemesh.io/sidecar-injection": "enabled"
+}
+```
+
+View the labels with
+
+```azurecli-interactive
+kubectl get namespace bookbuyer -o json | jq '.metadata.labels'
+```
+
+The following label must be present:
+
+```Output
+{
+ "openservicemesh.io/monitored-by": "osm"
+}
+```
+
+If a namespace is not annotated with `"openservicemesh.io/sidecar-injection": "enabled"` or not labeled with `"openservicemesh.io/monitored-by": "osm"` the OSM Injector will not add Envoy sidecars.
+
+> [!NOTE]
+> After `osm namespace add` is called only **new** pods will be injected with an Envoy sidecar. Existing pods must be restarted with `kubectl rollout restart deployment ...`
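+
+For example, to onboard the `bookbuyer` namespace used elsewhere in these guides and re-inject its existing workloads, the sequence looks like this (substitute your own namespace and deployment names):
+
+```azurecli-interactive
+osm namespace add bookbuyer
+kubectl rollout restart deployment bookbuyer -n bookbuyer
+```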
+
+### Verify the SMI CRDs
+
+Check whether the cluster has the required CRDs:
+
+```azurecli-interactive
+kubectl get crds
+```
+
+We must have the following installed on the cluster:
+
+- httproutegroups.specs.smi-spec.io
+- tcproutes.specs.smi-spec.io
+- trafficsplits.split.smi-spec.io
+- traffictargets.access.smi-spec.io
+- udproutes.specs.smi-spec.io
+
+Get the versions of the CRDs installed with this command:
+
+```azurecli-interactive
+for x in $(kubectl get crds --no-headers | awk '{print $1}' | grep 'smi-spec.io'); do
+ kubectl get crd $x -o json | jq -r '(.metadata.name, "-" , .spec.versions[].name, "\n")'
+done
+```
+
+Expected output:
+
+```Output
+httproutegroups.specs.smi-spec.io
+-
+v1alpha4
+v1alpha3
+v1alpha2
+v1alpha1
++
+tcproutes.specs.smi-spec.io
+-
+v1alpha4
+v1alpha3
+v1alpha2
+v1alpha1
++
+trafficsplits.split.smi-spec.io
+-
+v1alpha2
++
+traffictargets.access.smi-spec.io
+-
+v1alpha3
+v1alpha2
+v1alpha1
++
+udproutes.specs.smi-spec.io
+-
+v1alpha4
+v1alpha3
+v1alpha2
+v1alpha1
+```
+
+OSM Controller v0.8.2 requires the following versions:
+
+- traffictargets.access.smi-spec.io - [v1alpha3](https://github.com/servicemeshinterface/smi-spec/blob/v0.6.0/apis/traffic-access/v1alpha3/traffic-access.md)
+- httproutegroups.specs.smi-spec.io - [v1alpha4](https://github.com/servicemeshinterface/smi-spec/blob/v0.6.0/apis/traffic-specs/v1alpha4/traffic-specs.md#httproutegroup)
+- tcproutes.specs.smi-spec.io - [v1alpha4](https://github.com/servicemeshinterface/smi-spec/blob/v0.6.0/apis/traffic-specs/v1alpha4/traffic-specs.md#tcproute)
+- udproutes.specs.smi-spec.io - Not supported
+- trafficsplits.split.smi-spec.io - [v1alpha2](https://github.com/servicemeshinterface/smi-spec/blob/v0.6.0/apis/traffic-split/v1alpha2/traffic-split.md)
+- \*.metrics.smi-spec.io - [v1alpha1](https://github.com/servicemeshinterface/smi-spec/blob/v0.6.0/apis/traffic-metrics/v1alpha1/traffic-metrics.md)
+
+If CRDs are missing use the following commands to install these on the cluster:
+
+```azurecli-interactive
+kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/v0.8.2/charts/osm/crds/access.yaml
+```
+
+```azurecli-interactive
+kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/v0.8.2/charts/osm/crds/specs.yaml
+```
+
+```azurecli-interactive
+kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/v0.8.2/charts/osm/crds/split.yaml
+```
aks Private Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/private-clusters.md
Where `--enable-private-cluster` is a mandatory flag for a private cluster.
The following parameters can be leveraged to configure Private DNS Zone.
-- "System", which is also the default value. If the --private-dns-zone argument is omitted, AKS will create a Private DNS Zone in the Node Resource Group.
-- "None", defaults to public DNS which means AKS will not create a Private DNS Zone.
+- "system", which is also the default value. If the --private-dns-zone argument is omitted, AKS will create a Private DNS Zone in the Node Resource Group.
+- "none", defaults to public DNS which means AKS will not create a Private DNS Zone.
- "CUSTOM_PRIVATE_DNS_ZONE_RESOURCE_ID", which requires you to create a Private DNS Zone in this format for Azure global cloud: `privatelink.<region>.azmk8s.io`. You will need the Resource ID of that Private DNS Zone going forward. Additionally, you will need a user assigned identity or service principal with at least the `private dns zone contributor` and `vnet contributor` roles. - If the Private DNS Zone is in a different subscription than the AKS cluster, you need to register Microsoft.ContainerServices in both the subscriptions. - "fqdn-subdomain" can be utilized with "CUSTOM_PRIVATE_DNS_ZONE_RESOURCE_ID" only to provide subdomain capabilities to `privatelink.<region>.azmk8s.io`
aks Scale Down Mode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/scale-down-mode.md
+
+ Title: Use Scale-down Mode for your Azure Kubernetes Service (AKS) cluster (preview)
+
+description: Learn how to use Scale-down Mode in Azure Kubernetes Service (AKS).
++ Last updated : 09/01/2021++++
+# Use Scale-down Mode to delete/deallocate nodes in Azure Kubernetes Service (AKS) (preview)
+
+By default, scale-up operations performed manually or by the cluster autoscaler require the allocation and provisioning of new nodes, and scale-down operations delete nodes. Scale-down Mode allows you to decide whether you would like to delete or deallocate the nodes in your Azure Kubernetes Service (AKS) cluster upon scaling down.
+
+When an Azure VM is in the `Stopped` (deallocated) state, you will not be charged for the VM compute resources. However, you will still need to pay for any OS and data storage disks attached to the VM. This also means that the container images are preserved on those nodes. For more information, see [States and billing of Azure Virtual Machines][state-billing-azure-vm]. This behavior allows for faster operations, because your deployment uses cached images. Scale-down Mode removes the need to pre-provision nodes and pre-pull container images, saving you compute cost.
++
+## Before you begin
+
+> [!WARNING]
+> In order to preserve any deallocated VMs, you must set Scale-down Mode to Deallocate. That includes VMs that have been deallocated using IaaS APIs (Virtual Machine Scale Set APIs). Setting Scale-down Mode to Delete will remove any deallocated VMs.
+
+This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli] or [using the Azure portal][aks-quickstart-portal].
+
+### Limitations
+
+- [Ephemeral OS][ephemeral-os] disks are not supported. Be sure to specify managed OS disks via `--node-osdisk-type Managed` when creating a cluster or node pool.
+- [Spot node pools][spot-node-pool] are not supported.
+
+### Install aks-preview CLI extension
+
+You also need the *aks-preview* Azure CLI extension version 0.5.30 or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
+
+```azurecli-interactive
+# Install the aks-preview extension
+az extension add --name aks-preview
+
+# Update the extension to make sure you have the latest version installed
+az extension update --name aks-preview
+```
+
+### Register the `AKS-ScaleDownModePreview` preview feature
+
+To use the feature, you must also enable the `AKS-ScaleDownModePreview` feature flag on your subscription.
+
+Register the `AKS-ScaleDownModePreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "AKS-ScaleDownModePreview"
+```
+
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
+
+```azurecli-interactive
+az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AKS-ScaleDownModePreview')].{Name:name,State:properties.state}"
+```
+
+When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
+
+## Using Scale-down Mode to deallocate nodes on scale-down
+
+By setting `--scale-down-mode Deallocate`, nodes will be deallocated during a scale-down of your cluster/node pool. All deallocated nodes are stopped. When your cluster/node pool needs to scale up, the deallocated nodes are started before any new nodes are provisioned.
+
+In this example, we create a new node pool with 20 nodes and specify that upon scale-down, nodes are to be deallocated via `--scale-down-mode Deallocate`.
+
+```azurecli-interactive
+az aks nodepool add --node-count 20 --scale-down-mode Deallocate --node-osdisk-type Managed --max-pods 10 --name nodepool2 --cluster-name myAKSCluster --resource-group myResourceGroup
+```
+
+By scaling the node pool and changing the node count to 5, we will deallocate 15 nodes.
+
+```azurecli-interactive
+az aks nodepool scale --node-count 5 --name nodepool2 --cluster-name myAKSCluster --resource-group myResourceGroup
+```
+
+### Deleting previously deallocated nodes
+
+To delete your deallocated nodes, you can change your Scale-down Mode to `Delete` by setting `--scale-down-mode Delete`. The 15 deallocated nodes will now be deleted.
+
+```azurecli-interactive
+az aks nodepool update --scale-down-mode Delete --name nodepool2 --cluster-name myAKSCluster --resource-group myResourceGroup
+```
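+
+To confirm which Scale-down Mode a node pool is currently using, you can query the node pool resource. This is a sketch; the `scaleDownMode` property name is an assumption based on the `--scale-down-mode` flag and may differ by CLI version:
+
+```azurecli-interactive
+az aks nodepool show --name nodepool2 --cluster-name myAKSCluster --resource-group myResourceGroup --query scaleDownMode -o tsv
+```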
+
+## Using Scale-down Mode to delete nodes on scale-down
+
+The default behavior of AKS without using Scale-down Mode is to delete your nodes when you scale down your cluster. With Scale-down Mode, this can be explicitly achieved by setting `--scale-down-mode Delete`.
+
+In this example, we create a new node pool and specify that our nodes will be deleted upon scale-down via `--scale-down-mode Delete`. Scaling operations will be handled via the cluster autoscaler.
+
+```azurecli-interactive
+az aks nodepool add --enable-cluster-autoscaler --min-count 1 --max-count 10 --max-pods 10 --node-osdisk-type Managed --scale-down-mode Delete --name nodepool3 --cluster-name myAKSCluster --resource-group myResourceGroup
+```
+
+## Next steps
+
+- To learn more about upgrading your AKS cluster, see [Upgrade an AKS cluster][aks-upgrade]
+- To learn more about the cluster autoscaler, see [Automatically scale a cluster to meet application demands on AKS][cluster-autoscaler]
+
+<!-- LINKS - Internal -->
+[aks-quickstart-cli]: kubernetes-walkthrough.md
+[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
+[aks-support-policies]: support-policies.md
+[aks-faq]: faq.md
+[az-extension-add]: /cli/azure/extension#az_extension_add
+[az-extension-update]: /cli/azure/extension#az_extension_update
+[az-feature-list]: /cli/azure/feature#az_feature_list
+[az-feature-register]: /cli/azure/feature#az_feature_register
+[az-aks-install-cli]: /cli/azure/aks#az_aks_install_cli
+[az-provider-register]: /cli/azure/provider#az_provider_register
+[aks-upgrade]: upgrade-cluster.md
+[cluster-autoscaler]: cluster-autoscaler.md
+[ephemeral-os]: cluster-configuration.md#ephemeral-os
+[state-billing-azure-vm]: ../virtual-machines/states-billing.md
+[spot-node-pool]: spot-node-pool.md
aks Servicemesh About https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/servicemesh-about.md
You may also want to explore the various service mesh standardization efforts:
[smp]: https://github.com/service-mesh-performance/service-mesh-performance <!-- LINKS - internal -->
-[osm-about]: ./servicemesh-osm-about.md
+[osm-about]: ./open-service-mesh-about.md
aks Servicemesh Osm About https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/servicemesh-osm-about.md
- Title: Open Service Mesh (Preview)
-description: Open Service Mesh (OSM) in Azure Kubernetes Service (AKS)
-- Previously updated : 3/12/2021--
-zone_pivot_groups: client-operating-system
--
-# Open Service Mesh AKS add-on (Preview)
-
-## Overview
-
-[Open Service Mesh (OSM)](https://docs.openservicemesh.io/) is a lightweight, extensible, Cloud Native service mesh that allows users to uniformly manage, secure, and get out-of-the-box observability features for highly dynamic microservice environments.
-
-OSM runs an Envoy-based control plane on Kubernetes, can be configured with [SMI](https://smi-spec.io/) APIs, and works by injecting an Envoy proxy as a sidecar container next to each instance of your application. The Envoy proxy contains and executes rules around access control policies, implements routing configuration, and captures metrics. The control plane continually configures proxies to ensure policies and routing rules are up to date and ensures proxies are healthy.
--
-## Capabilities and Features
-
-OSM provides the following set of capabilities and features to provide a cloud native service mesh for your Azure Kubernetes Service (AKS) clusters:
-- Secure service to service communication by enabling mTLS
-- Easily onboard applications onto the mesh by enabling automatic sidecar injection of Envoy proxy
-- Easy and transparent configuration of traffic shifting for deployments
-- Ability to define and execute fine-grained access control policies for services
-- Observability and insights into application metrics for debugging and monitoring services
-- Integration with external certificate management services/solutions with a pluggable interface
-## Scenarios
-
-OSM can assist your AKS deployments with the following scenarios:
-- Provide encrypted communications between service endpoints deployed in the cluster
-- Traffic authorization of both HTTP/HTTPS and TCP traffic in the mesh
-- Configuration of weighted traffic controls between two or more services for A/B or canary deployments
-- Collection and viewing of KPIs from application traffic
-## Prerequisites
-- The Azure CLI, version 2.20.0 or later
-- The `aks-preview` extension version 0.5.5 or later
-- OSM version v0.8.0 or later
-## OSM Service Quotas and Limits (Preview)
-
-OSM preview limitations for service quotas and limits can be found on the AKS [Quotas and regional limits page](./quotas-skus-regions.md).
-> [!WARNING]
-> Do not attempt to install OSM from the binary using `osm install`. This will result in an installation of OSM that is not integrated as an add-on for AKS.
-
-### Register the `AKS-OpenServiceMesh` preview feature
-
-To create an AKS cluster that can use the Open Service Mesh add-on, you must enable the `AKS-OpenServiceMesh` feature flag on your subscription.
-
-Register the `AKS-OpenServiceMesh` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
-
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "AKS-OpenServiceMesh"
-```
-
-It takes a few minutes for the status to show _Registered_. Verify the registration status by using the [az feature list][az-feature-list] command:
-
-```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AKS-OpenServiceMesh')].{Name:name,State:properties.state}"
-```
-
-When ready, refresh the registration of the _Microsoft.ContainerService_ resource provider by using the [az provider register][az-provider-register] command:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
--
-## Install Open Service Mesh (OSM) Azure Kubernetes Service (AKS) add-on for a new AKS cluster
-
-For a new AKS cluster deployment scenario, you will start with a brand new deployment of an AKS cluster enabling the OSM add-on at the cluster create operation.
-
-### Create a resource group
-
-In Azure, you allocate related resources to a resource group. Create a resource group by using [az group create](/cli/azure/group#az_group_create). The following example creates a resource group named _myOsmAksGroup_ in the _eastus2_ location (region):
-
-```azurecli-interactive
-az group create --name <myosmaksgroup> --location <eastus2>
-```
-
-### Deploy an AKS cluster with the OSM add-on enabled
-
-You'll now deploy a new AKS cluster with the OSM add-on enabled.
-
-> [!NOTE]
-> Please be aware the following AKS deployment command utilizes OS ephemeral disks. You can find more information here about [Ephemeral OS disks for AKS](./cluster-configuration.md#ephemeral-os)
-
-```azurecli-interactive
-az aks create -n osm-addon-cluster -g <myosmaksgroup> --kubernetes-version 1.19.6 --node-osdisk-type Ephemeral --node-osdisk-size 30 --network-plugin azure --enable-managed-identity -a open-service-mesh
-```
-
-#### Get AKS Cluster Access Credentials
-
-Get access credentials for the new managed Kubernetes cluster.
-
-```azurecli-interactive
-az aks get-credentials -n <myosmakscluster> -g <myosmaksgroup>
-```
-
-## Enable Open Service Mesh (OSM) Azure Kubernetes Service (AKS) add-on for an existing AKS cluster
-
-For an existing AKS cluster scenario, you will enable the OSM add-on to an existing AKS cluster that has already been deployed.
-
-### Enable the OSM add-on to existing AKS cluster
-
-To enable the AKS OSM add-on, you will need to run the `az aks enable-addons --addons` command passing the parameter `open-service-mesh`
-
-```azurecli-interactive
-az aks enable-addons --addons open-service-mesh -g <resource group name> -n <AKS cluster name>
-```
-
-You should see output similar to the output shown below to confirm the AKS OSM add-on has been installed.
-
-```json
-{- Finished ..
- "aadProfile": null,
- "addonProfiles": {
- "KubeDashboard": {
- "config": null,
- "enabled": false,
- "identity": null
- },
- "openServiceMesh": {
- "config": {},
- "enabled": true,
- "identity": {
-...
-```
-
-## Validate the AKS OSM add-on installation
-
-There are several commands to run to check all of the components of the AKS OSM add-on are enabled and running:
-
-First we can query the add-on profiles of the cluster to check the enabled state of the add-ons installed. The following command should return "true".
-
-```azurecli-interactive
-az aks list -g <resource group name> -o json | jq -r '.[].addonProfiles.openServiceMesh.enabled'
-```
-
-The following `kubectl` commands will report the status of the osm-controller.
-
-```azurecli-interactive
-kubectl get deployments -n kube-system --selector app=osm-controller
-kubectl get pods -n kube-system --selector app=osm-controller
-kubectl get services -n kube-system --selector app=osm-controller
-```
-
-## Accessing the AKS OSM add-on
-
-Currently you can access and configure the OSM controller configuration via the configmap. To view the OSM controller configuration settings, query the osm-config configmap via `kubectl` to view its configuration settings.
-
-```azurecli-interactive
-kubectl get configmap -n kube-system osm-config -o json | jq '.data'
-```
-
-Output of the OSM configmap should look like the following:
-
-```json
-{
- "egress": "true",
- "enable_debug_server": "true",
- "enable_privileged_init_container": "false",
- "envoy_log_level": "error",
- "outbound_ip_range_exclusion_list": "169.254.169.254/32,168.63.129.16/32,<YOUR_API_SERVER_PUBLIC_IP>/32",
- "permissive_traffic_policy_mode": "true",
- "prometheus_scraping": "false",
- "service_cert_validity_duration": "24h",
- "use_https_ingress": "false"
-}
-```
-
-Notice the **permissive_traffic_policy_mode** is configured to **true**. Permissive traffic policy mode in OSM is a mode where the [SMI](https://smi-spec.io/) traffic policy enforcement is bypassed. In this mode, OSM automatically discovers services that are a part of the service mesh and programs traffic policy rules on each Envoy proxy sidecar to be able to communicate with these services.
-
-> [!WARNING]
-> Before proceeding, verify that your permissive traffic policy mode is set to true. If not, change it to **true** by using the command below.
-
-```OSM Permissive Mode to True
-kubectl patch ConfigMap -n kube-system osm-config --type merge --patch '{"data":{"permissive_traffic_policy_mode":"true"}}'
-```
-
-## Deploy a new application to be managed by the Open Service Mesh (OSM) Azure Kubernetes Service (AKS) add-on
-
-### Before you begin
-
-The steps detailed in this walkthrough assume that you've created an AKS cluster (Kubernetes `1.19+`, with Kubernetes RBAC enabled), have established a `kubectl` connection with the cluster (if you need help with any of these items, see the [AKS quickstart](./kubernetes-walkthrough.md)), and have installed the AKS OSM add-on.
-
-You must have the following resources installed:
-- The Azure CLI, version 2.20.0 or later
-- The `aks-preview` extension version 0.5.5 or later
-- OSM version v0.8.0 or later
-- apt-get install jq
-### Create namespaces for the application
-
-In this walkthrough, we will be using the OSM bookstore application, which consists of the following Kubernetes services:
-
-- bookbuyer
-- bookthief
-- bookstore
-- bookwarehouse
-Create namespaces for each of these application components.
-
-```azurecli-interactive
-for i in bookstore bookbuyer bookthief bookwarehouse; do kubectl create ns $i; done
-```
-
-You should see the following output:
-
-```Output
-namespace/bookstore created
-namespace/bookbuyer created
-namespace/bookthief created
-namespace/bookwarehouse created
-```
-
-### Onboard the namespaces to be managed by OSM
-
-When you add the namespaces to the OSM mesh, the OSM controller can automatically inject the Envoy sidecar proxy containers into your application pods. Run the following command to onboard the OSM bookstore application namespaces.
-
-```azurecli-interactive
-osm namespace add bookstore bookbuyer bookthief bookwarehouse
-```
-
-You should see the following output:
-
-```Output
-Namespace [bookstore] successfully added to mesh [osm]
-Namespace [bookbuyer] successfully added to mesh [osm]
-Namespace [bookthief] successfully added to mesh [osm]
-Namespace [bookwarehouse] successfully added to mesh [osm]
-```
-
-### Deploy the Bookstore application to the AKS cluster
-
-```azurecli-interactive
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v0.8/docs/example/manifests/apps/bookbuyer.yaml
-```
-
-```azurecli-interactive
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v0.8/docs/example/manifests/apps/bookthief.yaml
-```
-
-```azurecli-interactive
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v0.8/docs/example/manifests/apps/bookstore.yaml
-```
-
-```azurecli-interactive
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v0.8/docs/example/manifests/apps/bookwarehouse.yaml
-```
-
-All of the deployment outputs are summarized below.
-
-```Output
-serviceaccount/bookbuyer created
-service/bookbuyer created
-deployment.apps/bookbuyer created
-
-serviceaccount/bookthief created
-service/bookthief created
-deployment.apps/bookthief created
-
-service/bookstore created
-serviceaccount/bookstore created
-deployment.apps/bookstore created
-
-serviceaccount/bookwarehouse created
-service/bookwarehouse created
-deployment.apps/bookwarehouse created
-```
-
-### Checkpoint: What got installed?
-
-The example Bookstore application is a multi-tiered app that consists of four services: bookbuyer, bookthief, bookstore, and bookwarehouse. Both the bookbuyer and bookthief services communicate with the bookstore service to retrieve books. The bookstore service retrieves books from the bookwarehouse service to supply the bookbuyer and bookthief. This simple multi-tiered application works well in showing how a service mesh can be used to protect and authorize communications between an application's services. As we continue through the walkthrough, we will enable and disable Service Mesh Interface (SMI) policies to both allow and disallow the services to communicate via OSM. Below is an architecture diagram of what got installed for the bookstore application.
-
-![OSM bookbuyer app architecture](./media/aks-osm-addon/osm-bookstore-app-arch.png)
-
-### Verify the Bookstore application running inside the AKS cluster
-
-As of now we have deployed the bookstore multi-container application, but it is only accessible from within the AKS cluster. Later tutorials will assist you in exposing the application outside the cluster via an ingress controller. For now, we will use port forwarding to access the bookbuyer application inside the AKS cluster to verify it is buying books from the bookstore service.
-
-To verify that the application is running inside the cluster, we will use a port forward to view both the bookbuyer and bookthief components UI.
-
-First let's get the bookbuyer pod's name
-
-```azurecli-interactive
-kubectl get pod -n bookbuyer
-```
-
-You should see output similar to the following. Your bookbuyer pod will have a unique name appended.
-
-```Output
-NAME READY STATUS RESTARTS AGE
-bookbuyer-7676c7fcfb-mtnrz 2/2 Running 0 7m8s
-```
-
-Once we have the pod's name, we can now use the port-forward command to set up a tunnel from our local system to the application inside the AKS cluster. Run the following command to set up the port forward for the local system port 8080. Again use your specified bookbuyer pod name.
-
-> [!NOTE]
-> For all port forwarding commands it is best to use an additional terminal so that you can continue to work through this walkthrough and not disconnect the tunnel. It is also best that you establish the port forward tunnel outside of the Azure Cloud Shell.
-
-```Bash
-kubectl port-forward bookbuyer-7676c7fcfb-mtnrz -n bookbuyer 8080:14001
-```
-
-You should see output similar to this.
-
-```Output
-Forwarding from 127.0.0.1:8080 -> 14001
-Forwarding from [::1]:8080 -> 14001
-```
-
-While the port forwarding session is in place, navigate to the following url from a browser `http://localhost:8080`. You should now be able to see the bookbuyer application UI in the browser similar to the image below.
-
-![OSM bookbuyer app UI image](./media/aks-osm-addon/osm-bookbuyer-service-ui.png)
-
-You will also notice that the total books bought number continues to increment to the bookstore v1 service. The bookstore v2 service has not been deployed yet. We will deploy the bookstore v2 service when we demonstrate the SMI traffic split policies.
-
-You can also check the same for the bookthief service.
-
-```azurecli-interactive
-kubectl get pod -n bookthief
-```
-
-You should see output similar to the following. Your bookthief pod will have a unique name appended.
-
-```Output
-NAME READY STATUS RESTARTS AGE
-bookthief-59549fb69c-cr8vl 2/2 Running 0 15m54s
-```
-
-Port forward to bookthief pod.
-
-```Bash
-kubectl port-forward bookthief-59549fb69c-cr8vl -n bookthief 8080:14001
-```
-
-Navigate to the following url from a browser `http://localhost:8080`. You should see the bookthief is currently stealing books from the bookstore service! Later on we will implement a traffic policy to stop the bookthief.
-
-![OSM bookthief app UI image](./media/aks-osm-addon/osm-bookthief-service-ui.png)
-
-### Disable OSM Permissive Traffic Mode for the mesh
-
-As mentioned earlier when viewing the OSM cluster configuration, the OSM configuration defaults to enabling permissive traffic mode policy. In this mode traffic policy enforcement is bypassed and OSM automatically discovers services that are a part of the service mesh and programs traffic policy rules on each Envoy proxy sidecar to be able to communicate with these services.
-
-We will now disable the permissive traffic mode policy and OSM will need explicit [SMI](https://smi-spec.io/) policies deployed to the cluster to allow communications in the mesh from each service. To disable permissive traffic mode, run the following command to update the configmap property changing the value from `true` to `false`.
-
-```azurecli-interactive
-kubectl patch ConfigMap -n kube-system osm-config --type merge --patch '{"data":{"permissive_traffic_policy_mode":"false"}}'
-```
-
-You should see output similar to the following.
-
-```Output
-configmap/osm-config patched
-```
-
-To verify that permissive traffic mode has been disabled, port forward back into either the bookbuyer or bookthief pod to view its UI in the browser and check whether the books bought or books stolen counter is still incrementing. Be sure to refresh the browser. If the incrementing has stopped, the policy was applied correctly. You have successfully stopped the bookthief from stealing books, but now the bookbuyer can't purchase from the bookstore, and the bookstore can't retrieve books from the bookwarehouse. Next we will implement [SMI](https://smi-spec.io/) policies to allow only the services in the mesh you'd like to communicate to do so.
-
-### Apply Service Mesh Interface (SMI) traffic access policies
-
-Now that we have disabled all communications in the mesh, let's allow our bookbuyer service to communicate with our bookstore service for purchasing books, and allow our bookstore service to communicate with our bookwarehouse service to retrieve books to sell.
-
-Deploy the following [SMI](https://smi-spec.io/) policies.
-
-```azurecli-interactive
-kubectl apply -f - <<EOF
-
-apiVersion: access.smi-spec.io/v1alpha3
-kind: TrafficTarget
-metadata:
- name: bookbuyer-access-bookstore
- namespace: bookstore
-spec:
- destination:
- kind: ServiceAccount
- name: bookstore
- namespace: bookstore
- rules:
- - kind: HTTPRouteGroup
- name: bookstore-service-routes
- matches:
- - buy-a-book
- - books-bought
- sources:
- - kind: ServiceAccount
- name: bookbuyer
- namespace: bookbuyer
-
-apiVersion: specs.smi-spec.io/v1alpha4
-kind: HTTPRouteGroup
-metadata:
- name: bookstore-service-routes
- namespace: bookstore
-spec:
- matches:
- - name: books-bought
- pathRegex: /books-bought
- methods:
- - GET
- headers:
- - "user-agent": ".*-http-client/*.*"
- - "client-app": "bookbuyer"
- - name: buy-a-book
- pathRegex: ".*a-book.*new"
- methods:
- - GET
- - name: update-books-bought
- pathRegex: /update-books-bought
- methods:
- - POST
-
-kind: TrafficTarget
-apiVersion: access.smi-spec.io/v1alpha3
-metadata:
- name: bookstore-access-bookwarehouse
- namespace: bookwarehouse
-spec:
- destination:
- kind: ServiceAccount
- name: bookwarehouse
- namespace: bookwarehouse
- rules:
- - kind: HTTPRouteGroup
- name: bookwarehouse-service-routes
- matches:
- - restock-books
- sources:
- - kind: ServiceAccount
- name: bookstore
- namespace: bookstore
- - kind: ServiceAccount
- name: bookstore-v2
- namespace: bookstore
-
-apiVersion: specs.smi-spec.io/v1alpha4
-kind: HTTPRouteGroup
-metadata:
- name: bookwarehouse-service-routes
- namespace: bookwarehouse
-spec:
- matches:
- - name: restock-books
- methods:
- - POST
- headers:
- - host: bookwarehouse.bookwarehouse
-EOF
-```
-
-You should see output similar to the following.
-
-```Output
-traffictarget.access.smi-spec.io/bookbuyer-access-bookstore-v1 created
-httproutegroup.specs.smi-spec.io/bookstore-service-routes created
-traffictarget.access.smi-spec.io/bookstore-access-bookwarehouse created
-httproutegroup.specs.smi-spec.io/bookwarehouse-service-routes created
-```
-
-You can now set up a port forwarding session on either the bookbuyer or bookstore pods and see that both the books bought and books sold metrics are back incrementing. You can also do the same for the bookthief pod to verify it is still no longer able to steal books.
-
-### Apply Service Mesh Interface (SMI) traffic split policies
-
-For our final demonstration, we will create an [SMI](https://smi-spec.io/) traffic split policy to configure the weight of communications from one service to multiple services as a backend. The traffic split functionality allows you to progressively move connections to one service over to another by weighting the traffic on a scale of 0 to 100.
-
-The below graphic is a diagram of the [SMI](https://smi-spec.io/) Traffic Split policy to be deployed. We will deploy an additional Bookstore version 2 and then split the incoming traffic from the bookbuyer, weighting 25% of the traffic to the bookstore v1 service and 75% to the bookstore v2 service.
-
-![OSM bookbuyer traffic split diagram](./media/aks-osm-addon/osm-bookbuyer-traffic-split-diagram.png)
-
-Deploy the bookstore v2 service.
-
-```azurecli-interactive
-kubectl apply -f - <<EOF
-
-apiVersion: v1
-kind: Service
-metadata:
- name: bookstore-v2
- namespace: bookstore
- labels:
- app: bookstore-v2
-spec:
- ports:
- - port: 14001
- name: bookstore-port
- selector:
- app: bookstore-v2
-
-# Deploy bookstore-v2 Service Account
-apiVersion: v1
-kind: ServiceAccount
-metadata:
- name: bookstore-v2
- namespace: bookstore
-
-# Deploy bookstore-v2 Deployment
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: bookstore-v2
- namespace: bookstore
-spec:
- replicas: 1
- selector:
- matchLabels:
- app: bookstore-v2
- template:
- metadata:
- labels:
- app: bookstore-v2
- spec:
- serviceAccountName: bookstore-v2
- containers:
- - name: bookstore
- image: openservicemesh/bookstore:v0.8.0
- imagePullPolicy: Always
- ports:
- - containerPort: 14001
- name: web
- command: ["/bookstore"]
- args: ["--path", "./", "--port", "14001"]
- env:
- - name: BOOKWAREHOUSE_NAMESPACE
- value: bookwarehouse
- - name: IDENTITY
- value: bookstore-v2
-
-kind: TrafficTarget
-apiVersion: access.smi-spec.io/v1alpha3
-metadata:
- name: bookbuyer-access-bookstore-v2
- namespace: bookstore
-spec:
- destination:
- kind: ServiceAccount
- name: bookstore-v2
- namespace: bookstore
- rules:
- - kind: HTTPRouteGroup
- name: bookstore-service-routes
- matches:
- - buy-a-book
- - books-bought
- sources:
- - kind: ServiceAccount
- name: bookbuyer
- namespace: bookbuyer
-EOF
-```
-
-You should see the following output.
-
-```Output
-service/bookstore-v2 configured
-serviceaccount/bookstore-v2 created
-deployment.apps/bookstore-v2 created
-traffictarget.access.smi-spec.io/bookstore-v2 created
-```
-
-Now deploy the traffic split policy to split the bookbuyer traffic between the two bookstore v1 and v2 service.
-
-```azurecli-interactive
-kubectl apply -f - <<EOF
-apiVersion: split.smi-spec.io/v1alpha2
-kind: TrafficSplit
-metadata:
- name: bookstore-split
- namespace: bookstore
-spec:
- service: bookstore.bookstore
- backends:
- - service: bookstore
- weight: 25
- - service: bookstore-v2
- weight: 75
-EOF
-```
-
-You should see the following output.
-
-```Output
-trafficsplit.split.smi-spec.io/bookstore-split created
-```
-
-Set up a port forward tunnel to the bookbuyer pod and you should now see books being purchased from the bookstore v2 service. If you continue to watch the increment of purchases you should notice a faster increment of purchases happening through the bookstore v2 service.
-
-![OSM bookbuyer books bought UI](./media/aks-osm-addon/osm-bookbuyer-traffic-split-ui.png)
-
-## Manage existing deployed applications to be managed by the Open Service Mesh (OSM) Azure Kubernetes Service (AKS) add-on
-
-### Before you begin
-
-The steps detailed in this walkthrough assume that you have previously enabled the OSM AKS add-on for your AKS cluster. If not, review the section [Enable Open Service Mesh (OSM) Azure Kubernetes Service (AKS) add-on for an existing AKS cluster](#enable-open-service-mesh-osm-azure-kubernetes-service-aks-add-on-for-an-existing-aks-cluster) before proceeding. Also, your AKS cluster needs to be Kubernetes version `1.19+` with Kubernetes RBAC enabled, you need an established `kubectl` connection with the cluster (if you need help with any of these items, see the [AKS quickstart](./kubernetes-walkthrough.md)), and the AKS OSM add-on must be installed.
-
-You must have the following resources installed:
-- The Azure CLI, version 2.20.0 or later
-- The `aks-preview` extension version 0.5.5 or later
-- OSM version v0.8.0 or later
-- apt-get install jq
-### Verify the Open Service Mesh (OSM) Permissive Traffic Mode Policy
-
-The OSM Permissive Traffic Policy mode is a mode where the [SMI](https://smi-spec.io/) traffic policy enforcement is bypassed. In this mode, OSM automatically discovers services that are a part of the service mesh and programs traffic policy rules on each Envoy proxy sidecar to be able to communicate with these services.
-
-To verify the current permissive traffic mode of OSM for your cluster, run the following command:
-
-```azurecli-interactive
-kubectl get configmap -n kube-system osm-config -o json | jq '.data'
-```
-
-Output of the OSM configmap should look like the following:
-
-```Output
-{
- "egress": "true",
- "enable_debug_server": "true",
- "envoy_log_level": "error",
- "permissive_traffic_policy_mode": "true",
- "prometheus_scraping": "false",
- "service_cert_validity_duration": "24h",
- "use_https_ingress": "false"
-}
-```
-
-If the **permissive_traffic_policy_mode** is configured to **true**, you can safely onboard your namespaces without any disruption to your service-to-service communications. If the **permissive_traffic_policy_mode** is configured to **false**, you will need to ensure you have the correct [SMI](https://smi-spec.io/) traffic access policy manifests deployed, and that you have a service account representing each service deployed in the namespace. Please follow the guidance for [Onboard existing deployed applications with Open Service Mesh (OSM) Permissive Traffic Policy configured as False](#onboard-existing-deployed-applications-with-open-service-mesh-osm-permissive-traffic-policy-configured-as-false).
-
-### Onboard existing deployed applications with Open Service Mesh (OSM) Permissive Traffic Policy configured as True
-
-The first thing we'll do is add the deployed application namespace(s) to OSM to manage.
-
-```azurecli-interactive
-osm namespace add bookstore
-```
-
-You should see the following output:
-
-```Output
-Namespace [bookstore] successfully added to mesh [osm]
-```
-
-Next we will take a look at the current pod deployment in the namespace. Run the following command to view the pods in the designated namespace.
-
-```azurecli-interactive
-kubectl get pod -n bookbuyer
-```
-
-You should see the following similar output:
-
-```Output
-NAME READY STATUS RESTARTS AGE
-bookbuyer-78666dcff8-wh6wl 1/1 Running 0 43s
-```
-
-Notice the **READY** column showing **1/1**, meaning that the application pod has only one container. Next we will need to restart your application deployments so that OSM can inject the Envoy sidecar proxy container into your application pod. Let's get a list of deployments in the namespace.
-
-```azurecli-interactive
-kubectl get deployment -n bookbuyer
-```
-
-You should see the following output:
-
-```Output
-NAME READY UP-TO-DATE AVAILABLE AGE
-bookbuyer 1/1 1 1 23h
-```
-
-Now we will restart the deployment to inject the Envoy sidecar proxy container into your application pod. Run the following command.
-
-```azurecli-interactive
-kubectl rollout restart deployment bookbuyer -n bookbuyer
-```
-
-You should see the following output:
-
-```Output
-deployment.apps/bookbuyer restarted
-```
-
-If we take a look at the pods in the namespace again:
-
-```azurecli-interactive
-kubectl get pod -n bookbuyer
-```
-
-You will notice that the **READY** column is now showing **2/2** containers ready for your pod. The second container is the Envoy sidecar proxy.
-
-```Output
-NAME READY STATUS RESTARTS AGE
-bookbuyer-84446dd5bd-j4tlr 2/2 Running 0 3m30s
-```
-
-We can further inspect the pod to view the Envoy proxy by running the describe command to view the configuration.
-
-```azurecli-interactive
-kubectl describe pod bookbuyer-84446dd5bd-j4tlr -n bookbuyer
-```
-
-```Output
-Containers:
- bookbuyer:
- Container ID: containerd://b7503b866f915711002292ea53970bd994e788e33fb718f1c4f8f12cd4a88198
- Image: openservicemesh/bookbuyer:v0.8.0
- Image ID: docker.io/openservicemesh/bookbuyer@sha256:813874bd2dc9c5a259b9657995348cf0822b905e29c4e86f21fdefa0ef21dcee
- Port: <none>
- Host Port: <none>
- Command:
- /bookbuyer
- State: Running
- Started: Tue, 23 Mar 2021 10:52:53 -0400
- Ready: True
- Restart Count: 0
- Environment:
- BOOKSTORE_NAMESPACE: bookstore
- BOOKSTORE_SVC: bookstore
- Mounts:
- /var/run/secrets/kubernetes.io/serviceaccount from bookbuyer-token-zft2r (ro)
- envoy:
- Container ID: containerd://f5f1cb5db8d5304e23cc984eb08146ea162a3e82d4262c4472c28d5579c25e10
- Image: envoyproxy/envoy-alpine:v1.17.1
- Image ID: docker.io/envoyproxy/envoy-alpine@sha256:511e76b9b73fccd98af2fbfb75c34833343d1999469229fdfb191abd2bbe3dfb
- Ports: 15000/TCP, 15003/TCP, 15010/TCP
- Host Ports: 0/TCP, 0/TCP, 0/TCP
-```
-
-Verify your application is still functional after the Envoy sidecar proxy injection.
-
-### Onboard existing deployed applications with Open Service Mesh (OSM) Permissive Traffic Policy configured as False
-
-When the OSM configuration for the permissive traffic policy is set to `false`, OSM will require explicit [SMI](https://smi-spec.io/) traffic access policies deployed for service-to-service communication to happen within your cluster. Currently, OSM also uses Kubernetes service accounts as part of authorizing service-to-service communications. To ensure your existing deployed applications will communicate when managed by the OSM mesh, we need to verify the existence of a service account to use, update the application deployment with the service account information, and apply the [SMI](https://smi-spec.io/) traffic access policies.
-
-#### Verify Kubernetes Service Accounts
-
-Verify whether you have a Kubernetes service account in the namespace your application is deployed to.
-
-```azurecli-interactive
-kubectl get serviceaccounts -n bookbuyer
-```
-
-In the following output, there is a service account named `bookbuyer` in the bookbuyer namespace.
-
-```Output
-NAME SECRETS AGE
-bookbuyer 1 25h
-default 1 25h
-```
-
-If you do not have a service account listed other than the default account, you will need to create one for your application. Use the following command as an example to create a service account in the application's deployed namespace.
-
-```azurecli-interactive
-kubectl create serviceaccount myserviceaccount -n bookbuyer
-```
-
-```Output
-serviceaccount/myserviceaccount created
-```
-
-#### View your application's current deployment specification
-
-If you had to create a service account from the earlier section, chances are your application deployment is not configured with a specific `serviceAccountName` in the deployment spec. We can view your application's deployment spec with the following commands:
-
-```azurecli-interactive
-kubectl get deployment -n bookbuyer
-```
-
-A list of deployments will be listed in the output.
-
-```Output
-NAME READY UP-TO-DATE AVAILABLE AGE
-bookbuyer 1/1 1 1 25h
-```
-
-We will now describe the deployment as a check to see if there is a service account listed in the Pod Template section.
-
-```azurecli-interactive
-kubectl describe deployment bookbuyer -n bookbuyer
-```
-
-In this particular deployment you can see that there is a service account associated with the deployment listed under the Pod Template section. This deployment is using the service account bookbuyer. If you do not see the **Service Account:** property, your deployment is not configured to use a service account.
-
-```Output
-Pod Template:
- Labels: app=bookbuyer
- version=v1
- Annotations: kubectl.kubernetes.io/restartedAt: 2021-03-23T10:52:49-04:00
- Service Account: bookbuyer
- Containers:
- bookbuyer:
- Image: openservicemesh/bookbuyer:v0.8.0
-
-```
-
-There are several techniques to update your deployment to add a Kubernetes service account. Review the Kubernetes documentation on [Updating a Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment) inline, or [Configure Service Accounts for Pods](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/). Once you have updated your deployment spec with the service account, redeploy it to the cluster (`kubectl apply -f your-deployment.yaml`).
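-
-As one minimal sketch (assuming the `bookbuyer` deployment and a service account named `myserviceaccount`, as created above), you can patch the pod template in place, which also triggers a new rollout:
-
-```azurecli-interactive
-# Set serviceAccountName in the deployment's pod template
-kubectl patch deployment bookbuyer -n bookbuyer \
-  --type merge \
-  --patch '{"spec":{"template":{"spec":{"serviceAccountName":"myserviceaccount"}}}}'
-```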
-
-#### Deploy the necessary Service Mesh Interface (SMI) Policies
-
-The last step to allowing authorized traffic to flow in the mesh is to deploy the necessary [SMI](https://smi-spec.io/) traffic access policies for your application. The amount of configuration you can achieve with [SMI](https://smi-spec.io/) traffic access policies is beyond the scope of this walkthrough, but we will detail some of the common components of the specification and show how to configure both a simple TrafficTarget and HTTPRouteGroup policy to enable service-to-service communication for your application.
-
-The [SMI](https://smi-spec.io/) [**Traffic Access Control**](https://github.com/servicemeshinterface/smi-spec/blob/main/apis/traffic-access/v1alpha3/traffic-access.md#traffic-access-control) specification allows users to define the access control policy for their applications. We will focus on the **TrafficTarget** and **HTTPRouteGroup** API resources.
-
-The TrafficTarget resource consists of three main configuration settings: destination, rules, and sources. An example TrafficTarget is shown below.
-
-```TrafficTarget Example spec
-apiVersion: access.smi-spec.io/v1alpha3
-kind: TrafficTarget
-metadata:
- name: bookbuyer-access-bookstore-v1
- namespace: bookstore
-spec:
- destination:
- kind: ServiceAccount
- name: bookstore
- namespace: bookstore
- rules:
- - kind: HTTPRouteGroup
- name: bookstore-service-routes
- matches:
- - buy-a-book
- - books-bought
- sources:
- - kind: ServiceAccount
- name: bookbuyer
- namespace: bookbuyer
-```
-
-In the above TrafficTarget spec, the `destination` denotes the service account that is configured for the destination service. Remember, the service account that was added to the deployment earlier will be used to authorize access to the deployment it is attached to. The `rules` section, in this particular example, defines the type of HTTP traffic that is allowed over the connection. You can configure fine-grained regex patterns for the HTTP headers to be specific about what traffic is allowed via HTTP. The `sources` section identifies the service originating the communications. This spec reads as: bookbuyer needs to communicate to bookstore.
-
-The HTTPRouteGroup resource consists of one match, or an array of matches, of HTTP header information and is a requirement for the TrafficTarget spec. In the example below, you can see that the HTTPRouteGroup authorizes three HTTP actions: two GET and one POST.
-
-```HTTPRouteGroup Example Spec
-apiVersion: specs.smi-spec.io/v1alpha4
-kind: HTTPRouteGroup
-metadata:
- name: bookstore-service-routes
- namespace: bookstore
-spec:
- matches:
- - name: books-bought
- pathRegex: /books-bought
- methods:
- - GET
- headers:
- - "user-agent": ".*-http-client/*.*"
- - "client-app": "bookbuyer"
- - name: buy-a-book
- pathRegex: ".*a-book.*new"
- methods:
- - GET
- - name: update-books-bought
- pathRegex: /update-books-bought
- methods:
- - POST
-```
-
-Because the TrafficTarget spec requires a rule, if you are not familiar with the type of HTTP traffic your front-end application makes to other tiers of the application, you can create the equivalent of an allow-all rule by using the following HTTPRouteGroup spec.
-
-```HTTPRouteGroup Allow All Example
-apiVersion: specs.smi-spec.io/v1alpha4
-kind: HTTPRouteGroup
-metadata:
- name: allow-all
- namespace: yournamespace
-spec:
- matches:
- - name: allow-all
- pathRegex: '.*'
- methods: ["GET","PUT","POST","DELETE","PATCH"]
-```
-
-Once you configure your TrafficTarget and HTTPRouteGroup specs, you can put them together as one YAML file (separated by `---`) and deploy it. Below is the bookstore example configuration.
-
-```Bookstore Example TrafficTarget and HTTPRouteGroup configuration
-kubectl apply -f - <<EOF
-
-apiVersion: access.smi-spec.io/v1alpha3
-kind: TrafficTarget
-metadata:
- name: bookbuyer-access-bookstore-v1
- namespace: bookstore
-spec:
- destination:
- kind: ServiceAccount
- name: bookstore
- namespace: bookstore
- rules:
- - kind: HTTPRouteGroup
- name: bookstore-service-routes
- matches:
- - buy-a-book
- - books-bought
- sources:
- - kind: ServiceAccount
- name: bookbuyer
- namespace: bookbuyer
-
----
-
-apiVersion: specs.smi-spec.io/v1alpha4
-kind: HTTPRouteGroup
-metadata:
- name: bookstore-service-routes
- namespace: bookstore
-spec:
- matches:
- - name: books-bought
- pathRegex: /books-bought
- methods:
- - GET
- headers:
- - "user-agent": ".*-http-client/*.*"
- - "client-app": "bookbuyer"
- - name: buy-a-book
- pathRegex: ".*a-book.*new"
- methods:
- - GET
- - name: update-books-bought
- pathRegex: /update-books-bought
- methods:
- - POST
-EOF
-```
-
-Visit the [SMI](https://smi-spec.io/) site for more detailed information on the specification.
-
-### Manage the application's namespace with OSM
-
-Next we will configure OSM to manage the namespace and restart the deployments to get the Envoy sidecar proxy injected with the application.
-
-Run the following command to configure the `azure-vote` namespace to be managed by OSM.
-
-```azurecli-interactive
-osm namespace add azure-vote
-```
-
-```Output
-Namespace [azure-vote] successfully added to mesh [osm]
-```
-
-Next restart both the `azure-vote-front` and `azure-vote-back` deployments with the following commands.
-
-```azurecli-interactive
-kubectl rollout restart deployment azure-vote-front -n azure-vote
-kubectl rollout restart deployment azure-vote-back -n azure-vote
-```
-
-```Output
-deployment.apps/azure-vote-front restarted
-deployment.apps/azure-vote-back restarted
-```
-
-If we view the pods for the `azure-vote` namespace, we will see the **READY** column for both the `azure-vote-front` and `azure-vote-back` pods showing 2/2, meaning the Envoy sidecar proxy has been injected alongside each application container.
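-
-For example:
-
-```azurecli-interactive
-kubectl get pod -n azure-vote
-```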
-
-## Tutorial: Deploy an application managed by Open Service Mesh (OSM) with NGINX ingress
-
-Open Service Mesh (OSM) is a lightweight, extensible, Cloud Native service mesh that allows users to uniformly manage, secure, and get out-of-the-box observability features for highly dynamic microservice environments.
-
-In this tutorial, you will:
-
-> [!div class="checklist"]
->
-> - View the current OSM cluster configuration
-> - Create the namespace(s) for OSM to manage deployed applications in the namespace(s)
-> - Onboard the namespaces to be managed by OSM
-> - Deploy the sample application
-> - Verify the application running inside the AKS cluster
-> - Create an NGINX ingress controller used for the application
-> - Expose a service via the NGINX ingress controller to the internet
-
-### Before you begin
-
-The steps detailed in this article assume that you've created an AKS cluster (Kubernetes version `1.19` or later, with Kubernetes RBAC enabled), have established a `kubectl` connection with the cluster (if you need help with any of these items, see the [AKS quickstart](./kubernetes-walkthrough.md)), and have installed the AKS OSM add-on.
-
-You must have the following resources installed:
-- The Azure CLI, version 2.20.0 or later
-- The `aks-preview` extension version 0.5.5 or later
-- OSM version v0.8.0 or later
-- apt-get install jq
-
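-To quickly confirm these tools are present, you can run a few version checks (a sketch; adjust for your environment):
-
-```azurecli-interactive
-az version --query '"azure-cli"' -o tsv                        # Azure CLI version
-az extension show --name aks-preview --query version -o tsv    # aks-preview extension version
-osm version                                                    # OSM CLI version
-jq --version                                                   # jq version
-```
-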
-### View and verify the current OSM cluster configuration
-
-Once the OSM add-on for AKS has been enabled on the AKS cluster, you can view the current configuration parameters in the osm-config Kubernetes ConfigMap. Run the following command to view the ConfigMap properties:
-
-```azurecli-interactive
-kubectl get configmap -n kube-system osm-config -o json | jq '.data'
-```
-
-Output shows the current OSM configuration for the cluster.
-
-```json
-{
- "egress": "true",
- "enable_debug_server": "true",
- "enable_privileged_init_container": "false",
- "envoy_log_level": "error",
- "outbound_ip_range_exclusion_list": "169.254.169.254,168.63.129.16,20.193.57.43",
- "permissive_traffic_policy_mode": "false",
- "prometheus_scraping": "false",
- "service_cert_validity_duration": "24h",
- "use_https_ingress": "false"
-}
-```
-
-Note the **permissive_traffic_policy_mode** setting. Permissive traffic policy mode in OSM is a mode where [SMI](https://smi-spec.io/) traffic policy enforcement is bypassed. In this mode, OSM automatically discovers services that are a part of the service mesh and programs traffic policy rules on each Envoy proxy sidecar to be able to communicate with these services.
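-
-If you need to change this setting, you can patch the `osm-config` ConfigMap directly (a sketch that mirrors the patch command used later in this article for `prometheus_scraping`):
-
-```azurecli-interactive
-kubectl patch ConfigMap -n kube-system osm-config --type merge --patch '{"data":{"permissive_traffic_policy_mode":"true"}}'
-```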
-
-### Create namespaces for the application
-
-In this tutorial we will be using the OSM bookstore application that has the following application components:
-- bookbuyer
-- bookthief
-- bookstore
-- bookwarehouse
-
-Create namespaces for each of these application components.
-
-```azurecli-interactive
-for i in bookstore bookbuyer bookthief bookwarehouse; do kubectl create ns $i; done
-```
-
-You should see the following output:
-
-```Output
-namespace/bookstore created
-namespace/bookbuyer created
-namespace/bookthief created
-namespace/bookwarehouse created
-```
-
-### Onboard the namespaces to be managed by OSM
-
-Adding the namespaces to the OSM mesh will allow the OSM controller to automatically inject the Envoy sidecar proxy containers into your application pods. Run the following command to onboard the OSM bookstore application namespaces.
-
-```azurecli-interactive
-osm namespace add bookstore bookbuyer bookthief bookwarehouse
-```
-
-You should see the following output:
-
-```Output
-Namespace [bookstore] successfully added to mesh [osm]
-Namespace [bookbuyer] successfully added to mesh [osm]
-Namespace [bookthief] successfully added to mesh [osm]
-Namespace [bookwarehouse] successfully added to mesh [osm]
-```
-
-### Deploy the Bookstore application to the AKS cluster
-
-```azurecli-interactive
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v0.8/docs/example/manifests/apps/bookbuyer.yaml
-```
-
-```azurecli-interactive
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v0.8/docs/example/manifests/apps/bookthief.yaml
-```
-
-```azurecli-interactive
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v0.8/docs/example/manifests/apps/bookstore.yaml
-```
-
-```azurecli-interactive
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v0.8/docs/example/manifests/apps/bookwarehouse.yaml
-```
-
-All of the deployment outputs are summarized below.
-
-```Output
-serviceaccount/bookbuyer created
-service/bookbuyer created
-deployment.apps/bookbuyer created
-
-serviceaccount/bookthief created
-service/bookthief created
-deployment.apps/bookthief created
-
-service/bookstore created
-serviceaccount/bookstore created
-deployment.apps/bookstore created
-
-serviceaccount/bookwarehouse created
-service/bookwarehouse created
-deployment.apps/bookwarehouse created
-```
-
-### Update the Bookbuyer Service
-
-Update the bookbuyer service to the correct inbound port configuration with the following service manifest.
-
-```azurecli-interactive
-kubectl apply -f - <<EOF
-apiVersion: v1
-kind: Service
-metadata:
- name: bookbuyer
- namespace: bookbuyer
- labels:
- app: bookbuyer
-spec:
- ports:
- - port: 14001
- name: inbound-port
- selector:
- app: bookbuyer
-EOF
-```
-
-### Verify the Bookstore application running inside the AKS cluster
-
-As of now we have deployed the bookstore multi-container application, but it is only accessible from within the AKS cluster. Later we will add the NGINX ingress controller to expose the application outside the AKS cluster. To verify that the application is running inside the cluster, we will use a port forward to view the bookbuyer component UI.
-
-First, let's get the bookbuyer pod's name:
-
-```azurecli-interactive
-kubectl get pod -n bookbuyer
-```
-
-You should see output similar to the following. Your bookbuyer pod will have a unique name appended.
-
-```Output
-NAME READY STATUS RESTARTS AGE
-bookbuyer-7676c7fcfb-mtnrz 2/2 Running 0 7m8s
-```
-
-Once we have the pod's name, we can use the port-forward command to set up a tunnel from our local system to the application inside the AKS cluster. Run the following command to set up the port forward for local system port 8080. Again, use your specific bookbuyer pod name.
-
-```azurecli-interactive
-kubectl port-forward bookbuyer-7676c7fcfb-mtnrz -n bookbuyer 8080:14001
-```
-
-You should see output similar to this.
-
-```Output
-Forwarding from 127.0.0.1:8080 -> 14001
-Forwarding from [::1]:8080 -> 14001
-```
-
-While the port forwarding session is in place, navigate to `http://localhost:8080` from a browser. You should now be able to see the bookbuyer application UI in the browser, similar to the image below.
-
-![OSM bookbuyer app for NGINX UI image](./media/aks-osm-addon/osm-agic-bookbuyer-img.png)
-
-### Create an NGINX ingress controller in Azure Kubernetes Service (AKS)
-
-An ingress controller is a piece of software that provides reverse proxy, configurable traffic routing, and TLS termination for Kubernetes services. Kubernetes ingress resources are used to configure the ingress rules and routes for individual Kubernetes services. Using an ingress controller and ingress rules, a single IP address can be used to route traffic to multiple services in a Kubernetes cluster.
-
-We will utilize the ingress controller to expose the application managed by OSM to the internet. To create the ingress controller, use Helm to install nginx-ingress. For added redundancy, you can deploy multiple replicas of the NGINX ingress controller by using the `--set controller.replicaCount` parameter (the example below uses a single replica). To fully benefit from running replicas of the ingress controller, make sure there's more than one node in your AKS cluster.
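-
-You can confirm how many nodes are available in your cluster with:
-
-```azurecli-interactive
-kubectl get nodes
-```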
-
-The ingress controller also needs to be scheduled on a Linux node. Windows Server nodes shouldn't run the ingress controller. A node selector is specified using the `--set nodeSelector` parameter to tell the Kubernetes scheduler to run the NGINX ingress controller on a Linux-based node.
-
-> [!TIP]
-> The following example creates a Kubernetes namespace for the ingress resources named _ingress-basic_. Specify a namespace for your own environment as needed.
-
-```azurecli-interactive
-# Create a namespace for your ingress resources
-kubectl create namespace ingress-basic
-
-# Add the ingress-nginx repository
-helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
-
-# Update the helm repo(s)
-helm repo update
-
-# Use Helm to deploy an NGINX ingress controller in the ingress-basic namespace
-helm install nginx-ingress ingress-nginx/ingress-nginx \
- --namespace ingress-basic \
- --set controller.replicaCount=1 \
- --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
- --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
- --set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux
-```
-
-When the Kubernetes load balancer service is created for the NGINX ingress controller, a dynamic public IP address is assigned, as shown in the following example output:
-
-```Output
-$ kubectl --namespace ingress-basic get services -o wide -w nginx-ingress-ingress-nginx-controller
-
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
-nginx-ingress-ingress-nginx-controller LoadBalancer 10.0.74.133 EXTERNAL_IP 80:32486/TCP,443:30953/TCP 44s app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx
-```
-
-No ingress rules have been created yet, so the NGINX ingress controller's default 404 page is displayed if you browse to the public IP address. Ingress rules are configured in the following steps.
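-
-If you want to confirm this from the command line (a sketch; substitute the EXTERNAL-IP value shown for your ingress controller service), a request without a matching host rule should return a 404:
-
-```azurecli-interactive
-curl -I http://EXTERNAL_IP/
-```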
-
-### Expose the bookbuyer service to the internet
-
-```azurecli-interactive
-kubectl apply -f - <<EOF
-
-apiVersion: extensions/v1beta1
-kind: Ingress
-metadata:
- name: bookbuyer-ingress
- namespace: bookbuyer
- annotations:
- kubernetes.io/ingress.class: nginx
-
-spec:
-
- rules:
- - host: bookbuyer.contoso.com
- http:
- paths:
- - path: /
- backend:
- serviceName: bookbuyer
- servicePort: 14001
-
- backend:
- serviceName: bookbuyer
- servicePort: 14001
-EOF
-```
-
-You should see the following output:
-
-```Output
-Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
-ingress.extensions/bookbuyer-ingress created
-```
-
-### View the NGINX logs
-
-```azurecli-interactive
-POD=$(kubectl get pods -n ingress-basic | grep 'nginx-ingress' | awk '{print $1}')
-
-kubectl logs $POD -n ingress-basic -f
-```
-
-Output shows the NGINX ingress controller status when ingress rule has been applied successfully:
-
-```Output
-I0321 <date> 6 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-basic", Name:"nginx-ingress-ingress-nginx-controller-54cf6c8bf4-jdvrw", UID:"3ebbe5e5-50ef-481d-954d-4b82a499ebe1", APIVersion:"v1", ResourceVersion:"3272", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
-I0321 <date> 6 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"bookbuyer", Name:"bookbuyer-ingress", UID:"e1018efc-8116-493c-9999-294b4566819e", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"5460", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
-I0321 <date> 6 controller.go:146] "Configuration changes detected, backend reload required"
-I0321 <date> 6 controller.go:163] "Backend successfully reloaded"
-I0321 <date> 6 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-basic", Name:"nginx-ingress-ingress-nginx-controller-54cf6c8bf4-jdvrw", UID:"3ebbe5e5-50ef-481d-954d-4b82a499ebe1", APIVersion:"v1", ResourceVersion:"3272", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
-```
-
-### View the NGINX services and bookbuyer service externally
-
-```azurecli-interactive
-kubectl get services -n ingress-basic
-```
-
-```Output
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-nginx-ingress-ingress-nginx-controller LoadBalancer 10.0.100.23 20.193.1.74 80:31742/TCP,443:32683/TCP 4m15s
-nginx-ingress-ingress-nginx-controller-admission ClusterIP 10.0.163.98 <none> 443/TCP 4m15s
-```
-
-Since the host name in the ingress manifest is a pseudo name used for testing, the DNS name will not be available on the internet. We can alternatively use the curl program and pass the hostname header with the request to the NGINX public IP address to receive a 200 response, successfully connecting us to the bookbuyer service.
-
-```azurecli-interactive
-curl -H 'Host: bookbuyer.contoso.com' http://EXTERNAL-IP/
-```
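-
-If you prefer not to copy the IP address by hand, you can capture it from the ingress controller service first (a sketch, assuming the service name and namespace shown above); the output below is the same either way:
-
-```azurecli-interactive
-EXTERNAL_IP=$(kubectl get svc nginx-ingress-ingress-nginx-controller -n ingress-basic -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
-curl -H 'Host: bookbuyer.contoso.com' http://$EXTERNAL_IP/
-```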
-
-You should see the following output:
-
-```Output
-<!doctype html>
-<html itemscope="" itemtype="http://schema.org/WebPage" lang="en">
- <head>
- <meta content="Bookbuyer" name="description">
- <meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
- <title>Bookbuyer</title>
- <style>
- #navbar {
- width: 100%;
- height: 50px;
- display: table;
- border-spacing: 0;
- white-space: nowrap;
- line-height: normal;
- background-color: #0078D4;
- background-position: left top;
- background-repeat-x: repeat;
- background-image: none;
- color: white;
- font: 2.2em "Fira Sans", sans-serif;
- }
- #main {
- padding: 10pt 10pt 10pt 10pt;
- font: 1.8em "Fira Sans", sans-serif;
- }
- li {
- padding: 10pt 10pt 10pt 10pt;
- font: 1.2em "Consolas", sans-serif;
- }
- </style>
- <script>
- setTimeout(function(){window.location.reload(1);}, 1500);
- </script>
- </head>
- <body bgcolor="#fff">
- <div id="navbar">
- &#128214; Bookbuyer
- </div>
- <div id="main">
- <ul>
- <li>Total books bought: <strong>1833</strong>
- <ul>
- <li>from bookstore V1: <strong>277</strong>
- <li>from bookstore V2: <strong>1556</strong>
- </ul>
- </li>
- </ul>
- </div>
-
- <br/><br/><br/><br/>
- <br/><br/><br/><br/>
- <br/><br/><br/><br/>
-
- Current Time: <strong>Fri, 26 Mar 2021 15:02:53 UTC</strong>
- </body>
-</html>
-```
-
-## Tutorial: Deploy an application managed by Open Service Mesh (OSM) using Azure Application Gateway ingress AKS add-on
-
-Open Service Mesh (OSM) is a lightweight, extensible, Cloud Native service mesh that allows users to uniformly manage, secure, and get out-of-the-box observability features for highly dynamic microservice environments.
-
-In this tutorial, you will:
-
-> [!div class="checklist"]
->
-> - View the current OSM cluster configuration
-> - Create the namespace(s) for OSM to manage deployed applications in the namespace(s)
-> - Onboard the namespaces to be managed by OSM
-> - Deploy the sample application
-> - Verify the application running inside the AKS cluster
-> - Create an Azure Application Gateway to be used as the ingress controller for the application
-> - Expose a service via the Azure Application Gateway ingress to the internet
-
-### Before you begin
-
-The steps detailed in this article assume that you've created an AKS cluster (Kubernetes version `1.19` or later, with Kubernetes RBAC enabled), have established a `kubectl` connection with the cluster (if you need help with any of these items, see the [AKS quickstart](./kubernetes-walkthrough.md)), have installed the AKS OSM add-on, and will be creating a new Azure Application Gateway for ingress.
-
-You must have the following resources installed:
-- The Azure CLI, version 2.20.0 or later
-- The `aks-preview` extension version 0.5.5 or later
-- AKS cluster version 1.19+ using Azure CNI networking (attached to an Azure virtual network)
-- OSM version v0.8.0 or later
-- apt-get install jq
-
-### View and verify the current OSM cluster configuration
-
-Once the OSM add-on for AKS has been enabled on the AKS cluster, you can view the current configuration parameters in the osm-config Kubernetes ConfigMap. Run the following command to view the ConfigMap properties:
-
-```azurecli-interactive
-kubectl get configmap -n kube-system osm-config -o json | jq '.data'
-```
-
-Output shows the current OSM configuration for the cluster.
-
-```json
-{
- "egress": "true",
- "enable_debug_server": "true",
- "enable_privileged_init_container": "false",
- "envoy_log_level": "error",
- "outbound_ip_range_exclusion_list": "169.254.169.254,168.63.129.16,20.193.57.43",
- "permissive_traffic_policy_mode": "false",
- "prometheus_scraping": "false",
- "service_cert_validity_duration": "24h",
- "use_https_ingress": "false"
-}
-```
-
-Note the **permissive_traffic_policy_mode** setting. Permissive traffic policy mode in OSM is a mode where [SMI](https://smi-spec.io/) traffic policy enforcement is bypassed. In this mode, OSM automatically discovers services that are a part of the service mesh and programs traffic policy rules on each Envoy proxy sidecar to be able to communicate with these services.
-
-### Create namespaces for the application
-
-In this tutorial we will be using the OSM bookstore application that has the following application components:
-- bookbuyer
-- bookthief
-- bookstore
-- bookwarehouse
-
-Create namespaces for each of these application components.
-
-```azurecli-interactive
-for i in bookstore bookbuyer bookthief bookwarehouse; do kubectl create ns $i; done
-```
-
-You should see the following output:
-
-```Output
-namespace/bookstore created
-namespace/bookbuyer created
-namespace/bookthief created
-namespace/bookwarehouse created
-```
-
-### Onboard the namespaces to be managed by OSM
-
-Adding the namespaces to the OSM mesh allows the OSM controller to automatically inject the Envoy sidecar proxy containers into your application pods. Run the following command to onboard the OSM bookstore application namespaces.
-
-```azurecli-interactive
-osm namespace add bookstore bookbuyer bookthief bookwarehouse
-```
-
-You should see the following output:
-
-```Output
-Namespace [bookstore] successfully added to mesh [osm]
-Namespace [bookbuyer] successfully added to mesh [osm]
-Namespace [bookthief] successfully added to mesh [osm]
-Namespace [bookwarehouse] successfully added to mesh [osm]
-```
-
-### Deploy the Bookstore application to the AKS cluster
-
-```azurecli-interactive
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v0.8/docs/example/manifests/apps/bookbuyer.yaml
-```
-
-```azurecli-interactive
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v0.8/docs/example/manifests/apps/bookthief.yaml
-```
-
-```azurecli-interactive
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v0.8/docs/example/manifests/apps/bookstore.yaml
-```
-
-```azurecli-interactive
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v0.8/docs/example/manifests/apps/bookwarehouse.yaml
-```
-
-All of the deployment outputs are summarized below.
-
-```Output
-serviceaccount/bookbuyer created
-service/bookbuyer created
-deployment.apps/bookbuyer created
-
-serviceaccount/bookthief created
-service/bookthief created
-deployment.apps/bookthief created
-
-service/bookstore created
-serviceaccount/bookstore created
-deployment.apps/bookstore created
-
-serviceaccount/bookwarehouse created
-service/bookwarehouse created
-deployment.apps/bookwarehouse created
-```
-
-### Update the Bookbuyer Service
-
-Update the bookbuyer service to the correct inbound port configuration with the following service manifest.
-
-```azurecli-interactive
-kubectl apply -f - <<EOF
-apiVersion: v1
-kind: Service
-metadata:
- name: bookbuyer
- namespace: bookbuyer
- labels:
- app: bookbuyer
-spec:
- ports:
- - port: 14001
- name: inbound-port
- selector:
- app: bookbuyer
-EOF
-```
-
-### Verify the Bookstore application running inside the AKS cluster
-
-As of now we have deployed the bookstore multi-container application, but it is only accessible from within the AKS cluster. Later we will add the Azure Application Gateway ingress controller to expose the application outside the AKS cluster. To verify that the application is running inside the cluster, we will use a port forward to view the bookbuyer component UI.
-
-First, let's get the bookbuyer pod's name:
-
-```azurecli-interactive
-kubectl get pod -n bookbuyer
-```
-
-You should see output similar to the following. Your bookbuyer pod will have a unique name appended.
-
-```Output
-NAME READY STATUS RESTARTS AGE
-bookbuyer-7676c7fcfb-mtnrz 2/2 Running 0 7m8s
-```
-
-Once we have the pod's name, we can now use the port-forward command to set up a tunnel from our local system to the application inside the AKS cluster. Run the following command to set up the port forward for the local system port 8080. Again use your specific bookbuyer pod name.
-
-```azurecli-interactive
-kubectl port-forward bookbuyer-7676c7fcfb-mtnrz -n bookbuyer 8080:14001
-```
-
-You should see output similar to this.
-
-```Output
-Forwarding from 127.0.0.1:8080 -> 14001
-Forwarding from [::1]:8080 -> 14001
-```
-
-While the port forwarding session is in place, navigate to `http://localhost:8080` from a browser. You should now be able to see the bookbuyer application UI in the browser, similar to the image below.
-
-![OSM bookbuyer app for App Gateway UI image](./media/aks-osm-addon/osm-agic-bookbuyer-img.png)
-
-### Create an Azure Application Gateway to expose the bookbuyer application outside the AKS cluster
-
-> [!NOTE]
-> The following directions will create a new instance of the Azure Application Gateway to be used for ingress. If you have an existing Azure Application Gateway you wish to use, skip to the section for enabling the Application Gateway Ingress Controller add-on.
-
-#### Deploy a new Application Gateway
-
-> [!NOTE]
-> We are referencing existing documentation for enabling the Application Gateway Ingress Controller add-on for an existing AKS cluster. Some modifications have been made to suit the OSM materials. More detailed documentation on the subject can be found [here](../application-gateway/tutorial-ingress-controller-add-on-existing.md).
-
-You'll now deploy a new Application Gateway to simulate having an existing Application Gateway that you want to use to load balance traffic to your AKS cluster, _myCluster_. The name of the Application Gateway will be _myApplicationGateway_. You'll first need to create a public IP resource named _myPublicIp_ and a new virtual network named _myVnet_ with address space 11.0.0.0/8, containing a subnet named _mySubnet_ with address space 11.1.0.0/16. You'll then deploy the Application Gateway into _mySubnet_ by using _myPublicIp_.
-
-When using an AKS cluster and Application Gateway in separate virtual networks, the address spaces of the two virtual networks must not overlap. The default address space that an AKS cluster deploys in is 10.0.0.0/8, so we set the Application Gateway virtual network address prefix to 11.0.0.0/8.
-
-```azurecli-interactive
-az group create --name myResourceGroup --location eastus2
-az network public-ip create -n myPublicIp -g myResourceGroup --allocation-method Static --sku Standard
-az network vnet create -n myVnet -g myResourceGroup --address-prefix 11.0.0.0/8 --subnet-name mySubnet --subnet-prefix 11.1.0.0/16
-az network application-gateway create -n myApplicationGateway -l eastus2 -g myResourceGroup --sku Standard_v2 --public-ip-address myPublicIp --vnet-name myVnet --subnet mySubnet
-```
-
-> [!NOTE]
-> Application Gateway Ingress Controller (AGIC) add-on **only** supports Application Gateway v2 SKUs (Standard and WAF), and **not** the Application Gateway v1 SKUs.
-
-#### Enable the AGIC add-on for an existing AKS cluster through Azure CLI
-
-If you'd like to continue using the Azure CLI, you can enable the AGIC add-on in the AKS cluster you created, _myCluster_, and specify the AGIC add-on to use the existing Application Gateway you created, _myApplicationGateway_.
-
-```azurecli-interactive
-appgwId=$(az network application-gateway show -n myApplicationGateway -g myResourceGroup -o tsv --query "id")
-az aks enable-addons -n myCluster -g myResourceGroup -a ingress-appgw --appgw-id $appgwId
-```
-
-You can verify that the Azure Application Gateway AKS add-on has been enabled with the following command.
-
-```azurecli-interactive
-az aks list -g myResourceGroup -o json | jq -r .[].addonProfiles.ingressApplicationGateway.enabled
-```
-
-This command should show the output as `true`.
-
-#### Peer the two virtual networks together
-
-Since we deployed the AKS cluster in its own virtual network and the Application Gateway in another virtual network, you'll need to peer the two virtual networks together in order for traffic to flow from the Application Gateway to the pods in the cluster. Peering the two virtual networks requires running the Azure CLI command two separate times, to ensure that the connection is bi-directional. The first command will create a peering connection from the Application Gateway virtual network to the AKS virtual network; the second command will create a peering connection in the other direction.
-
-```azurecli-interactive
-nodeResourceGroup=$(az aks show -n myCluster -g myResourceGroup -o tsv --query "nodeResourceGroup")
-aksVnetName=$(az network vnet list -g $nodeResourceGroup -o tsv --query "[0].name")
-
-aksVnetId=$(az network vnet show -n $aksVnetName -g $nodeResourceGroup -o tsv --query "id")
-az network vnet peering create -n AppGWtoAKSVnetPeering -g myResourceGroup --vnet-name myVnet --remote-vnet $aksVnetId --allow-vnet-access
-
-appGWVnetId=$(az network vnet show -n myVnet -g myResourceGroup -o tsv --query "id")
-az network vnet peering create -n AKStoAppGWVnetPeering -g $nodeResourceGroup --vnet-name $aksVnetName --remote-vnet $appGWVnetId --allow-vnet-access
-```
-
-### Expose the bookbuyer service to the internet
-
-Apply the following ingress manifest to the AKS cluster to expose the bookbuyer service to the internet via the Azure Application Gateway.
-
-```azurecli-interactive
-kubectl apply -f - <<EOF
-
-apiVersion: extensions/v1beta1
-kind: Ingress
-metadata:
- name: bookbuyer-ingress
- namespace: bookbuyer
- annotations:
- kubernetes.io/ingress.class: azure/application-gateway
-
-spec:
-
- rules:
- - host: bookbuyer.contoso.com
- http:
- paths:
- - path: /
- backend:
- serviceName: bookbuyer
- servicePort: 14001
-
- backend:
- serviceName: bookbuyer
- servicePort: 14001
-EOF
-```
-
-You should see the following output:
-
-```Output
-Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
-ingress.extensions/bookbuyer-ingress created
-```
-
-Since the host name in the ingress manifest is a pseudo name used for testing, the DNS name will not be available on the internet. We can alternatively use the curl program and pass the hostname header with the request to the Azure Application Gateway public IP address to receive a 200 response, successfully connecting us to the bookbuyer service.
-
-```azurecli-interactive
-appGWPIP=$(az network public-ip show -g myResourceGroup -n myPublicIp -o tsv --query "ipAddress")
-curl -H 'Host: bookbuyer.contoso.com' http://$appGWPIP/
-```
-
-You should see the following output:
-
-```Output
-<!doctype html>
-<html itemscope="" itemtype="http://schema.org/WebPage" lang="en">
- <head>
- <meta content="Bookbuyer" name="description">
- <meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
- <title>Bookbuyer</title>
- <style>
- #navbar {
- width: 100%;
- height: 50px;
- display: table;
- border-spacing: 0;
- white-space: nowrap;
- line-height: normal;
- background-color: #0078D4;
- background-position: left top;
- background-repeat-x: repeat;
- background-image: none;
- color: white;
- font: 2.2em "Fira Sans", sans-serif;
- }
- #main {
- padding: 10pt 10pt 10pt 10pt;
- font: 1.8em "Fira Sans", sans-serif;
- }
- li {
- padding: 10pt 10pt 10pt 10pt;
- font: 1.2em "Consolas", sans-serif;
- }
- </style>
- <script>
- setTimeout(function(){window.location.reload(1);}, 1500);
- </script>
- </head>
- <body bgcolor="#fff">
- <div id="navbar">
- &#128214; Bookbuyer
- </div>
- <div id="main">
- <ul>
- <li>Total books bought: <strong>5969</strong>
- <ul>
- <li>from bookstore V1: <strong>277</strong>
- <li>from bookstore V2: <strong>5692</strong>
- </ul>
- </li>
- </ul>
- </div>
-
- <br/><br/><br/><br/>
- <br/><br/><br/><br/>
- <br/><br/><br/><br/>
-
- Current Time: <strong>Fri, 26 Mar 2021 16:34:30 UTC</strong>
- </body>
-</html>
-```
-
-### Troubleshooting
-- [AGIC Troubleshooting Documentation](../application-gateway/ingress-controller-troubleshoot.md)
-- [Additional troubleshooting tools are available on AGIC's GitHub repo](https://github.com/Azure/application-gateway-kubernetes-ingress/blob/master/docs/troubleshootings/troubleshooting-installing-a-simple-application.md)
-
-## Open Service Mesh (OSM) Monitoring and Observability using Azure Monitor and Applications Insights
-
-Both Azure Monitor and Azure Application Insights help you maximize the availability and performance of your applications and services by delivering a comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments.
-
-The OSM AKS add-on will have deep integrations into both of these Azure services, and provide a seamless Azure experience for viewing and responding to critical KPIs provided by OSM metrics. For more information on how to enable and configure these services for the OSM AKS add-on, visit the [Azure Monitor for OSM](https://aka.ms/azmon/osmpreview) page.
-
-## Tutorial: Manually deploy Prometheus, Grafana, and Jaeger to view Open Service Mesh (OSM) metrics for observability
-
-> [!WARNING]
-> The installation of Prometheus, Grafana, and Jaeger is provided as general guidance to show how these tools can be utilized to view OSM metric data. The installation guidance is not intended for a production setup. Please refer to each tool's documentation on how best to suit their installations to your needs. Most notably, the lack of persistent storage means that all data is lost once a Prometheus, Grafana, and/or Jaeger pod is terminated.
-
-Open Service Mesh (OSM) generates detailed metrics related to all traffic within the mesh. These metrics provide insights into the behavior of applications in the mesh helping users to troubleshoot, maintain, and analyze their applications.
-
-As of today OSM collects metrics directly from the sidecar proxies (Envoy). OSM provides rich metrics for incoming and outgoing traffic for all services in the mesh. With these metrics, the user can get information about the overall volume of traffic, errors within traffic and the response time for requests.
-
-OSM uses Prometheus to gather and store consistent traffic metrics and statistics for all applications running in the mesh. Prometheus is an open-source monitoring and alerting toolkit, which is commonly used on (but not limited to) Kubernetes and Service Mesh environments.
-
-Each application that is part of the mesh runs in a Pod that contains an Envoy sidecar that exposes metrics (proxy metrics) in the Prometheus format. Furthermore, every Pod that is a part of the mesh has Prometheus annotations, which makes it possible for the Prometheus server to scrape the application dynamically. This mechanism automatically enables scraping of metrics whenever a new namespace/pod/service is added to the mesh.
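-
-You can see these scrape annotations on any pod that is part of the mesh. For example (assuming the bookbuyer sample namespace used earlier in this article):
-
-```azurecli-interactive
-# Show the annotations on the first bookbuyer pod, which include the Prometheus scrape annotations
-kubectl get pod -n bookbuyer -l app=bookbuyer -o jsonpath='{.items[0].metadata.annotations}'
-```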
-
-OSM metrics can be viewed with Grafana, which is an open-source visualization and analytics software. It allows you to query, visualize, alert on, and explore your metrics.
-
-In this tutorial, you will:
-
-> [!div class="checklist"]
->
-> - Create and deploy a Prometheus instance
-> - Configure OSM to allow Prometheus scraping
-> - Update the Prometheus Configmap
-> - Create and deploy a Grafana instance
-> - Configure Grafana with the Prometheus datasource
-> - Import OSM dashboard for Grafana
-> - Create and deploy a Jaeger instance
-> - Configure Jaeger tracing for OSM
-
-### Deploy and configure a Prometheus instance for OSM
-
-We will use Helm to deploy the Prometheus instance. Run the following commands to install Prometheus via Helm:
-
-```azurecli-interactive
-helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
-helm repo update
-helm install stable prometheus-community/prometheus
-```
-
-You should see output similar to the following if the installation was successful. Make note of the Prometheus server port and cluster DNS name. This information will be used later to configure Prometheus as a data source for Grafana.
-
-```Output
-NAME: stable
-LAST DEPLOYED: Fri Mar 26 13:34:51 2021
-NAMESPACE: default
-STATUS: deployed
-REVISION: 1
-TEST SUITE: None
-NOTES:
-The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:
-stable-prometheus-server.default.svc.cluster.local
--
-Get the Prometheus server URL by running these commands in the same shell:
- export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
- kubectl --namespace default port-forward $POD_NAME 9090
--
-The Prometheus alertmanager can be accessed via port 80 on the following DNS name from within your cluster:
-stable-prometheus-alertmanager.default.svc.cluster.local
--
-Get the Alertmanager URL by running these commands in the same shell:
- export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}")
- kubectl --namespace default port-forward $POD_NAME 9093
-#################################################################################
-###### WARNING: Pod Security Policy has been moved to a global property. #####
-###### use .Values.podSecurityPolicy.enabled with pod-based #####
-###### annotations #####
-###### (e.g. .Values.nodeExporter.podSecurityPolicy.annotations) #####
-#################################################################################
--
-The Prometheus PushGateway can be accessed via port 9091 on the following DNS name from within your cluster:
-stable-prometheus-pushgateway.default.svc.cluster.local
--
-Get the PushGateway URL by running these commands in the same shell:
- export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=pushgateway" -o jsonpath="{.items[0].metadata.name}")
- kubectl --namespace default port-forward $POD_NAME 9091
-
-For more information on running Prometheus, visit:
-https://prometheus.io/
-```
-
-#### Configure OSM to allow Prometheus scraping
-
-To ensure that the OSM components are configured for Prometheus scrapes, we'll want to check the **prometheus_scraping** setting located in the osm-config ConfigMap. View the configuration with the following command:
-
-```azurecli-interactive
-kubectl get configmap -n kube-system osm-config -o json | jq '.data.prometheus_scraping'
-```
-
-The output of the previous command should return `true` if OSM is configured for Prometheus scraping. If the returned value is `false`, we will need to update the configuration to be `true`. Run the following command to turn **on** OSM Prometheus scraping:
-
-```azurecli-interactive
-kubectl patch ConfigMap -n kube-system osm-config --type merge --patch '{"data":{"prometheus_scraping":"true"}}'
-```
-
-You should see the following output.
-
-```Output
-configmap/osm-config patched
-```
-
-#### Update the Prometheus Configmap
-
-The default installation of Prometheus will contain two Kubernetes configmaps. You can view the list of Prometheus configmaps with the following command.
-
-```azurecli-interactive
-kubectl get configmap | grep prometheus
-```
-
-```Output
-stable-prometheus-alertmanager 1 4h34m
-stable-prometheus-server 5 4h34m
-```
-
-We will need to replace the prometheus.yml configuration located in the **stable-prometheus-server** configmap with the following OSM configuration. There are several file editing techniques to accomplish this task. A simple and safe way is to export the configmap, create a copy of it for backup, and then edit it with an editor such as Visual Studio Code.
-
-> [!NOTE]
-> If you do not have Visual Studio Code installed, you can download and install it [here](https://code.visualstudio.com/Download).
-
-Let's first export out the **stable-prometheus-server** configmap and then make a copy for backup.
-
-```azurecli-interactive
-kubectl get configmap stable-prometheus-server -o yaml > cm-stable-prometheus-server.yml
-cp cm-stable-prometheus-server.yml cm-stable-prometheus-server.yml.copy
-```
-
-Next, let's open the file in Visual Studio Code to edit it.
-
-```azurecli-interactive
-code cm-stable-prometheus-server.yml
-```
-
-Once you have the configmap opened in the Visual Studio Code editor, replace the prometheus.yml section with the OSM configuration below and save the file.
-
-> [!WARNING]
-> It is extremely important that you keep the indentation structure of the YAML file. Any changes to the YAML file structure could result in the configmap not being able to be reapplied.
-
-```OSM Prometheus Configmap Configuration
-prometheus.yml: |
- global:
- scrape_interval: 10s
- scrape_timeout: 10s
- evaluation_interval: 1m
-
- scrape_configs:
- - job_name: 'kubernetes-apiservers'
- kubernetes_sd_configs:
- - role: endpoints
- scheme: https
- tls_config:
- ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
- # TODO need to remove this when the CA and SAN match
- insecure_skip_verify: true
- bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
- metric_relabel_configs:
- - source_labels: [__name__]
- regex: '(apiserver_watch_events_total|apiserver_admission_webhook_rejection_count)'
- action: keep
- relabel_configs:
- - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
- action: keep
- regex: default;kubernetes;https
-
- - job_name: 'kubernetes-nodes'
- scheme: https
- tls_config:
- ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
- bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
- kubernetes_sd_configs:
- - role: node
- relabel_configs:
- - action: labelmap
- regex: __meta_kubernetes_node_label_(.+)
- - target_label: __address__
- replacement: kubernetes.default.svc:443
- - source_labels: [__meta_kubernetes_node_name]
- regex: (.+)
- target_label: __metrics_path__
- replacement: /api/v1/nodes/${1}/proxy/metrics
-
- - job_name: 'kubernetes-pods'
- kubernetes_sd_configs:
- - role: pod
- metric_relabel_configs:
- - source_labels: [__name__]
- regex: '(envoy_server_live|envoy_cluster_upstream_rq_xx|envoy_cluster_upstream_cx_active|envoy_cluster_upstream_cx_tx_bytes_total|envoy_cluster_upstream_cx_rx_bytes_total|envoy_cluster_upstream_cx_destroy_remote_with_active_rq|envoy_cluster_upstream_cx_connect_timeout|envoy_cluster_upstream_cx_destroy_local_with_active_rq|envoy_cluster_upstream_rq_pending_failure_eject|envoy_cluster_upstream_rq_pending_overflow|envoy_cluster_upstream_rq_timeout|envoy_cluster_upstream_rq_rx_reset|^osm.*)'
- action: keep
- relabel_configs:
- - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
- action: keep
- regex: true
- - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
- action: replace
- target_label: __metrics_path__
- regex: (.+)
- - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
- action: replace
- regex: ([^:]+)(?::\d+)?;(\d+)
- replacement: $1:$2
- target_label: __address__
- - source_labels: [__meta_kubernetes_namespace]
- action: replace
- target_label: source_namespace
- - source_labels: [__meta_kubernetes_pod_name]
- action: replace
- target_label: source_pod_name
- - regex: '(__meta_kubernetes_pod_label_app)'
- action: labelmap
- replacement: source_service
- - regex: '(__meta_kubernetes_pod_label_osm_envoy_uid|__meta_kubernetes_pod_label_pod_template_hash|__meta_kubernetes_pod_label_version)'
- action: drop
- # for non-ReplicaSets (DaemonSet, StatefulSet)
- # __meta_kubernetes_pod_controller_kind=DaemonSet
- # __meta_kubernetes_pod_controller_name=foo
- # =>
- # workload_kind=DaemonSet
- # workload_name=foo
- - source_labels: [__meta_kubernetes_pod_controller_kind]
- action: replace
- target_label: source_workload_kind
- - source_labels: [__meta_kubernetes_pod_controller_name]
- action: replace
- target_label: source_workload_name
- # for ReplicaSets
- # __meta_kubernetes_pod_controller_kind=ReplicaSet
- # __meta_kubernetes_pod_controller_name=foo-bar-123
- # =>
- # workload_kind=Deployment
- # workload_name=foo-bar
- # deplyment=foo
- - source_labels: [__meta_kubernetes_pod_controller_kind]
- action: replace
- regex: ^ReplicaSet$
- target_label: source_workload_kind
- replacement: Deployment
- - source_labels:
- - __meta_kubernetes_pod_controller_kind
- - __meta_kubernetes_pod_controller_name
- action: replace
- regex: ^ReplicaSet;(.*)-[^-]+$
- target_label: source_workload_name
-
- - job_name: 'smi-metrics'
- kubernetes_sd_configs:
- - role: pod
- relabel_configs:
- - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
- action: keep
- regex: true
- - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
- action: replace
- target_label: __metrics_path__
- regex: (.+)
- - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
- action: replace
- regex: ([^:]+)(?::\d+)?;(\d+)
- replacement: $1:$2
- target_label: __address__
- metric_relabel_configs:
- - source_labels: [__name__]
- regex: 'envoy_.*osm_request_(total|duration_ms_(bucket|count|sum))'
- action: keep
- - source_labels: [__name__]
- action: replace
- regex: envoy_response_code_(\d{3})_source_namespace_.*_source_kind_.*_source_name_.*_source_pod_.*_destination_namespace_.*_destination_kind_.*_destination_name_.*_destination_pod_.*_osm_request_total
- target_label: response_code
- - source_labels: [__name__]
- action: replace
- regex: envoy_response_code_\d{3}_source_namespace_(.*)_source_kind_.*_source_name_.*_source_pod_.*_destination_namespace_.*_destination_kind_.*_destination_name_.*_destination_pod_.*_osm_request_total
- target_label: source_namespace
- - source_labels: [__name__]
- action: replace
- regex: envoy_response_code_\d{3}_source_namespace_.*_source_kind_(.*)_source_name_.*_source_pod_.*_destination_namespace_.*_destination_kind_.*_destination_name_.*_destination_pod_.*_osm_request_total
- target_label: source_kind
- - source_labels: [__name__]
- action: replace
- regex: envoy_response_code_\d{3}_source_namespace_.*_source_kind_.*_source_name_(.*)_source_pod_.*_destination_namespace_.*_destination_kind_.*_destination_name_.*_destination_pod_.*_osm_request_total
- target_label: source_name
- - source_labels: [__name__]
- action: replace
- regex: envoy_response_code_\d{3}_source_namespace_.*_source_kind_.*_source_name_.*_source_pod_(.*)_destination_namespace_.*_destination_kind_.*_destination_name_.*_destination_pod_.*_osm_request_total
- target_label: source_pod
- - source_labels: [__name__]
- action: replace
- regex: envoy_response_code_\d{3}_source_namespace_.*_source_kind_.*_source_name_.*_source_pod_.*_destination_namespace_(.*)_destination_kind_.*_destination_name_.*_destination_pod_.*_osm_request_total
- target_label: destination_namespace
- - source_labels: [__name__]
- action: replace
- regex: envoy_response_code_\d{3}_source_namespace_.*_source_kind_.*_source_name_.*_source_pod_.*_destination_namespace_.*_destination_kind_(.*)_destination_name_.*_destination_pod_.*_osm_request_total
- target_label: destination_kind
- - source_labels: [__name__]
- action: replace
- regex: envoy_response_code_\d{3}_source_namespace_.*_source_kind_.*_source_name_.*_source_pod_.*_destination_namespace_.*_destination_kind_.*_destination_name_(.*)_destination_pod_.*_osm_request_total
- target_label: destination_name
- - source_labels: [__name__]
- action: replace
- regex: envoy_response_code_\d{3}_source_namespace_.*_source_kind_.*_source_name_.*_source_pod_.*_destination_namespace_.*_destination_kind_.*_destination_name_.*_destination_pod_(.*)_osm_request_total
- target_label: destination_pod
- - source_labels: [__name__]
- action: replace
- regex: .*(osm_request_total)
- target_label: __name__
-
- - source_labels: [__name__]
- action: replace
- regex: envoy_source_namespace_(.*)_source_kind_.*_source_name_.*_source_pod_.*_destination_namespace_.*_destination_kind_.*_destination_name_.*_destination_pod_.*_osm_request_duration_ms_(bucket|sum|count)
- target_label: source_namespace
- - source_labels: [__name__]
- action: replace
- regex: envoy_source_namespace_.*_source_kind_(.*)_source_name_.*_source_pod_.*_destination_namespace_.*_destination_kind_.*_destination_name_.*_destination_pod_.*_osm_request_duration_ms_(bucket|sum|count)
- target_label: source_kind
- - source_labels: [__name__]
- action: replace
- regex: envoy_source_namespace_.*_source_kind_.*_source_name_(.*)_source_pod_.*_destination_namespace_.*_destination_kind_.*_destination_name_.*_destination_pod_.*_osm_request_duration_ms_(bucket|sum|count)
- target_label: source_name
- - source_labels: [__name__]
- action: replace
- regex: envoy_source_namespace_.*_source_kind_.*_source_name_.*_source_pod_(.*)_destination_namespace_.*_destination_kind_.*_destination_name_.*_destination_pod_.*_osm_request_duration_ms_(bucket|sum|count)
- target_label: source_pod
- - source_labels: [__name__]
- action: replace
- regex: envoy_source_namespace_.*_source_kind_.*_source_name_.*_source_pod_.*_destination_namespace_(.*)_destination_kind_.*_destination_name_.*_destination_pod_.*_osm_request_duration_ms_(bucket|sum|count)
- target_label: destination_namespace
- - source_labels: [__name__]
- action: replace
- regex: envoy_source_namespace_.*_source_kind_.*_source_name_.*_source_pod_.*_destination_namespace_.*_destination_kind_(.*)_destination_name_.*_destination_pod_.*_osm_request_duration_ms_(bucket|sum|count)
- target_label: destination_kind
- - source_labels: [__name__]
- action: replace
- regex: envoy_source_namespace_.*_source_kind_.*_source_name_.*_source_pod_.*_destination_namespace_.*_destination_kind_.*_destination_name_(.*)_destination_pod_.*_osm_request_duration_ms_(bucket|sum|count)
- target_label: destination_name
- - source_labels: [__name__]
- action: replace
- regex: envoy_source_namespace_.*_source_kind_.*_source_name_.*_source_pod_.*_destination_namespace_.*_destination_kind_.*_destination_name_.*_destination_pod_(.*)_osm_request_duration_ms_(bucket|sum|count)
- target_label: destination_pod
- - source_labels: [__name__]
- action: replace
- regex: .*(osm_request_duration_ms_(bucket|sum|count))
- target_label: __name__
-
- - job_name: 'kubernetes-cadvisor'
- scheme: https
- tls_config:
- ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
- bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
- kubernetes_sd_configs:
- - role: node
- metric_relabel_configs:
- - source_labels: [__name__]
- regex: '(container_cpu_usage_seconds_total|container_memory_rss)'
- action: keep
- relabel_configs:
- - action: labelmap
- regex: __meta_kubernetes_node_label_(.+)
- - target_label: __address__
- replacement: kubernetes.default.svc:443
- - source_labels: [__meta_kubernetes_node_name]
- regex: (.+)
- target_label: __metrics_path__
- replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
-```
-
-Apply the updated configmap yaml file with the following command.
-
-```azurecli-interactive
-kubectl apply -f cm-stable-prometheus-server.yml
-```
-
-```Output
-configmap/stable-prometheus-server configured
-```
-
-> [!NOTE]
-> You may receive a message about a missing Kubernetes annotation. This warning can be ignored for now.
-
-#### Verify Prometheus is configured to scrape the OSM mesh and API endpoints
-
-To verify that Prometheus is correctly configured to scrape the OSM mesh and API endpoints, we will port forward to the Prometheus pod and view the target configuration. Run the following commands.
-
-```azurecli-interactive
-PROM_POD_NAME=$(kubectl get pods -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
-kubectl --namespace <promNamespace> port-forward $PROM_POD_NAME 9090
-```
-
-Open a browser up to `http://localhost:9090/targets`
-
-If you scroll down, you should see the state of all the SMI metric endpoints as **UP**, as well as the other OSM metrics defined, as pictured below.
-
-![OSM Prometheus Target Metrics UI image](./media/aks-osm-addon/osm-prometheus-smi-metrics-target-scrape.png)
-
-### Deploy and configure a Grafana Instance for OSM
-
-We will use Helm to deploy the Grafana instance. Run the following commands to install Grafana via Helm:
-
-```
-helm repo add grafana https://grafana.github.io/helm-charts
-helm repo update
-helm install osm-grafana grafana/grafana
-```
-
-Next we'll retrieve the default Grafana password to log into the Grafana site.
-
-```azurecli-interactive
-kubectl get secret --namespace default osm-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
-```
-
-Make note of the Grafana password.
-
-Next, retrieve the Grafana pod name and port forward to it to sign in to the Grafana dashboard.
-
-```azurecli-interactive
-GRAF_POD_NAME=$(kubectl get pods -l "app.kubernetes.io/name=grafana" -o jsonpath="{.items[0].metadata.name}")
-kubectl port-forward $GRAF_POD_NAME 3000
-```
-
-Open a browser to `http://localhost:3000`.
-
-At the login screen pictured below, enter **admin** as the username and use the Grafana password captured earlier.
-
-![OSM Grafana Login Page UI image](./media/aks-osm-addon/osm-grafana-ui-login.png)
-
-#### Configure the Grafana Prometheus data source
-
-Once you have successfully signed in to Grafana, the next step is to add Prometheus as a data source for Grafana. To do so, navigate to the configuration icon in the left menu and select **Data Sources**, as shown below.
-
-![OSM Grafana Datasources Page UI image](./media/aks-osm-addon/osm-grafana-ui-datasources.png)
-
-Click the **Add data source** button and select Prometheus under time series databases.
-
-![OSM Grafana Datasources Selection Page UI image](./media/aks-osm-addon/osm-grafana-ui-datasources-select-prometheus.png)
-
-On the **Configure your Prometheus data source below** page, enter the Kubernetes cluster FQDN of the Prometheus service for the HTTP URL setting. The default FQDN should be `stable-prometheus-server.default.svc.cluster.local`. Once you have entered that Prometheus service endpoint, scroll to the bottom of the page and select **Save & Test**. You should receive a green check mark indicating that the data source is working.
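If you prefer to script this step, the same data source can be created through Grafana's HTTP API while the port forward from the previous step is still running. This is only a sketch; the admin password placeholder must be replaced with the value retrieved earlier, and the URL assumes the default Prometheus service FQDN.

```azurecli-interactive
# Create the Prometheus data source via the Grafana HTTP API.
curl -s -X POST "http://admin:<grafana-admin-password>@localhost:3000/api/datasources" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "Prometheus",
        "type": "prometheus",
        "url": "http://stable-prometheus-server.default.svc.cluster.local",
        "access": "proxy"
      }'
```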
-
-#### Importing OSM Dashboards
-
-OSM Dashboards are available both through:
-
-- [Our repository](https://github.com/grafana/grafana), and are importable as json blobs through the web admin portal
-- or [online at Grafana.com](https://grafana.com/grafana/dashboards/14145)
-
-To import a dashboard, look for the `+` sign on the left menu and select `import`.
-You can directly import dashboards by their ID from `Grafana.com`. For example, our `OSM Mesh Details` dashboard uses ID `14145`; you can enter that ID directly on the form and select `import`:
-
-![OSM Grafana Dashboard Import Page UI image](./media/aks-osm-addon/osm-grafana-dashboard-import.png)
-
-As soon as you select import, you are taken automatically to your imported dashboard.
-
-![OSM Grafana Dashboard Mesh Details Page UI image](./media/aks-osm-addon/osm-grafana-mesh-dashboard-details.png)
-
-### Deploy and configure a Jaeger Operator on Kubernetes for OSM
-
-[Jaeger](https://www.jaegertracing.io/) is an open-source tracing system used for monitoring and troubleshooting distributed systems. It can be deployed with OSM as a new instance or you may bring your own instance. The following instructions deploy a new instance of Jaeger to the `jaeger` namespace on the AKS cluster.
-
-#### Deploy Jaeger to the AKS cluster
-
-Apply the following manifest to install Jaeger:
-
-```azurecli-interactive
-kubectl apply -f - <<EOF
-
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: jaeger
- namespace: jaeger
- labels:
- app: jaeger
-spec:
- replicas: 1
- selector:
- matchLabels:
- app: jaeger
- template:
- metadata:
- labels:
- app: jaeger
- spec:
- containers:
- - name: jaeger
- image: jaegertracing/all-in-one
- args:
- - --collector.zipkin.host-port=9411
- imagePullPolicy: IfNotPresent
- ports:
- - containerPort: 9411
- resources:
- limits:
- cpu: 500m
- memory: 512M
- requests:
- cpu: 100m
- memory: 256M
-
----
-
-kind: Service
-apiVersion: v1
-metadata:
- name: jaeger
- namespace: jaeger
- labels:
- app: jaeger
-spec:
- selector:
- app: jaeger
- ports:
- - protocol: TCP
- # Service port and target port are the same
- port: 9411
- type: ClusterIP
-EOF
-```
-
-```Output
-deployment.apps/jaeger created
-service/jaeger created
-```
-
-#### Enable Tracing for the OSM add-on
-
-Next we will need to enable tracing for the OSM add-on.
-
-> [!NOTE]
-> Currently the tracing properties are not visible in the osm-config ConfigMap. They will be made visible in a new release of the OSM AKS add-on.
-
-Run the following command to enable tracing for the OSM add-on:
-
-```azurecli-interactive
-kubectl patch configmap osm-config -n kube-system -p '{"data":{"tracing_enable":"true", "tracing_address":"jaeger.jaeger.svc.cluster.local", "tracing_port":"9411", "tracing_endpoint":"/api/v2/spans"}}' --type=merge
-```
-
-```Output
-configmap/osm-config patched
-```
-
-#### View the Jaeger UI with port forwarding
-
-Jaeger's UI is running on port 16686. To view the web UI, you can use kubectl port-forward:
-
-```azurecli-interactive
-JAEGER_POD=$(kubectl get pods -n jaeger --no-headers --selector app=jaeger | awk 'NR==1{print $1}')
-kubectl port-forward -n jaeger $JAEGER_POD 16686:16686
-http://localhost:16686/
-```
-
-In the browser, you should see a Service dropdown, which allows you to select from the various applications deployed by the bookstore demo. Select a service to view all spans from it. For example, if you select bookbuyer with a Lookback of one hour, you can see its interactions with bookstore-v1 and bookstore-v2 sorted by time.
-
-![OSM Jaeger Tracing Page UI image](./media/aks-osm-addon/osm-jaeger-trace-view-ui.png)
-
-Select any item to view it in further detail. Select multiple items to compare traces. For example, you can compare the bookbuyer's interactions with bookstore and bookstore-v2 at a particular moment in time.
-
-You can also select the System Architecture tab to view a graph of how the various applications have been interacting/communicating. This provides an idea of how traffic is flowing between the applications.
-
-![OSM Jaeger System Architecture UI image](./media/aks-osm-addon/osm-jaeger-sys-arc-view-ui.png)
-
-## Open Service Mesh (OSM) AKS add-on Troubleshooting Guides
-
-When you deploy the OSM AKS add-on, you might occasionally experience a problem. The following guides will assist you in troubleshooting errors and resolving common problems.
-
-### Verifying and Troubleshooting OSM components
-
-#### Check OSM Controller Deployment
-
-```azurecli-interactive
-kubectl get deployment -n kube-system --selector app=osm-controller
-```
-
-A healthy OSM Controller would look like this:
-
-```Output
-NAME READY UP-TO-DATE AVAILABLE AGE
-osm-controller 1/1 1 1 59m
-```
-
-#### Check the OSM Controller Pod
-
-```azurecli-interactive
-kubectl get pods -n kube-system --selector app=osm-controller
-```
-
-A healthy OSM Pod would look like this:
-
-```Output
-NAME READY STATUS RESTARTS AGE
-osm-controller-b5bd66db-wglzl 0/1 Evicted 0 61m
-osm-controller-b5bd66db-wvl9w 1/1 Running 0 31m
-```
-
-Even though we had one controller evicted at some point, we have another one that is READY 1/1 and Running with 0 restarts. If the READY column shows anything other than 1/1, the service mesh is in a broken state.
-A READY column of 0/1 indicates the control plane container is crashing and we need to get logs. See the Get OSM Controller Logs from Azure Support Center section below. A READY column with a number higher than 1 after the / indicates that there are sidecars installed. The OSM Controller would most likely not work with any sidecars attached to it.
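If the READY column shows 0/1 and you need the logs, a minimal sketch for pulling them directly with kubectl (before going through the Azure Support Center) could be:

```azurecli-interactive
# Tail the most recent OSM Controller logs to look for crash or startup errors.
kubectl logs -n kube-system --selector app=osm-controller --tail=100

# If the pod is crash looping, inspect the previous container instance of a specific pod.
# Note: items[0] may be an Evicted pod; pick the Running pod if more than one is listed.
OSM_POD=$(kubectl get pods -n kube-system --selector app=osm-controller -o jsonpath="{.items[0].metadata.name}")
kubectl logs -n kube-system $OSM_POD --previous
```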
-
-> [!NOTE]
-> As of version v0.8.2, the OSM Controller is not in HA mode and runs as a deployment with a replica count of 1 (a single pod). The pod does have health probes and will be restarted by the kubelet if needed.
-
-#### Check OSM Controller Service
-
-```azurecli-interactive
-kubectl get service -n kube-system osm-controller
-```
-
-A healthy OSM Controller service would look like this:
-
-```Output
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-osm-controller ClusterIP 10.0.31.254 <none> 15128/TCP,9092/TCP 67m
-```
-
-> [!NOTE]
-> The CLUSTER-IP would be different. The service NAME and PORT(S) must be the same as the example above.
-
-#### Check OSM Controller Endpoints
-
-```azurecli-interactive
-kubectl get endpoints -n kube-system osm-controller
-```
-
-A healthy OSM Controller endpoint(s) would look like this:
-
-```Output
-NAME ENDPOINTS AGE
-osm-controller 10.240.1.115:9092,10.240.1.115:15128 69m
-```
-
-#### Check OSM Injector Deployment
-
-```azurecli-interactive
-kubectl get pod -n kube-system --selector app=osm-injector
-```
-
-A healthy OSM Injector deployment would look like this:
-
-```Output
-NAME READY STATUS RESTARTS AGE
-osm-injector-5986c57765-vlsdk 1/1 Running 0 73m
-```
-
-#### Check OSM Injector Pod
-
-```azurecli-interactive
-kubectl get pod -n kube-system --selector app=osm-injector
-```
-
-A healthy OSM Injector pod would look like this:
-
-```Output
-NAME READY STATUS RESTARTS AGE
-osm-injector-5986c57765-vlsdk 1/1 Running 0 73m
-```
-
-#### Check OSM Injector Service
-
-```azurecli-interactive
-kubectl get service -n kube-system osm-injector
-```
-
-A healthy OSM Injector service would look like this:
-
-```Output
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-osm-injector ClusterIP 10.0.39.54 <none> 9090/TCP 75m
-```
-
-#### Check OSM Endpoints
-
-```azurecli-interactive
-kubectl get endpoints -n kube-system osm-injector
-```
-
-A healthy OSM endpoint would look like this:
-
-```Output
-NAME ENDPOINTS AGE
-osm-injector 10.240.1.172:9090 75m
-```
-
-#### Check Validating and Mutating webhooks
-
-```azurecli-interactive
-kubectl get ValidatingWebhookConfiguration --selector app=osm-controller
-```
-
-A healthy OSM Validating Webhook would look like this:
-
-```Output
-NAME WEBHOOKS AGE
-aks-osm-webhook-osm 1 81m
-```
-
-```azurecli-interactive
-kubectl get MutatingWebhookConfiguration --selector app=osm-injector
-```
-
-A healthy OSM Mutating Webhook would look like this:
-
-```Output
-NAME WEBHOOKS AGE
-aks-osm-webhook-osm 1 102m
-```
-
-#### Check for the service and the CA bundle of the Validating webhook
-
-```azurecli-interactive
-kubectl get ValidatingWebhookConfiguration aks-osm-webhook-osm -o json | jq '.webhooks[0].clientConfig.service'
-```
-
-A well configured Validating Webhook Configuration would look exactly like this:
-
-```json
-{
- "name": "osm-config-validator",
- "namespace": "kube-system",
- "path": "/validate-webhook",
- "port": 9093
-}
-```
-
-#### Check for the service and the CA bundle of the Mutating webhook
-
-```azurecli-interactive
-kubectl get MutatingWebhookConfiguration aks-osm-webhook-osm -o json | jq '.webhooks[0].clientConfig.service'
-```
-
-A well configured Mutating Webhook Configuration would look exactly like this:
-
-```json
-{
- "name": "osm-injector",
- "namespace": "kube-system",
- "path": "/mutate-pod-creation",
- "port": 9090
-}
-```
-
-#### Check whether OSM Controller has given the Validating (or Mutating) Webhook a CA Bundle
-
-> [!NOTE]
-> As of v0.8.2, it is important to know that the AKS resource provider (RP) installs the Validating Webhook and the AKS Reconciler ensures it exists, but the OSM Controller is the one that fills in the CA Bundle.
-
-```azurecli-interactive
-kubectl get ValidatingWebhookConfiguration aks-osm-webhook-osm -o json | jq -r '.webhooks[0].clientConfig.caBundle' | wc -c
-```
-
-```azurecli-interactive
-kubectl get MutatingWebhookConfiguration aks-osm-webhook-osm -o json | jq -r '.webhooks[0].clientConfig.caBundle' | wc -c
-```
-
-```Example Output
-1845
-```
-
-This number indicates the number of bytes, or the size, of the CA Bundle. If it is empty, 0, or some number under 1000, the CA Bundle is not correctly provisioned. Without a correct CA Bundle, the Validating Webhook errors out and prevents the user from making changes to the osm-config ConfigMap in the kube-system namespace.
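To go one step further than checking the byte count, you can decode the bundle and inspect it as a certificate. A minimal sketch, assuming `openssl` is available on the machine running `kubectl`:

```azurecli-interactive
# Decode the Validating Webhook CA Bundle and print the certificate subject and validity dates.
kubectl get ValidatingWebhookConfiguration aks-osm-webhook-osm -o jsonpath="{.webhooks[0].clientConfig.caBundle}" \
  | base64 --decode \
  | openssl x509 -noout -subject -dates
```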
-
-A sample error when the CA Bundle is incorrect:
-
-- An attempt to change the osm-config ConfigMap:
-
-```azurecli-interactive
-kubectl patch ConfigMap osm-config -n kube-system --type merge --patch '{"data":{"config_resync_interval":"2m"}}'
-```
-
-- Error:
-
-```
-Error from server (InternalError): Internal error occurred: failed calling webhook "osm-config-webhook.k8s.io": Post https://osm-config-validator.kube-system.svc:9093/validate-webhook?timeout=30s: x509: certificate signed by unknown authority
-```
-
-Workarounds for when the **Validating** Webhook Configuration has a bad certificate:
-
-- Option 1 - Restart OSM Controller - this will restart the OSM Controller. On start, it will overwrite the CA Bundle of both the Mutating and Validating webhooks.
-
-```azurecli-interactive
-kubectl rollout restart deployment -n kube-system osm-controller
-```
-
-- Option 2 - Delete the Validating Webhook - removing the Validating Webhook means mutations of the `osm-config` ConfigMap are no longer validated. Any patch will go through. The AKS Reconciler will at some point ensure the Validating Webhook exists and will recreate it. The OSM Controller may have to be restarted to quickly rewrite the CA Bundle.
-
-```azurecli-interactive
-kubectl delete ValidatingWebhookConfiguration aks-osm-webhook-osm
-```
-
-- Option 3 - Delete and Patch: The following command will delete the validating webhook, allowing us to add any values, and will immediately try to apply a patch. Most likely the AKS Reconciler will not have enough time to reconcile and restore the Validating Webhook, giving us the opportunity to apply a change as a last resort:
-
-```azurecli-interactive
-kubectl delete ValidatingWebhookConfiguration aks-osm-webhook-osm; kubectl patch ConfigMap osm-config -n kube-system --type merge --patch '{"data":{"config_resync_interval":"15s"}}'
-```
-
-#### Check the `osm-config` **ConfigMap**
-
-> [!NOTE]
-> The OSM Controller does not require the `osm-config` ConfigMap to be present in the kube-system namespace. The controller has reasonable default values for the config and can operate without it.
-
-Check for the existence:
-
-```azurecli-interactive
-kubectl get ConfigMap -n kube-system osm-config
-```
-
-Check the content of the osm-config ConfigMap:
-
-```azurecli-interactive
-kubectl get ConfigMap -n kube-system osm-config -o json | jq '.data'
-```
-
-```json
-{
- "egress": "true",
- "enable_debug_server": "true",
- "enable_privileged_init_container": "false",
- "envoy_log_level": "error",
- "outbound_ip_range_exclusion_list": "169.254.169.254,168.63.129.16,20.193.20.233",
- "permissive_traffic_policy_mode": "true",
- "prometheus_scraping": "false",
- "service_cert_validity_duration": "24h",
- "use_https_ingress": "false"
-}
-```
-
-`osm-config` ConfigMap values:
-
-| Key | Type | Allowed Values | Default Value | Function |
-| -- | | - | -- | |
-| egress | bool | true, false | `"false"` | Enables egress in the mesh. |
-| enable_debug_server | bool | true, false | `"true"` | Enables a debug endpoint on the osm-controller pod to list information regarding the mesh such as proxy connections, certificates, and SMI policies. |
-| enable_privileged_init_container | bool | true, false | `"false"` | Enables privileged init containers for pods in mesh. When false, init containers only have NET_ADMIN. |
-| envoy_log_level | string | trace, debug, info, warning, warn, error, critical, off | `"error"` | Sets the logging verbosity of Envoy proxy sidecar, only applicable to newly created pods joining the mesh. To update the log level for existing pods, restart the deployment with `kubectl rollout restart`. |
-| outbound_ip_range_exclusion_list | string | comma-separated list of IP ranges of the form a.b.c.d/x | `-` | Global list of IP address ranges to exclude from outbound traffic interception by the sidecar proxy. |
-| permissive_traffic_policy_mode | bool | true, false | `"false"` | Setting to `true`, enables allow-all mode in the mesh i.e. no traffic policy enforcement in the mesh. If set to `false`, enables deny-all traffic policy in mesh i.e. an `SMI Traffic Target` is necessary for services to communicate. |
-| prometheus_scraping | bool | true, false | `"true"` | Enables Prometheus metrics scraping on sidecar proxies. |
-| service_cert_validity_duration | string | 24h, 1h30m (any time duration) | `"24h"` | Sets the service certificate validity duration, represented as a sequence of decimal numbers each with optional fraction and a unit suffix. |
-| tracing_enable | bool | true, false | `"false"` | Enables Jaeger tracing for the mesh. |
-| tracing_address | string | jaeger.mesh-namespace.svc.cluster.local | `jaeger.kube-system.svc.cluster.local` | Address of the Jaeger deployment, if tracing is enabled. |
-| tracing_endpoint | string | /api/v2/spans | /api/v2/spans | Endpoint for tracing data, if tracing enabled. |
-| tracing_port | int | any non-zero integer value | `"9411"` | Port on which tracing is enabled. |
-| use_https_ingress | bool | true, false | `"false"` | Enables HTTPS ingress on the mesh. |
-| config_resync_interval | string | under 1 minute disables this | 0 (disabled) | When a value above 1m (60s) is provided, OSM Controller will send all available config to each connected Envoy at the given interval |
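As an example of changing one of these values, the following sketch raises the Envoy log level using the same `kubectl patch` pattern shown earlier in this article. Per the table above, the new level only applies to pods that join the mesh after the change, so existing deployments must be restarted.

```azurecli-interactive
# Set the Envoy sidecar log level for newly created pods in the mesh.
kubectl patch configmap osm-config -n kube-system --type merge --patch '{"data":{"envoy_log_level":"debug"}}'

# Restart an existing deployment so its pods pick up the new log level.
kubectl rollout restart deployment <deployment-name> -n <namespace>
```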
-
-#### Check Namespaces
-
-> [!NOTE]
-> The kube-system namespace will never participate in a service mesh and will never be labeled and/or annotated with the key/values below.
-
-We use the `osm namespace add` command to join namespaces to a given service mesh.
-When a k8s namespace is part of the mesh (or for it to be part of the mesh) the following must be true:
-
-View the annotations with
-
-```azurecli-interactive
-kubectl get namespace bookbuyer -o json | jq '.metadata.annotations'
-```
-
-The following annotation must be present:
-
-```Output
-{
- "openservicemesh.io/sidecar-injection": "enabled"
-}
-```
-
-View the labels with
-
-```azurecli-interactive
-kubectl get namespace bookbuyer -o json | jq '.metadata.labels'
-```
-
-The following label must be present:
-
-```Output
-{
- "openservicemesh.io/monitored-by": "osm"
-}
-```
-
-If a namespace is not annotated with `"openservicemesh.io/sidecar-injection": "enabled"` or not labeled with `"openservicemesh.io/monitored-by": "osm"` the OSM Injector will not add Envoy sidecars.
-
-> [!NOTE]
-> After `osm namespace add` is called, only **new** pods will be injected with an Envoy sidecar. Existing pods must be restarted with `kubectl rollout restart deployment ...`
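Putting these checks together, a minimal sketch for adding a namespace to the mesh and restarting its existing workloads, using the `bookbuyer` namespace from the examples above, could look like this:

```azurecli-interactive
# Join the namespace to the mesh; this adds the annotation and label shown above.
osm namespace add bookbuyer

# Restart the existing deployments in the namespace so their pods are re-created with the Envoy sidecar.
kubectl rollout restart deployment -n bookbuyer
```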
-
-#### Verify the SMI CRDs
-
-Check whether the cluster has the required CRDs:
-
-```azurecli-interactive
-kubectl get crds
-```
-
-We must have the following installed on the cluster:
-
-- httproutegroups.specs.smi-spec.io
-- tcproutes.specs.smi-spec.io
-- trafficsplits.split.smi-spec.io
-- traffictargets.access.smi-spec.io
-- udproutes.specs.smi-spec.io
-
-Get the versions of the CRDs installed with this command:
-
-```azurecli-interactive
-for x in $(kubectl get crds --no-headers | awk '{print $1}' | grep 'smi-spec.io'); do
- kubectl get crd $x -o json | jq -r '(.metadata.name, "-" , .spec.versions[].name, "\n")'
-done
-```
-
-Expected output:
-
-```Output
-httproutegroups.specs.smi-spec.io
--
-v1alpha4
-v1alpha3
-v1alpha2
-v1alpha1
--
-tcproutes.specs.smi-spec.io
--
-v1alpha4
-v1alpha3
-v1alpha2
-v1alpha1
--
-trafficsplits.split.smi-spec.io
--
-v1alpha2
--
-traffictargets.access.smi-spec.io
--
-v1alpha3
-v1alpha2
-v1alpha1
--
-udproutes.specs.smi-spec.io
--
-v1alpha4
-v1alpha3
-v1alpha2
-v1alpha1
-```
-
-OSM Controller v0.8.2 requires the following versions:
-
-- traffictargets.access.smi-spec.io - [v1alpha3](https://github.com/servicemeshinterface/smi-spec/blob/v0.6.0/apis/traffic-access/v1alpha3/traffic-access.md)
-- httproutegroups.specs.smi-spec.io - [v1alpha4](https://github.com/servicemeshinterface/smi-spec/blob/v0.6.0/apis/traffic-specs/v1alpha4/traffic-specs.md#httproutegroup)
-- tcproutes.specs.smi-spec.io - [v1alpha4](https://github.com/servicemeshinterface/smi-spec/blob/v0.6.0/apis/traffic-specs/v1alpha4/traffic-specs.md#tcproute)
-- udproutes.specs.smi-spec.io - Not supported
-- trafficsplits.split.smi-spec.io - [v1alpha2](https://github.com/servicemeshinterface/smi-spec/blob/v0.6.0/apis/traffic-split/v1alpha2/traffic-split.md)
-- \*.metrics.smi-spec.io - [v1alpha1](https://github.com/servicemeshinterface/smi-spec/blob/v0.6.0/apis/traffic-metrics/v1alpha1/traffic-metrics.md)
-
-If CRDs are missing use the following commands to install these on the cluster:
-
-```azurecli-interactive
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/v0.8.2/charts/osm/crds/access.yaml
-```
-
-```azurecli-interactive
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/v0.8.2/charts/osm/crds/specs.yaml
-```
-
-```azurecli-interactive
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/v0.8.2/charts/osm/crds/split.yaml
-```
-
-## Disable Open Service Mesh (OSM) add-on for your AKS cluster
-
-To disable the OSM add-on, run the following command:
-
-```azurecli-interactive
-az aks disable-addons -n <AKS-cluster-name> -g <AKS-resource-group-name> -a open-service-mesh
-```
-
-<!-- LINKS - internal -->
-
-[kubernetes-service]: concepts-network.md#services
-[az-feature-register]: /cli/azure/feature?view=azure-cli-latest&preserve-view=true#az_feature_register
-[az-feature-list]: /cli/azure/feature?view=azure-cli-latest&preserve-view=true#az_feature_list
-[az-provider-register]: /cli/azure/provider?view=azure-cli-latest&preserve-view=true#az_provider_register
aks Use Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-azure-policy.md
To apply a policy definition or initiative, use the Azure portal.
1. Select the **Parameters** page and update the **Effect** from `audit` to `deny` to block new deployments violating the baseline initiative. You can also add namespaces to exclude from evaluation. For this example, keep the default values.
1. Select **Review + create**, then **Create** to submit the policy assignment.
+## Create and assign a custom policy definition (preview)
++
+Custom policies allow you to define rules for using Azure. For example, you can enforce:
+- Security practices
+- Cost management
+- Organization-specific rules (like naming or locations)
+
+Before creating a custom policy, check the [list of common patterns and samples][azure-policy-samples] to see if your case is already covered.
+
+Custom policy definitions are written in JSON. To learn more about creating a custom policy, see [Azure Policy definition structure][azure-policy-definition-structure] and [Create a custom policy definition][custom-policy-tutorial-create].
+
+> [!NOTE]
+> Azure Policy now utilizes a new property known as *templateInfo* that allows users to define the source type for the constraint template. By defining *templateInfo* in policy definitions, users don't have to define *constraintTemplate* or *constraint* properties. Users still need to define *apiGroups* and *kinds*. For more information on this, see [Understanding Azure Policy effects][azure-policy-effects-audit].
+
+Once your custom policy definition has been created, see [Assign a policy definition][custom-policy-tutorial-assign] for a step-by-step walkthrough of assigning the policy to your Kubernetes cluster.
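For example, once the definition JSON is saved locally, it could be registered with the Azure CLI. This is only a sketch; the name, display name, and file names are placeholders, and it assumes the `Microsoft.Kubernetes.Data` mode used for Kubernetes policies.

```azurecli-interactive
az policy definition create \
  --name 'custom-aks-policy' \
  --display-name 'Custom AKS policy (example)' \
  --description 'Example custom policy definition for Kubernetes clusters.' \
  --rules custom-policy-rules.json \
  --params custom-policy-params.json \
  --mode 'Microsoft.Kubernetes.Data'
```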
+
## Validate an Azure Policy is running

Confirm the policy assignments are applied to your cluster by running the following:
For more information about how Azure Policy works:
[azure-policy-assign-policy]: ../governance/policy/concepts/policy-for-kubernetes.md#assign-a-policy-definition
[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
[kubernetes-policy-reference]: ../governance/policy/concepts/policy-for-kubernetes.md
+[azure-policy-effects-audit]: ../governance/policy/concepts/effects.md#audit-properties
+[custom-policy-tutorial-create]: ../governance/policy/tutorials/create-custom-policy-definition.md
+[custom-policy-tutorial-assign]: ../governance/policy/concepts/policy-for-kubernetes.md#assign-a-policy-definition
+[azure-policy-samples]: ../governance/policy/samples/index.md
+[azure-policy-definition-structure]: ../governance/policy/concepts/definition-structure.md
aks Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-managed-identity.md
az aks update -g <RGName> -n <AKSName> --enable-managed-identity
> After updating, your cluster's control plane and addon pods will switch to use managed identity, but kubelet will KEEP USING SERVICE PRINCIPAL until you upgrade your agentpool. Perform an `az aks nodepool upgrade --node-image-only` on your nodes to complete the update to managed identity.
+> If your cluster was using `--attach-acr` to pull images from ACR, after updating your cluster to managed identity you need to rerun `az aks update --attach-acr <ACR Resource ID>` to let the newly created kubelet identity get permission to pull from ACR. Otherwise you won't be able to pull from ACR after the upgrade.
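A minimal sketch of the command referenced in the note above; the resource group, cluster name, and ACR resource ID are placeholders.

```azurecli-interactive
# Re-grant the kubelet identity permission to pull images from the ACR.
az aks update -g <RGName> -n <AKSName> --attach-acr <ACR-Resource-ID>
```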
+
+
## Obtain and use the system-assigned managed identity for your AKS cluster

Confirm your AKS cluster is using managed identity with the following CLI command:
api-management Diagnose Solve Problems https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/diagnose-solve-problems.md
# Azure API Management Diagnostics overview
-When you build and managed an API in Azure API Management, you want to be prepared for any issues that may arise, from 404 not found errors to 502 bad gateway error. API Management Diagnostics is an intelligent and interactive experience to help you troubleshoot your API published in APIM with no configuration required. When you do run into issues with your published APIs, API Management Diagnostics points out whatΓÇÖs wrong, and guides you to the right information to quickly troubleshoot and resolve the issue.
+When you build and manage an API in Azure API Management, you want to be prepared for any issues that may arise, from 404 not found errors to 502 bad gateway errors. API Management Diagnostics is an intelligent and interactive experience to help you troubleshoot your API published in APIM with no configuration required. When you do run into issues with your published APIs, API Management Diagnostics points out what's wrong, and guides you to the right information to quickly troubleshoot and resolve the issue.
Although this experience is most helpful when you're having issues with your API within the last 24 hours, all the diagnostic graphs are always available for you to analyze.
app-service App Service Web Restore Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-web-restore-snapshots.md
description: Learn how to restore your app from a snapshot. Recover from unexpec
ms.assetid: 4164f9b5-f735-41c6-a2bb-71f15cdda417 Previously updated : 04/04/2018 Last updated : 09/02/2021 # Restore an app in Azure from a snapshot
-This article shows you how to restore an app in [Azure App Service](../app-service/overview.md) from a snapshot. You can restore your app to a previous state, based on one of your app's snapshots. You do not need to enable snapshots backup, the platform automatically saves a snapshot of all apps for data recovery purposes.
+This article shows you how to restore an app in [Azure App Service](../app-service/overview.md) from a snapshot. You can restore your app to a previous state, based on one of your app's snapshots. You do not need to enable snapshots; the platform automatically saves a snapshot of all apps for data recovery purposes.
+
+Snapshots are incremental shadow copies of your App Service app. When your app is in Premium tier or higher, App Service takes periodic snapshots of both the app's content and its configuration. They offer several advantages over [standard backups](manage-backup.md):
-Snapshots are incremental shadow copies, and they offer several advantages over regular [backups](manage-backup.md):
- No file copy errors due to file locks.
-- No storage size limitation.
-- No configuration required.
+- Higher maximum snapshot size (30GB).
+- No configuration required for supported pricing tiers.
+- Snapshots can be restored to a new App Service app in any Azure region.
Restoring from snapshots is available to apps running in **Premium** tier or higher. For information about scaling up your app, see [Scale up an app in Azure](manage-scale-up.md).

## Limitations

-- The feature is currently in preview.
-- You can only restore to the same app or to a slot belonging to that app.
-- App Service stops the target app or target slot while doing the restore.
-- App Service keeps three months worth of snapshots for platform data recovery purposes.
-- You can only restore snapshots for the last 30 days.
-- App Services running on an App Service Environment do not support snapshots.
-
+- Currently available as public preview for Windows apps only. Linux apps and custom container apps are not supported.
+- Maximum supported size for snapshot restore is 30GB. Snapshot restore fails if your storage size is greater than 30GB. To reduce your storage size, consider moving files like logs, images, audio, and videos to [Azure Storage](/azure/storage/), for example.
+- Any connected database that [standard backup](manage-backup.md#what-gets-backed-up) supports or [mounted Azure storage](configure-connect-to-azure-storage.md?pivots=container-windows) is *not* included in the snapshot. Consider using the native backup capabilities of the connected Azure service (for example, [SQL Database](../azure-sql/database/automated-backups-overview.md) and [Azure Files](../storage/files/storage-snapshots-files.md)).
+- App Service stops the target app or target slot while restoring a snapshot. To minimize downtime for the production app, restore the snapshot to a [staging slot](deploy-staging-slots.md) first, then swap into production.
+- Snapshots for the last 30 days are available. The retention period and snapshot frequency are not configurable.
+- App Services running on an App Service environment do not support snapshots.
## Restore an app from a snapshot

1. On the **Settings** page of your app in the [Azure portal](https://portal.azure.com), click **Backups** to display the **Backups** page. Then click **Restore** under the **Snapshot(Preview)** section.
- ![Screenshot that shows how to restore an app from a snapshot backup.](./media/app-service-web-restore-snapshots/1.png)
+ ![Screenshot that shows how to restore an app from a snapshot.](./media/app-service-web-restore-snapshots/1.png)
2. In the **Restore** page, select the snapshot to restore.
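If you prefer scripting the restore, the Azure CLI also exposes snapshot operations. This is only a sketch and assumes the `az webapp config snapshot` command group is available in your CLI version; the names and timestamp are placeholders.

```azurecli-interactive
# List the available snapshots for the app.
az webapp config snapshot list --resource-group <group-name> --name <app-name>

# Restore the app from a snapshot taken at a specific time (use a timestamp returned by the list command).
az webapp config snapshot restore --resource-group <group-name> --name <app-name> --time <snapshot-timestamp>
```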
app-service Configure Connect To Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-connect-to-azure-storage.md
description: Learn how to attach custom network share in a containerized app in
Previously updated : 6/21/2021 Last updated : 09/02/2021 zone_pivot_groups: app-service-containers-windows-linux
This guide shows how to mount Azure Storage as a network share in a built-in Lin
The following features are supported for Windows containers:
+- Secured access to storage accounts with [private links](../storage/common/storage-private-endpoints.md) (when [VNET integration](web-sites-integrate-with-vnet.md) is used). [Service endpoint](../storage/common/storage-network-security.md#grant-access-from-a-virtual-network) support is currently unavailable.
- Azure Files (read/write).
- Up to five mount points per app.
+- Drive letter assignments (`C:` to `Z:`).
::: zone-end
The following features are supported for Linux containers:
::: zone pivot="container-windows"

- Storage mounts are not supported for native Windows (non-containerized) apps.
-- [Storage firewall](../storage/common/storage-network-security.md), [service endpoints](../storage/common/storage-network-security.md#grant-access-from-a-virtual-network), and [private endpoints](../storage/common/storage-private-endpoints.md) are not supported.
+- Azure blobs are not supported.
+- [Storage firewall](../storage/common/storage-network-security.md) is supported only through [private endpoints](../storage/common/storage-private-endpoints.md) (when [VNET integration](web-sites-integrate-with-vnet.md) is used). Custom DNS support is currently unavailable when the mounted Azure Storage account uses a private endpoint.
- FTP/FTPS access to mounted storage not supported (use [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/)).
-- Azure CLI, Azure PowerShell, and Azure SDK support is in preview.
-- Mapping `D:\` or `D:\home` to custom-mounted storage is not supported.
-- Drive letter assignments (`C:` to `Z:`) are not supported.
+- Mapping `[C-Z]:\`, `[C-Z]:\home`, `/`, and `/home` to custom-mounted storage is not supported.
- Storage mounts cannot be used together with clone settings option during [deployment slot](deploy-staging-slots.md) creation. - Storage mounts are not backed up when you [back up your app](manage-backup.md). Be sure to follow best practices to back up the Azure Storage accounts.
The following features are supported for Linux containers:
::: zone-end

::: zone pivot="container-windows"
-
-## Mount storage to Windows container
-
-Use the [`az webapp config storage-account add`](/cli/azure/webapp/config/storage-account#az_webapp_config_storage_account_add) command. For example:
-
-```azurecli
-az webapp config storage-account add --resource-group <group-name> --name <app-name> --custom-id <custom-id> --storage-type AzureFiles --share-name <share-name> --account-name <storage-account-name> --access-key "<access-key>" --mount-path <mount-path-directory>
-```
-
-- `--storage-type` must be `AzureFiles` for Windows containers.
-- `mount-path-directory` must be in the form `/path/to/dir` or `\path\to\dir` with no drive letter. It's always mounted on the `C:\` drive. Do not use `/` or `\` (the root directory).
-
-Verify your storage is mounted by running the following command:
-
-```azurecli
-az webapp config storage-account list --resource-group <resource-group> --name <app-name>
-```
::: zone-end

::: zone pivot="container-linux"

## Mount storage to Linux container

# [Azure portal](#tab/portal)
az webapp config storage-account list --resource-group <resource-group> --name <
1. From the left navigation, click **Configuration** > **Path Mappings** > **New Azure Storage Mount**. 1. Configure the storage mount according to the following table. When finished, click **OK**.
+ ::: zone pivot="container-windows"
+ | Setting | Description |
+ |-|-|
+ | **Name** | Name of the mount configuration. Spaces are not allowed. |
+ | **Configuration options** | Select **Basic** if the storage account is not using [private endpoints](../storage/common/storage-private-endpoints.md). Otherwise, select **Advanced**. |
+ | **Storage accounts** | Azure Storage account. It must contain an Azure Files share. |
+ | **Share name** | Files share to mount. |
+ | **Access key** (Advanced only) | [Access key](../storage/common/storage-account-keys-manage.md) for your storage account. |
+ | **Mount path** | Directory inside the Windows container to mount to Azure Storage. Do not use a root directory (`[C-Z]:\` or `/`) or the `home` directory (`[C-Z]:\home`, or `/home`).|
+ ::: zone-end
+ ::: zone pivot="container-linux"
| Setting | Description |
|-|-|
| **Name** | Name of the mount configuration. Spaces are not allowed. |
az webapp config storage-account list --resource-group <resource-group> --name <
| **Storage type** | Select the type based on the storage you want to mount. Azure Blobs only supports read-only access. |
| **Storage container** or **Share name** | Files share or Blobs container to mount. |
| **Access key** (Advanced only) | [Access key](../storage/common/storage-account-keys-manage.md) for your storage account. |
- | **Mount path** | Directory inside the Linux container to mount to Azure Storage. Do not use `/` (the root directory). |
+ | **Mount path** | Directory inside the Linux container to mount to Azure Storage. Do not use `/` or `/home`.|
+ ::: zone-end
> [!CAUTION]
- > The directory specified in **Mount path** in the Linux container should be empty. Any content stored in this directory is deleted when the Azure Storage is mounted (if you specify a directory under `/home`, for example). If you are migrating files for an existing app, make a backup of the app and its content before you begin.
+ > The directory specified in **Mount path** in the container should be empty. Any content stored in this directory is deleted when the Azure Storage is mounted (if you specify a directory under `/home`, for example). If you are migrating files for an existing app, make a backup of the app and its content before you begin.
> # [Azure CLI](#tab/cli)
-Use the [`az webapp config storage-account add`](/cli/azure/webapp/config/storage-account#az_webapp_config_storage_account_add) command.
+Use the [`az webapp config storage-account add`](/cli/azure/webapp/config/storage-account#az_webapp_config_storage_account_add) command. For example:
```azurecli
az webapp config storage-account add --resource-group <group-name> --name <app-name> --custom-id <custom-id> --storage-type AzureFiles --share-name <share-name> --account-name <storage-account-name> --access-key "<access-key>" --mount-path <mount-path-directory>
```
+- `--storage-type` must be `AzureFiles` for Windows containers.
+- `mount-path-directory` must be in the form `/path/to/dir` or `[C-Z]:\path\to\dir` with no drive letter. Do not use a root directory (`[C-Z]:\` or `/`) or the `home` directory (`[C-Z]:\home`, or `/home`).
- `--storage-type` can be `AzureBlob` or `AzureFiles`. `AzureBlob` is read-only.
- `--mount-path` is the directory inside the Linux container to mount to Azure Storage. Do not use `/` (the root directory).
+
+Verify your storage is mounted by running the following command:
+
+```azurecli
+az webapp config storage-account list --resource-group <resource-group> --name <app-name>
+```
> [!CAUTION]
-> The directory specified in `--mount-path` in the Linux container should be empty. Any content stored in this directory is deleted when the Azure Storage is mounted (if you specify a directory under `/home`, for example). If you are migrating files for an existing app, make a backup of the app and its content before you begin.
+> The directory specified in `--mount-path` in the container should be empty. Any content stored in this directory is deleted when the Azure Storage is mounted (if you specify a directory under `/home`, for example). If you are migrating files for an existing app, make a backup of the app and its content before you begin.
> Verify your configuration by running the following command:
az webapp config storage-account list --resource-group <resource-group> --name <
-
> [!NOTE]
> Adding, editing, or deleting a storage mount causes the app to be restarted.
To validate that the Azure Storage is mounted successfully for the app:
tcpping Storageaccount.file.core.windows.net
```
+
## Best practices

- To avoid potential issues related to latency, place the app and the Azure Storage account in the same Azure region. Note, however, if the app and Azure Storage account are in the same Azure region, and if you grant access from App Service IP addresses in the [Azure Storage firewall configuration](../storage/common/storage-network-security.md), then these IP restrictions are not honored.
+- The mount directory in the container app should be empty. Any content stored at this path is deleted when the Azure Storage is mounted (if you specify a directory under `/home`, for example). If you are migrating files for an existing app, make a backup of the app and its content before you begin.
- Mounting the storage to `/home` is not recommended because it may result in performance bottlenecks for the app.
- In the Azure Storage account, avoid [regenerating the access key](../storage/common/storage-account-keys-manage.md) that's used to mount the storage in the app. The storage account contains two different keys. Use a stepwise approach to ensure that the storage mount remains available to the app during key regeneration. For example, assuming that you used **key1** to configure storage mount in your app:
  1. Regenerate **key2**.
To validate that the Azure Storage is mounted successfully for the app:
  1. Regenerate **key1**.
- If you delete an Azure Storage account, container, or share, remove the corresponding storage mount configuration in the app to avoid possible error scenarios.
- The mounted Azure Storage account can be either Standard or Premium performance tier. Based on the app capacity and throughput requirements, choose the appropriate performance tier for the storage account. See the scalability and performance targets that correspond to the storage type:
- - [For Files](../storage/files/storage-files-scale-targets.md)
- - [For Blobs](../storage/blobs/scalability-targets.md)
+ - [For Files](../storage/files/storage-files-scale-targets.md) (Windows and Linux containers)
+ - [For Blobs](../storage/blobs/scalability-targets.md) (Linux containers only)
- If your app [scales to multiple instances](../azure-monitor/autoscale/autoscale-get-started.md), all the instances connect to the same mounted Azure Storage account. To avoid performance bottlenecks and throughput issues, choose the appropriate performance tier for the storage account.
- It's not recommended to use storage mounts for local databases (such as SQLite) or for any other applications and components that rely on file handles and locks.
- When using Azure Storage [private endpoints](../storage/common/storage-private-endpoints.md) with the app, you need to set the following two app settings:
To validate that the Azure Storage is mounted successfully for the app:
  - `WEBSITE_VNET_ROUTE_ALL` = `1`
- If you [initiate a storage failover](../storage/common/storage-initiate-account-failover.md) and the storage account is mounted to the app, the mount will fail to connect until you either restart the app or remove and add the Azure Storage mount.

## Next steps

::: zone pivot="container-windows"
app-service How To Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/how-to-zone-redundancy.md
+
+ Title: Availability Zone support for public multi-tenant App Service
+description: Learn how to deploy your App Service so that your apps are zone redundant.
++ Last updated : 09/01/2021+++
+# Availability Zone support for public multi-tenant App Service
+
+Microsoft Azure App Service can be deployed into [Availability Zones (AZ)](../availability-zones/az-overview.md) which enables [high availability](https://en.wikipedia.org/wiki/High_availability) for your apps. This architecture is also known as zone redundancy.
+
+An app lives in an App Service plan (ASP), and the App Service plan exists in a single scale unit. When an App Service is configured to be zone redundant, the platform automatically spreads the VM instances in the App Service plan across all three zones in the selected region. If a capacity larger than three is specified and the number of instances is divisible by three, the instances will be spread evenly. Otherwise, instance counts beyond 3*N will get spread across the remaining one or two zones.
+
+## Requirements
+
+Zone redundancy is a property of the App Service plan. The following are the current requirements/limitations for enabling zone redundancy:
+
+- Both Windows and Linux are supported
+- Requires either **Premium v2** or **Premium v3** App Service plans
+- Minimum instance count of three
+ - The platform will enforce this minimum count behind the scenes if you specify an instance count fewer than three
+- Can be enabled in any of the following regions:
+ - West US 2
+ - West US 3
+ - Central US
+ - East US
+ - East US 2
+ - Canada Central
+ - Brazil South
+ - North Europe
+ - West Europe
+ - Germany West Central
+ - France Central
+ - UK South
+ - Japan East
+ - Southeast Asia
+ - Australia East
+- Zone redundancy can only be specified when creating a **new** App Service plan
+ - Currently you can't convert a pre-existing App Service plan. See next bullet for details on how to create a new App Service plan that supports zone redundancy.
+- Zone redundancy is only supported in the newer portion of the App Service footprint
+ - Currently if you're running on Pv3, then it is possible that you're already on a footprint that supports zone redundancy. In this scenario, you can create a new App Service plan and specify zone redundancy when creating the new App Service plan.
+ - If you aren't using Pv3 or a scale unit that supports zone redundancy, are in an unsupported region, or are unsure, follow the steps below:
+ - Create a new resource group in a region that is supported
+ - This ensures the App Service control plane can find a scale unit in the selected region that supports zone redundancy
+ - Create a new App Service plan (and app) in a region of your choice using the **new** resource group
+ - Ensure the zoneRedundant property (described below) is set to true when creating the new App Service plan
+- Must be created using [Azure Resource Manager (ARM) templates](../azure-resource-manager/templates/overview.md)
+
+In the case when a zone goes down, the App Service platform will detect lost instances and automatically attempt to find new replacement instances. If you also have autoscale configured, and if it decides more instances are needed, autoscale will also issue a request to App Service to add more instances (autoscale behavior is independent of App Service platform behavior). It's important to note there's no guarantee that requests for additional instances in a zone-down scenario will succeed since back filling lost instances occurs on a best-effort basis. The recommended solution is to provision your App Service plans to account for losing a zone as described in the next section of this article.
+
+Applications deployed in an App Service plan enabled for zone redundancy will continue to run and serve traffic even if other zones in the same region suffer an outage. However it's possible that non-runtime behaviors including App Service plan scaling, application creation, application configuration, and application publishing may still be impacted from an outage in other Availability Zones. Zone redundancy for App Service plans only ensures continued uptime for deployed applications.
+
+## How to Deploy a Zone Redundant App Service
+
+Currently, you need to use an ARM template to create a zone redundant App Service. Once created via an ARM template, the App Service plan can be viewed and interacted with via the Azure portal and CLI tooling. An ARM template is only needed for the initial creation of the App Service plan.
+
+The only changes needed in an ARM template to specify a zone redundant App Service are the new ***zoneRedundant*** property (required) and optionally the App Service plan instance count (***capacity***) on the [Microsoft.Web/serverfarms](https://docs.microsoft.com/azure/templates/microsoft.web/serverfarms?tabs=json) resource. If you don't specify a capacity, the platform defaults to three. The ***zoneRedundant*** property should be set to ***true*** and ***capacity*** should be set based on the workload requirement, but no less than three. A good rule of thumb to choose capacity is to ensure sufficient instances for the application such that losing one zone of instances leaves sufficient capacity to handle expected load.
+
+> [!TIP]
+> To decide instance capacity, you can use the following calculation:
+>
+> Since the platform spreads VMs across 3 zones and you need to account for at least the failure of 1 zone, multiply peak workload instance count by a factor of zones/(zones-1), or 3/2. For example, if your typical peak workload requires 4 instances, you should provision 6 instances: (2/3 * 6 instances) = 4 instances.
+>
+
+The ARM template snippet below shows the new ***zoneRedundant*** property and ***capacity*** specification.
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Web/serverfarms",
+ "apiVersion": "2018-02-01",
+ "name": "your-appserviceplan-name-here",
+ "location": "West US 3",
+ "sku": {
+ "name": "P1v3",
+ "tier": "PremiumV3",
+ "size": "P1v3",
+ "family": "Pv3",
+ "capacity": 3
+ },
+ "kind": "app",
+ "properties": {
+ "zoneRedundant": true
+ }
+ }
+]
+```
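Assuming the snippet above is saved in a template file, the plan could be deployed with the Azure CLI; the file and resource group names below are placeholders.

```azurecli-interactive
az deployment group create \
  --resource-group <resource-group-name> \
  --template-file zone-redundant-appservice-plan.json
```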
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn how to create and deploy ARM templates](../azure-resource-manager/templates/quickstart-create-templates-use-visual-studio-code.md)
+
+> [!div class="nextstepaction"]
+> [ARM Quickstart Templates](https://azure.microsoft.com/resources/templates/)
app-service Manage Backup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/manage-backup.md
Title: Back up an app
description: Learn how to create backups of your apps in Azure App Service. Run manual or scheduled backups. Customize backups by including the attached database. ms.assetid: 6223b6bd-84ec-48df-943f-461d84605694 Previously updated : 10/16/2019 Last updated : 09/02/2021
The following database solutions are supported with backup feature:
* The Backup and Restore feature requires the App Service plan to be in the **Standard**, **Premium**, or **Isolated** tier. For more information about scaling your App Service plan to use a higher tier, see [Scale up an app in Azure](manage-scale-up.md). **Premium** and **Isolated** tiers allow a greater number of daily backups than the **Standard** tier.
* You need an Azure storage account and container in the same subscription as the app that you want to back up. For more information on Azure storage accounts, see [Azure storage account overview](../storage/common/storage-account-overview.md).
-* Backups can be up to 10 GB of app and database content. If the backup size exceeds this limit, you get an error.
-* Backups of TLS enabled Azure Database for MySQL is not supported. If a backup is configured, you will encounter backup failures.
-* Backups of TLS enabled Azure Database for PostgreSQL is not supported. If a backup is configured, you will encounter backup failures.
-* In-app MySQL databases are automatically backed up without any configuration. If you make manually settings for in-app MySQL databases, such as adding connection strings, the backups may not work correctly.
-* Using a firewall enabled storage account as the destination for your backups is not supported. If a backup is configured, you will encounter backup failures.
-* Currently, you can't use the Backup and Restore feature with Azure storage accounts that are configured to use Private Endpoint.
+* Backups can be up to 10 GB of app and database content, up to 4GB of which can be the database backup. If the backup size exceeds this limit, you get an error.
+* Backups of [TLS enabled Azure Database for MySQL](../mysql/concepts-ssl-connection-security.md) are not supported. If a backup is configured, you will encounter backup failures.
+* Backups of [TLS enabled Azure Database for PostgreSQL](../postgresql/concepts-ssl-connection-security.md) are not supported. If a backup is configured, you will encounter backup failures.
+* In-app MySQL databases are automatically backed up without any configuration. If you make manual settings for in-app MySQL databases, such as adding connection strings, the backups may not work correctly.
+* Using a [firewall enabled storage account](../storage/common/storage-network-security.md) as the destination for your backups is not supported. If a backup is configured, you will encounter backup failures.
+* Using a [private endpoint enabled storage account](../storage/common/storage-private-endpoints.md) for backup and restore is not supported.
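Once these requirements are met, a manual backup can also be triggered from the Azure CLI. This is only a sketch; the SAS container URL and names are placeholders.

```azurecli-interactive
# Create a one-off backup of the app into an existing storage container (SAS URL required).
az webapp config backup create --resource-group <group-name> --webapp-name <app-name> \
  --backup-name mybackup --container-url "<sas-url-to-storage-container>"

# List the backups taken for the app.
az webapp config backup list --resource-group <group-name> --webapp-name <app-name>
```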
<a name="manualbackup"></a>
app-service Networking Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/networking-features.md
To learn how to set an address on your app, see [Add a TLS/SSL certificate in Az
Access restrictions let you filter *inbound* requests. The filtering action takes place on the front-end roles that are upstream from the worker roles where your apps are running. Because the front-end roles are upstream from the workers, you can think of access restrictions as network-level protection for your apps.
-This feature allows you to build a list of allow and deny rules that are evaluated in priority order. It's similar to the network security group (NSG) feature in Azure networking. You can use this feature in an ASE or in the multitenant service. When you use it with an ILB ASE or private endpoint, you can restrict access from private address blocks.
+This feature allows you to build a list of allow and deny rules that are evaluated in priority order. It's similar to the network security group (NSG) feature in Azure networking. You can use this feature in an ASE or in the multitenant service. When you use it with an ILB ASE, you can restrict access from private address blocks.
> [!NOTE] > Up to 512 access restriction rules can be configured per app.
app-service Quickstart Ruby https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/quickstart-ruby.md
1. Create a [web app](overview.md#app-service-on-linux) in the `myAppServicePlan` App Service plan.
- In the Cloud Shell, you can use the [`az webapp create`](/cli/azure/webapp) command. In the following example, replace `<app-name>` with a globally unique app name (valid characters are `a-z`, `0-9`, and `-`). The runtime is set to `RUBY|2.6.2`. To see all supported runtimes, run [`az webapp list-runtimes --linux`](/cli/azure/webapp).
+ In the Cloud Shell, you can use the [`az webapp create`](/cli/azure/webapp) command. In the following example, replace `<app-name>` with a globally unique app name (valid characters are `a-z`, `0-9`, and `-`). The runtime is set to `RUBY|2.6`. To see all supported runtimes, run [`az webapp list-runtimes --linux`](/cli/azure/webapp).
```azurecli-interactive
- az webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app-name> --runtime 'RUBY|2.6.2' --deployment-local-git
+ az webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app-name> --runtime 'RUBY|2.6' --deployment-local-git
``` When the web app has been created, the Azure CLI shows output similar to the following example:
app-service Tutorial Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/tutorial-custom-container.md
RUN apt-get update \
> [!NOTE] > This configuration doesn't allow external connections to the container. SSH is available only through the Kudu/SCM Site. The Kudu/SCM site is authenticated with your Azure account.
+> The `root:Docker!` credentials should not be altered for SSH. SCM/Kudu will use your Azure portal credentials. Changing this value will result in an error when using SSH.
The *Dockerfile* also copies the *sshd_config* file to the */etc/ssh/* folder and exposes port 2222 on the container:
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/tutorial-python-postgresql-app.md
Test the app locally with the following steps:
1. Go to `http://localhost:8000` in a browser, which should display the message "No polls are available".
-1. Go to `http:///localhost:8000/admin` and sign in using the admin user you created previously. Under **Polls**, again select **Add** next to **Questions** and create a poll question with some choices.
+1. Go to `http://localhost:8000/admin` and sign in using the admin user you created previously. Under **Polls**, again select **Add** next to **Questions** and create a poll question with some choices.
-1. Go to *http:\//localhost:8000* again and answer the question to test the app.
+1. Go to `http://localhost:8000` again and answer the question to test the app.
1. Stop the Django server by pressing **Ctrl**+**C**.
attestation Basic Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/basic-concepts.md
Example of JWT generated for an SGX enclave:
Some of the claims used above are considered deprecated but are fully supported. It is recommended that all future code and tooling use the non-deprecated claim names. See [claims issued by Azure Attestation](claim-sets.md) for more information.
-The below claims will appear only in the attestation token generated for Intel® Xeon® Scalable processor-based server platforms. The claims will not appear if the SGX enclave is not configured with [Key Separation and and Sharing Support](https://github.com/openenclave/openenclave/issues/3054)
+The below claims will appear only in the attestation token generated for Intel® Xeon® Scalable processor-based server platforms. The claims will not appear if the SGX enclave is not configured with [Key Separation and Sharing Support](https://github.com/openenclave/openenclave/issues/3054)
**x-ms-sgx-config-id**
automanage Quick Create Virtual Machines Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/quick-create-virtual-machines-portal.md
If you don't have an Azure subscription, [create an account](https://azure.micro
Sign in to the [Azure portal](https://aka.ms/AutomanagePortal-Ignite21).
-## Enable Automanage for a single machine
-
-1. Browse to the Virtual Machine that you would like to enable.
-
-2. Click on the **Automanage (Preview)** entry in the Table of Contents under **Operations**.
-
-3. Select **Get Started**.
-
- :::image type="content" source="media\quick-create-virtual-machine-portal\vmmanage-getstartedbutton.png" alt-text="Get started single VM.":::
-
-4. Choose your Automanage settings (Environment, Preferences, Automanage Account) and hit **Enable**.
-
- :::image type="content" source="media\quick-create-virtual-machine-portal\vmmanage-enablepane.png" alt-text="Enable on single VM.":::
-
-## Enable Automanage for multiple machines
+## Enable Automanage on existing machines
1. In the search bar, search for and select **Automanage – Azure machine best practices**.
automation Dsc Linux Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/dsc-linux-powershell.md
Last updated 08/31/2021
# Configure Linux desired state with Azure Automation State Configuration using PowerShell In this tutorial, you'll apply an Azure Automation State Configuration with PowerShell to an Azure Linux virtual machine to check whether it complies with a desired state. The desired state is to identify if the apache2 service is present on the node.+ Azure Automation State Configuration allows you to specify configurations for your machines and ensure those machines are in a specified state over time. For more information about State Configuration, see [Azure Automation State Configuration overview](./automation-dsc-overview.md). In this tutorial, you learn how to:
automation Create Account Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/quickstarts/create-account-portal.md
+
+ Title: Azure Quickstart - Create an Azure Automation account
+description: This article helps you get started creating an Azure Automation account and running a runbook.
+ Last updated : 09/01/2021+++++
+# Create an Azure Automation account
+
+You can create an Azure Automation account through the Azure portal, a browser-based user interface that provides access to a number of Azure resources. One Automation account can manage resources across all regions and subscriptions for a given tenant.
+
+This quickstart guides you in creating an Automation account and running a runbook in the account. If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
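If you prefer scripting to the portal steps that follow, the same account can also be created with Azure PowerShell. This is a minimal sketch, assuming the Az.Automation module; the resource group, account name, and region are placeholders.

```powershell-interactive
# Assumes the Az and Az.Automation modules; names and region are placeholders.
Connect-AzAccount

# Create a resource group to hold the Automation account.
New-AzResourceGroup -Name "myResourceGroup" -Location "EastUS"

# Create the Automation account in that resource group.
New-AzAutomationAccount -ResourceGroupName "myResourceGroup" `
    -Name "myAutomationAccount" -Location "EastUS"
```

Note that the Run As account shown in the portal steps isn't created by this cmdlet; it still needs to be set up separately.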
+
+## Sign in to Azure
+
+[Sign in to Azure](https://portal.azure.com).
+
+## Create Automation account
+
+1. Choose a name for your Automation account. Automation account names are unique per region and resource group. Names for Automation accounts that have been deleted might not be immediately available.
+
+ > [!NOTE]
+ > You can't change the account name once it has been entered in the user interface.
+
+2. Click **Create a resource** found in the upper left corner of Azure portal.
+
+3. Select **IT & Management Tools**, and then select **Automation**.
+
+4. Enter the account information, including the selected account name. For **Create Azure Run As account**, choose **Yes** so that the artifacts to simplify authentication to Azure are enabled automatically. When the information is complete, click **Create** to start the Automation account deployment.
+
+ ![Enter information about your Automation account in the page](./media/create-account-portal/create-automation-account-portal-blade.png)
+
+ > [!NOTE]
+ > For an updated list of locations that you can deploy an Automation account to, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=automation&regions=all).
+
+5. When the deployment has completed, click **All Services**.
+
+6. Select **Automation Accounts** and then choose the Automation account you've created.
+
+ ![Automation account overview](./media/create-account-portal/automation-account-overview.png)
+
+## Run a runbook
+
+Run one of the tutorial runbooks.
+
+1. Click **Runbooks** under **Process Automation**. The list of runbooks is displayed. By default, several tutorial runbooks are enabled in the account.
+
+ ![Automation account runbooks list](./media/create-account-portal/automation-runbooks-overview.png)
+
+1. Select the **AzureAutomationTutorialScript** runbook. This action opens the runbook overview page.
+
+ ![Runbook overview](./media/create-account-portal/automation-tutorial-script-runbook-overview.png)
+
+1. Click **Start**, and on the Start Runbook page, click **OK** to start the runbook.
+
+ ![Runbook job page](./media/create-account-portal/automation-tutorial-script-job.png)
+
+1. After the job status becomes `Running`, click **Output** or **All Logs** to view the runbook job output. For this tutorial runbook, the output is a list of your Azure resources.
+
+## Next steps
+
+In this quickstart, you've deployed an Automation account, started a runbook job, and viewed the job results. To learn more about Azure Automation, continue to the quickstart for creating your first PowerShell runbook.
+
+> [!div class="nextstepaction"]
+> [Quickstart - Create an Azure Automation PowerShell runbook](create-powershell-runbook.md)
+
automation Create Powershell Runbook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/quickstarts/create-powershell-runbook.md
+
+ Title: Azure Quickstart - Create an Azure Automation runbook
+description: This article helps you get started creating an Azure Automation runbook.
+ Last updated : 09/01/2021+++
+ - mvc
+ - mode-api
++
+# Create an Azure Automation runbook
+
+You can create Azure Automation runbooks in the Azure portal, which provides a browser-based user interface for creating and managing runbooks. In this quickstart, you walk through creating, editing, testing, and publishing an Automation PowerShell runbook.
+
+If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Sign in to Azure
+
+Sign in to Azure at https://portal.azure.com.
+
+## Create the runbook
+
+First, create a runbook. The sample runbook created in this quickstart outputs `Hello World` by default.
+
+1. In the Azure portal, navigate to **Automation accounts**.
+
+1. From the list of Automation accounts, select an account.
+
+1. Click **Runbooks** under **Process Automation**. The list of runbooks is displayed.
+
+1. Click **Create a runbook** at the top of the list.
+
+1. Enter `Hello-World` for the runbook name in the **Name** field, and select **PowerShell** for the **Runbook type** field.
+
+ ![Enter information about your Automation runbook in the page](./media/create-powershell-runbook/automation-create-runbook-configure.png)
+
+1. Click **Create**. The runbook is created and the Edit PowerShell Runbook page opens.
+
+ :::image type="content" source="./media/create-powershell-runbook/automation-edit-runbook-empty.png" alt-text="Screenshot of the Edit PowerShell Runbook page.":::
+
+1. Type or copy and paste the following code into the edit pane. It creates an optional input parameter called `Name` with a default value of `World`, and outputs a string that uses this input value:
+
+ ```powershell-interactive
+ param
+ (
+ [Parameter(Mandatory=$false)]
+ [String] $Name = "World"
+ )
+
+ "Hello $Name!"
+ ```
+
+1. Click **Save** to save a draft copy of the runbook.
+
+ :::image type="content" source="./media/create-powershell-runbook/automation-edit-runbook.png" alt-text="Screenshot of the Edit PowerShell Runbook page with a code example in the right window.":::
+
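The portal steps above can also be scripted. Here is a hedged sketch, assuming the Az.Automation module, the account names used in this quickstart, and that the code shown earlier is saved locally as Hello-World.ps1 (a hypothetical path):

```powershell-interactive
# Assumes the Az.Automation module; the account, group, and file path are placeholders.
# The runbook name is taken from the file name (Hello-World.ps1 -> Hello-World).
Import-AzAutomationRunbook -ResourceGroupName "myResourceGroup" `
    -AutomationAccountName "myAutomationAccount" -Type PowerShell `
    -Path ".\Hello-World.ps1"

# Publish the imported draft as the official version of the runbook.
Publish-AzAutomationRunbook -ResourceGroupName "myResourceGroup" `
    -AutomationAccountName "myAutomationAccount" -Name "Hello-World"
```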
+## Test the runbook
+
+Once the runbook is created, you must test the runbook to validate that it works.
+
+1. Click **Test pane** to open the Test pane.
+
+1. Enter a value for **Name**, and click **Start**. The test job starts and the job status and output display.
+
+ :::image type="content" source="./media/create-powershell-runbook/automation-test-runbook.png" alt-text="Screenshot of the Test pane with an example value in the name field.":::
+
+1. Close the Test pane by clicking the **X** in the upper right corner. Select **OK** in the popup that appears.
+
+1. In the Edit PowerShell Runbook page, click **Publish** to publish the runbook as the official version of the runbook in the account.
+
+ :::image type="content" source="./media/create-powershell-runbook/automation-hello-world-runbook-job.png" alt-text="Screenshot of the Edit PowerShell Runbook page showing the Publish button selected.":::
+
+## Run the runbook
+
+Once the runbook is published, the overview page is shown.
+
+1. In the runbook overview page, click **Start** to open the Start Runbook configuration page for this runbook.
+
+ :::image type="content" source="./media/create-powershell-runbook/automation-hello-world-runbook-start.png" alt-text="Screenshot of the Start Runbook configuration page.":::
+
+1. Leave **Name** blank, so that the default value is used, and click **OK**. The runbook job is submitted, and the Job page appears.
+
+ :::image type="content" source="./media/create-powershell-runbook/automation-job-page.png" alt-text="Screenshot of Job page showing the Output button selected.":::
+
+1. When the job status is `Running` or `Completed`, click **Output** to open the Output pane and view the runbook output.
+
+ :::image type="content" source="./media/create-powershell-runbook/automation-hello-world-runbook-job-output.png" alt-text="Screenshot of the Output pane showing the runbook output.":::
+
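For reference, starting the published runbook and reading its output can also be done from Azure PowerShell. A minimal sketch, assuming the Az.Automation module and the placeholder names used above:

```powershell-interactive
# Assumes the Az.Automation module; account and group names are placeholders.
$job = Start-AzAutomationRunbook -ResourceGroupName "myResourceGroup" `
    -AutomationAccountName "myAutomationAccount" -Name "Hello-World" `
    -Parameters @{ Name = "World" }

# Give the job a moment to run, then read its output stream.
Start-Sleep -Seconds 30
Get-AzAutomationJobOutput -ResourceGroupName "myResourceGroup" `
    -AutomationAccountName "myAutomationAccount" -Id $job.JobId -Stream Output
```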
+## Clean up resources
+
+When no longer needed, delete the runbook. To do so, select the runbook in the runbook list, and click **Delete**.
+
+## Next steps
+
+In this quickstart, you've created, edited, tested, and published a runbook and started a runbook job. To learn more about Automation runbooks, continue to the article on the different runbook types that you can create and use in Automation.
+
+> [!div class="nextstepaction"]
+> [Azure Automation runbook types](../automation-runbook-types.md)
automation Dsc Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/quickstarts/dsc-configuration.md
+
+ Title: Azure Quickstart - Configure a VM with Desired State Configuration
+description: This article helps you get started configuring an Azure VM with Desired State Configuration.
++
+keywords: dsc, configuration, automation
Last updated : 09/01/2021++++
+# Configure a VM with Desired State Configuration
+
+By enabling Azure Automation State Configuration, you can manage and monitor the configurations of your Windows and Linux servers using Desired State Configuration (DSC). Configurations that drift from a desired configuration can be identified or auto-corrected. This quickstart steps through enabling an Azure Linux VM and deploying a LAMP stack using Azure Automation State Configuration.
+
+## Prerequisites
+
+To complete this quickstart, you need:
+
+* An Azure subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/).
+* An Azure Automation account. For instructions on creating an Azure Automation Run As account, see [Azure Run As Account](../manage-runas-account.md).
+* An Azure Resource Manager virtual machine running Red Hat Enterprise Linux, CentOS, or Oracle Linux. For instructions on creating a VM, see [Create your first Linux virtual machine in the Azure portal](../../virtual-machines/linux/quick-create-portal.md).
+
+## Sign in to Azure
+Sign in to Azure at https://portal.azure.com.
+
+## Enable a virtual machine
+
+There are many different methods to enable a machine for Automation State Configuration. This quickstart shows how to enable the feature for an Azure VM by using an Automation account. You can learn more about different methods to enable your machines for State Configuration by reading [Enable machines for management by Azure Automation State Configuration](../automation-dsc-onboarding.md).
+
+1. In the Azure portal, navigate to **Automation accounts**.
+1. From the list of Automation accounts, select an account.
+1. From the left pane of the Automation account, select **State configuration (DSC)**.
+2. Click **Add** to open the **VM select** page.
+3. Find the virtual machine for which to enable DSC. You can use the search field and filter options to find a specific virtual machine.
+4. Click on the virtual machine, and then click **Connect**.
+5. Select the DSC settings appropriate for the virtual machine. If you have already prepared a configuration, you can specify it as `Node Configuration Name`. You can set the [configuration mode](/powershell/scripting/dsc/managing-nodes/metaConfig) to control the configuration behavior for the machine.
+6. Click **OK**. While the DSC extension is deployed to the virtual machine, the status reported is `Connecting`.
+
+![Enabling an Azure VM for DSC](./media/dsc-configuration/dsc-onboard-azure-vm.png)
+
+## Import modules
+
+Modules contain DSC resources and many can be found in the [PowerShell Gallery](https://www.powershellgallery.com). Any resources that are used in your configurations must be imported to the Automation account before compiling. For this quickstart, the module named **nx** is required.
+
+1. From the left pane of the Automation account, select **Modules Gallery** under **Shared Resources**.
+1. Search for the module to import by typing part of its name: `nx`.
+1. Click on the module to import.
+1. Click **Import**.
+
+![Importing a DSC Module](./media/dsc-configuration/dsc-import-module-nx.png)
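Module import can be scripted too. A hedged sketch, assuming the Az.Automation module and the public PowerShell Gallery package URL for nx (the URI and version are assumptions; verify them before use):

```powershell-interactive
# Assumes the Az.Automation module; account and group names are placeholders.
# The gallery package URI and version are assumptions - confirm the current nx release first.
New-AzAutomationModule -ResourceGroupName "myResourceGroup" `
    -AutomationAccountName "myAutomationAccount" -Name "nx" `
    -ContentLinkUri "https://www.powershellgallery.com/api/v2/package/nx/1.0"
```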
+
+## Import the configuration
+
+This quickstart uses a DSC configuration that configures Apache HTTP Server, MySQL, and PHP on the machine. See [DSC configurations](/powershell/scripting/dsc/configurations/configurations).
+
+In a text editor, type the following and save it locally as **LAMPServer.ps1**.
+
+```powershell-interactive
+configuration 'LAMPServer' {
+ Import-DSCResource -module "nx"
+
+ Node localhost {
+
+ $requiredPackages = @("httpd","mod_ssl","php","php-mysql","mariadb","mariadb-server")
+ $enabledServices = @("httpd","mariadb")
+
+ #Ensure packages are installed
+ ForEach ($package in $requiredPackages){
+ nxPackage $Package{
+ Ensure = "Present"
+ Name = $Package
+ PackageManager = "yum"
+ }
+ }
+
+ #Ensure daemons are enabled
+ ForEach ($service in $enabledServices){
+ nxService $service{
+ Enabled = $true
+ Name = $service
+ Controller = "SystemD"
+ State = "running"
+ }
+ }
+ }
+}
+```
+
+To import the configuration:
+
+1. In the left pane of the Automation account, select **State configuration (DSC)** and then click the **Configurations** tab.
+2. Click **+ Add**.
+3. Select the configuration file that you saved in the prior step.
+4. Click **OK**.
+
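The same import can be done with Azure PowerShell. A minimal sketch, assuming the Az.Automation module and the local file path used above (all names are placeholders):

```powershell-interactive
# Assumes the Az.Automation module; account, group, and path are placeholders.
# The file name typically must match the configuration name (LAMPServer).
Import-AzAutomationDscConfiguration -ResourceGroupName "myResourceGroup" `
    -AutomationAccountName "myAutomationAccount" `
    -SourcePath "C:\dsc\LAMPServer.ps1" -Published -Force
```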
+## Compile a configuration
+
+You must compile a DSC configuration to a node configuration (MOF document) before it can be assigned to a node. Compilation validates the configuration and allows for the input of parameter values. To learn more about compiling a configuration, see [Compiling configurations in State Configuration](../automation-dsc-compile.md).
+
+1. In the left pane of the Automation account, select **State Configuration (DSC)** and then click the **Configurations** tab.
+1. Select the configuration `LAMPServer`.
+1. From the menu options, select **Compile** and then click **Yes**.
+1. In the Configuration view, you see a new compilation job queued. When the job has completed successfully, you are ready to move on to the next step. If there are any failures, you can click on the compilation job for details.
+
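Compilation can also be started from Azure PowerShell. A hedged sketch with the same placeholder names:

```powershell-interactive
# Assumes the Az.Automation module; names are placeholders.
$job = Start-AzAutomationDscCompilationJob -ResourceGroupName "myResourceGroup" `
    -AutomationAccountName "myAutomationAccount" -ConfigurationName "LAMPServer"

# Poll the compilation job until its status shows it completed successfully.
Get-AzAutomationDscCompilationJob -ResourceGroupName "myResourceGroup" `
    -AutomationAccountName "myAutomationAccount" -Id $job.Id
```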
+## Assign a node configuration
+
+You can assign a compiled node configuration to a DSC node. Assignment applies the configuration to the machine and monitors or auto-corrects for any drift from that configuration.
+
+1. In the left pane of the Automation account, select **State Configuration (DSC)** and then click the **Nodes** tab.
+1. Select the node to which to assign a configuration.
+1. Click **Assign Node Configuration**.
+1. Select the node configuration `LAMPServer.localhost` and click **OK**. State Configuration now assigns the compiled configuration to the node, and the node status changes to `Pending`. On the next periodic check, the node retrieves the configuration, applies it, and reports status. It can take up to 30 minutes for the node to retrieve the configuration, depending on the node settings.
+1. To force an immediate check, you can run the following command locally on the Linux virtual machine:
+ `sudo /opt/microsoft/dsc/Scripts/PerformRequiredConfigurationChecks.py`
+
+![Assigning a Node Configuration](./media/dsc-configuration/dsc-assign-node-configuration.png)
+
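Assignment can likewise be scripted. A minimal sketch, assuming the Az.Automation module; the node name is a placeholder for the VM you registered earlier:

```powershell-interactive
# Assumes the Az.Automation module; account, group, and node names are placeholders.
$node = Get-AzAutomationDscNode -ResourceGroupName "myResourceGroup" `
    -AutomationAccountName "myAutomationAccount" -Name "myLinuxVM"

# Assign the compiled node configuration to the registered node.
Set-AzAutomationDscNode -ResourceGroupName "myResourceGroup" `
    -AutomationAccountName "myAutomationAccount" `
    -NodeConfigurationName "LAMPServer.localhost" -Id $node.Id -Force
```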
+## View node status
+
+You can view the status of all State Configuration-managed nodes in your Automation account. The information is displayed by choosing **State Configuration (DSC)** and clicking the **Nodes** tab. You can filter the display by status, node configuration, or name search.
+
+![DSC Node Status](./media/dsc-configuration/dsc-node-status.png)
+
+## Next steps
+
+In this quickstart, you enabled an Azure Linux VM for State Configuration, created a configuration for a LAMP stack, and deployed the configuration to the VM. To learn how you can use Azure Automation State Configuration to enable continuous deployment, continue to the article:
+
+> [!div class="nextstepaction"]
+> [Set up continuous deployment with Chocolatey](../automation-dsc-cd-chocolatey.md)
azure-arc Plan Evaluate On Azure Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/plan-evaluate-on-azure-virtual-machine.md
Title: How to evaluate Azure Arc-enabled servers with an Azure VM description: Learn how to evaluate Azure Arc-enabled servers using an Azure virtual machine. Previously updated : 07/16/2021 Last updated : 09/02/2021
When Arc-enabled servers is configured on the VM, you see two representations of
sudo ufw --force enable sudo ufw deny out from any to 169.254.169.254 sudo ufw default allow incoming
- sudo apt-get update
```-- To configure a generic iptables configuration, run the following command: ```bash
When Arc-enabled servers is configured on the VM, you see two representations of
> [!NOTE] > This configuration needs to be set after every reboot unless a persistent iptables solution is used.
+ If your Azure VM is running CentOS, Red Hat, or SUSE Linux Enterprise Server (SLES), perform the following steps to configure firewalld:
+
+ ```bash
+ firewall-cmd --permanent --direct --add-rule ipv4 filter OUTPUT 1 -p tcp -d 169.254.169.254 -j DROP
+ firewall-cmd --reload
+ ```
+ 4. Install and configure the Azure Arc-enabled servers agent. The VM is now ready for you to begin evaluating Arc-enabled servers. To install and configure the Arc-enabled servers agent, see [Connect hybrid machines using the Azure portal](onboard-portal.md) and follow the steps to generate an installation script and install using the scripted method.
azure-cache-for-redis Cache How To Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-scale.md
For more information on determining the cache pricing tier to use, see [Choosing
## Scale a cache
-To scale your cache, [browse to the cache](cache-configure.md#configure-azure-cache-for-redis-settings) in the [Azure portal](https://portal.azure.com) and select **Scale** from the **Resource menu**.
+To scale your cache, [browse to the cache](cache-configure.md#configure-azure-cache-for-redis-settings) in the [Azure portal](https://portal.azure.com) and select **Scale** on the left.
-![Scale](./media/cache-how-to-scale/redis-cache-scale-menu.png)
-On the left, select the pricing tier you want from **Select pricing tier** and **Select**.
+Choose a pricing tier on the right and then choose **Select**.
You can scale to a different pricing tier with the following restrictions:
You can scale to a different pricing tier with the following restrictions:
- You can't scale from a **Basic** cache directly to a **Premium** cache. First, scale from **Basic** to **Standard** in one scaling operation, and then from **Standard** to **Premium** in the next scaling operation. - You can't scale from a larger size down to the **C0 (250 MB)** size. However, you can scale down to any other size within the same pricing tier. For example, you can scale down from C5 Standard to C1 Standard.
-While the cache is scaling to the new pricing tier, a **Scaling** status is displayed on the left in the **Azure Cache for Redis**.
+While the cache is scaling to the new tier, a **Scaling Redis Cache** notification is displayed.
When scaling is complete, the status changes from **Scaling** to **Running**.
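Scaling can also be requested programmatically. Here is a hedged Azure PowerShell sketch, assuming the Az.RedisCache module and placeholder names, that moves a cache to Standard C2; the tier restrictions listed above still apply.

```powershell-interactive
# Assumes the Az.RedisCache module; the cache and resource group names are placeholders.
Set-AzRedisCache -ResourceGroupName "myResourceGroup" -Name "myCache" `
    -Sku "Standard" -Size "C2"

# Check the provisioning state; it reports Scaling until the operation finishes.
(Get-AzRedisCache -ResourceGroupName "myResourceGroup" -Name "myCache").ProvisioningState
```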
Generally, when you scale a cache with no data, it takes approximately 20 minute
### How can I tell when scaling is complete? In the Azure portal, you can see the scaling operation in progress. When scaling is complete, the status of the cache changes to **Running**.-
-<!-- IMAGES -->
-
-[redis-cache-pricing-tier-blade]: ./media/cache-how-to-scale/redis-cache-pricing-tier-blade.png
-
-[redis-cache-scaling]: ./media/cache-how-to-scale/redis-cache-scaling.png
azure-monitor Azure Monitor Agent Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/azure-monitor-agent-install.md
The following prerequisites are required prior to installing the Azure Monitor a
- *.control.monitor.azure.com > [!IMPORTANT]
-> The Azure Monitor agent does not currently support network proxies or private links.
+> The Azure Monitor agent does not currently support private links.
## Virtual machine extension details The Azure Monitor Agent is implemented as an [Azure VM extension](../../virtual-machines/extensions/overview.md) with the details in the following table. It can be installed using any of the methods to install virtual machine extensions including those described in this article.
azure-monitor Asp Net Exceptions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/asp-net-exceptions.md
Alternatively, instead of looking at exceptions of a specific failing operation,
To get diagnostic data specific to your app, you can insert code to send your own telemetry data. Your custom telemetry or log data is displayed in diagnostic search alongside the request, page view, and other automatically collected data.
-Using the <xref:Microsoft.ApplicationInsights.TelemetryClient?displayProperty=fullName>, you have several APIs available:
+Using the <xref:Microsoft.VisualStudio.ApplicationInsights.TelemetryClient?displayProperty=fullName>, you have several APIs available:
-* <xref:Microsoft.ApplicationInsights.TelemetryClient.TrackEvent%2A?displayProperty=nameWithType> is typically used for monitoring usage patterns, but the data it sends also appears under **Custom Events** in diagnostic search. Events are named, and can carry string properties and numeric metrics on which you can [filter your diagnostic searches](./diagnostic-search.md).
-* <xref:Microsoft.ApplicationInsights.TelemetryClient.TrackTrace%2A?displayProperty=nameWithType> lets you send longer data such as POST information.
-* <xref:Microsoft.ApplicationInsights.TelemetryClient.TrackException%2A?displayProperty=nameWithType> sends exception details, such as stack traces to Application Insights.
+* <xref:Microsoft.VisualStudio.ApplicationInsights.TelemetryClient.TrackEvent%2A?displayProperty=nameWithType> is typically used for monitoring usage patterns, but the data it sends also appears under **Custom Events** in diagnostic search. Events are named, and can carry string properties and numeric metrics on which you can [filter your diagnostic searches](./diagnostic-search.md).
+* <xref:Microsoft.VisualStudio.ApplicationInsights.TelemetryClient.TrackTrace%2A?displayProperty=nameWithType> lets you send longer data such as POST information.
+* <xref:Microsoft.VisualStudio.ApplicationInsights.TelemetryClient.TrackException%2A?displayProperty=nameWithType> sends exception details, such as stack traces to Application Insights.
To see these events, open [Search](./diagnostic-search.md) from the left menu, select the drop-down menu **Event types**, and then choose **Custom Event**, **Trace**, or **Exception**.
If your web page includes script files from content delivery networks or other d
> [!NOTE] > The `TelemetryClient` is recommended to be instantiated once, and re-used throughout the life of an application.
-With [Dependency Injection (DI) in .NET](/dotnet/core/extensions/dependency-injection), the appropriate .NET SDK, and correctly configuring Application Insights for DI, you can require the <xref:Microsoft.ApplicationInsights.TelemetryClient> as a constructor parameter.
+With [Dependency Injection (DI) in .NET](/dotnet/core/extensions/dependency-injection), the appropriate .NET SDK, and correctly configuring Application Insights for DI, you can require the <xref:Microsoft.VisualStudio.ApplicationInsights.TelemetryClient> as a constructor parameter.
```csharp public class ExampleController : ApiController
void Application_Error(object sender, EventArgs e)
} ```
-In the preceding example, the `_telemetryClient` is a class-scoped variable of type <xref:Microsoft.ApplicationInsights.TelemetryClient>.
+In the preceding example, the `_telemetryClient` is a class-scoped variable of type <xref:Microsoft.VisualStudio.ApplicationInsights.TelemetryClient>.
## MVC
azure-monitor Nodejs Quick Start https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/nodejs-quick-start.md
Application Insights can gather telemetry data from any internet-connected appli
> [!NOTE] >If this is your first time creating an Application Insights resource you can learn more by visiting the [Create an Application Insights Resource](../app/create-new-resource.md) doc.
- A configuration page appears; use the following table to fill out the input fields.
+ A configuration page appears. Use the following table to fill out the input fields:
| Settings | Value | Description | | - |:-|:--|
- | **Name** | Globally Unique Value | Name that identifies the app you're monitoring |
- | **Resource Group** | myResourceGroup | Name for the new resource group to host AppInsights data. You can create a new resource group or use an existing one. |
- | **Location** | East US | Choose a location near you, or near where your app is hosted |
+ | **Name** | Globally Unique Value | Name that identifies the app you're monitoring. |
+ | **Resource Group** | myResourceGroup | Name for the new resource group to host Application Insights data. You can create a new resource group or use an existing one. |
+ | **Location** | East US | Choose a location near you, or near where your app is hosted. |
+ | **Resource Mode** | Workspace-based | If there's an option to choose the resource mode, select **Workspace-based**. |
+ | **Log Analytics Workspace** | | Accept the default value. |
3. Select **Create**.
azure-monitor Data Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/data-security.md
Title: Azure Monitor Logs data security | Microsoft Docs
-description: Learn about how [Azure Monitor Logs protects your privacy and secures your data.
+description: Learn about how Azure Monitor Logs protects your privacy and secures your data.
Last updated 11/11/2020
-# [Azure Monitor Logs data security
+# Azure Monitor Logs data security
This document is intended to provide information specific to [Azure Monitor Logs](../logs/data-platform-logs.md) to supplement the information on [Azure Trust Center](https://www.microsoft.com/en-us/trust-center?rtc=1). This article explains how log data is collected, processed, and secured by Azure Monitor. You can use agents to connect to the web service, use System Center Operations Manager to collect operational data, or retrieve data from Azure diagnostics for use by Azure Monitor.
azure-monitor Logicapp Flow Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/logicapp-flow-connector.md
The Azure Monitor Logs connector has these limits:
* Max query timeout 110 second * Chart visualizations could be available in Logs page and missing in the connector since the connector and Logs page don't use the same charting libraries currently
-Depending on the size of your data and the query you use, the connector may hit its limits and fail. You can work around such cases when adjusting the trigger recurrence to run more frequently and query less data. You can use queries that aggregate your data to return less records and columns.
+The connector may reach limits depending on the query you use and the size of the results. You can often avoid such cases by adjusting the flow recurrence to run more frequently over a smaller time range, or by aggregating data to reduce the size of the results. Frequent queries with intervals lower than 100 seconds aren't recommended because of caching.
## Actions The following table describes the actions included with the Azure Monitor Logs connector. Both allow you to run a log query against a Log Analytics workspace or Application Insights application. The difference is in the way the data is returned.
When the logic app completes, check the mail of the recipient that you specified
- Learn more about [log queries in Azure Monitor](./log-query-overview.md). - Learn more about [Logic Apps](../../logic-apps/index.yml)-- Learn more about [Power Automate](https://flow.microsoft.com).
+- Learn more about [Power Automate](https://flow.microsoft.com).
azure-monitor Move Workspace Region https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/move-workspace-region.md
Title: Move Log Analytics workspace to another Azure region using the Azure portal
-description: Use Azure Resource Manager template to move Log Analytics workspace from one Azure region to another using the Azure portal.
+ Title: Move a Log Analytics workspace to another Azure region by using the Azure portal
+description: Use an Azure Resource Manager template to move a Log Analytics workspace from one Azure region to another by using the Azure portal.
Last updated 08/17/2021
-# Move Log Analytics workspace to another region using the Azure portal
+# Move a Log Analytics workspace to another region by using the Azure portal
-There are various scenarios in which you would want to move your existing Log Analytics workspace from one region to another. For example, Log Analytics recently became available in a region that is hosting most of your resources and you want the workspace to be closer and save egress charges. You may also want to move your workspace to a newly added region for data sovereignty requirement.
+There are various scenarios in which you would want to move your existing Log Analytics workspace from one region to another. For example, Log Analytics recently became available in a region that's hosting most of your resources and you want the workspace to be closer and save egress charges. You might also want to move your workspace to a newly added region for a data sovereignty requirement.
-Log Analytics workspace can't be moved from one region to another. You can however, use an Azure Resource Manager template to export the workspace resource and related resources. You can then stage the resources in another region by exporting the workspace to a template, modifying the parameters to match the destination region, and then deploy the template to the new region. For more information on Resource Manager and templates, see [Quickstart: Create and deploy Azure Resource Manager templates by using the Azure portal](../../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md). Workspace environment can be complex and include connected sources, managed solutions, linked services, alerts and query packs. Not all resources can be exported in Resource Manager template and some will require separate configuration when moving a workspace.
+A Log Analytics workspace can't be moved from one region to another. But you can use an Azure Resource Manager template to export the workspace resource and related resources. You can then stage the resources in another region by exporting the workspace to a template, modifying the parameters to match the destination region, and then deploying the template to the new region. For more information on Resource Manager and templates, see [Quickstart: Create and deploy Azure Resource Manager templates by using the Azure portal](../../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md).
+
+A workspace environment can be complex and include connected sources, managed solutions, linked services, alerts, and query packs. Not all resources can be exported in a Resource Manager template, and some require separate configuration when you're moving a workspace.
## Prerequisites -- To export the workspace configuration to a template that can be deployed to another region, you need either [Log Analytics Contributor](../../role-based-access-control/built-in-roles.md#log-analytics-contributor) or [Monitoring Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) roles or higher.
+- To export the workspace configuration to a template that can be deployed to another region, you need the [Log Analytics Contributor](../../role-based-access-control/built-in-roles.md#log-analytics-contributor) or [Monitoring Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) role, or higher.
-- Identify all the resources that currently associated to your workspace including:
- - Connected agents -- Enter *Logs* in your workspace and query [Heartbeat](../insights/solution-agenthealth.md#heartbeat-records) table to list connected agents.
+- Identify all the resources that are currently associated with your workspace, including:
+ - *Connected agents*: Enter **Logs** in your workspace and query a [heartbeat](../insights/solution-agenthealth.md#heartbeat-records) table to list connected agents.
```kusto Heartbeat | summarize by Computer, Category, OSType, _ResourceId ```
- - Installed solutions -- Click **Solutions** in workspace navigation pane for a list of installed solutions
- - Data collector API -- Data arriving through [Data Collector API](../logs/data-collector-api.md) is stored in custom log tables. Click ***Logs*** in workspace navigation pane, then **Custom log** in schema pane for a list of custom log tables
- - Linked services -- Workspace may have linked services to dependent resources such as Automation account, storage account, dedicated cluster. Remove linked services from your workspace. These should be reconfigured manually in target workspace
- - Alerts -- Click **Alerts** in your workspace navigation pane, then **Manage alert rules** in toolbar to list alerts. Alerts in workspaces created after 1-June 2019, or in workspaces that were [upgraded from legacy Log Analytics alert API to scheduledQueryRules API](../alerts/alerts-log-api-switch.md) can be included in the template. You can [check if scheduledQueryRules API is used for alerts in your workspace](../alerts/alerts-log-api-switch.md#check-switching-status-of-workspace). Alternatively, you can configure alerts manually in target workspace
- - Query pack(s) -- A workspace can be associated with multiple query packs. To identify query pack(s) in your workspace, click **Logs** in the workspace navigation pane, **queries** on left pane, then ellipsis to the right of the search box for more settings - a dialog with selected query pack will open on the right. If your query pack(s) are in the same resource group as the workspace you are moving, you can include it with this migration
-- Verify that your Azure subscription allows you to create Log Analytics workspace in target region
+ - *Diagnostic settings*: Resources can send logs to Azure Diagnostics or dedicated tables in your workspace. Enter **Logs** in your workspace, and run this query for resources that send data to the `AzureDiagnostics` table:
+
+ ```kusto
+ AzureDiagnostics
+ | where TimeGenerated > ago(12h)
+ | summarize by ResourceProvider , ResourceType, Resource
+ | sort by ResourceProvider, ResourceType
+ ```
+
+ Run this query for resources that send data to dedicated tables:
+
+ ```kusto
+ search *
+ | where TimeGenerated > ago(12h)
+ | where isnotnull(_ResourceId)
+ | extend ResourceProvider = split(_ResourceId, '/')[6]
+ | where ResourceProvider !in ('microsoft.compute', 'microsoft.security')
+ | extend ResourceType = split(_ResourceId, '/')[7]
+ | extend Resource = split(_ResourceId, '/')[8]
+ | summarize by tostring(ResourceProvider) , tostring(ResourceType), tostring(Resource)
+ | sort by ResourceProvider, ResourceType
+ ```
+
+ - *Installed solutions*: Select **Solutions** on the workspace navigation pane for a list of installed solutions.
+ - *Data collector API*: Data arriving through a [Data Collector API](../logs/data-collector-api.md) is stored in custom log tables. For a list of custom log tables, select **Logs** on the workspace navigation pane, and then select **Custom log** on the schema pane.
+ - *Linked services*: Workspaces might have linked services to dependent resources such as an Azure Automation account, a storage account, or a dedicated cluster. Remove linked services from your workspace. Reconfigure them manually in the target workspace.
+ - *Alerts*: To list alerts, select **Alerts** on your workspace navigation pane, and then select **Manage alert rules** on the toolbar. Alerts in workspaces created after June 1, 2019, or in workspaces that were [upgraded from the Log Analytics Alert API to the scheduledQueryRules API](../alerts/alerts-log-api-switch.md) can be included in the template.
+
+ You can [check if the scheduledQueryRules API is used for alerts in your workspace](../alerts/alerts-log-api-switch.md#check-switching-status-of-workspace). Alternatively, you can configure alerts manually in the target workspace.
+ - *Query packs*: A workspace can be associated with multiple query packs. To identify query packs in your workspace, select **Logs** on the workspace navigation pane, select **queries** on the left pane, and then select the ellipsis to the right of the search box. A dialog with the selected query packs opens on the right. If your query packs are in the same resource group as the workspace that you're moving, you can include it with this migration.
+- Verify that your Azure subscription allows you to create Log Analytics workspaces in the target region.
## Prepare and move
-The following steps show how to prepare the workspace and resources for the move using Resource Manager template and move them to the target region using the portal. Not all resources can be exported through a template and these will need to be configured separately once the workspace is created in target region.
+The following procedures show how to prepare the workspace and resources for the move by using a Resource Manager template, and then move them to the target region by using the portal. Follow the procedures in order.
+
+> [!NOTE]
+> Not all resources can be exported through a template. You'll need to configure these separately after the workspace is created in the target region.
-### Export the template and deploy from the portal
+### Select resource groups and edit parameters
-1. Login to the [Azure portal](https://portal.azure.com), then **Resource Groups**
-2. Locate the Resource Group that contains your workspace and click on it
-3. To view alerts resource, select **Show hidden types** checkbox
-4. Click the 'Type' filter and select **Log Analytics workspace**, **Solution**, **SavedSearches**, **microsoft.insights/scheduledqueryrules** and **defaultQueryPack** as applicable, then click Apply
-5. Select the workspace, solutions, alerts, saved searches and query pack(s) that you want to move, then click **Export template** in the toolbar
+1. Sign in to the [Azure portal](https://portal.azure.com), and then select **Resource Groups**.
+1. Find the resource group that contains your workspace and select it.
+1. To view an alert resource, select the **Show hidden types** checkbox.
+1. Select the **Type** filter. Select **Log Analytics workspace**, **Solution**, **SavedSearches**, **microsoft.insights/scheduledqueryrules**, **defaultQueryPack**, and other workspace-related resources that you have (such as an Automation account). Then select **Apply**.
+1. Select the workspace, solutions, saved searches, alerts, query packs, and other workspace-related resources that you have (such as an Automation account). Then select **Export template** on the toolbar.
> [!NOTE]
- > Sentinel can't be exported with template and you need to [on-board Sentinel](../../sentinel/quickstart-onboard.md) to target workspace.
+ > Azure Sentinel can't be exported with a template. You need to [onboard Sentinel](../../sentinel/quickstart-onboard.md) to a target workspace.
-6. Click **Deploy** in the toolbar to edit and prepare the template for deployment
-7. Click **Edit parameters** in the toolbar to open the **parameters.json** file in the online editor
-8. To edit the parameters, change the **value** property under **parameters**
-
- Example parameters file:
+1. Select **Deploy** on the toolbar to edit and prepare the template for deployment.
+1. Select **Edit parameters** on the toolbar to open the *parameters.json* file in the online editor.
+1. To edit the parameters, change the `value` property under `parameters`. Here's an example:
```json {
The following steps show how to prepare the workspace and resources for the move
} ```
-9. Click **Save** in the editor
-10. Click **Edit template** in the toolbar to open the **template.json** file in the online editor
-11. To edit the target region where Log Analytics workspace will be deployed, change the **location** property under **resources** in the online editor. To obtain region location codes, see [Azure Locations](https://azure.microsoft.com/global-infrastructure/locations/). The code for a region is the region name with no spaces, **Central US** = **centralus**
-12. Remove linked services resources `microsoft.operationalinsights/workspaces/linkedservices` if present in template. These should be reconfigured manually in target workspace
+1. Select **Save** in the editor.
+
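If you prefer to capture the template from a script rather than the portal, a hedged Azure PowerShell sketch (assuming the Az.Resources module and placeholder names) looks like this; the resulting file can then be edited and deployed with `New-AzResourceGroupDeployment`, mirroring the portal flow described in the following sections.

```powershell-interactive
# Assumes the Az.Resources module; resource group and workspace names are placeholders.
$workspace = Get-AzResource -ResourceGroupName "mySourceRG" `
    -ResourceType "Microsoft.OperationalInsights/workspaces" -Name "myWorkspace"

# Export the workspace (add other selected resource IDs to -Resource as needed) to a local template file.
Export-AzResourceGroup -ResourceGroupName "mySourceRG" `
    -Resource $workspace.ResourceId -Path ".\workspace-template.json" `
    -IncludeParameterDefaultValue
```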
+### Edit the template
+
+1. Select **Edit template** on the toolbar to open the *template.json* file in the online editor.
+1. To edit the target region where the Log Analytics workspace will be deployed, change the `location` property under `resources` in the online editor.
- Example template including the workspace, saved search, solutions, alert and query pack:
+ To get region location codes, see [Data residency in Azure](https://azure.microsoft.com/global-infrastructure/locations/). The code for a region is the region name with no spaces. For example, **Central US** should be `centralus`.
+1. Remove linked-services resources (`microsoft.operationalinsights/workspaces/linkedservices`) if they're present in the template. You should reconfigure these resources manually in the target workspace.
+
+ The following example template includes the workspace, saved search, solutions, alerts, and query pack:
```json {
The following steps show how to prepare the workspace and resources for the move
} ```
-13. Click **Save** in the online editor
-14. Click **Subscription** to choose the subscription where the target workspace will be deployed
-16. Click **Resource group** to choose the resource group where the target workspace will be deployed. You can click **Create new** to create a new resource group for the target workspace
-17. Verify that the **Region** is set to the target location where you wish for the NSG to be deployed
-18. Click **Review + create** button to verify your template
-19. Click **Create** to deploy workspace and selected resource to the target region
-20. Your workspace including selected resources are now deployed in target region and you can complete the remaining configuration in the workspace for paring functionality to original workspace
- - Connect agents -- Use any of the available options including DCR to configure the required agents on virtual machines and virtual machine scale sets and specify the new target workspace as destination
- - Install solutions -- Some solutions such as [Azure Sentinel](../../sentinel/quickstart-onboard.md) require certain onboarding procedure and weren't included in the template. You should onboard them separately to the new workspace
- - Data collector API -- Configure data collector API instances to send data to target workspace
- - Alert rules -- When alerts aren't exported in template, you need to configure them manually in target workspace
-21. Very that new data isn't ingested to original workspace. Run this query in your original workspace and observe that there is no ingestion post migration time
+1. Select **Save** in the online editor.
+
+### Deploy the workspace
+
+1. Select **Subscription** to choose the subscription where the target workspace will be deployed.
+1. Select **Resource group** to choose the resource group where the target workspace will be deployed. You can select **Create new** to create a new resource group for the target workspace.
+1. Verify that **Region** is set to the target location where you want the network security group to be deployed.
+1. Select the **Review + create** button to verify your template.
+1. Select **Create** to deploy the workspace and the selected resource to the target region.
+1. Your workspace, including selected resources, is now deployed in the target region. You can complete the remaining configuration in the workspace to pair functionality with the original workspace.
+ - *Connect agents*: Use any of the available options, including Data Collection Rules, to configure the required agents on virtual machines and virtual machine scale sets and to specify the new target workspace as the destination.
+ - *Diagnostic settings*: Update diagnostic settings in identified resources, with the target workspace as the destination.
+ - *Install solutions*: Some solutions, such as [Azure Sentinel](../../sentinel/quickstart-onboard.md), require certain onboarding procedures and weren't included in the template. You should onboard them separately to the new workspace.
+ - *Configure the Data Collector API*: Configure Data Collector API instances to send data to the target workspace.
+ - *Configure alert rules*: When alerts aren't exported in the template, you need to configure them manually in the target workspace.
+1. Verify that new data isn't ingested to the original workspace. Run the following query in your original workspace, and observe that there's no ingestion after the migration:
```kusto search *
+ | where TimeGenerated > ago(12h)
| summarize max(TimeGenerated) by Type ```
-Ingested data after data sources connection to target workspace is stored in target workspace while older data remains in original workspace. You can perform [cross workspace query](./cross-workspace-query.md#performing-a-query-across-multiple-resources) and if both were assigned with the same name, use qualified name (*subscriptionName/resourceGroup/componentName*) in workspace reference.
+After data sources are connected to the target workspace, ingested data is stored in the target workspace. Older data stays in the original workspace and is subject to the retention policy. You can perform a [cross-workspace query](./cross-workspace-query.md#performing-a-query-across-multiple-resources). If both workspaces were assigned the same name, use a qualified name (*subscriptionName/resourceGroup/componentName*) in the workspace reference.
-Example for query across two workspaces having the same name:
+Here's an example for a query across two workspaces that have the same name:
```kusto union
union
## Discard
-If you wish to discard the source workspace, delete the exported resources or resource group that contains these. To do so, select the target resource group in Azure portal - if you created a new resource group for this deployment, click **Delete resource group** at the toolbar in Overview page. If template was deployed to existing resource group, select the resources that were deployed with template and click **Delete** in toolbar.
+If you want to discard the source workspace, delete the exported resources or the resource group that contains these resources:
+
+1. Select the target resource group in the Azure portal.
+1. On the **Overview** page:
+
+ - If you created a new resource group for this deployment, select **Delete resource group** on the toolbar to delete the resource group.
+ - If the template was deployed to an existing resource group, select the resources that were deployed with the template, and then select **Delete** on the toolbar to delete selected resources.
## Clean up
-While new data is being ingested to your new workspace, older data in original workspace remain available for query and subjected to the retention policy defined in workspace. It's recommended to remain the original workspace for the duration older data is needed to allow you to [query across](./cross-workspace-query.md#performing-a-query-across-multiple-resources) workspaces. If you no longer need access to older data in original workspace, select the original resource group in Azure portal, then select any resources that you want to remove and click **Delete** in toolbar.
+While new data is being ingested to your new workspace, older data in the original workspace remains available for query and is subject to the retention policy defined in the workspace. We recommend that you keep the original workspace for as long as you need older data to [query across](./cross-workspace-query.md#performing-a-query-across-multiple-resources) workspaces.
+
+If you no longer need access to older data in the original workspace:
+
+1. Select the original resource group in the Azure portal.
+1. Select any resources that you want to remove, and then select **Delete** on the toolbar.
## Next steps
-In this tutorial, you moved an Log Analytics workspace and associated resources from one region to another and cleaned up the source resources. To learn more about moving resources between regions and disaster recovery in Azure, refer to:
+In this article, you moved a Log Analytics workspace and associated resources from one region to another and cleaned up the source resources. To learn more about moving resources between regions and disaster recovery in Azure, see:
- [Move resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md) - [Move Azure VMs to another region](../../site-recovery/azure-to-azure-tutorial-migrate.md)
azure-monitor Private Link Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/private-link-configure.md
Go to the Azure portal. In your resource's menu, there's a menu item called **Ne
> [!NOTE]
-> Starting August 16, 2021, Network Isolation will be strictly enforced. Resources set to block queries from public networks, and that aren't connected to any private network (through an AMPLS) will stop accepting queries from any network.
+> Starting in September 2021, Network Isolation will be strictly enforced. Resources set to block queries from public networks, and that aren't connected to any private network (through an AMPLS), will stop accepting queries from any network.
![LA Network Isolation](./media/private-link-security/ampls-network-isolation.png)
azure-monitor Private Link Design https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/private-link-design.md
To test Private Links locally without affecting other clients on your network, m
That approach isn't recommended for production environments. ## Control how Private Links apply to your networks
-Private Link access modes (introduced on August 2021) allow you to control how Private Links affect your network traffic. These settings can apply to your AMPLS object (to affect all connected networks) or to specific networks connected to it.
+Private Link access modes (introduced in September 2021) allow you to control how Private Links affect your network traffic. These settings can apply to your AMPLS object (to affect all connected networks) or to specific networks connected to it.
Choosing the proper access mode has a significant effect on your network traffic. Each of these modes can be set for ingestion and queries, separately:
Choosing the proper access mode has a significant effect on your network traffic.
![Diagram of AMPLS Open access mode](./media/private-link-security/ampls-open-access-mode.png) Access modes are set separately for ingestion and queries. For example, you can set the Private Only mode for ingestion and the Open mode for queries. +
+Apply caution when selecting your access mode. Using the Private Only access mode will block traffic to resources not in the AMPLS across all networks that share the same DNS, regardless of subscription or tenant (with the exception of Log Analytics ingestion requests, as explained below). If you can't add all Azure Monitor resources to the AMPLS, start by adding select resources and applying the Open access mode. Only after you add *all* Azure Monitor resources to your AMPLS should you switch to the 'Private Only' mode for maximum security.
+ > [!NOTE]
-> Apply caution when selecting your access mode: Using the Private Only access mode will block traffic to resources not in the AMPLS across all networks that share the same DNS, regardless of subscription or tenant. If you can't add all Azure Monitor resources to the AMPLS, we recommend that you use the Open mode and add select resources to your AMPLS. Only after adding all Azure Monitor resources to your AMPLS, switch to the Private Only mode for maximum security.
+> Log Analytics ingestion uses resource-specific endpoints. As such, it doesn't adhere to AMPLS access modes. Ingestion to workspaces in the AMPLS is sent through the private link, while ingestion to workspaces not in the AMPLS uses the default public endpoints. To ensure that ingestion requests can't access resources out of the AMPLS, block the network's access to public endpoints.
### Setting access modes for specific networks The access modes set on the AMPLS resource affect all networks, but you can override these settings for specific networks.
azure-monitor Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/private-link-security.md
An Azure Monitor Private Link Scope connects private endpoints (and the VNets th
![Diagram of basic resource topology](./media/private-link-security/private-link-basic-topology.png)
+* An Azure Monitor Private Link connects a Private Endpoint to a set of Azure Monitor resources - Log Analytics workspaces and Application Insights resources. That set is called an Azure Monitor Private Link Scope (AMPLS).
* The Private Endpoint on your VNet allows it to reach Azure Monitor endpoints through private IPs from your network's pool, instead of using to the public IPs of these endpoints. That allows you to keep using your Azure Monitor resources without opening your VNet to unrequired outbound traffic. * Traffic from the Private Endpoint to your Azure Monitor resources will go over the Microsoft Azure backbone, and not routed to public networks.
-* You can configure your Azure Monitor Private Link Scope (or specific networks) to use the preferred access mode - either allow traffic only to Private Link resources, or allow traffic to both Private Link resources and non-Private-Link resources (resources out of the AMPLS)
+* You can configure your Azure Monitor Private Link Scope (or specific networks connecting to it) to use the preferred access mode - either allow traffic only to Private Link resources, or allow traffic to both Private Link resources and non-Private-Link resources (resources out of the AMPLS)
* You can configure each of your workspaces or components to allow or deny ingestion and queries from public networks. That provides a resource-level protection, so that you can control traffic to specific resources. > [!NOTE]
Therefore, Private Links created starting September 2021 have new mandatory AMPL
* Private Only mode - allows traffic only to Private Link resources * Open mode - uses Private Link to communicate with resources in the AMPLS, but also allows traffic to continue to other resources. See [Control how Private Links apply to your networks](./private-link-design.md#control-how-private-links-apply-to-your-networks) to learn more.
+> [!NOTE]
+> Log Analytics ingestion uses resource-specific endpoints. As such, it doesn't adhere to AMPLS access modes. Ingestion to workspaces in the AMPLS is sent through the private link, while ingestion to workspaces not in the AMPLS uses the default public endpoints. To ensure ingestion requests can't access resources out of the AMPLS, block the network's access to public endpoints.
+ ## Next steps - [Design your Private Link setup](private-link-design.md) - Learn how to [configure your Private Link](private-link-configure.md)
azure-netapp-files Azacsnap Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azacsnap-release-notes.md
This page lists major changes made to AzAcSnap to provide new functionality or resolve defects.
+## Aug-2021
+
+### AzAcSnap v5.0.2 (Build_20210827.19086) - Patch update to v5.0.1
+
+AzAcSnap v5.0.2 (Build_20210827.19086) is provided as a patch update to the v5.0 branch with the following fixes and improvements:
+
+- Ignore `ssh` 255 exit codes. In some cases the `ssh` command, which is used to communicate with storage on Azure Large Instance, would emit an exit code of 255 even when there were no errors or execution failures (refer to `man ssh`, "EXIT STATUS"); AzAcSnap would then trap this as a failure and abort. With this update, additional verification is done to validate correct execution, including parsing `ssh` STDOUT and STDERR for errors in addition to the traditional exit code checks.
+- Fix installer hdbuserstore source path check. The installer would check for the existence of an incorrect source directory for the hdbuserstore for the user running the install - this is fixed to check for `~/.hdb`. This is applicable to systems (e.g. Azure Large Instance) where the hdbuserstore was pre-configured for the `root` user before installing `azacsnap`.
+- Installer now shows the version it will install/extract (if the installer is run without any arguments).
+
+Download the [latest release](https://aka.ms/azacsnapinstaller) of the installer and review how to [get started](azacsnap-get-started.md).
+ ## May-2021 ### AzAcSnap v5.0.1 (Build: 20210524.14837) - Patch update to v5.0
AzAcSnap v5.0.1 (Build: 20210524.14837) is provided as a patch update to the v5.
- Improved exit code handling. In some cases an exit code of 0 (zero) was emitted even when there was an execution failure where it should have been non-zero. Exit codes should now only be zero on successfully running `azacsnap` to completion and non-zero in case of any failure. Additionally, AzAcSnap's internal error handling has been extended to capture and emit the exit code of the external commands (e.g. hdbsql, ssh) run by AzAcSnap, if they are the cause of the failure.
-Download the [latest release](https://aka.ms/azacsnapdownload) of the installer and review how to [get started](azacsnap-get-started.md).
- ## April-2021 ### AzAcSnap v5.0 (Build: 20210421.6349) - GA Released (21-April-2021)
azure-netapp-files Azure Netapp Files Faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-faqs.md
na ms.devlang: na Previously updated : 08/18/2021 Last updated : 09/01/2021 # FAQs About Azure NetApp Files
However, you cannot create Azure policies (custom naming policies) on the Azure
### When I delete an Azure NetApp Files volume, is the data deleted safely?
-Deletion of an Azure NetApp Files volume is performed in the backend (physical infrastructure layer) programmatically with immediate effect. The delete operation includes deleting keys used for encrypting data at rest. There is no option for any scenario to recover a deleted volume once the delete operation is executed successfully (via interfaces such as the Azure portal and the API.)
+Deletion of an Azure NetApp Files volume is performed programmatically with immediate effect. The delete operation includes deleting keys used for encrypting data at rest. There is no option for any scenario to recover a deleted volume once the delete operation is executed successfully (via interfaces such as the Azure portal and the API.)
## Performance FAQs
azure-percept Software Releases Usb Cable Updates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/software-releases-usb-cable-updates.md
Title: Azure Percept DK software releases for update over USB cable
+ Title: Software releases for Azure Percept DK USB cable updates
description: Information and download links for the USB cable update package of Azure Percept DK
Last updated 08/23/2021
-# Azure Percept DK software releases for updating over USB
+# Software releases for USB cable updates
This page provides information and download links for all the dev kit OS/firmware image releases. For detail of changes/fixes in each version, refer to the release notes:
azure-portal How To Create Azure Support Request https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/supportability/how-to-create-azure-support-request.md
Title: How to create an Azure support request description: Customers who need assistance can use the Azure portal to find self-service solutions and to create and manage support requests. Previously updated : 08/24/2021 Last updated : 09/01/2021 # Create an Azure support request
Azure enables you to create and manage support requests, also known as support t
>* Azure portal for Germany is: [https://portal.microsoftazure.de](https://portal.microsoftazure.de) >* Azure portal for the United States government is: [https://portal.azure.us](https://portal.azure.us)
-The support request experience focuses on three main goals:
-
-* **Streamlined**: Make support and troubleshooting easy to find and simplify how you submit a support request.
-* **Integrated**: You can easily open a support request when you're troubleshooting an issue with an Azure resource, without switching context.
-* **Efficient**: Gather the key information your support engineer needs to efficiently resolve your issue.
- Azure provides unlimited support for subscription management, which includes billing, quota adjustments, and account transfers. For technical support, you need a support plan. For more information, see [Compare support plans](https://azure.microsoft.com/support/plans). ## Getting started
To start a support request from anywhere in the Azure portal:
1. Select the **?** in the global header, then select **Help + support**.
- ![Help and Support](./media/how-to-create-azure-support-request/helpandsupportnewlower.png)
+ :::image type="content" source="media/how-to-create-azure-support-request/helpandsupportnewlower.png" alt-text="Screenshot of the Help menu in the Azure portal.":::
-1. Select **New support request**. Follow the prompts to provide information about your problem. We'll suggest some possible solutions, gather details about the issue, and help you submit and track the support request.
+1. Select **Create a support request**. Follow the prompts to provide information about your problem. We'll suggest some possible solutions, gather details about the issue, and help you submit and track the support request.
- ![New Support Request](./media/how-to-create-azure-support-request/newsupportrequest2lower.png)
+ :::image type="content" source="media/how-to-create-azure-support-request/newsupportrequest2lower.png" alt-text="Screenshot of the Help + support page with Create a support request link.":::
### Go to Help + support from a resource menu To start a support request in the context of the resource you're currently working with:
-1. From the resource menu, in the **Support + Troubleshooting** section, select **New support request**.
+1. From the resource menu, in the **Support + troubleshooting** section, select **New Support Request**.
- ![In context](./media/how-to-create-azure-support-request/incontext2lower.png)
+ :::image type="content" source="media/how-to-create-azure-support-request/incontext2lower.png" alt-text="Screenshot of the New Support Request option in the resource pane.":::
-1. Follow the prompts to provide us with information about the problem you're having. When you start the support request process from the resource, some options are pre-selected for you.
+1. Follow the prompts to provide us with information about the problem you're having. When you start the support request process from a resource, some options are pre-selected for you.
## Create a support request We'll walk you through some steps to gather information about your problem and help you solve it. Each step is described in the following sections.
-### Basics
+### Problem description
+
+The first step of the support request process is to select an issue type. You'll then be prompted for more information, which can vary depending on what type of issue you selected. In most cases, you'll need to specify a subscription, briefly describe your issue, and select a problem type. If you select **Technical**, you'll need to specify the service that your issue relates to. Depending on the service, you'll see additional options for **Problem type** and **Problem subtype**.
-The first step of the support request process gathers basic information about your issue and your support plan.
-On the **Basics** tab of **New support request**, use the selectors to start to tell us about the problem. First, you'll identify some general categories for the issue type and choose the related subscription. Select the service, for example **Virtual Machine running Windows**. Select the resource, such as the name of your virtual machine. Describe the problem in your own words, then select **Problem type** and **Problem subtype** to get more specific.
+Once you've provided all of these details, select **Next**.
-![Basics blade](./media/how-to-create-azure-support-request/basics2lower.png)
+### Recommended solution
-### Solutions
+Based on the information you provided, we'll show you recommended solutions you can use to try to resolve the problem. In some cases, we may even run a quick diagnostic. Solutions are written by Azure engineers and will solve most common problems.
-After gathering basic information, we next show you solutions to try on your own. In some cases, we may even run a quick diagnostic. Solutions are written by Azure engineers and will solve most common problems.
+If you're still unable to resolve the issue, continue creating your support request by selecting **Next**.
-### Details
+### Additional details
Next, we collect additional details about the problem. Providing thorough and detailed information in this step helps us route your support request to the right engineer.
-1. If possible, tell us when the problem started and any steps to reproduce it. You can upload a file, such as a log file or output from diagnostics. For more information on file uploads, see [File upload guidelines](how-to-manage-azure-support-request.md#file-upload-guidelines).
+1. Complete the **problem details** so that we have more information about your issue. If possible, tell us when the problem started and any steps to reproduce it. You can upload a file, such as a log file or output from diagnostics. For more information on file uploads, see [File upload guidelines](how-to-manage-azure-support-request.md#file-upload-guidelines).
1. In the **Share diagnostic information** section, select **Yes** or **No**. Selecting **Yes** allows Azure support to gather [diagnostic information](https://azure.microsoft.com/support/legal/support-diagnostic-information-collection/) from your Azure resources. If you prefer not to share this information, select **No**. In some cases, there will be additional options to choose from, such as whether to allow access to a virtual machine's memory.
-1. **Support method** section of **Details**, select the severity of impact. The maximum severity level depends on your [support plan](https://azure.microsoft.com/support/plans).
+1. In the **Support method** section, select the severity of impact. The maximum severity level depends on your [support plan](https://azure.microsoft.com/support/plans).
1. Provide your preferred contact method, your availability, and your preferred support language. 1. Next, complete the **Contact info** section so we know how to contact you.
+Select **Next** when you've completed all of the necessary information.
+ ### Review + create
-Complete all required information on each tab, then select **Review + create**. Check the details that you'll send to support. Go back to any tab to make a change if needed. When you're satisfied the support request is complete, select **Create**.
+Before you create your request, review all of the details that you'll send to support. You can select **Previous** to return to any tab if you need to make changes. When you're satisfied the support request is complete, select **Create**.
A support engineer will contact you using the method you indicated. For information about initial response times, see [Support scope and responsiveness](https://azure.microsoft.com/support/plans/response/). - ## Next steps To learn more about self-help support options in Azure, watch this video:
Follow these links to learn more:
* [How to manage an Azure support request](how-to-manage-azure-support-request.md) * [Azure support ticket REST API](/rest/api/support)
-* [Send us your feedback and suggestions](https://feedback.azure.com/forums/266794-support-feedback)
* Engage with us on [Twitter](https://twitter.com/azuresupport) * Get help from your peers in the [Microsoft Q&A question page](/answers/products/azure) * Learn more in [Azure Support FAQ](https://azure.microsoft.com/support/faq)
azure-portal How To Manage Azure Support Request https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/supportability/how-to-manage-azure-support-request.md
Title: Manage an Azure support request description: Describes how to view support requests, send messages, change the request severity level, share diagnostic information with Azure support, reopen a closed support request, and upload files. tags: billing Previously updated : 08/24/2021 Last updated : 09/01/2021 # To add: close and reopen, review request status, update contact info
On this page, you can search, filter, and sort support requests. Select a suppor
> [!NOTE] > The maximum severity level depends on your [support plan](https://azure.microsoft.com/support/plans).
->
1. On the **All support requests** page, select the support request. 1. On the **Support Request** page, select **Change**.
- :::image type="content" source="media/how-to-manage-azure-support-request/change-severity.png" alt-text="Change support request severity":::
- 1. The Azure portal shows one of two screens, depending on whether your request is already assigned to a support engineer: - If your request hasn't been assigned, you see a screen like the following. Select a new severity level, then select **Change**.
When you create a support request, you can select **Yes** or **No** in the **Sha
To change your **Share diagnostic information** selection after the request has been created: 1. On the **All support requests** page, select the support request.
-
+ 1. On the **Support Request** page, look for **Share diagnostic information** and then select **Change**.
-
-1. Select **Yes** or **No**, then select **OK** to confirm.
-
+
+1. Select **Yes** or **No**, then select **OK** to confirm.
+ :::image type="content" source="media/how-to-manage-azure-support-request/grant-permission-manage.png" alt-text="Grant permissions for diagnostic information"::: ## Upload files
You can use the file upload option to upload diagnostic files or any other files
Follow these guidelines when you use the file upload option:
-* To protect your privacy, do not include any personal information in your upload.
-* The file name must be no longer than 110 characters.
-* You can't upload more than one file.
-* Files can't be larger than 4 MB.
-* All files must have a file name extension, such as *.docx* or *.xlsx*. The following table shows the filename extensions that are allowed for upload.
+- To protect your privacy, do not include any personal information in your upload.
+- The file name must be no longer than 110 characters.
+- You can't upload more than one file.
+- Files can't be larger than 4 MB.
+- All files must have a file name extension, such as *.docx* or *.xlsx*. The following table shows the filename extensions that are allowed for upload.
| 0-9, A-C | D-G | H-N | O-Q | R-T | U-W | X-Z | |-|-|-|-|-|||
Follow these guidelines when you use the file upload option:
## Close a support request
-If you need to close a support request, [send a message](#send-a-message) asking that the request be closed.
+To close a support request, [send a message](#send-a-message) asking that the request be closed.
## Reopen a closed request
-If you need to reopen a closed support request, create a [new message](#send-a-message), which automatically reopens the request.
+To reopen a closed support request, create a [new message](#send-a-message), which automatically reopens the request.
## Cancel a support plan
-If you need to cancel a support plan, see [Cancel a support plan](../../cost-management-billing/manage/cancel-azure-subscription.md#cancel-a-support-plan).
+To cancel a support plan, see [Cancel a support plan](../../cost-management-billing/manage/cancel-azure-subscription.md#cancel-a-support-plan).
## Next steps
-[How to create an Azure support request](how-to-create-azure-support-request.md)
-
-[Azure support ticket REST API](/rest/api/support)
+- Review the process to [create an Azure support request](how-to-create-azure-support-request.md).
+- Learn about the [Azure support ticket REST API](/rest/api/support).
azure-resource-manager Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/deploy-cli.md
Before deploying your Bicep file, you can preview the changes the Bicep file wil
## Deploy template specs
-Currently, Azure CLI doesn't support creating template specs by providing Bicep files. However you can create a Bicep file with the [Microsoft.Resources/templateSpecs](/azure/templates/microsoft.resources/templatespecs) resource to deploy a template spec. Here's an [example](https://github.com/Azure/azure-docs-bicep-samples/blob/main/create-template-spec-using-bicep/azuredeploy.bicep). You can also build your Bicep file into an ARM template JSON by using the Bicep CLI, and then create a template spec with the JSON template.
+Currently, Azure CLI doesn't support creating template specs by providing Bicep files. However, you can create a Bicep file with the [Microsoft.Resources/templateSpecs](/azure/templates/microsoft.resources/templatespecs) resource to deploy a template spec. The [Create template spec sample](https://github.com/Azure/azure-docs-bicep-samples/blob/main/samples/create-template-spec/azuredeploy.bicep) shows how to create a template spec in a Bicep file. You can also build your Bicep file into an ARM template JSON by using the Bicep CLI, and then create a template spec with the JSON template.
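As an illustration of that approach (not a reproduction of the linked sample), the following minimal Bicep sketch creates a template spec and one version of it. The names, the chosen API version, and the empty `mainTemplate` are placeholder assumptions:

```bicep
param templateSpecName string = 'storageSpec'
param location string = resourceGroup().location

// Template spec container (placeholder name and description).
resource templateSpec 'Microsoft.Resources/templateSpecs@2021-05-01' = {
  name: templateSpecName
  location: location
  properties: {
    description: 'Sample template spec created from a Bicep file.'
  }
}

// A version of the template spec. The mainTemplate here is a minimal,
// empty ARM template; in practice it contains the resources to deploy.
resource templateSpecVersion 'Microsoft.Resources/templateSpecs/versions@2021-05-01' = {
  parent: templateSpec
  name: '1.0'
  location: location
  properties: {
    mainTemplate: {
      '$schema': 'https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#'
      contentVersion: '1.0.0.0'
      resources: []
    }
  }
}
```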
## Deployment name
azure-resource-manager Deploy Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/deploy-github-actions.md
description: Describes how to deploy Bicep files by using GitHub Actions.
Previously updated : 08/23/2021 Last updated : 09/02/2021 # Deploy Bicep files by using GitHub Actions
-[GitHub Actions](https://docs.github.com/en/actions) is a suite of features in GitHub to automate your software development workflows in the same place you store code and collaborate on pull requests and issues.
+[GitHub Actions](https://docs.github.com/en/actions) is a suite of features in GitHub to automate your software development workflows.
-Use the [Deploy Azure Resource Manager Template Action](https://github.com/marketplace/actions/deploy-azure-resource-manager-arm-template) to automate deploying a Bicep file to Azure.
+Use the [GitHub Action for Azure Resource Manager deployment](https://github.com/marketplace/actions/deploy-azure-resource-manager-arm-template) to automate deploying a Bicep file to Azure.
## Prerequisites
The file has two sections:
You can create a [service principal](../../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) with the [az ad sp create-for-rbac](/cli/azure/ad/sp#az_ad_sp_create_for_rbac) command in the [Azure CLI](/cli/azure/). Run this command with [Azure Cloud Shell](https://shell.azure.com/) in the Azure portal or by selecting the **Try it** button.
-Create a resource group if you do not already have one.
+Create a resource group if you don't already have one.
```azurecli-interactive
- az group create -n {MyResourceGroup} -l {location}
+az group create -n {MyResourceGroup} -l {location}
``` Replace the placeholder `myApp` with the name of your application. ```azurecli-interactive
- az ad sp create-for-rbac --name {myApp} --role contributor --scopes /subscriptions/{subscription-id}/resourceGroups/{MyResourceGroup} --sdk-auth
+az ad sp create-for-rbac --name {myApp} --role contributor --scopes /subscriptions/{subscription-id}/resourceGroups/{MyResourceGroup} --sdk-auth
```
-In the example above, replace the placeholders with your subscription ID and resource group name. The output is a JSON object with the role assignment credentials that provide access to your App Service app similar to below. Copy this JSON object for later. You will only need the sections with the `clientId`, `clientSecret`, `subscriptionId`, and `tenantId` values.
+In the example above, replace the placeholders with your subscription ID and resource group name. The output is a JSON object with the role assignment credentials that provide access to your App Service app, similar to the following. Copy this JSON object for later. You'll only need the sections with the `clientId`, `clientSecret`, `subscriptionId`, and `tenantId` values.
```output {
You need to create secrets for your Azure credentials, resource group, and subsc
1. Select **Settings > Secrets > New secret**.
-1. Paste the entire JSON output from the Azure CLI command into the secret's value field. Give the secret the name `AZURE_CREDENTIALS`.
+1. Paste the entire JSON output from the Azure CLI command into the secret's value field. Name the secret `AZURE_CREDENTIALS`.
1. Create another secret named `AZURE_RG`. Add the name of your resource group to the secret's value field (example: `myResourceGroup`).
-1. Create an additional secret named `AZURE_SUBSCRIPTION`. Add your subscription ID to the secret's value field (example: `90fd3f9d-4c61-432d-99ba-1273f236afa2`).
+1. Create another secret named `AZURE_SUBSCRIPTION`. Add your subscription ID to the secret's value field (example: `90fd3f9d-4c61-432d-99ba-1273f236afa2`).
## Add a Bicep file Add a Bicep file to your GitHub repository. The following Bicep file creates a storage account:
-```url
-https://raw.githubusercontent.com/Azure/azure-docs-bicep-samples/main/get-started-with-bicep-files/add-variable/azuredeploy.bicep
-```
-The Bicep file takes one parameter called **storagePrefix** with 3 to 11 characters.
+The Bicep file requires one parameter called **storagePrefix** with 3 to 11 characters.
-You can put the file anywhere in the repository. The workflow sample in the next section assumes the Bicep file is named **azuredeploy.bicep**, and it is stored at the root of your repository.
+You can put the file anywhere in the repository. The workflow sample in the next section assumes the Bicep file is named **azuredeploy.bicep**, and it's stored at the root of your repository.
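The sample file itself isn't reproduced here, but a minimal Bicep file along these lines would work with the workflow in the next section. Only the `storagePrefix` parameter and its length constraints come from the description above; the name construction and API version are illustrative assumptions:

```bicep
// azuredeploy.bicep (assumed name) - creates a storage account.
@minLength(3)
@maxLength(11)
param storagePrefix string

param location string = resourceGroup().location

// Combine the prefix with a unique suffix to build a valid, unique account name (assumption).
var storageName = '${toLower(storagePrefix)}${uniqueString(resourceGroup().id)}'

resource exampleStorage 'Microsoft.Storage/storageAccounts@2021-04-01' = {
  name: storageName
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}
```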
## Create workflow
The workflow file must be stored in the **.github/workflows** folder at the root
1. Select **New workflow**. 1. Select **set up a workflow yourself**. 1. Rename the workflow file if you prefer a name other than **main.yml**. For example: **deployBicepFile.yml**.
-1. Replace the content of the yml file with the following:
+1. Replace the content of the yml file with the following code:
```yml on: [push]
The workflow file must be stored in the **.github/workflows** folder at the root
failOnStdErr: false ```
- Replace **mystore** with your own storage account name prefix.
+ Replace `mystore` with your own storage account name prefix.
> [!NOTE] > You can specify a JSON format parameters file instead in the ARM Deploy action (example: `.azuredeploy.parameters.json`).
The workflow file must be stored in the **.github/workflows** folder at the root
The first section of the workflow file includes: - **name**: The name of the workflow.
- - **on**: The name of the GitHub events that triggers the workflow. The workflow is trigger when there is a push event on the main branch, which modifies at least one of the two files specified. The two files are the workflow file and the Bicep file.
   - **on**: The name of the GitHub events that trigger the workflow. The workflow is triggered when there's a push event on the main branch, which modifies at least one of the two files specified. The two files are the workflow file and the Bicep file.
1. Select **Start commit**. 1. Select **Commit directly to the main branch**. 1. Select **Commit new file** (or **Commit changes**).
-Because the workflow is configured to be triggered by either the workflow file or the Bicep file being updated, the workflow starts right after you commit the changes.
+Updating either the workflow file or Bicep file triggers the workflow. The workflow starts right after you commit the changes.
## Check workflow status
-1. Select the **Actions** tab. You will see a **Create deployStorageAccount.yml** workflow listed. It takes 1-2 minutes to run the workflow.
+1. Select the **Actions** tab. You'll see a **Create deployStorageAccount.yml** workflow listed. It takes 1-2 minutes to run the workflow.
1. Select the workflow to open it. 1. Select **Run ARM deploy** from the menu to verify the deployment.
azure-resource-manager Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/deploy-powershell.md
You can target your deployment to a resource group, subscription, management gro
New-AzResourceGroupDeployment -ResourceGroupName <resource-group-name> -TemplateFile <path-to-bicep> ``` -- To deploy to a **subscription**, use [New-AzSubscriptionDeployment](/powershell/module/az.resources/new-azdeployment) which is an alias of the `New-AzDeployment` cmdlet:
+- To deploy to a **subscription**, use [New-AzSubscriptionDeployment](/powershell/module/az.resources/new-azdeployment), which is an alias of the `New-AzDeployment` cmdlet:
```azurepowershell New-AzSubscriptionDeployment -Location <location> -TemplateFile <path-to-bicep>
Before deploying your Bicep file, you can preview the changes the Bicep file wil
## Deploy template specs
-Currently, Azure PowerShell doesn't support creating template specs by providing Bicep files. However you can create a Bicep file with the [Microsoft.Resources/templateSpecs](/azure/templates/microsoft.resources/templatespecs) resource to deploy a template spec. Here is an [example](https://github.com/Azure/azure-docs-bicep-samples/blob/main/create-template-spec-using-bicep/azuredeploy.bicep). You can also build your Bicep file into an ARM template JSON by using the Bicep CLI, and then create a template spec with the JSON template.
+Currently, Azure PowerShell doesn't support creating template specs by providing Bicep files. However, you can create a Bicep file with the [Microsoft.Resources/templateSpecs](/azure/templates/microsoft.resources/templatespecs) resource to deploy a template spec. The [Create template spec sample](https://github.com/Azure/azure-docs-bicep-samples/blob/main/samples/create-template-spec/azuredeploy.bicep) shows how to create a template spec in a Bicep file. You can also build your Bicep file into an ARM template JSON by using the Bicep CLI, and then create a template spec with the JSON template.
## Deployment name
azure-resource-manager Deploy To Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/deploy-to-resource-group.md
Title: Use Bicep to deploy resources to resource groups description: Describes how to deploy resources in a Bicep file. It shows how to target more than one resource group. Previously updated : 06/01/2021 Last updated : 09/02/2021 # Resource group deployments with Bicep files
For more information, see [Management group](deploy-to-management-group.md#manag
To deploy resources in the target resource group, define those resources in the `resources` section of the template. The following template creates a storage account in the resource group that is specified in the deployment operation. ## Deploy to multiple resource groups
azure-resource-manager Deploy What If https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/deploy-what-if.md
Title: Bicep deployment what-if
description: Determine what changes will happen to your resources before deploying a Bicep file. Previously updated : 06/01/2021 Last updated : 09/02/2021 # Bicep deployment what-if operation
The following results show the two different output formats:
### Set up environment
-To see how what-if works, let's runs some tests. First, deploy a [Bicep file that creates a virtual network](https://github.com/Azure/azure-docs-bicep-samples/blob/main/bicep/what-if/what-if-before.bicep). You'll use this virtual network to test how changes are reported by what-if. Download a copy of the Bicep file.
+To see how what-if works, let's run some tests. First, deploy a Bicep file that creates a virtual network. You'll use this virtual network to test how changes are reported by what-if. Download a copy of the Bicep file.
++
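The downloadable sample isn't reproduced here, but a sketch of what such a starting file might contain follows. The network name, tags, and address ranges are illustrative assumptions only, not the contents of the sample:

```bicep
param location string = resourceGroup().location

// Starting virtual network: two tags and two subnets (illustrative values).
resource vnet 'Microsoft.Network/virtualNetworks@2021-02-01' = {
  name: 'vnet-what-if-demo'
  location: location
  tags: {
    CostCenter: 'Demo'
    Owner: 'Demo team'
  }
  properties: {
    addressSpace: {
      addressPrefixes: [
        '10.0.0.0/16'
      ]
    }
    subnets: [
      {
        name: 'subnet001'
        properties: {
          addressPrefix: '10.0.0.0/24'
        }
      }
      {
        name: 'subnet002'
        properties: {
          addressPrefix: '10.0.1.0/24'
        }
      }
    ]
  }
}
```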
+To deploy the Bicep file, use:
# [PowerShell](#tab/azure-powershell)
az deployment group create \
### Test modification
-After the deployment completes, you're ready to test the what-if operation. This time you deploy a [Bicep file that changes the virtual network](https://github.com/Azure/azure-docs-bicep-samples/blob/main/bicep/what-if/what-if-after.bicep). It's missing one the original tags, a subnet has been removed, and the address prefix has changed. Download a copy of the Bicep file.
+After the deployment completes, you're ready to test the what-if operation. This time you deploy a Bicep file that changes the virtual network. It's missing one of the original tags, a subnet has been removed, and the address prefix has changed. Download a copy of the Bicep file.
++
+To view the changes, use:
# [PowerShell](#tab/azure-powershell)
Are you sure you want to execute the deployment?
You see the expected changes and can confirm that you want the deployment to run.
+## Clean up resources
+
+When you no longer need the example resources, use Azure CLI or Azure PowerShell to delete the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli
+az group delete --name ExampleGroup
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell
+Remove-AzResourceGroup -Name ExampleGroup
+```
+++ ## SDKs You can use the what-if operation through the Azure SDKs.
azure-resource-manager Outputs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/outputs.md
Title: Outputs in Bicep description: Describes how to define output values in Bicep Previously updated : 06/01/2021 Last updated : 09/02/2021 # Outputs in Bicep
The format of each output value must resolve to one of the [data types](data-typ
## Define output values
-The following example shows how to use the `output` keyword to return a property from a deployed resource.
-
-In the following example, `publicIP` is the identifier (symbolic name) of a public IP address deployed in the Bicep file. The output value gets the fully qualified domain name for the public IP address.
+The following example shows how to use the `output` keyword to return a property from a deployed resource. In the example, `publicIP` is the identifier (symbolic name) of a public IP address deployed in the Bicep file. The output value gets the fully qualified domain name for the public IP address.
```bicep output hostname string = publicIP.properties.dnsSettings.fqdn
var user = {
output stringOutput string = user['user-name'] ```
+The next example shows how to return outputs of different types.
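The referenced sample isn't embedded here; as a rough sketch (the specific names and values are illustrative assumptions), outputs of different types can look like this:

```bicep
param location string = resourceGroup().location

// One output per common data type: string, int, bool, object, and array.
output stringOutput string = location
output integerOutput int = length(location)
output booleanOutput bool = contains(location, 'east')
output objectOutput object = {
  region: location
  isProduction: false
}
output arrayOutput array = [
  'value1'
  'value2'
]
```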
+++ ## Conditional output You can conditionally return a value. Typically, you use a conditional output when you've [conditionally deployed](conditional-resource-deployment.md) a resource. The following example shows how to conditionally return the resource ID for a public IP address based on whether a new one was deployed:
publicIPAddress: {
} ```
-## Example template
-
-The following template doesn't deploy any resources. It shows some ways of returning outputs of different types.
-
-Bicep doesn't currently support loops.
-- ## Get output values When the deployment succeeds, the output values are automatically returned in the results of the deployment.
azure-resource-manager Variables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/variables.md
description: Describes how to define variables in Bicep
Previously updated : 06/01/2021 Last updated : 09/02/2021 # Variables in Bicep
You can use the value from a parameter or another variable when constructing the
param inputValue string = 'deployment Parameter' var stringVar = 'myVariable'+ var concatToVar = '${stringVar}AddToVar' var concatToParam = '${inputValue}AddToParam' ```
The following example creates a string value for a storage account name. It uses
var storageName = '${toLower(storageNamePrefix)}${uniqueString(resourceGroup().id)}' ```
+The following example doesn't deploy any resources. It shows how to declare variables of different types.
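The referenced sample isn't embedded here; as a rough sketch (names and values are illustrative assumptions), variables of different types can be declared like this:

```bicep
// One variable per common data type: string, int, bool, array, and object.
var exampleString = 'myVariable'
var exampleInt = 1
var exampleBool = true
var exampleArray = [
  1
  2
  3
]
var exampleObject = {
  name: 'test name'
  id: '123-abc'
  isCurrent: exampleBool
}

// Return the variables as outputs so the declarations are used.
output stringOutput string = exampleString
output intOutput int = exampleInt
output arrayOutput array = exampleArray
output objectOutput object = exampleObject
```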
++ ## Use variable The following example shows how to use the variable for a resource property. You reference the value for the variable by providing the variable's name: `storageName`.
output stgOutput string = storageName
Because storage account names must use lowercase letters, the `storageName` variable uses the `toLower` function to make the `storageNamePrefix` value lowercase. The `uniqueString` function creates a unique value from the resource group ID. The values are concatenated to a string.
-## Example template
-
-The following template doesn't deploy any resources. It shows some ways of declaring variables of different types.
-- ## Configuration variables You can define variables that hold related values for configuring an environment. You define the variable as an object with the values. The following example shows an object that holds values for two environments - **test** and **prod**. Pass in one of these values during deployment. ## Next steps
azure-resource-manager Export Template Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/export-template-cli.md
+
+ Title: Export template in Azure CLI
+description: Use Azure CLI to export an Azure Resource Manager template from resources in your subscription.
+ Last updated : 09/01/2021+
+# Use Azure CLI to export a template
++
+This article shows how to export templates through **Azure CLI**. For other options, see:
+
+* [Export template with Azure portal](export-template-portal.md)
+* [Export template with Azure PowerShell](export-template-powershell.md)
+* [REST API export from resource group](/rest/api/resources/resourcegroups/exporttemplate) and [REST API export from deployment history](/rest/api/resources/deployments/export-template).
+++
+## Export template from a resource group
+
+After setting up your resource group successfully, you can export an Azure Resource Manager template for the resource group.
+
+To export all resources in a resource group, use [az group export](/cli/azure/group#az_group_export) and provide the resource group name.
+
+```azurecli-interactive
+az group export --name demoGroup
+```
+
+The script displays the template on the console. To save to a file, use:
+
+```azurecli-interactive
+az group export --name demoGroup > exportedtemplate.json
+```
+
+Instead of exporting all resources in the resource group, you can select which resources to export.
+
+To export one resource, pass that resource ID.
+
+```azurecli-interactive
+storageAccountID=$(az resource show --resource-group demoGroup --name demostg --resource-type Microsoft.Storage/storageAccounts --query id --output tsv)
+az group export --resource-group demoGroup --resource-ids $storageAccountID
+```
+
+To export more than one resource, pass the space-separated resource IDs. To export all resources, don't specify this argument or supply "*".
+
+```azurecli-interactive
+az group export --resource-group <resource-group-name> --resource-ids $storageAccountID1 $storageAccountID2
+```
+
+When exporting the template, you can specify whether parameters are used in the template. By default, parameters for resource names are included but they don't have a default value.
+
+```json
+"parameters": {
+ "serverfarms_demoHostPlan_name": {
+ "type": "String"
+ },
+ "sites_webSite3bwt23ktvdo36_name": {
+ "type": "String"
+ }
+}
+```
+
+If you use the `--skip-resource-name-params` parameter when exporting the template, parameters for resource names aren't included in the template. Instead, the resource name is set directly on the resource to its current value. You can't customize the name during deployment.
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Web/serverfarms",
+ "apiVersion": "2016-09-01",
+ "name": "demoHostPlan",
+ ...
+ }
+]
+```
+
+If you use the `--include-parameter-default-value` parameter when exporting the template, the template parameter includes a default value that is set to the current value. You can either use that default value or overwrite the default value by passing in a different value.
+
+```json
+"parameters": {
+ "serverfarms_demoHostPlan_name": {
+ "defaultValue": "demoHostPlan",
+ "type": "String"
+ },
+ "sites_webSite3bwt23ktvdo36_name": {
+ "defaultValue": "webSite3bwt23ktvdo36",
+ "type": "String"
+ }
+}
+```
+
+## Export template from deployment history
+
+You can export a template from the deployment history. The template you get is exactly the one that was used for deployment.
+
+To get a template from a resource group deployment, use the [az deployment group export](/cli/azure/deployment/group#az_deployment_group_export) command.
+
+```azurecli-interactive
+az deployment group export --resource-group demoGroup --name demoDeployment
+```
+
+The template is displayed in the console. To save the file, use:
+
+```azurecli-interactive
+az deployment group export --resource-group demoGroup --name demoDeployment > demoDeployment.json
+```
+
+To get templates deployed at other levels, use:
+
+* [az deployment sub export](/cli/azure/deployment/sub#az_deployment_sub_export) for deployments to subscriptions
+* [az deployment mg export](/cli/azure/deployment/mg#az_deployment_mg_export) for deployments to management groups
+* [az deployment tenant export](/cli/azure/deployment/tenant#az_deployment_tenant_export) for deployments to tenants
++
+## Next steps
+
+- Learn how to export templates with [Azure portal](export-template-portal.md), [Azure PowerShell](export-template-powershell.md), or [REST API](/rest/api/resources/resourcegroups/exporttemplate).
+- To learn the Resource Manager template syntax, see [Understand the structure and syntax of Azure Resource Manager templates](./syntax.md).
+- To learn how to develop templates, see the [step-by-step tutorials](../index.yml).
azure-resource-manager Export Template Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/export-template-portal.md
Title: Export template in Azure portal description: Use Azure portal to export an Azure Resource Manager template from resources in your subscription. Previously updated : 07/29/2020 Last updated : 09/01/2021
-# Single and multi-resource export to a template in Azure portal
+# Use Azure portal to export a template
-To assist with creating Azure Resource Manager templates, you can export a template from existing resources. The exported template helps you understand the JSON syntax and properties that deploy your resources. To automate future deployments, start with the exported template and modify it for your scenario.
-Resource Manager enables you to pick one or more resources for exporting to a template. You can focus on exactly the resources you need in the template.
+This article shows how to export templates through the **portal**. For other options, see:
-This article shows how to export templates through the portal. You can also use [Azure CLI](../management/manage-resource-groups-cli.md#export-resource-groups-to-templates), [Azure PowerShell](../management/manage-resource-groups-powershell.md#export-resource-groups-to-templates), or [REST API](/rest/api/resources/resourcegroups/exporttemplate).
+* [Export template with Azure CLI](export-template-cli.md)
+* [Export template with Azure PowerShell](export-template-powershell.md)
+* [REST API export from resource group](/rest/api/resources/resourcegroups/exporttemplate) and [REST API export from deployment history](/rest/api/resources/deployments/export-template).
-## Choose the right export option
-There are two ways to export a template:
-
-* **Export from resource group or resource**. This option generates a new template from existing resources. The exported template is a "snapshot" of the current state of the resource group. You can export an entire resource group or specific resources within that resource group.
-
-* **Export before deployment or from history**. This option retrieves an exact copy of a template used for deployment.
-
-Depending on the option you choose, the exported templates have different qualities.
-
-| From resource group or resource | Before deployment or from history |
-| | -- |
-| Template is snapshot of the resources' current state. It includes any manual changes you made after deployment. | Template only shows state of resources at the time of deployment. Any manual changes you made after deployment aren't included. |
-| You can select which resources from a resource group to export. | All resources for a specific deployment are included. You can't pick a subset of those resources or add resources that were added at a different time. |
-| Template includes all properties for the resources, including some properties you wouldn't normally set during deployment. You might want to remove or clean up these properties before reusing the template. | Template includes only the properties needed for the deployment. The template is ready-to-use. |
-| Template probably doesn't include all of the parameters you need for reuse. Most property values are hard-coded in the template. To redeploy the template in other environments, you need to add parameters that increase the ability to configure the resources. You can unselect **Include parameters** so that you can author your own parameters. | Template includes parameters that make it easy to redeploy in different environments. |
-
-Export the template from a resource group or resource, when:
-
-* You need to capture changes to the resources that were made after the original deployment.
-* You want to select which resources are exported.
-
-Export the template before deployment or from the history, when:
-
-* You want an easy-to-reuse template.
-* You don't need to include changes you made after the original deployment.
-
-## Limitations
-
-When exporting from a resource group or resource, the exported template is generated from the [published schemas](https://github.com/Azure/azure-resource-manager-schemas/tree/master/schemas) for each resource type. Occasionally, the schema doesn't have the latest version for a resource type. Check your exported template to make sure it includes the properties you need. If necessary, edit the exported template to use the API version you need.
-
-The export template feature doesn't support exporting Azure Data Factory resources. To learn about how you can export Data Factory resources, see [Copy or clone a data factory in Azure Data Factory](../../data-factory/copy-clone-data-factory.md).
-
-To export resources created through classic deployment model, you must [migrate them to the Resource Manager deployment model](../../virtual-machines/migration-classic-resource-manager-overview.md).
-
-If you get a warning when exporting a template that indicates a resource type wasn't exported, you can still discover the properties for that resource. To learn about the different options for viewing resource properties, see [Discover resource properties](view-resources.md). You can also look at the [Azure REST API](/rest/api/azure/) for the resource type.
-
-There's a limit of 200 resources in the resource group you create the exported template for. If you attempt to export a resource group that has more than 200 resources, the error message `Export template is not supported for resource groups more than 200 resources` is shown.
## Export template from a resource group
To export one resource:
1. The exported template is displayed, and is available to download and deploy. The template only contains the single resource. **Include parameters** is selected by default. When selected, all template parameters will be included when the template is generated. If you'd like to author your own parameters, toggle this checkbox to not include them.
-## Export template before deployment
+## Download template before deployment
+
+The portal has the option of downloading a template before deploying it. This option isn't available through PowerShell or Azure CLI.
1. Select the Azure service you want to deploy.
To export one resource:
1. The template is displayed and is available for download and deploy. - ## Export template after deployment You can export the template that was used to deploy existing resources. The template you get is exactly the one that was used for deployment.
You can export the template that was used to deploy existing resources. The temp
## Next steps -- Learn how to export templates with [Azure CLI](../management/manage-resource-groups-cli.md#export-resource-groups-to-templates), [Azure PowerShell](../management/manage-resource-groups-powershell.md#export-resource-groups-to-templates), or [REST API](/rest/api/resources/resourcegroups/exporttemplate).
+- Learn how to export templates with [Azure CLI](export-template-cli.md), [Azure PowerShell](export-template-powershell.md), or [REST API](/rest/api/resources/resourcegroups/exporttemplate).
- To learn the Resource Manager template syntax, see [Understand the structure and syntax of Azure Resource Manager templates](./syntax.md).-- To learn how to develop templates, see the [step-by-step tutorials](../index.yml).-- To view the Azure Resource Manager template schemas, see [template reference](/azure/templates/).
+- To learn how to develop templates, see the [step-by-step tutorials](../index.yml).
azure-resource-manager Export Template Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/export-template-powershell.md
+
+ Title: Export template in Azure PowerShell
+description: Use Azure PowerShell to export an Azure Resource Manager template from resources in your subscription.
+ Last updated : 09/01/2021+
+# Use Azure PowerShell to export a template
++
+This article shows how to export templates through **Azure PowerShell**. For other options, see:
+
+* [Export template with Azure CLI](export-template-cli.md)
+* [Export template with Azure portal](export-template-portal.md)
+* [REST API export from resource group](/rest/api/resources/resourcegroups/exporttemplate) and [REST API export from deployment history](/rest/api/resources/deployments/export-template).
+++
+## Export template from a resource group
+
+After setting up your resource group, you can export an Azure Resource Manager template for the resource group.
+
+To export all resources in a resource group, use the [Export-AzResourceGroup](/powershell/module/az.resources/Export-AzResourceGroup) cmdlet and provide the resource group name.
+
+```azurepowershell-interactive
+Export-AzResourceGroup -ResourceGroupName demoGroup
+```
+
+It saves the template as a local file.
+
+Instead of exporting all resources in the resource group, you can select which resources to export.
+
+To export one resource, pass that resource ID.
+
+```azurepowershell-interactive
+$resource = Get-AzResource `
+ -ResourceGroupName <resource-group-name> `
+ -ResourceName <resource-name> `
+ -ResourceType <resource-type>
+Export-AzResourceGroup `
+ -ResourceGroupName <resource-group-name> `
+ -Resource $resource.ResourceId
+```
+
+To export more than one resource, pass the resource IDs in an array.
+
+```azurepowershell-interactive
+Export-AzResourceGroup `
+ -ResourceGroupName <resource-group-name> `
+ -Resource @($resource1.ResourceId, $resource2.ResourceId)
+```
+
+When exporting the template, you can specify whether parameters are used in the template. By default, parameters for resource names are included but they don't have a default value.
+
+```json
+"parameters": {
+ "serverfarms_demoHostPlan_name": {
+ "type": "String"
+ },
+ "sites_webSite3bwt23ktvdo36_name": {
+ "type": "String"
+ }
+}
+```
+
+If you use the `-SkipResourceNameParameterization` parameter when exporting the template, parameters for resource names aren't included in the template. Instead, the resource name is set directly on the resource to its current value. You can't customize the name during deployment.
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Web/serverfarms",
+ "apiVersion": "2016-09-01",
+ "name": "demoHostPlan",
+ ...
+ }
+]
+```
+
+If you use the `-IncludeParameterDefaultValue` parameter when exporting the template, the template parameter includes a default value that is set to the current value. You can either use that default value or overwrite the default value by passing in a different value.
+
+```json
+"parameters": {
+ "serverfarms_demoHostPlan_name": {
+ "defaultValue": "demoHostPlan",
+ "type": "String"
+ },
+ "sites_webSite3bwt23ktvdo36_name": {
+ "defaultValue": "webSite3bwt23ktvdo36",
+ "type": "String"
+ }
+}
+```
+
+## Export template from deployment history
+
+You can export a template from the deployment history. The template you get is exactly the one that was used for deployment.
+
+To get a template from a resource group deployment, use the [Save-AzResourceGroupDeploymentTemplate](/powershell/module/az.resources/save-azresourcegroupdeploymenttemplate) cmdlet.
+
+```azurepowershell-interactive
+Save-AzResourceGroupDeploymentTemplate -ResourceGroupName demoGroup -DeploymentName demoDeployment
+```
+
+The template is saved as a local file with the name of the deployment.
+
+To get templates deployed at other levels, use:
+
+* [Save-AzDeploymentTemplate](/powershell/module/az.resources/save-azdeploymenttemplate) for deployments to subscriptions
+* [Save-AzManagementGroupDeploymentTemplate](/powershell/module/az.resources/save-azmanagementgroupdeploymenttemplate) for deployments to management groups
+* [Save-AzTenantDeploymentTemplate](/powershell/module/az.resources/save-aztenantdeploymenttemplate) for deployments to tenants
+
+## Next steps
+
+- Learn how to export templates with [Azure CLI](export-template-cli.md), [Azure portal](export-template-portal.md), or [REST API](/rest/api/resources/resourcegroups/exporttemplate).
+- To learn the Resource Manager template syntax, see [Understand the structure and syntax of Azure Resource Manager templates](./syntax.md).
+- To learn how to develop templates, see the [step-by-step tutorials](../index.yml).
azure-resource-manager View Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/view-resources.md
- Title: Discover resource properties
-description: Describes how to search for resource properties.
- Previously updated : 06/10/2020--
-# Discover resource properties
-
-Before creating Resource Manager templates, you need to understand what resource types are available, and what values to use in your template. This article shows some ways you can find the properties to include in your template.
-
-## Find resource provider namespaces
-
-Resources in an ARM template are defined with a resource provider namespace and resource type. For example, Microsoft.Storage/storageAccounts is the full name of the storage account resource type. Microsoft.Storage is the namespace. If you don't already know the namespaces for the resource types you want to use, see [Resource providers for Azure services](../management/azure-services-resource-providers.md).
-
-![Resource Manager resource provider namespace mapping](./media/view-resources/resource-provider-namespace-and-azure-service-mapping.png)
-
-## Export templates
-
-The easiest way to get the template properties for your existing resources is to export the template. For more information, see [Single and multi-resource export to a template in the Azure portal](./export-template-portal.md).
-
-## Use Resource Manager tools extension
-
-Visual Studio Code and the Azure Resource Manager tools extension help you see exactly which properties are needed for each resource type. They provide intellisense and snippets that simplify how you define a resource in your template. For more information, see [Quickstart: Create Azure Resource Manager templates with Visual Studio Code](./quickstart-create-templates-use-visual-studio-code.md#add-an-azure-resource).
-
-The following screenshot shows a storage account resource is added to a template:
-
-![Resource Manager tools extension snippets](./media/view-resources/resource-manager-tools-extension-snippets.png)
-
-The extension also provides a list of options for the configuration properties.
-
-![Resource Manager tools extension configurable values](./media/view-resources/resource-manager-tools-extension-configurable-properties.png)
-
-## Use template reference
-
-The Azure Resource Manager template reference is the most comprehensive resource for template schema. You can find API versions, template format, and property information.
-
-1. Browse to [Azure Resource Manager template reference](/azure/templates/).
-1. From the left navigation, select **Storage**, and then select **All resources**. The All resources page summarizes the resource types and the versions.
-
- ![template reference resource versions](./media/view-resources/resource-manager-template-reference-resource-versions.png)
-
- If you know the resource type, you can go directly to this page with the following URL format: `https://docs.microsoft.com/azure/templates/{provider-namespace}/{resource-type}`.
-
-1. Select the latest version. It is recommended to use the latest API version.
-
- The **Template format** section lists all the properties for storage account. **sku** is in the list:
-
- ![template reference storage account format](./media/view-resources/resource-manager-template-reference-storage-account-sku.png)
-
- Scroll down to see **Sku object** in the **Property values** section. The article shows the allowed values for SKU name:
-
- ![template reference storage account SKU values](./media/view-resources/resource-manager-template-reference-storage-account-sku-values.png)
-
- At the end of the page, the **Quickstart templates** section lists some Azure Quickstart Templates that contain the resource type:
-
- ![template reference storage account quickstart templates](./media/view-resources/resource-manager-template-reference-quickstart-templates.png)
-
-The template reference is linked from each of the Azure service documentation sites. For example, the [Key Vault documentation site](../../key-vault/general/overview.md):
-
-![Resource Manager template reference Key Vault](./media/view-resources/resource-manager-template-reference-key-vault.png)
-
-## Use Resource Explorer
-
-Resource Explorer is embedded in the Azure portal. Before using this method, you need a storage account. If you don't have one, select the following button to create one:
-
-[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.storage%2Fstorage-account-create%2Fazuredeploy.json)
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In the search box, enter **resource explorer**, and then select **Resource Explorer**.
-
- ![Screenshot shows searching for the Resource Explorer in the Azure portal.](./media/view-resources/azure-portal-resource-explorer.png)
-
-1. From left, expand **Subscriptions**, and then expand your Azure subscription. You can find the storage account under either **Providers** or **ResourceGroups**.
-
- ![Azure portal Resource Explorer](./media/view-resources/azure-portal-resource-explorer-home.png)
-
- - **Providers**: expand **Providers** -> **Microsoft.Storage** -> **storageAccounts**, and then select your storage account.
- - **ResourceGroups**: select the resource group, which contains the storage account, select **Resources**, and then select the storage account.
-
- On the right, you see the SKU configuration for the existing storage account similar to:
-
- ![Azure portal Resource Explorer storage account sku](./media/view-resources/azure-portal-resource-explorer-sku.png)
-
-## Use Resources.azure.com
-
-Resources.azure.com is a public website that can be accessed by anyone with an Azure subscription. It is in preview. Consider using [Resource Explorer](#use-resource-explorer) instead. This tool provides these functionalities:
--- Discover the Azure Resource Management APIs.-- Get API documentation and schema information.-- Make API calls directly in your own subscriptions.-
-To demonstrate how to retrieve schema information by using this tool, you need a storage account. If you don't have one, select the following button to create one:
-
-[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.storage%2Fstorage-account-create%2Fazuredeploy.json)
-
-1. Browse to [resources.azure.com](https://resources.azure.com/). It takes a few moments for the tool to populate the left pane.
-1. Select **subscriptions**.
-
- ![resource.azure.com api mapping](./media/view-resources/resources-azure-com-api-mapping.png)
-
- The node on the left matches the API call on the right. You can make the API call by selecting the **GET** button.
-1. From left, expand **Subscriptions**, and then expand your Azure subscription. You can find the storage account under either **Providers** or **ResourceGroups**.
-
- - **Providers**: expand **Providers** -> **Microsoft.Storage** -> **storageAccounts**, and then browse to the storage account.
- - **ResourceGroups**: select the resource group, which contains the storage account, and then select **Resources**.
-
- On the right, you see the sku configuration for the existing storage account similar to:
-
- ![Azure portal Resource Explorer storage account sku](./media/view-resources/azure-portal-resource-explorer-sku.png)
-
-## Next steps
-
-In this article, you learned how to find template schema information. To learn more about creating Resource Manager templates, see [Understand the structure and syntax of ARM templates](./syntax.md).
azure-sql Authentication Aad Guest Users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/authentication-aad-guest-users.md
Last updated 05/10/2021
[!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
-> [!NOTE]
-> This article is in **public preview**.
- Guest users in Azure Active Directory (Azure AD) are users that have been imported into the current Azure AD from other Azure Active Directories, or outside of it. For example, guest users can include users from other Azure Active Directories, or from accounts like *\@outlook.com*, *\@hotmail.com*,
-*\@live.com*, or *\@gmail.com*. This article will demonstrate how to create an Azure AD guest user, and set that user as an Azure AD admin for the Azure SQL logical server, without needing to have that guest user be part of a group inside Azure AD.
+*\@live.com*, or *\@gmail.com*.
+
+This article demonstrates how to create an Azure AD guest user and set that user as an Azure AD admin for Azure SQL Managed Instance or the [logical server in Azure](logical-servers.md) used by Azure SQL Database and Azure Synapse Analytics, without having to add the guest user to a group inside Azure AD.
## Feature description
-This feature lifts the current limitation that only allows guest users to connect to Azure SQL Database, SQL Managed Instance, or Azure Synapse Analytics when they're members of a group created in Azure AD. The group needed to be mapped to a user manually using the [CREATE USER (Transact-SQL)](/sql/t-sql/statements/create-user-transact-sql) statement in a given database. Once a database user has been created for the Azure AD group containing the guest user, the guest user can sign into the database using Azure Active Directory with MFA authentication. As part of this **public preview**, guest users can be created and connect directly to SQL Database, SQL Managed Instance, or Azure Synapse without the requirement of adding them to an Azure AD group first, and then creating a database user for that Azure AD group.
+This feature lifts the current limitation that only allows guest users to connect to Azure SQL Database, SQL Managed Instance, or Azure Synapse Analytics when they're members of a group created in Azure AD. The group needed to be mapped to a user manually using the [CREATE USER (Transact-SQL)](/sql/t-sql/statements/create-user-transact-sql) statement in a given database. Once a database user has been created for the Azure AD group containing the guest user, the guest user can sign into the database using Azure Active Directory with MFA authentication. Guest users can be created and connect directly to SQL Database, SQL Managed Instance, or Azure Synapse without the requirement of adding them to an Azure AD group first, and then creating a database user for that Azure AD group.
-As part of this feature, you also have the ability to set the Azure AD guest user directly as an AD admin for the Azure SQL logical server. The existing functionality where the guest user can be part of an Azure AD group, and that group can then be set as the Azure AD admin for the Azure SQL logical server is not impacted. Guest users in the database that are a part of an Azure AD group are also not impacted by this change.
+As part of this feature, you also have the ability to set the Azure AD guest user directly as an AD admin for the logical server or for a managed instance. The existing functionality (which allows the guest user to be part of an Azure AD group that can then be set as the Azure AD admin for the logical server or managed instance) is *not* impacted. Guest users in the database that are a part of an Azure AD group are also not impacted by this change.
For more information about existing support for guest users using Azure AD groups, see [Using multi-factor Azure Active Directory authentication](authentication-mfa-ssms-overview.md). ## Prerequisite -- [Az.Sql 2.9.0](https://www.powershellgallery.com/packages/Az.Sql/2.9.0) module or higher is needed when using PowerShell to set a guest user as an Azure AD admin for the Azure SQL logical server.
+- [Az.Sql 2.9.0](https://www.powershellgallery.com/packages/Az.Sql/2.9.0) module or higher is needed when using PowerShell to set a guest user as an Azure AD admin for the logical server or managed instance.
## Create database user for Azure AD guest user
Follow these steps to create a database user using an Azure AD guest user.
## Setting a guest user as an Azure AD admin
-Follow these steps to set an Azure AD guest user as the Azure AD admin for the SQL logical server.
+Set the Azure AD admin using either the Azure portal, Azure PowerShell, or the Azure CLI.
+
+### Azure portal
+
+To set up an Azure AD admin for a logical server or a managed instance by using the Azure portal, follow these steps:
+
+1. Open the [Azure portal](https://portal.azure.com).
+1. Navigate to your SQL server or managed instance **Azure Active Directory** settings.
+1. Select **Set Admin**.
+1. In the Azure AD pop-up prompt, type the guest user, such as `guestuser@gmail.com`.
+1. Select this new user, and then save the operation.
-### Set Azure AD admin for SQL Database and Azure Synapse
+For more information, see [Setting Azure AD admin](authentication-aad-configure.md#azure-ad-admin-with-a-server-in-sql-database).
++
+### Azure PowerShell (SQL Database and Azure Synapse)
+
+To set up an Azure AD guest user for a logical server, follow these steps:
1. Ensure that the guest user (for example, `user1@gmail.com`) is already added into your Azure AD.
-1. Run the following PowerShell command to add the guest user as the Azure AD admin for your Azure SQL logical server:
+1. Run the following PowerShell command to add the guest user as the Azure AD admin for your logical server:
- - Replace `<ResourceGroupName>` with your Azure Resource Group name that contains the Azure SQL logical server.
- - Replace `<ServerName>` with your Azure SQL logical server name. If your server name is `myserver.database.windows.net`, replace `<Server Name>` with `myserver`.
+ - Replace `<ResourceGroupName>` with your Azure Resource Group name that contains the logical server.
+ - Replace `<ServerName>` with your logical server name. If your server name is `myserver.database.windows.net`, replace `<Server Name>` with `myserver`.
- Replace `<DisplayNameOfGuestUser>` with your guest user name. ```powershell Set-AzSqlServerActiveDirectoryAdministrator -ResourceGroupName <ResourceGroupName> -ServerName <ServerName> -DisplayName <DisplayNameOfGuestUser> ```
- You can also use the Azure CLI command [az sql server ad-admin](/cli/azure/sql/server/ad-admin) to set the guest user as an Azure AD admin for your Azure SQL logical server.
+You can also use the Azure CLI command [az sql server ad-admin](/cli/azure/sql/server/ad-admin) to set the guest user as an Azure AD admin for your logical server.
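To confirm the assignment, you can query the current Azure AD admin with the same Az.Sql module; a quick check (the placeholders match the ones used above):

```powershell
# Return the Azure AD admin currently configured on the logical server
Get-AzSqlServerActiveDirectoryAdministrator -ResourceGroupName <ResourceGroupName> -ServerName <ServerName>
```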
-### Set Azure AD admin for SQL Managed Instance
+### Azure PowerShell (SQL Managed Instance)
+
+To set up an Azure AD guest user for a managed instance, follow these steps:
1. Ensure that the guest user (for example, `user1@gmail.com`) is already added into your Azure AD.
Follow these steps to set an Azure AD guest user as the Azure AD admin for the S
Set-AzSqlInstanceActiveDirectoryAdministrator -ResourceGroupName <ResourceGroupName> -InstanceName "<ManagedInstanceName>" -DisplayName <DisplayNameOfGuestUser> -ObjectId <AADObjectIDOfGuestUser> ```
- You can also use the Azure CLI command [az sql mi ad-admin](/cli/azure/sql/mi/ad-admin) to set the guest user as an Azure AD admin for your SQL Managed Instance.
-
-## Limitations
-
-There is a limitation on the Azure portal that prevents selecting an Azure AD guest user as the Azure AD admin for SQL Managed Instance. For guest accounts outside of your Azure AD like *\@outlook.com*, *\@hotmail.com*, *\@live.com*, or *\@gmail.com*, the AD admin selector shows these accounts, but they are grayed out and cannot be selected. Use the above listed [PowerShell or CLI commands](#setting-a-guest-user-as-an-azure-ad-admin) to set the Azure AD admin. Alternatively, an Azure AD group containing the guest user can be set as the Azure AD admin for the SQL Managed Instance.
+You can also use the Azure CLI command [az sql mi ad-admin](/cli/azure/sql/mi/ad-admin) to set the guest user as an Azure AD admin for your managed instance.
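As with the logical server, you can verify the managed instance assignment afterward; for example (placeholders as above):

```powershell
# Return the Azure AD admin currently configured on the managed instance
Get-AzSqlInstanceActiveDirectoryAdministrator -ResourceGroupName <ResourceGroupName> -InstanceName "<ManagedInstanceName>"
```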
-This functionality will be enabled for SQL Managed Instance prior to General Availability of this feature.
## Next steps
azure-sql Security Server Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/security-server-roles.md
+
+ Title: Server roles
+
+description: This article provides an overview of server roles for the logical server of Azure SQL Database
+++++ Last updated : 09/02/2021+++
+# Azure SQL Database server roles for permission management
++
+In Azure SQL Database, the server is a logical concept and permissions cannot be granted on a server level. To simplify permission management, Azure SQL Database provides a set of fixed server-level roles to help you manage the permissions on a [logical server](logical-servers.md). Roles are security principals that group logins.
+
+> [!NOTE]
+> The *roles* concept in this article is similar to *groups* in the Windows operating system.
+
+These special fixed server-level roles use the prefix **##MS_** and the suffix **##** to distinguish from other regular user-created principals.
+
+As in on-premises SQL Server, server permissions are organized hierarchically. The permissions that are held by these server-level roles can propagate to database permissions. For the permissions to be effectively propagated to the database, a login needs to have a user account in the database.
+
+For example, the server-level role **##MS_ServerStateReader##** holds the permission **VIEW SERVER STATE**. If a login that is a member of this role has a user account in the *master* and *WideWorldImporters* databases, that user will have the **VIEW DATABASE STATE** permission in those two databases.
+
+> [!NOTE]
+> Any permission can be denied within user databases, in effect, overriding the server-wide grant via role membership. However, in the system database *master*, permissions cannot be granted or denied.
+
+Azure SQL Database currently provides three fixed server roles. The permissions that are granted to the fixed server roles cannot be changed and these roles can't have other fixed roles as members. You can add server-level SQL logins as members to server-level roles.
+
+> [!IMPORTANT]
+> Each member of a fixed server role can add other logins to that same role.
+
+For more information on Azure SQL Database logins and users, see [Authorize database access to SQL Database, SQL Managed Instance, and Azure Synapse Analytics](logins-create-manage.md).
+
+## Built-in server-level roles
+
+The following table shows the fixed server-level roles and their capabilities.
+
+|Built-in server-level role|Description|
+||--|
+|**##MS_DefinitionReader##**|Members of the **##MS_DefinitionReader##** fixed server role can read all catalog views that are covered by **VIEW ANY DEFINITION** and, in any database in which the member of this role has a user account, by **VIEW DEFINITION**.|
+|**##MS_ServerStateReader##**|Members of the **##MS_ServerStateReader##** fixed server role can read all dynamic management views (DMVs) and functions that are covered by **VIEW SERVER STATE** and, in any database in which the member of this role has a user account, by **VIEW DATABASE STATE**.|
+|**##MS_ServerStateManager##**|Members of the **##MS_ServerStateManager##** fixed server role have the same permissions as the **##MS_ServerStateReader##** role. In addition, the role holds the **ALTER SERVER STATE** permission, which allows access to several management operations, such as `DBCC FREEPROCCACHE`, `DBCC FREESYSTEMCACHE ('ALL')`, and `DBCC SQLPERF()`.|
++
+## Permissions of fixed server roles
+
+Each built-in server-level role has certain permissions assigned to it. The following table shows the permissions assigned to the server-level roles. It also shows the database-level permissions that are inherited if a user account exists in the database.
+
+|Fixed server-level role|Server-level permissions|Database-level permissions (if a database user exists)|
+|-|-|--|
+|**##MS_DefinitionReader##**|VIEW ANY DATABASE, VIEW ANY DEFINITION, VIEW ANY SECURITY DEFINITION|VIEW DEFINITION, VIEW SECURITY DEFINITION|
+|**##MS_ServerStateReader##**|VIEW SERVER STATE, VIEW SERVER PERFORMANCE STATE, VIEW SERVER SECURITY STATE|VIEW DATABASE STATE, VIEW DATABASE PERFORMANCE STATE, VIEW DATABASE SECURITY STATE|
+|**##MS_ServerStateManager##**|ALTER SERVER STATE, VIEW SERVER STATE, VIEW SERVER PERFORMANCE STATE, VIEW SERVER SECURITY STATE|VIEW DATABASE STATE, VIEW DATABASE PERFORMANCE STATE, VIEW DATABASE SECURITY STATE|
+
+
+## Working with server-level roles
+
+The following table explains the system views and functions that you can use to work with server-level roles in Azure SQL Database.
+
+|Feature|Type|Description|
+|-|-|--|
+|[IS_SRVROLEMEMBER &#40;Transact-SQL&#41;](/sql/t-sql/functions/is-srvrolemember-transact-sql)|Metadata|Indicates whether a SQL login is a member of the specified server-level role.|
+|[sys.server_role_members &#40;Transact-SQL&#41;](/sql/relational-databases/system-catalog-views/sys-server-role-members-transact-sql)|Metadata|Returns one row for each member of each server-level role.|
+|[sys.sql_logins &#40;Transact-SQL&#41;](/sql/relational-databases/system-catalog-views/sys-sql-logins-transact-sql)|Metadata|Returns one row for each SQL login.|
+|[ALTER SERVER ROLE &#40;Transact-SQL&#41;](/sql/t-sql/statements/alter-server-role-transact-sql)|Command|Changes the membership of a server role.|
+
+## <a name="_examples"></a> Examples
+
+The examples in this section show how to work with server-level roles in Azure SQL Database.
+
+### A. Adding a SQL login to a server-level role
+
+The following example adds the SQL login 'Jiao' to the server-level role ##MS_ServerStateReader##.
+
+```sql
+ALTER SERVER ROLE ##MS_ServerStateReader##
+ ADD MEMBER Jiao;
+GO
+```
+
+### B. Listing all principals (SQL authentication) that are members of a server-level role
+
+The following statement returns all members of any fixed server-level role using the `sys.server_role_members` and `sys.sql_logins` catalog views.
+
+```sql
+SELECT
+ sql_logins.principal_id AS MemberPrincipalID
+ , sql_logins.name AS MemberPrincipalName
+ , roles.principal_id AS RolePrincipalID
+ , roles.name AS RolePrincipalName
+FROM sys.server_role_members AS server_role_members
+INNER JOIN sys.server_principals AS roles
+ ON server_role_members.role_principal_id = roles.principal_id
+INNER JOIN sys.sql_logins AS sql_logins
+ ON server_role_members.member_principal_id = sql_logins.principal_id
+;
+GO
+```
+
+## Limitations of server-level roles
+
+- Role assignments may take up to 5 minutes to become effective. Also for existing sessions, changes to server role assignments don't take effect until the connection is closed and reopened. This is due to the distributed architecture between the *master* database and other databases on the same logical server.
+ - Partial workaround: to reduce the waiting period and ensure that server role assignments are current in a database, a server administrator or an Azure AD administrator can run `DBCC FLUSHAUTHCACHE` in the user database(s) to which the login has access. Currently logged-on users still have to reconnect after running `DBCC FLUSHAUTHCACHE` for the membership changes to take effect on them.
+
+- Server-level roles in Azure SQL Database can be assigned to SQL logins only. Azure AD logins aren't supported.
+
+- `IS_SRVROLEMEMBER()` isn't supported in the *master* database.
++
+## See also
+
+- [Database-Level Roles](/sql/relational-databases/security/authentication-access/database-level-roles)
+- [Security Catalog Views &#40;Transact-SQL&#41;](/sql/relational-databases/system-catalog-views/security-catalog-views-transact-sql)
+- [Security Functions &#40;Transact-SQL&#41;](/sql/t-sql/functions/security-functions-transact-sql)
+- [Permissions &#40;Database Engine&#41;](/sql/relational-databases/security/permissions-database-engine)
+- [DBCC FLUSHAUTHCACHE (Transact-SQL)](/sql/t-sql/database-console-commands/dbcc-flushauthcache-transact-sql)
backup Backup Azure File Share Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-file-share-rest-api.md
Track the resulting operation using the "Location" header with a simple *GET* co
GET https://management.azure.com/Subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFabrics/Azure/operationResults/cca47745-12d2-42f9-b3a4-75335f18fdf6?api-version=2016-12-01 ```
-Once all the Azure Storage accounts are discovered, the GET command returns a 200 (No Content) response. The vault is now able to discover any storage account with file shares that can be backed up within the subscription.
+Once all the Azure Storage accounts are discovered, the GET command returns a 204 (No Content) response. The vault is now able to discover any storage account with file shares that can be backed up within the subscription.
```http HTTP/1.1 204 NoContent
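If you're scripting this check, a small PowerShell sketch that polls the operation result shown above may help (the subscription ID, operation ID, and bearer token are placeholders; acquire the token with your usual Azure Resource Manager authentication):

```powershell
# Poll the operation result from the Location header; completion is indicated by the No Content response
$uri = "https://management.azure.com/Subscriptions/<subscription-id>/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFabrics/Azure/operationResults/<operation-id>?api-version=2016-12-01"
$response = Invoke-WebRequest -Method Get -Uri $uri -Headers @{ Authorization = "Bearer <access-token>" }
$response.StatusCode
```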
batch Batch Pool Vm Sizes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-pool-vm-sizes.md
Title: Choose VM sizes and images for pools description: How to choose from the available VM sizes and OS versions for compute nodes in Azure Batch pools Previously updated : 08/27/2021 Last updated : 09/02/2021
When you select a node size for an Azure Batch pool, you can choose from almost
### Pools in Virtual Machine configuration
-Batch pools in the Virtual Machine configuration support almost all [VM sizes](../virtual-machines/sizes.md). See the following table to learn more about supported sizes and restrictions.
+Batch pools in the Virtual Machine configuration support almost all [VM sizes](../virtual-machines/sizes.md). The supported VM sizes in a region can be obtained via [Batch Management APIs](batch-apis-tools.md#batch-management-apis), as well as the [command line tools](batch-apis-tools.md#batch-command-line-tools) (PowerShell cmdlets and Azure CLI). For example, the [Azure Batch CLI command](/cli/azure/batch/location#az_batch_location_list_skus) to list supported VM sizes in a region is:
+
+```azurecli-interactive
+az batch location list-skus --location
+ [--filter]
+ [--maxresults]
+ [--subscription]
+```
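If you prefer PowerShell, the Az.Batch module offers an equivalent lookup; a sketch, assuming your installed module version includes the `Get-AzBatchSupportedVirtualMachineSku` cmdlet (the region name is a placeholder):

```powershell
# List the VM SKUs that Batch supports in a given region (assumes a recent Az.Batch module)
Get-AzBatchSupportedVirtualMachineSku -Location "eastus"
```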
+
+For each VM series, the following table also lists whether the VM series and VM sizes are supported by Batch.
| VM series | Supported sizes | |||
batch Security Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/security-best-practices.md
Title: Batch security and compliance best practices description: Learn best practices and useful tips for enhancing security with your Azure Batch solutions. Previously updated : 12/18/2020 Last updated : 09/01/2021
Many security features are only available for pools configured using [Virtual Ma
Batch account access supports two methods of authentication: Shared Key and [Azure Active Directory (Azure AD)](batch-aad-auth.md).
-We strongly recommend using Azure AD for Batch account authentication. Some Batch capabilities require this method of authentication, including many of the security-related features discussed here.
+We strongly recommend using Azure AD for Batch account authentication. Some Batch capabilities require this method of authentication, including many of the security-related features discussed here. The service API authentication mechanism for a Batch account can be restricted to only Azure AD using the [allowedAuthenticationModes](/rest/api/batchmanagement/batch-account/create) property. When this property is set, API calls using Shared Key authentication will be rejected.
### Batch account pool allocation mode When creating a Batch account, you can choose between two [pool allocation modes](accounts.md#batch-accounts): -- **Batch service**: The default option, where the underlying Cloud Service or virtual machine scale set resources used to allocate and manage pool nodes are created in internal subscriptions, and aren't directly visible in the Azure portal. Only the Batch pools and nodes are visible.
+- **Batch service**: The default option, where the underlying Cloud Service or virtual machine scale set resources used to allocate and manage pool nodes are created in internal subscriptions, and aren't directly visible in the Azure portal. Only the Batch pools and nodes are visible.
- **User subscription**: The underlying Cloud Service or virtual machine scale set resources are created in the same subscription as the Batch account. These resources are therefore visible in the subscription, in addition to the corresponding Batch resources. With user subscription mode, Batch VMs and other resources are created directly in your subscription when a pool is created. User subscription mode is required if you want to create Batch pools using Azure Reserved VM Instances, use Azure Policy on virtual machine scale set resources, and/or manage the core quota on the subscription (shared across all Batch accounts in the subscription). To create a Batch account in user subscription mode, you must also register your subscription with Azure Batch, and associate the account with an Azure Key Vault.
For extra security, encrypt these disks using one of these Azure disk encryption
## Securely access services from compute nodes
-Batch nodes can [securely access credentials and secrets](credential-access-key-vault.md) stored in [Azure Key Vault](../key-vault/general/overview.md), which can be used by task applications to access other services. A certificate is used to grant the pool nodes access to Key Vault.
+Batch nodes can securely access credentials stored in [Azure Key Vault](../key-vault/general/overview.md), which can be used by task applications to access other services. A certificate is used to grant the pool nodes access to Key Vault. By [enabling automatic certificate rotation in your Batch pool](automatic-certificate-rotation.md), the credentials will be automatically renewed. This is the recommended option for Batch nodes to access credentials stored in Azure Key Vault, although you can also [set up Batch nodes to securely access credentials and secrets with a certificate](credential-access-key-vault.md) without automatic certificate rotation.
## Governance and compliance ### Compliance
-To help customers meet their own compliance obligations across regulated industries and markets worldwide, Azure maintains a [large portfolio of compliance offerings](https://azure.microsoft.com/overview/trusted-cloud/compliance).
+To help customers meet their own compliance obligations across regulated industries and markets worldwide, Azure maintains a [large portfolio of compliance offerings](https://azure.microsoft.com/overview/trusted-cloud/compliance).
These offerings are based on various types of assurances, including formal certifications, attestations, validations, authorizations, and assessments produced by independent third-party auditing firms, as well as contractual amendments, self-assessments, and customer guidance documents produced by Microsoft. Review the [comprehensive overview of compliance offerings](https://aka.ms/AzureCompliance) to determine which ones may be relevant to your Batch solutions.
Depending on your pool allocation mode and the resources to which a policy shoul
## Next steps - Review the [Azure security baseline for Batch](security-baseline.md).-- Read more [best practices for Azure Batch](best-practices.md).
+- Read more [best practices for Azure Batch](best-practices.md).
cloud-services-extended-support Available Sizes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/available-sizes.md
This article describes the available virtual machine sizes for Cloud Services (e
| SKU Family | ACU/ Core | |||
-| [A5-7](../virtual-machines/sizes-previous-gen.md?bc=%2fazure%2fvirtual-machines%2flinux%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json#a-series)| 100 |
-|[A8-A11](../virtual-machines/sizes-previous-gen.md?bc=%2fazure%2fvirtual-machines%2flinux%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json#a-seriescompute-intensive-instances) | 225* |
|[Av2](../virtual-machines/av2-series.md) | 100 | |[D](../virtual-machines/sizes-previous-gen.md?bc=%2fazure%2fvirtual-machines%2flinux%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json#d-series) | 160 | |[Dv2](../virtual-machines/dv2-dsv2-series.md) | 160 - 190* |
cloud-services-extended-support Deploy Prerequisite https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-prerequisite.md
The following sizes are deprecated in Azure Resource Manager. However, if you wa
| Previous size name | Updated size name | |||
-| ExtraSmall | Standard_A0 |
-| Small | Standard_A1 |
-| Medium | Standard_A2 |
-| Large | Standard_A3 |
-| ExtraLarge | Standard_A4 |
-| A5 | Standard_A5 |
-| A6 | Standard_A6 |
-| A7 | Standard_A7 |
-| A8 | Standard_A8 |
-| A9 | Standard_A9 |
-| A10 | Standard_A10 |
-| A11 | Standard_A11 |
-| MSODSG5 | Standard_MSODSG5 |
+| ExtraSmall | Standard_A1_v2 |
+| Small | Standard_A1_v2 |
+| Medium | Standard_A2_v2 |
+| Large | Standard_A4_v2 |
+| ExtraLarge | Standard_A8_v2 |
+| A5 | Standard_A2m_v2 |
+| A6 | Standard_A4m_v2 |
+| A7 | Standard_A8m_v2 |
+| A8 | Deprecated |
+| A9 | Deprecated |
+| A10 | Deprecated |
+| A11 | Deprecated |
+| MSODSG5 | Deprecated |
For example, `<WorkerRole name="WorkerRole1" vmsize="Medium"` would become `<WorkerRole name="WorkerRole1" vmsize="Standard_A2_v2"`.
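Before updating the service definition, it can help to confirm that the replacement size is offered in your target region; a quick check with the Az.Compute module (the region name is a placeholder):

```powershell
# List the Av2 sizes available in the target region
Get-AzVMSize -Location "eastus" | Where-Object { $_.Name -like "Standard_A*_v2" }
```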
cloud-services-extended-support Enable Rdp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/enable-rdp.md
Once remote desktop is enabled on the roles, you can initiate a connection direc
4. Open the file to connect to the role instance.
+## Update Remote Desktop Extension using PowerShell
+Follow these steps to update your cloud service to the latest module with an RDP extension:
+
+1. Update Az.CloudService module to the [latest version](https://www.powershellgallery.com/packages/Az.CloudService/0.5.0)
+
+```powershell
+Update-Module -Name Az.CloudService
+```
+
+2. Remove the existing RDP extension from the cloud service
+
+```powershell
+$resourceGroupName='<Resource Group Name>'
+$cloudServiceName='<Cloud Service Name>'
+
+# Get existing cloud service
+$cloudService = Get-AzCloudService -ResourceGroup $resourceGroupName -CloudServiceName $cloudServiceName
+
+# Remove existing RDP Extension from cloud service object
+$cloudService.ExtensionProfile.Extension = $cloudService.ExtensionProfile.Extension | Where-Object { $_.Type -ne "RDP" }
+```
+
+3. Add new RDP extension to the cloud service with the latest module
+
+```powershell
+# Create new RDP extension object
+$credential = Get-Credential
+$expiration='<Expiration Date>'
+$rdpExtension = New-AzCloudServiceRemoteDesktopExtensionObject -Name "RDPExtension" -Credential $credential -Expiration $expiration -TypeHandlerVersion "1.2.1"
+
+# Add RDP extension to existing cloud service extension object
+$cloudService.ExtensionProfile.Extension = $cloudService.ExtensionProfile.Extension + $rdpExtension
+
+# Update cloud service
+$cloudService | Update-AzCloudService
+```
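4. Optionally, confirm that the updated extension is now in place. The following quick check reuses the objects from the steps above (it assumes the extension objects expose the `Name`, `Type`, and `TypeHandlerVersion` properties used earlier):

```powershell
# Re-read the cloud service and list the extensions currently applied to it
$cloudService = Get-AzCloudService -ResourceGroup $resourceGroupName -CloudServiceName $cloudServiceName
$cloudService.ExtensionProfile.Extension | Select-Object Name, Type, TypeHandlerVersion
```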
## Next steps - Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support).
cognitive-services Resiliency And Recovery Plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/resiliency-and-recovery-plan.md
+
+ Title: How to back up and recover speech customization resources
+
+description: Learn how to prepare for service outages with Custom Speech and Custom Voice.
++++++ Last updated : 07/28/2021+++
+# Backup and recover speech customization resources
+
+The Speech service is [available in various regions](/azure/cognitive-services/speech-service/regions). Service subscription keys are tied to a single region. When you acquire a key, you select a specific region, where your data, model and deployments reside.
+
+Datasets for customer-created data assets, such as customized speech models and custom voice fonts, are also **available only within the service-deployed region**. Such assets are:
+
+**Custom Speech**
+- Training audio/text data
+- Test audio/text data
+- Customized speech models
+- Log data
+
+**Custom Voice**
+- Training audio/text data
+- Test audio/text data
+- Custom voice fonts
+
+While some customers use our default endpoints to transcribe audio or standard voices for speech synthesis, other customers create assets for customization.
+
+These assets are backed up regularly and automatically by the repositories themselves, so **no data loss will occur** if a region becomes unavailable. However, you must take steps to ensure service continuity in the event of a region outage.
+
+## How to monitor service availability
+
+If you use our default endpoints, you should configure your client code to monitor for errors, and if errors persist, be prepared to re-direct to another region of your choice where you have a service subscription.
+
+Follow these steps to configure your client to monitor for errors:
+
+1. Find the [list of regionally available endpoints in our documentation](/azure/cognitive-services/speech-service/rest-speech-to-text).
+2. Select a primary and one or more secondary/backup regions from the list.
+3. From the Azure portal, create Speech Service resources for each region (if you prefer to script this step, see the PowerShell sketch after these steps).
+ - If you have set a specific quota, you may also consider setting the same quota in the backup regions. See details in [Speech service Quotas and Limits](/azure/cognitive-services/speech-service/speech-services-quotas-and-limits).
+
+4. Note that each region has its own STS token service. For the primary region and any backup regions your client configuration file needs to know the:
+ - Regional Speech service endpoints
+ - [Regional subscription key and the region code](/azure/cognitive-services/speech-service/rest-speech-to-text)
+
+5. Configure your code to monitor for connectivity errors (typically connection timeouts and service unavailability errors). Here is sample code in C#: [GitHub: Adding Sample for showing a possible candidate for switching regions](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/fa6428a0837779cbeae172688e0286625e340942/samples/csharp/sharedcontent/console/speech_recognition_samples.cs#L965).
+
+ 1. Because networks experience transient errors, retry when a single, isolated connectivity failure occurs.
+ 2. For persistent errors, redirect traffic to the new STS token service and Speech service endpoint. (For Text-to-Speech, see the reference sample code: [GitHub: TTS public voice switching region](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_synthesis_samples.cs#L880).)
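If you'd rather script step 3 above than use the portal, a minimal sketch with the Az.CognitiveServices module (resource group, resource names, and regions are placeholders; `SpeechServices` is the resource kind for Speech):

```powershell
# Create a Speech resource in the primary region and another in a backup region
New-AzCognitiveServicesAccount -ResourceGroupName "myResourceGroup" -Name "speech-primary" -Type "SpeechServices" -SkuName "S0" -Location "westus"
New-AzCognitiveServicesAccount -ResourceGroupName "myResourceGroup" -Name "speech-secondary" -Type "SpeechServices" -SkuName "S0" -Location "eastus"
```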
+
+The recovery from regional failures for this usage type can be nearly instantaneous and comes at a very low cost. All that's required is building this redirection logic into the client. Assuming the audio stream isn't backed up, the data loss incurred will be minimal.
+
+## Custom endpoint recovery
+
+Data assets, models or deployments in one region cannot be made visible or accessible in any other region.
+
+You should create Speech Service resources in both a main and a secondary region by following the same steps as used for default endpoints.
+
+### Custom Speech
+
+Custom Speech Service does not support automatic failover. We suggest the following steps to prepare for manual or automatic failover implemented in your client code. In these steps you replicate custom models in a secondary region. With this preparation, your client code can switch to a secondary region when the primary region fails.
+
+1. Create your custom model in one main region (Primary).
+2. Run the [Model Copy API](https://eastus2.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription) to replicate the custom model to all prepared regions (Secondary).
+3. Go to Speech Studio to load the copied model and create a new endpoint in the secondary region. See how to deploy a new model in [Train and deploy a Custom Speech model](/azure/cognitive-services/speech-service/how-to-custom-speech-train-model).
+ - If you have set a specific quota, also consider setting the same quota in the backup regions. See details in [Speech service Quotas and Limits](/azure/cognitive-services/speech-service/speech-services-quotas-and-limits).
+4. Configure your client to fail over on persistent errors as with the default endpoints usage.
+
+Your client code can monitor availability of your deployed models in your primary region, and redirect their audio traffic to the secondary region when the primary fails. If you do not require real-time failover, you can still follow these steps to prepare for a manual failover.
+
+#### Offline failover
+
+If you do not require real-time failover you can decide to import your data, create and deploy your models in the secondary region at a later time with the understanding that these tasks will take time to complete.
+
+#### Failover Tests
+
+This section provides general guidance about timing. The times were recorded to estimate offline failover using a [representative test data set](https://github.com/microsoft/Cognitive-Custom-Speech-Service).
+
+- Data upload to new region: **15mins**
+- Acoustic/language model creation: **6 hours (depending on the data volume)**
+- Model evaluation: **30 mins**
+- Endpoint deployment: **10 mins**
+- Model copy API call: **10 mins**
+- Client code reconfiguration and deployment: **Depending on the client system**
+
+It is nonetheless advisable to create keys for a primary and secondary region for production models with real-time requirements.
+
+### Custom Voice
+
+Custom Voice does not support automatic failover. Handle real-time synthesis failures with these two options.
+
+**Option 1: Fail over to public voice in the same region.**
+
+When custom voice real-time synthesis fails, fail over to a public voice (client sample code: [GitHub: custom voice failover to public voice](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_synthesis_samples.cs#L899)).
+
+Check the [public voices available](/azure/cognitive-services/speech-service/language-support#neural-voices). You can also change the sample code above if you would like to fail over to a different voice or in a different region.
+
+**Option 2: Fail over to custom voice on another region.**
+
+1. Create and deploy your custom voice in one main region (primary).
+2. Copy your custom voice model to another region (the secondary region) in [Speech Studio](https://speech.microsoft.com).
+3. Go to Speech Studio and switch to the Speech resource in the secondary region. Load the copied model and create a new endpoint.
+ - Voice model deployment usually finishes **in 3 minutes**.
+ - Note: an additional endpoint is subject to additional charges. [Check the pricing for model hosting here](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+
+4. Configure your client to fail over to the secondary region. See sample code in C#: [GitHub: custom voice failover to secondary region](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_synthesis_samples.cs#L920).
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/overview.md
You can add Document Translation to your applications using the REST API or a cl
* The [**REST API**](reference/rest-api-guide.md). is a language agnostic interface that enables you to create HTTP requests and authorization headers to translate documents.
-* The [**client-library SDKs**](client-sdks.md) are language-specific classes, objects, methods, and code that you can quickly use by adding a reference in your project. Currently Document Translation has programming language support for [**C#/.NET**](/dotnet/api/azure.ai.translation.document) and [**Python**](/python/azure-ai-translation-document/latest/azure.ai.translation.document.html).
+* The [**client-library SDKs**](client-sdks.md) are language-specific classes, objects, methods, and code that you can quickly use by adding a reference in your project. Currently Document Translation has programming language support for [**C#/.NET**](/dotnet/api/azure.ai.translation.document) and [**Python**](https://pypi.org/project/azure-ai-translation-document/).
## Get started
cognitive-services Rest Api Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/reference/rest-api-guide.md
Text Translation is a cloud-based feature of the Azure Translator service and is
| [**dictionary/examples**](v3-0-dictionary-lookup.md) | **POST** | Returns how a term is used in context. | > [!div class="nextstepaction"]
-> [Create a Translator resource in the Azure portal.](/translator-how-to-signup.md)
+> [Create a Translator resource in the Azure portal.](../translator-how-to-signup.md)
> [!div class="nextstepaction"] > [Quickstart: REST API and your programming language](../quickstart-translator.md)
cognitive-services Translator Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/translator-overview.md
The following features are supported by the Translator service. Use the links in
| Feature | Description | Development options | |-|-|--|
-| [**Text Translation**](text-translation-overview.md) | Execute text translation between supported source and target languages in real time. | <ul><li>[**REST API**](reference/rest-api-guide.md) </li><li>[Text translation Docker container](/containers/translator-how-to-install-container), currently in gated preview.</li></ul> |
+| [**Text Translation**](text-translation-overview.md) | Execute text translation between supported source and target languages in real time. | <ul><li>[**REST API**](reference/rest-api-guide.md) </li><li>[Text translation Docker container](containers/translator-how-to-install-container.md), currently in gated preview.</li></ul> |
| [**Document Translation**](document-translation/overview.md) | Translate batch and complex files while preserving the structure and format of the original documents. | <ul><li>[**REST API**](document-translation/reference/rest-api-guide.md)</li><li>[**Client-library SDK**](document-translation/client-sdks.md)</li></ul> | | [**Custom Translator**](custom-translator/overview.md) | Build customized models to translate domain- and industry-specific language, terminology, and style. | <ul><li>[**Custom Translator portal**](https://portal.customtranslator.azure.ai/)</li></ul> |
communication-services Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/best-practices.md
Communication Services applications should dispose `VideoStreamRendererView`, or
### Hang up the call on onbeforeunload event Your application should invoke `call.hangup` when the `onbeforeunload` event is emitted.
+### Handling multiple calls on multiple tabs on mobile
+Your application shouldn't connect to calls from multiple browser tabs simultaneously, because this can cause undefined behavior due to resource allocation for the microphone and camera on the device. Developers are encouraged to always hang up calls when they're completed in the background before starting a new one.
+```JavaScript
+document.addEventListener("visibilitychange", function() {
+ if (document.visibilityState != 'visible') {
+ // The tab is no longer visible: hang up the call running here (for example, call.hangUp();) so the microphone and camera are released
+ }
+});
+ ```
+ ### Hang up the call on microphoneMuteUnexpectedly UFD When an iOS/Safari user receives a PSTN call, Azure Communication Services loses microphone access. Azure Communication Services will raise the `microphoneMuteUnexpectedly` call diagnostic event, and at this point Communication Services will not be able to regain access to microphone.
cosmos-db Cassandra Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra/cassandra-support.md
Azure Cosmos DB supports the following database commands on Cassandra API accoun
| UPDATE IF NOT EXISTS | Yes | | UPDATE conditions | No |
-NOTE: Lightweight Transactions are currently not supported for Accounts with Multi-region Writes enabled.
+> [!NOTE]
+> Lightweight transactions currently aren't supported for accounts that have multi-region writes enabled.
## CQL Shell commands
cosmos-db Upgrade Mongodb Version https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb/upgrade-mongodb-version.md
When upgrading from 3.2, the database account endpoint suffix will be updated to
If you are upgrading from version 3.2, you will need to replace the existing endpoint in your applications and drivers that connect with this database account. **Only connections that are using the new endpoint will have access to the features in the new API version**. The previous 3.2 endpoint should have the suffix `.documents.azure.com`.
+When upgrading from 3.2 to newer versions, [compound indexes](mongodb-indexing.md) are now required to perform sort operations on multiple fields to ensure stable, high performance for these queries. Ensure that these compound indexes are created so that your multi-field sorts succeed.
+ >[!Note] > This endpoint might have slight differences if your account was created in a Sovereign, Government or Restricted Azure Cloud.
cosmos-db Scaling Provisioned Throughput Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/scaling-provisioned-throughput-best-practices.md
+
+ Title: Best practices for scaling provisioned throughput (RU/s)
+description: Learn best practices for scaling provisioned throughput for manual and autoscale throughput
+++ Last updated : 08/20/2021+++++
+# Best practices for scaling provisioned throughput (RU/s)
+
+This article describes best practices and strategies for scaling the throughput (RU/s) of your database or container (collection, table, or graph). The concepts apply when you're increasing either the provisioned manual RU/s or the autoscale max RU/s of any resource for any of the Azure Cosmos DB APIs.
+
+## Prerequisites
+- If you're new to partitioning and scaling in Azure Cosmos DB, it's recommended to first read the article [Partitioning and horizontal scaling in Azure Cosmos DB](partitioning-overview.md).
+- If you're planning to scale your RU/s due to 429 exceptions, review the guidance in [Diagnose and troubleshoot Azure Cosmos DB request rate too large (429) exceptions](troubleshoot-request-rate-too-large.md). Before increasing RU/s, identify the root cause of the issue and whether increasing RU/s is the right solution.
+
+## Background on scaling RU/s
+
+When you send a request to increase the RU/s of your database or container, depending on your requested RU/s and your current physical partition layout, the scale-up operation will either complete instantly or asynchronously (typically 4-6 hours).
+- **Instant scale-up**
+ - When your requested RU/s can be supported by the current physical partition layout, Azure Cosmos DB doesn't need to split or add new partitions.
+ - As a result, the operation completes immediately and the RU/s are available for use.
+- **Asynchronous scale-up**
+ - When the requested RU/s are higher than what can be supported by the physical partition layout, Azure Cosmos DB will split existing physical partitions. This occurs until the resource has the minimum number of partitions required to support the requested RU/s.
+ - As a result, the operation can take some time to complete, typically 4-6 hours.
+
+Each physical partition can support a maximum of 10,000 RU/s (applies to all APIs) of throughput and 50 GB of storage (applies to all APIs, except Cassandra, which has 30 GB of storage).
+
+## How to scale up RU/s without changing partition layout
+
+### Step 1: Find the current number of physical partitions.
+
+Navigate to
+**Insights** > **Throughput** > **Normalized RU Consumption (%) By PartitionKeyRangeID**. Count the distinct number of PartitionKeyRangeIds.
++
+> [!NOTE]
+> The chart will only show a maximum of 50 PartitionKeyRangeIds. If your resource has more than 50, you can use the [Azure Cosmos DB REST API](https://docs.microsoft.com/rest/api/cosmos-db/get-partition-key-ranges#example) to count the total number of partitions.
+
+Each PartitionKeyRangeId maps to one physical partition and is assigned to hold data for a range of possible hash values.
+
+Azure Cosmos DB distributes your data across logical and physical partitions based on your partition key to enable horizontal scaling. As data gets written, Azure Cosmos DB uses the hash of the partition key value to determine which logical and physical partition the data lives on.
+
+### Step 2: Calculate the default maximum throughput
+The highest RU/s you can scale to without triggering Azure Cosmos DB to split partitions is equal to `Current number of physical partitions * 10,000 RU/s`.
+
+#### Example
+Suppose we have an existing container with five physical partitions and 30,000 RU/s of manual provisioned throughput. We can increase the RU/s to 5 * 10,000 RU/s = 50,000 RU/s instantly. Similarly if we had a container with autoscale max RU/s of 30,000 RU/s (scales between 3000 - 30,000 RU/s), we could increase our max RU/s to 50,000 RU/s instantly (scales between 5000 - 50,000 RU/s).
+> [!TIP]
+> If you are scaling up RU/s to respond to request rate too large exceptions (429s), it's recommended to first increase RU/s to the highest RU/s that are supported by your current physical partition layout and assess if the new RU/s is sufficient before increasing further.
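
If you apply the change programmatically rather than in the portal, a minimal sketch with the Az.CosmosDB module can set the new RU/s (account, database, and container names are placeholders; the value below is the instant scale-up target from the example above):

```powershell
# Set manual throughput to the highest value the current partition layout can serve instantly
# (5 physical partitions * 10,000 RU/s = 50,000 RU/s)
Update-AzCosmosDBSqlContainerThroughput `
    -ResourceGroupName "myResourceGroup" `
    -AccountName "myCosmosAccount" `
    -DatabaseName "myDatabase" `
    -Name "myContainer" `
    -Throughput 50000
```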
+
+## How to ensure even data distribution during asynchronous scaling
+
+### Background
+
+When you increase the RU/s beyond the current number of physical partitions * 10,000 RU/s, Azure Cosmos DB splits existing partitions, until the new number of partitions = `ROUNDUP(requested RU/s / 10,000 RU/s)`. During a split, parent partitions are split into two child partitions.
+
+For example, suppose we have a container with three physical partitions and 30,000 RU/s of manual provisioned throughput. If we increased the throughput to 45,000 RU/s, Azure Cosmos DB will split two of the existing physical partitions so that in total, there are `ROUNDUP(45,000 RU/s / 10,000 RU/s)` = 5 physical partitions.
+
+> [!NOTE]
+> Applications can always ingest or query data during a split. The Azure Cosmos DB client SDKs and service automatically handle this scenario and ensure that requests are routed to the correct physical partition, so no additional user action is required.
+
+If you have a workload that is very evenly distributed with respect to storage and request volume (typically accomplished by partitioning on high-cardinality fields like /id), it's recommended that, when you scale up, you set RU/s such that all partitions are split evenly.
+
+To see why, let's take an example where we have an existing container with 2 physical partitions, 20,000 RU/s, and 80 GB of data.
+
+Thanks to choosing a good partition key with high cardinality, the data is roughly evenly distributed in both physical partitions. Each physical partition is assigned roughly 50% of the keyspace, which is defined as the total range of possible hash values.
+
+In addition, Azure Cosmos DB distributes RU/s evenly across all physical partitions. As a result, each physical partition has 10,000 RU/s and 50% (40 GB) of the total data.
+The following diagram shows our current state.
++
+Now, suppose we want to increase our RU/s from 20,000 RU/s to 30,000 RU/s.
+
+If we simply increased the RU/s to 30,000 RU/s, only one of the partitions will be split. After the split, we will have:
+- One partition that contains 50% of the data (this partition wasn't split)
+- Two partitions that contain 25% of the data each (these are the resulting child partitions from the parent that was split)
+
+Because Azure Cosmos DB distributes RU/s evenly across all physical partitions, each physical partition will still get 10,000 RU/s. However, we now have a skew in storage and request distribution.
+
+In the following diagram, we see that Partitions 3 and 4 (the child partitions of Partition 2) each have 10,000 RU/s to serve requests for 20 GB of data, while Partition 1 has 10,000 RU/s to serve requests for twice the amount of data (40 GB).
++
+To maintain an even storage distribution, we can first scale up our RU/s to ensure every partition splits. Then, we can lower our RU/s back down to the desired state.
+
+So, if we start with two physical partitions, to guarantee that the partitions are even post-split, we need to set RU/s such that we'll end up with four physical partitions. To achieve this, we'll first set RU/s = 4 * 10,000 RU/s per partition = 40,000 RU/s. Then, after the split completes, we can lower our RU/s to 30,000 RU/s.
+
+As a result, we see in the following diagram that each physical partition gets 30,000 RU/s / 4 = 7500 RU/s to serve requests for 20 GB of data. Overall, we maintain even storage and request distribution across partitions.
++
+### General formula
+
+#### Step 1: Increase your RU/s to guarantee that all partitions split evenly
+
+In general, if you have a starting number of physical partitions `P`, and want to set a desired RU/s `S`:
+
+Increase your RU/s to: `10,000 * P * 2 ^ ROUNDUP(LOG_2(S / (10,000 * P)))`. This gives the closest RU/s to the desired value that will ensure all partitions are split evenly.
+
+> [!NOTE]
+> When you increase the RU/s of a database or container, this can impact the minimum RU/s you can lower to in the future. Typically, the minimum RU/s is equal to MAX(400 RU/s, Current storage in GB * 10 RU/s, Highest RU/s ever provisioned / 100). For example, if the highest RU/s you've ever scaled to is 100,000 RU/s, the lowest RU/s you can set in the future is 1000 RU/s. Learn more about [minimum RU/s](concepts-limits.md#minimum-throughput-limits).
+
+#### Step 2: Lower your RU/s to the desired RU/s
+
+For example, suppose we have five physical partitions, 50,000 RU/s, and want to scale to 150,000 RU/s. We should first set: `10,000 * 5 * 2 ^ ROUNDUP(LOG_2(150,000 / (10,000 * 5)))` = 200,000 RU/s, and then lower to 150,000 RU/s.
+
+When we scaled up to 200,000 RU/s, the lowest manual RU/s we can now set in the future is 2000 RU/s. The [lowest autoscale max RU/s](autoscale-faq.yml#lowering-the-max-ru-s) we can set is 20,000 RU/s (scales between 2000 - 20,000 RU/s). Since our target RU/s is 150,000 RU/s, we are not affected by the minimum RU/s.
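
As a sanity check of the arithmetic, here is a small PowerShell sketch of the Step 1 formula (the inputs reproduce the example above):

```powershell
$P = 5        # current number of physical partitions
$S = 150000   # desired RU/s

# 10,000 * P * 2 ^ ROUNDUP(LOG_2(S / (10,000 * P)))
$scaleUpTarget = 10000 * $P * [math]::Pow(2, [math]::Ceiling([math]::Log($S / (10000 * $P), 2)))
$scaleUpTarget   # 200000 for this example; lower back to 150,000 RU/s once the splits complete
```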
+
+## How to optimize RU/s for large data ingestion
+
+When you plan to migrate or ingest a large amount of data into Azure Cosmos DB, it's recommended to set the RU/s of the container so that Azure Cosmos DB pre-provisions the physical partitions needed to store the total amount of data you plan to ingest upfront. Otherwise, during ingestion, Azure Cosmos DB may have to split partitions, which adds more time to the data ingestion.
+
+We can take advantage of the fact that during container creation, Azure Cosmos DB uses the heuristic formula of starting RU/s to calculate the number of physical partitions to start with.
+
+### Step 1: Review the choice of partition key
+Follow [best practices](partitioning-overview.md#choose-partitionkey) for choosing a partition key to ensure you will have even distribution of request volume and storage post-migration.
+
+### Step 2: Calculate the number of physical partitions you'll need
+`Number of physical partitions = Total data size in GB / Target data per physical partition in GB`
+
+Each physical partition can hold a maximum of 50 GB of storage (30 GB for Cassandra API). The value you should choose for the `Target data per physical partition in GB` depends on how fully packed you want the physical partitions to be and how much you expect storage to grow post-migration.
+
+For example, if you anticipate that storage will continue to grow, you may choose to set the value to 30 GB. Assuming you've chosen a good partition key that evenly distributes storage, each partition will be ~60% full (30 GB out of 50 GB). As future data is written, it can be stored on the existing set of physical partitions, without requiring the service to immediately add more physical partitions.
+
+In contrast, if you believe that storage will not grow significantly post-migration, you may choose to set the value higher, for example 45 GB. This means each partition will be ~90% full (45 GB out of 50 GB). This minimizes the number of physical partitions your data is spread across, which means each physical partition can get a larger fraction of the total provisioned RU/s.
+
+### Step 3: Calculate the number of RU/s to start with
+`Starting RU/s = Number of physical partitions * Initial throughput per physical partition`.
+- `Initial throughput per physical partition` = 10,000 RU/s when using autoscale or shared throughput databases
+- `Initial throughput per physical partition` = 6000 RU/s when using manual throughput
+
+### Example
+Let's say we have 1 TB (1000 GB) of data we plan to ingest and we want to use manual throughput. Each physical partition in Azure Cosmos DB has a capacity of 50 GB. Let's assume we aim to pack partitions to be 80% full (40 GB), leaving us room for future growth.
+
+This means that for 1 TB of data, we'll need 1000 GB / 40 GB = 25 physical partitions. To ensure we'll get 25 physical partitions, if we're using manual throughput, we first provision 25 * 6000 RU/s = 150,000 RU/s. Then, after the container is created, to help our ingestion go faster, we increase the RU/s to 250,000 RU/s before the ingestion begins (happens instantly because we already have 25 physical partitions). This allows each partition to get the maximum of 10,000 RU/s.
+
+If we're using autoscale throughput or a shared throughput database, to get 25 physical partitions, we'd first provision 25 * 10,000 RU/s = 250,000 RU/s. Because we are already at the highest RU/s that can be supported with 25 physical partitions, we would not further increase our provisioned RU/s before the ingestion.
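+
+As a rough, unofficial sketch of steps 2 and 3 and the example above (the 50-GB partition capacity, the 6,000 and 10,000 RU/s per-partition figures, and the example values come from this article; the helper name is only illustrative):
+
+```python
+import math
+
+def plan_ingestion(total_data_gb: float, target_gb_per_partition: float,
+                   manual_throughput: bool = True) -> dict:
+    """Estimate partitions and RU/s to pre-provision before a large ingestion."""
+    partitions = math.ceil(total_data_gb / target_gb_per_partition)
+    # 6,000 RU/s per partition for manual throughput; 10,000 RU/s for autoscale
+    # or shared throughput databases (initial throughput per physical partition).
+    starting_rus = partitions * (6_000 if manual_throughput else 10_000)
+    # Each physical partition serves at most 10,000 RU/s, so this is the highest
+    # RU/s you can raise to before ingestion without triggering more splits.
+    max_rus_without_splits = partitions * 10_000
+    return {"physical_partitions": partitions,
+            "starting_rus": starting_rus,
+            "max_rus_before_ingestion": max_rus_without_splits}
+
+# Example from this article: 1 TB of data, ~80% fill target (40 GB per partition).
+print(plan_ingestion(1000, 40))
+# {'physical_partitions': 25, 'starting_rus': 150000, 'max_rus_before_ingestion': 250000}
+```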
+
+With 250,000 RU/s and 1 TB of data, if we assume 1-KB documents and 10 RUs required per write, the ingestion can theoretically complete in: 1000 GB * (1,000,000 KB / 1 GB) * (1 document / 1 KB) * (10 RU / document) * (1 sec / 250,000 RU) * (1 hour / 3600 seconds) = 11.1 hours.
+
+This calculation is an estimate assuming the client performing the ingestion can fully saturate the throughput and distribute writes across all physical partitions. As a best practice, it's recommended to "shuffle" your data on the client side. This ensures that each second, the client is writing to many distinct logical (and thus physical) partitions.
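+
+The back-of-the-envelope time estimate above can be reproduced with a short sketch (an illustration only, assuming 1-KB documents and 10 RU per write as in the example):
+
+```python
+def estimated_ingestion_hours(data_gb: float, doc_size_kb: float,
+                              ru_per_write: float, provisioned_rus: float) -> float:
+    """Rough lower bound; assumes writes fully saturate the provisioned RU/s."""
+    documents = data_gb * 1_000_000 / doc_size_kb       # ~1,000,000 KB per GB
+    total_ru = documents * ru_per_write
+    return total_ru / provisioned_rus / 3600             # seconds -> hours
+
+print(round(estimated_ingestion_hours(1000, 1, 10, 250_000), 1))  # 11.1 hours
+```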
+
+Once the migration is over, we can lower the RU/s or enable autoscale as needed.
+
+## Next steps
+* [Monitor normalized RU/s consumption](monitor-normalized-request-units.md) of your database or container.
+* [Diagnose and troubleshoot](troubleshoot-request-rate-too-large.md) request rate too large (429) exceptions.
+* [Enable autoscale on a database or container](provision-throughput-autoscale.md).
cosmos-db Set Throughput https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/set-throughput.md
If you are **reducing the provisioned throughput**, you will be able to do it up
If you are **increasing the provisioned throughput**, most of the time, the operation is instantaneous. There are, however, cases where the operation can take longer due to the system tasks needed to provision the required resources. In this case, an attempt to modify the provisioned throughput while this operation is in progress will yield an HTTP 423 response with an error message explaining that another scaling operation is in progress.
+Learn more in the [Best practices for scaling provisioned throughput (RU/s)](scaling-provisioned-throughput-best-practices.md) article.
+ > [!NOTE] > If you are planning for a very large ingestion workload that will require a big increase in provisioned throughput, keep in mind that the scaling operation has no SLA and, as mentioned in the previous paragraph, it can take a long time when the increase is large. You might want to plan ahead and start the scaling before the workload starts and use the below methods to check progress.
cosmos-db Sql Query Group By https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-group-by.md
Previously updated : 07/30/2021 Last updated : 09/01/2021
The GROUP BY clause divides the query's results according to the values of one or more specified properties.
+> [!NOTE]
+> The GROUP BY clause is not supported in the Azure Cosmos DB Python SDK.
+ ## Syntax ```sql
The GROUP BY clause divides the query's results according to the values of one o
When a query uses a GROUP BY clause, the SELECT clause can only contain the subset of properties and system functions included in the GROUP BY clause. One exception is [aggregate functions](sql-query-aggregate-functions.md), which can appear in the SELECT clause without being included in the GROUP BY clause. You can also always include literal values in the SELECT clause. The GROUP BY clause must be after the SELECT, FROM, and WHERE clause and before the OFFSET LIMIT clause. You currently cannot use GROUP BY with an ORDER BY clause but this is planned.-
+
The GROUP BY clause does not allow any of the following: - Aliasing properties or aliasing system functions (aliasing is still allowed within the SELECT clause)
cosmos-db Troubleshoot Request Rate Too Large https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/troubleshoot-request-rate-too-large.md
description: Learn how to diagnose and fix request rate too large exceptions.
Previously updated : 07/13/2020 Last updated : 08/25/2021
In general, for a production workload, if you see between 1-5% of requests with
A hot partition arises when one or a few logical partition keys consume a disproportionate amount of the total RU/s due to higher request volume. This can be caused by a partition key design that doesn't evenly distribute requests. It results in many requests being directed to a small subset of logical (which implies physical) partitions that become "hot." Because all data for a logical partition resides on one physical partition and total RU/s is evenly distributed among the physical partitions, a hot partition can lead to 429s and inefficient use of throughput. Here are some examples of partitioning strategies that lead to hot partitions:-- You have a container storing IoT device data for a write-heavy workload that is partitioned by date. All data for a single date will reside on the same logical and physical partition. Because all the data written each day has the same date, this would result in a hot partition every day.
- - Instead, for this scenario, a partition key like id (either a GUID or device id), or a [synthetic partition key](./synthetic-partition-keys.md) combining id and date would yield a higher cardinality of values and better distribution of request volume.
-- You have a multi-tenant scenario with a container partitioned by tenantId. If one tenant is significantly more active than the others, it results in a hot partition. For example, if the largest tenant has 100,000 users, but most tenants have fewer than 10 users, you will have a hot partition when partitioned by the tenantID.
- - For this previous scenario, consider having a dedicated container for the largest tenant, partitioned by a more granular property such as UserId.
+- You have a container storing IoT device data for a write-heavy workload that is partitioned by `date`. All data for a single date will reside on the same logical and physical partition. Because all the data written each day has the same date, this would result in a hot partition every day.
+ - Instead, for this scenario, a partition key like `id` (either a GUID or device id), or a [synthetic partition key](./synthetic-partition-keys.md) combining `id` and `date` would yield a higher cardinality of values and better distribution of request volume.
+- You have a multi-tenant scenario with a container partitioned by `tenantId`. If one tenant is significantly more active than the others, it results in a hot partition. For example, if the largest tenant has 100,000 users, but most tenants have fewer than 10 users, you will have a hot partition when partitioned by `tenantID`.
+ - For this previous scenario, consider having a dedicated container for the largest tenant, partitioned by a more granular property such as `UserId`.
#### How to identify the hot partition
This sample output shows that in a particular minute, the logical partition key
Review the guidance on [how to choose a good partition key](../partitioning-overview.md#choose-partitionkey). If there is a high percentage of rate-limited requests and no hot partition:-
+- You can [increase the RU/s](../set-throughput.md) on the database or container using the client SDKs, Azure portal, PowerShell, CLI or ARM template. Follow [best practices for scaling provisioned throughput (RU/s)](../scaling-provisioned-throughput-best-practices.md) to determine the right RU/s to set.
If there is a high percentage of rate-limited requests and there is an underlying hot partition: - Long-term, for best cost and performance, consider **changing the partition key**. The partition key cannot be updated in place, so this requires migrating the data to a new container with a different partition key. Azure Cosmos DB supports a [live data migration tool](https://devblogs.microsoft.com/cosmosdb/how-to-change-your-partition-key/) for this purpose.
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table/introduction.md
Applications written for Azure Table storage can migrate to Azure Cosmos DB by u
> The [serverless capacity mode](../serverless.md) is now available on Azure Cosmos DB's Table API. > [!IMPORTANT]
-> The .NET Framework SDK [Microsoft.Azure.CosmosDB.Table](https://www.nuget.org/packages/Microsoft.Azure.CosmosDB.Table) is in maintenance mode and it will be deprecated soon. Please upgrade to the new .NET Standard library [Microsoft.Azure.Cosmos.Table](https://www.nuget.org/packages/Microsoft.Azure.Cosmos.Table) to continue to get the latest features supported by the Table API.
+> The .NET Cosmos DB Table Library [Microsoft.Azure.Cosmos.Table](https://www.nuget.org/packages/Microsoft.Azure.Cosmos.Table) is in maintenance mode and will be deprecated soon. Please upgrade to the new .NET Azure Data Tables Library [Azure.Data.Tables](https://www.nuget.org/packages/Azure.Data.Tables/) to continue to get the latest features supported by the Table API.
## Table offerings If you currently use Azure Table Storage, you gain the following benefits by moving to the Azure Cosmos DB Table API:
cost-management-billing Programmatically Create Subscription Enterprise Agreement https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/programmatically-create-subscription-enterprise-agreement.md
Previously updated : 06/22/2021 Last updated : 09/01/2021
The values for a billing scope and `id` are the same thing. The `id` for your en
The following example creates a subscription named *Dev Team Subscription* in the enrollment account selected in the previous step.
+Using one of the following methods, you'll create a subscription alias name. We recommend that when you create the alias name, you follow these guidelines (a quick validation sketch follows the list):
+
+- Use alphanumeric characters and hyphens
+- Start with a letter and end with an alphanumeric character
+- Don't use periods
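+
+The following snippet is an unofficial way to check a proposed alias name against these recommendations (the regular expression is our own illustration, not a service-enforced rule):
+
+```python
+import re
+
+# Letter first, letters/digits/hyphens in the middle, letter or digit last, no periods.
+ALIAS_PATTERN = re.compile(r"^[A-Za-z]([A-Za-z0-9-]*[A-Za-z0-9])?$")
+
+def is_recommended_alias(name: str) -> bool:
+    return bool(ALIAS_PATTERN.match(name))
+
+print(is_recommended_alias("dev-team-subscription"))  # True
+print(is_recommended_alias("1.dev-team"))             # False: starts with a digit, contains a period
+```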
++ ### [REST](#tab/rest) Call the PUT API to create a subscription creation request/alias.
data-factory Connect Data Factory To Azure Purview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connect-data-factory-to-azure-purview.md
description: Learn about how to connect a Data Factory to Azure Purview
- Last updated 08/24/2021
Data factory's managed identity is used to authenticate lineage push operations
- For Purview account created **on or after August 18, 2021**, grant the data factory's managed identity **Data Curator** role on your Purview **root collection**. Learn more about [Access control in Azure Purview](../purview/catalog-permissions.md) and [Add roles and restrict access through collections](../purview/how-to-create-and-manage-collections.md#add-roles-and-restrict-access-through-collections).
- When connecting data factory to Purview on authoring UI, ADF tries to add such role assignment automatically. If you have **Collection admins** role on the Purview root collection, this operation is done successfully.
+ When connecting data factory to Purview on the authoring UI, ADF tries to add this role assignment automatically. If you have the **Collection admins** role on the Purview root collection and have access to the Purview account from your network, this operation is done successfully.
- For Purview account created **before August 18, 2021**, grant the data factory's managed identity Azure built-in [**Purview Data Curator**](../role-based-access-control/built-in-roles.md#purview-data-curator) role on your Purview account. Learn more about [Access control in Azure Purview - legacy permissions](../purview/catalog-permissions.md#legacy-permission-guide).
data-factory Data Factory Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-factory-private-link.md
For the illustrated example above, the DNS resource records for the Data Factory
| Name | Type | Value | | - | -- | |
-| DataFactoryA.{region}.datafactory.azure.net | CNAME | DataFactoryA.{region}.privatelink.datafactory.azure.net |
-| DataFactoryA.{region}.privatelink.datafactory.azure.net | CNAME | < data factory service public endpoint > |
+| DataFactoryA.{region}.datafactory.azure.net | CNAME | DataFactoryA.{region}.datafactory.azure.net |
+| DataFactoryA.{region}.datafactory.azure.net | CNAME | < data factory service public endpoint > |
| < data factory service public endpoint > | A | < data factory service public IP address > | The DNS resource records for DataFactoryA, when resolved in the VNet hosting the private endpoint, will be:
The DNS resource records for DataFactoryA, when resolved in the VNet hosting the
| DataFactoryA.{region}.datafactory.azure.net | CNAME | DataFactoryA.{region}.privatelink.datafactory.azure.net | | DataFactoryA.{region}.privatelink.datafactory.azure.net | A | < private endpoint IP address > |
-If you are using a custom DNS server on your network, clients must be able to resolve the FQDN for the Data Factory endpoint to the private endpoint IP address. You should configure your DNS server to delegate your private link subdomain to the private DNS zone for the VNet, or configure the A records for ' DataFactoryA.{region}.privatelink.datafactory.azure.net' with the private endpoint IP address.
+If you are using a custom DNS server on your network, clients must be able to resolve the FQDN for the Data Factory endpoint to the private endpoint IP address. You should configure your DNS server to delegate your private link subdomain to the private DNS zone for the VNet, or configure the A records for ' DataFactoryA.{region}.datafactory.azure.net' with the private endpoint IP address.
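+
+To check which IP address the Data Factory FQDN resolves to from a given machine (for example, to confirm that clients in the VNet receive the private endpoint IP), you can use a simple lookup such as the following; the factory name and region are placeholders:
+
+```python
+import socket
+
+# Placeholder FQDN; substitute your data factory's name and region.
+fqdn = "DataFactoryA.westeurope.datafactory.azure.net"
+_, _, addresses = socket.gethostbyname_ex(fqdn)
+print(addresses)  # Inside the VNet this should be the private endpoint IP; elsewhere, a public IP.
+```
+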
For more information on configuring your own DNS server to support private endpoints, refer to the following articles: - [Name resolution for resources in Azure virtual networks](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server)
data-factory How To Access Secured Purview Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-access-secured-purview-account.md
+
+ Title: Access a secured Azure Purview account
+description: Learn about how to access a firewall protected Azure Purview account through private endpoints from Azure Data Factory
+++++ Last updated : 09/02/2021++
+# Access a secured Azure Purview account from Azure Data Factory
+
+This article describes how to access a secured Azure Purview account from Azure Data Factory for different integration scenarios.
+
+## Azure Purview private endpoint deployment scenarios
+
+You can use [Azure private endpoints](../private-link/private-endpoint-overview.md) for your Azure Purview accounts to allow secure access from a virtual network (VNet) to the catalog over a Private Link. Purview provides different types of private endpoints for various access needs: *account* private endpoint, *portal* private endpoint, and *ingestion* private endpoints. Learn more in the [Purview private endpoints conceptual overview](../purview/catalog-private-link.md#conceptual-overview).
+
+If your Purview account is protected by a firewall and denies public access, make sure you follow the checklist below to set up the private endpoints so that Data Factory can successfully connect to Purview.
+
+| Scenario | Required Purview private endpoints |
+| | |
+| [Run pipeline and report lineage to Purview](tutorial-push-lineage-to-purview.md) | For Data Factory pipeline to push lineage to Purview, Purview ***account*** and ***ingestion*** private endpoints are required. <br>- When using **Azure Integration Runtime**, follow the steps in [Managed private endpoints for Purview](#managed-private-endpoints-for-purview) section to create managed private endpoints in the Data Factory managed virtual network.<br>- When using **Self-hosted Integration Runtime**, follow the steps in [this section](../purview/catalog-private-link-end-to-end.md#option-2enable-account-portal-and-ingestion-private-endpoint-on-existing-azure-purview-accounts) to create the *account* and *ingestion* private endpoints in your integration runtime's virtual network. |
+| [Discover and explore data using Purview on ADF UI](how-to-discover-explore-purview-data.md) | To use the search bar at the top center of Data Factory authoring UI to search for Purview data and perform actions, you need to create Purview ***account*** and ***portal*** private endpoints in the virtual network that you launch the Data Factory Studio. Follow the steps in [Enable *account* and *portal* private endpoint](../purview/catalog-private-link-account-portal.md#option-2enable-account-and-portal-private-endpoint-on-existing-azure-purview-accounts). |
+
+## Managed private endpoints for Purview
+
+[Managed private endpoints](managed-virtual-network-private-endpoint.md#managed-private-endpoints) are private endpoints created in the Azure Data Factory Managed Virtual Network that establish a private link to Azure resources. When you run a pipeline and report lineage to a firewall-protected Azure Purview account, create an Azure Integration Runtime with the "Virtual network configuration" option enabled, and then create the Purview ***account*** and ***ingestion*** managed private endpoints as follows.
+
+### Create managed private endpoints
+
+To create managed private endpoints for Purview on Data Factory authoring UI:
+
+1. Go to **Manage** -> **Azure Purview**, and click **Edit** to edit your existing connected Purview account or click **Connect to a Purview account** to connect to a new Purview account.
+
+2. Select **Yes** for **Create managed private endpoints**. You need to have at least one Azure Integration Runtime with "Virtual network configuration" option enabled in the data factory to see this option.
+
+3. Click **+ Create all** button to batch create the needed Purview private endpoints, including the ***account*** private endpoint and the ***ingestion*** private endpoints for the Purview managed resources - Blob storage, Queue storage, and Event Hubs namespace. You need to have at least **Reader** role on your Purview account for Data Factory to retrieve the Purview managed resources' information.
+
+ :::image type="content" source="./media/how-to-access-secured-purview-account/purview-create-all-managed-private-endpoints.png" alt-text="Create managed private endpoint for your connected Purview account.":::
+
+4. On the next page, specify a name for the private endpoint. This name, with a suffix appended, is also used to generate the names of the ingestion private endpoints.
+
+ :::image type="content" source="./media/how-to-access-secured-purview-account/name-purview-private-endpoints.png" alt-text="Name the managed private endpoints for your connected Purview account.":::
+
+5. Click **Create** to create the private endpoints. After creation, 4 private endpoint requests will be generated that must [get approved by an owner of Purview](#approve-private-endpoint-connections).
+
+Batch creation of managed private endpoints is available on the Data Factory UI only. If you want to create the managed private endpoints programmatically, you need to create each private endpoint individually. You can find the Purview managed resources' information in the Azure portal -> your Purview account -> Managed resources.
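+
+If you script the private endpoints individually, the request has roughly the following shape. This is an unofficial sketch against the ARM REST API; verify the request path, API version, and property names against the current Data Factory REST reference, and replace all placeholder values:
+
+```python
+import requests
+
+subscription, resource_group = "<subscription-id>", "<resource-group>"
+factory, managed_vnet, pe_name = "<factory-name>", "default", "purview-account-pe"
+purview_resource_id = ("/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
+                       "/providers/Microsoft.Purview/accounts/<purview-account>")
+
+# Assumed ARM request path and API version; confirm against the Data Factory REST docs.
+url = (f"https://management.azure.com/subscriptions/{subscription}"
+       f"/resourceGroups/{resource_group}/providers/Microsoft.DataFactory"
+       f"/factories/{factory}/managedVirtualNetworks/{managed_vnet}"
+       f"/managedPrivateEndpoints/{pe_name}?api-version=2018-06-01")
+
+body = {"properties": {"privateLinkResourceId": purview_resource_id,
+                       "groupId": "account"}}  # repeat with the ingestion group IDs
+
+response = requests.put(url, json=body,
+                        headers={"Authorization": "Bearer <arm-access-token>"})
+print(response.status_code, response.json())
+```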
+
+### Approve private endpoint connections
+
+After you create the managed private endpoints for Purview, they're initially in the "Pending" state. The Purview owner needs to approve the private endpoint connections for each resource.
+
+If you have permission to approve the Purview private endpoint connection, from Data Factory UI:
+
+1. Go to **Manage** -> **Azure Purview** -> **Edit**
+2. In the private endpoint list, click the **Edit** (pencil) button next to each private endpoint name
+3. Click **Manage approvals in Azure portal** which will bring you to the resource.
+4. On the given resource, go to **Networking** -> **Private endpoint connection** to approve it. The private endpoint is named as `data_factory_name.your_defined_private_endpoint_name` with description as "Requested by data_factory_name".
+5. Repeat this operation for all private endpoints.
+
+If you don't have permission to approve the Purview private endpoint connections, ask the Purview account owner to approve them as follows.
+
+- For *account* private endpoint, go to Azure portal -> your Purview account -> Networking -> Private endpoint connection to approve.
+- For *ingestion* private endpoints, go to Azure portal -> your Purview account -> Managed resources, click into the Storage account and Event Hubs namespace respectively, and approve the private endpoint connection in Networking -> Private endpoint connection page.
+
+### Monitor managed private endpoints
+
+You can monitor the created managed private endpoints for Purview at two places:
+
+- Go to **Manage** -> **Azure Purview** -> **Edit** to open your existing connected Purview account. To see all the relevant private endpoints, you need to have at least **Reader** role on your Purview account for Data Factory to retrieve the Purview managed resources' information. Otherwise, you only see the *account* private endpoint, with a warning.
+- Go to **Manage** -> **Managed private endpoints** where you see all the managed private endpoints created under the data factory. If you have at least **Reader** role on your Purview account, you see Purview relevant private endpoints being grouped together. Otherwise, they show up separately in the list.
+
+## Next steps
+
+- [Connect Data Factory to Azure Purview](connect-data-factory-to-azure-purview.md)
+- [Tutorial: Push Data Factory lineage data to Azure Purview](tutorial-push-lineage-to-purview.md)
+- [Discover and explore data in ADF using Purview](how-to-discover-explore-purview-data.md)
data-factory How To Discover Explore Purview Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-discover-explore-purview-data.md
Title: Discover and explore data in ADF using Purview description: Learn how to discover, explore data in Azure Data Factory using Purview -
data-factory Tutorial Push Lineage To Purview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-push-lineage-to-purview.md
description: Learn about how to push Data Factory lineage data to Azure Purview
--++ Last updated 08/10/2021
databox Data Box Deploy Copy Data Via Copy Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-deploy-copy-data-via-copy-service.md
Previously updated : 06/18/2019 Last updated : 08/26/2021 #Customer intent: As an IT admin, I need to be able to copy data to Data Box to upload on-premises data from my server onto Azure.
To copy data by using the data copy service, you need to create a job:
|**Destination type** |Select the target storage type from the list: **Block Blob**, **Page Blob**, or **Azure Files**. | |**Destination container/share** |Enter the name of the container or share that you want to upload data to in your destination storage account. The name can be a share name or a container name. For example, use `myshare` or `mycontainer`. You can also enter the name in the format `sharename\directory_name` or `containername\virtual_directory_name`. | |**Copy files matching pattern** | You can enter the file-name matching pattern in the following two ways:<ul><li>**Use wildcard expressions:** Only `*` and `?` are supported in wildcard expressions. For example, the expression `*.vhd` matches all the files that have the `.vhd` extension. Similarly, `*.dl?` matches all the files with either the extension `.dl` or that start with `.dl`, such as `.dll`. Likewise, `*foo` matches all the files whose names end with `foo`.<br>You can directly enter the wildcard expression in the field. By default, the value you enter in the field is treated as a wildcard expression.</li><li>**Use regular expressions:** POSIX-based regular expressions are supported. For example, the regular expression `.*\.vhd` will match all the files that have the `.vhd` extension. For regular expressions, provide the `<pattern>` directly as `regex(<pattern>)`. For more information about regular expressions, go to [Regular expression language - a quick reference](/dotnet/standard/base-types/regular-expression-language-quick-reference).</li><ul>|
- |**File optimization** |When this feature is enabled, files smaller than 1 MB are packed during ingestion. This packing speeds up the data copy for small files. It also saves a significant amount of time when the number of files far exceeds the number of directories. |
+ |**File optimization** |When this feature is enabled, files smaller than 1 MB are packed during ingestion. This packing speeds up the data copy for small files. It also saves a significant amount of time when the number of files far exceeds the number of directories.</br>If you use file optimization:<ul><li>After you run prepare to ship, you can [download a BOM file](data-box-logs.md#inspect-bom-during-prepare-to-ship), which lists the original file names, to help you ensure that all the right files are copied.</li><li>Don't delete the packed files, which are identified by a GUID as the file name. If you delete a packed file, the original file isn't uploaded during future data copies.</li><li>Don't copy the same files that you copy with the Copy Service via other protocols such as SMB, NFS, or REST API. Using different protocols can result in conflicts and failure during data uploads.</li></ul> |
4. Select **Start**. The inputs are validated, and if the validation succeeds, then the job starts. It might take a few minutes for the job to start.
ddos-protection Manage Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/manage-permissions.md
To work with DDoS protection plans, your account must be assigned to the [networ
To enable DDoS protection for a virtual network, your account must also be assigned the appropriate [actions for virtual networks](../virtual-network/manage-virtual-network.md#permissions).
+> [!IMPORTANT]
+> Once a DDoS Protection Plan has been enabled on a Virtual Network, subsequent operations on that Virtual Network still require the `Microsoft.Network/ddosProtectionPlans/join/action` action permission.
+ ## Azure Policy Creation of more than one plan is not required for most organizations. A plan cannot be moved between subscriptions. If you want to change the subscription a plan is in, you have to delete the existing plan and create a new one.
defender-for-iot Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/getting-started.md
The following table describes user access permissions to Azure Defender for IoT
**Clarify your network setup needs**
-Research your network architecture, monitored bandwidth, and other network details. For more information, see [About Azure Defender for IoT network setup](how-to-set-up-your-network.md).
+Research your:
+
+- Network architecture
+- Monitored bandwidth
+- Requirements for creating certificates
+- Other network details.
+
+For more information, see [About Azure Defender for IoT network setup](how-to-set-up-your-network.md).
**Clarify which sensors and management console appliances are required to handle the network load**
defender-for-iot How To Create And Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/how-to-create-and-manage-users.md
To disable the feature, change `infinity_session_expiration = true` to `infinity
To update sign-out counting periods, adjust the `= <number>` value to the required time.
-## Track user activity
+## Track user activity
You can track user activity in the event timeline on each sensor. The timeline displays the event or affected device, and the time and date that the user carried out the activity. **To view user activity**: 1. Sign in to the sensor.
-1. In the event timeline, enable the **User Operations** option.
+
+1. In the event timeline, enable the **User Operations** option.
:::image type="content" source="media/how-to-create-azure-for-defender-users-and-roles/User-login-attempts.png" alt-text="View a user's activity.":::
-## Integrate with Active Directory servers
+## Integrate with Active Directory servers
Configure the sensor or on-premises management console to work with Active Directory. This allows Active Directory users to access the Defender for IoT consoles by using their Active Directory credentials.
+> [!Note]
+> LDAP v3 is supported.
+ Two types of LDAP-based authentication are supported: - **Full authentication**: User details are retrieved from the LDAP server. Examples are the first name, last name, email, and user permissions.
You can associate Active Directory groups defined here with specific permission
:::image type="content" source="media/how-to-setup-active-directory/ad-system-settings-v2.png" alt-text="View your Active Directory system settings.":::
-2. On the **System Settings** pane, select **Active Directory**.
+1. On the **System Settings** pane, select **Active Directory**.
:::image type="content" source="media/how-to-setup-active-directory/ad-configurations-v2.png" alt-text="Edit your Active Directory configurations.":::
-3. In the **Edit Active Directory Configuration** dialog box, select **Active Directory Integration Enabled** > **Save**. The **Edit Active Directory Configuration** dialog box expands, and you can now enter the parameters to configure Active Directory.
+1. In the **Edit Active Directory Configuration** dialog box, select **Active Directory Integration Enabled** > **Save**. The **Edit Active Directory Configuration** dialog box expands, and you can now enter the parameters to configure Active Directory.
:::image type="content" source="media/how-to-setup-active-directory/ad-integration-enabled-v2.png" alt-text="Enter the parameters to configure Active Directory.":::
- > [!NOTE]
- > - You must define the LDAP parameters here exactly as they appear in Active Directory.
- > - For all the Active Directory parameters, use lowercase only. Use lowercase even when the configurations in Active Directory use uppercase.
- > - You can't configure both LDAP and LDAPS for the same domain. You can, however, use both for different domains at the same time.
+> [!NOTE]
+> - You must define the LDAP parameters here exactly as they appear in Active Directory.
+> - For all the Active Directory parameters, use lowercase only. Use lowercase even when the configurations in Active Directory use uppercase.
+> - You can't configure both LDAP and LDAPS for the same domain. You can, however, use both for different domains at the same time.
-4. Set the Active Directory server parameters, as follows:
+1. Set the Active Directory server parameters, as follows:
| Server parameter | Description | |--|--|
If you are creating Active Directory groups for on-premises management console u
1. Select **Save**.
-2. To add a trusted server, select **Add Server** and configure another server.
+1. To add a trusted server, select **Add Server** and configure another server.
## Change a user's password
defender-for-iot How To Deploy Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/how-to-deploy-certificates.md
+
+ Title: Deploy certificates
+description: Learn how to set up and deploy certificates for Defender for IoT.
Last updated : 08/29/2021+++
+# About certificates
+
+This article provides information needed when creating and deploying certificates for Azure Defender for IoT. A security, PKI or other qualified certificate lead should handle certificate creation and deployment.
+
+Defender for IoT uses SSL/TLS certificates to secure communication between the following system components:
+
+- Between users and the web console of the appliance.
+- Between the sensors and an on-premises management console.
+- Between a management console and a High Availability management console.
+- To the REST API on the sensor and on-premises management console.
+
+Defender for IoT Admin users can upload a certificate to sensor consoles and their on-premises management console from the SSL/TLS Certificates dialog box.
++
+## About certificate generation methods
+
+All certificate generation methods are supported using:
+
+- Private and Enterprise Key Infrastructures (Private PKI)
+- Public Key Infrastructures (Public PKI)
+- Certificates locally generated on the appliance (locally self-signed).
+
+> [!Important]
+> It is not recommended to use locally self-signed certificates. This type of connection is not secure and should be used for test environments only. Since the owner of the certificate can't be validated and the security of your system can't be maintained, self-signed certificates should never be used for production networks.
+
+## About certificate validation
+
+In addition to securing communication between system components, users can also carry out certificate validation.
+
+Validation is evaluated against:
+
+- A Certificate Revocation List (CRL)
+- The certificate expiration date
+- The certificate trust chain
+
+Validation is carried out twice:
+
+1. When uploading the certificate to sensors and on-premises management consoles. If validation fails, the certificate cannot be uploaded.
+1. When initiating encrypted communication between:
+
+ - Defender for IoT system components, for example, a sensor and on-premises management console.
+
+ - Defender for IoT and certain third-party servers defined in Forwarding rules. See [About forwarded alert information](how-to-forward-alert-information-to-partners.md#about-forwarded-alert-information) for more information.
+
+If validation fails, communication between the relevant components is halted and a validation error is presented in the console.
+
+## About certificate upload to Defender for IoT
+
+Following sensor and on-premises management console installation, a local self-signed certificate is generated and used to access the sensor and on-premises management console web application.
+
+When signing into the sensor and on-premises management console for the first time, Admin users are prompted to upload an SSL/TLS certificate. Using SSL/TLS certificates is highly recommended.
+
+If the certificate was not created properly by the certificate lead, or if there are connection issues with it, the certificate can't be uploaded and users will be forced to work with a locally signed certificate.
+
+The option to validate the uploaded certificate and third-party certificates is automatically enabled, but it can be disabled. When disabled, encrypted communication between components continues, even if a certificate is invalid.
+
+## Certificate deployment tasks
+
+This section describes the steps you need to take to ensure that certificate deployment runs smoothly.
+
+**To deploy certificates, verify that:**
+
+- A security, PKI or certificate specialist is creating or overseeing certificate creation.
+- You create a unique certificate for each sensor, management console and HA machine.
+- You meet certificate creation requirements. See [Certificate creation requirements](#certificate-creation-requirements).
+- Admin users logging in to each Defender for IoT sensor, on-premises management console, and HA machine have access to the certificate.
+
+## Certificate creation requirements
+
+This section covers certificate creation requirements, including:
+
+- [Port access requirements for certificate validation](#port-access-requirements-for-certificate-validation)
+
+- [File type requirements](#file-type-requirements)
+
+- [Key file requirements](#key-file-requirements)
+
+- [Certificate chain file requirements (if .pem is used)](#certificate-chain-file-requirements-if-pem-is-used)
+
+### Port access requirements for certificate validation
+
+If you are working with certificate validation, verify access to port 80 is available.
+
+Certificate validation is evaluated against a Certificate Revocation List (CRL) and the certificate expiration date. This means the appliance should be able to establish a connection to the CRL server defined by the certificate. By default, the certificate will reference the CRL URL on HTTP port 80.
+
+Some organizational security policies may block access to this port. If your organization does not have access to port 80, you can:
+
+1. Define another URL and a specific port in the certificate.
+
+ - The URL should be defined as `http://` rather than `https://`.
+
+ - Verify that the destination CRL server can listen on the port you defined.
+
+1. Use a proxy server that will access the CRL on port 80.
+
+### File type requirements
+
+Defender for IoT requires that each CA-signed certificate contains a .key file and a .crt file. These files are uploaded to the sensor and on-premises management console after login. Some organizations may require a .pem file. Defender for IoT does not require this file type.
+
+**.crt – certificate container file**
+A .pem or .der formatted file with a different extension. The file is recognized by Windows Explorer as a certificate. The .pem file is not recognized by Windows Explorer.
+
+**.key – Private key file**
+A key file is in the same format as a PEM file, but it has a different extension.
+
+**.pem – certificate container file (optional)**
+PEM is a text file that contains the Base64 encoding of the certificate text, with a plain-text header and footer that mark the beginning and end of the certificate.
+
+You may need to convert existing file types to supported types. See [Convert existing files to supported files](#convert-existing-files-to-supported-files) for details.
+
+### Certificate file parameter requirements
+
+Verify that you have met the following parameter requirements before creating a certificate:
+
+- [CRT file requirements](#crt-file-requirements)
+- [Key file requirements](#key-file-requirements)
+- [Certificate chain file requirements (if .pem is used)](#certificate-chain-file-requirements-if-pem-is-used)
+
+### CRT file requirements
+
+This section covers .crt field requirements.
+
+- Signature Algorithm = SHA256RSA
+- Signature Hash Algorithm = SHA256
+- Valid from = Valid past date
+- Valid To = Valid future date
+- Public Key = RSA 2048 bits (Minimum) or 4096 bits
+- CRL Distribution Point = URL to .crl file
+- Subject CN (Common Name) = domain name of the appliance; for example, Sensor.contoso.com, or *.contoso.com.
+- Subject (C)ountry = defined, for example, US
+- Subject (OU) Org Unit = defined, for example, Contoso Labs
+- Subject (O)rganization = defined, for example, Contoso Inc.
+
+Certificates with other parameters might work, but Microsoft doesn't support them.
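+
+As an informal spot check of several of these fields before upload, you can inspect the certificate with a short script, for example using the `cryptography` Python package (this sketch assumes a PEM-encoded .crt file and is not an official validation tool):
+
+```python
+from cryptography import x509
+from cryptography.x509.oid import NameOID
+
+with open("certificate.crt", "rb") as f:
+    cert = x509.load_pem_x509_certificate(f.read())
+
+print("Signature hash :", cert.signature_hash_algorithm.name)    # expect sha256
+print("Public key size:", cert.public_key().key_size)            # expect 2048 or 4096
+print("Valid from/to  :", cert.not_valid_before, cert.not_valid_after)
+print("Subject CN     :", cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)[0].value)
+```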
+
+### Key file requirements
+
+Use either RSA 2048 bits or 4096 bits.
+
+When using a key length of 4096 bits, the SSL handshake at the start of each connection will be slower. In addition, there is an increase in CPU usage during handshakes.
+
+### Certificate chain file requirements (if .pem is used)
+
+A .pem file containing the certificates of all the certificate authorities in the chain of trust that led to your certificate.
+
+Bag attributes are supported in the certificate chain file.
+
+## Create certificates
+
+Use a certificate management platform, for example an automated PKI management platform, to create a certificate. Verify that the certificates meet the certificate file requirements. See [Test certificates you create](#test-certificates-you-create) for information on testing the files you create.
+
+If you are not carrying out certificate validation, remove the CRL URL reference in the certificate. See [CRT file requirements](#crt-file-requirements) for information about this parameter.
+
+Consult a security, PKI, or other qualified certificate lead if you do not have an application that can automatically create certificates.
+
+You can [Test certificates you create](#test-certificates-you-create).
+
+You can also convert existing certificate files if you do not want to create new ones. See [Convert existing files to supported files](#convert-existing-files-to-supported-files) for details.
+
+### Sample certificate
+
+You can compare your certificate to the sample certificate below. Verify that the same fields exist and that the order of the fields is the same.
++
+## Test certificates you create
+
+You can test certificates before deploying them to your sensors and on-premises management consoles. If you want to check the information within the certificate .csr file or private key file, use these commands:
+
+| **Test** | **CLI command** |
+|--|--|
+| Check a Certificate Signing Request (CSR) | openssl req -text -noout -verify -in CSR.csr |
+| Check a private key | openssl rsa -in privateKey.key -check |
+| Check a certificate | openssl x509 -in certificate.crt -text -noout |
+
+If these tests fail, review [Certificate file parameter requirements](#certificate-file-parameter-requirements) to verify file parameters are accurate, or consult your certificate lead.
+
+## Convert existing files to supported files
+
+This section describes how to convert existing certificates files to supported formats.
+
+|**Description** | **CLI command** |
+|--|--|
+| Convert .crt file to .pem file | openssl x509 -inform DER -in <full path>/<crt-file-name>.crt -out <full path>/<pem-file-name>.pem |
+| Convert .pem file to .crt file | openssl x509 -inform PEM -in <full path>/<pem-file-name>.pem -out <fullpath>/<crt-file-name>.crt |
+| Convert a PKCS#12 file (.pfx .p12) containing a private key and certificates to .pem | openssl pkcs12 -in keyStore.pfx -out keyStore.pem -nodes. You can add -nocerts to only output the private key, or add -nokeys to only output the certificates. |
+
+## Troubleshooting
+
+This section covers various issues that may occur during certificate upload and validation, and steps to take to resolve the issues.
+
+### Troubleshoot CA certificate upload
+
+Admin users attempting to log in to the sensor or on-premises management console for the first time will not be able to upload the CA-signed certificate if the certificate is not created properly or is invalid. If certificate upload fails, one or more of the following error messages will display:
+
+| **Certificate validation error** | **Recommendation** |
+|--|--|
+| Passphrase does not match to the key | Validate that you typed the correct passphrase. If the problem continues, try recreating the certificate using the correct passphrase. |
+| Cannot validate chain of trust. The provided Certificate and Root CA do not match. | Make sure the .pem file correlates to the .crt file. If the problem continues, try recreating the certificate using the correct chain of trust (defined by the .pem file). |
+| This SSL certificate has expired and is not considered valid. | Create a new certificate with valid dates.|
+|This certificate has been revoked by the CRL and cannot be trusted for a secure connection | Create a new unrevoked certificate. |
+|The CRL (Certificate Revocation List) location is not reachable. Verify the URL can be accessed from this appliance | Make sure that your network configuration allows the appliance to reach the CRL server defined in the certificate. You can use a proxy server if there are limitations in establishing a direct connection. |
+|Certificate validation failed | This indicates a general error in the appliance. Contact [Microsoft Support](https://support.microsoft.com/supportforbusiness/productselection?sapId=82c8f35-1b8e-f274-ec11-c6efdd6dd099).|
+
+### Troubleshoot file conversions
+
+Your file conversion may not create a valid certificate. For example, the file structure may be inaccurate.
+
+If the conversion fails:
+
+- Use the conversion commands described in [Convert existing files to supported files](#convert-existing-files-to-supported-files).
+- Make sure the file parameters are accurate. See, [File type requirements](#file-type-requirements) and [Certificate File Parameter Requirements](#certificate-file-parameter-requirements) for details.
+- Consult your certificate lead.
defender-for-iot How To Forward Alert Information To Partners https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/how-to-forward-alert-information-to-partners.md
Title: Forward alert information description: You can send alert information to partner systems by working with forwarding rules. Previously updated : 07/12/2021 Last updated : 08/29/2021
Defender for IoT administrators have permission to use forwarding rules.
Alerts provide information about an extensive range of security and operational events. For example:
- - Date and time of the alert
+- Date and time of the alert
- - Engine that detected the event
+- Engine that detected the event
- - Alert title and descriptive message
+- Alert title and descriptive message
- - Alert severity
+- Alert severity
- - Source and destination name and IP address
+- Source and destination name and IP address
- - Suspicious traffic detected
+- Suspicious traffic detected
:::image type="content" source="media/how-to-work-with-alerts-sensor/address-scan-detected-screen.png" alt-text="Address scan detected."::: Relevant information is sent to partner systems when forwarding rules are created.
+## About Forwarding rules and certificates
+
+Certain Forwarding rules allow encryption and certificate validation between the sensor or on-premises management console, and the server of the integrated vendor.
+
+In these cases, the sensor or on-premises management console is the client and initiator of the session. The certificates are typically received from the server, or use asymmetric encryption where a specific certificate will be provided to set up the integration.
+
+Your Defender for IoT system was set up to either validate certificates or ignore certificate validation. See [About certificate validation](how-to-deploy-certificates.md#about-certificate-validation) for information about enabling and disabling validation.
+
+If validation is enabled and the certificate cannot be verified, communication between Defender for IoT and the server is halted. The sensor displays an error message indicating the validation failure. If validation is disabled and the certificate is not valid, communication is still carried out.
+
+The following Forwarding rules allow encryption and certificate validation:
+- Syslog CEF
+- Azure Sentinel
+- QRadar
+ ## Create forwarding rules **To create a new forwarding rule on a sensor**:
Relevant information is sent to partner systems when forwarding rules are create
:::image type="content" source="media/how-to-work-with-alerts-sensor/create-forwarding-rule-screen.png" alt-text="Create a Forwarding Rule icon.":::
-1. Enter a name for the forwarding rule.
+1. Enter a name for the forwarding rule.
1. Select the severity level.
Enter the following parameters:
- Time zone for the time stamp for the alert detection at the SIEM. - TLS encryption certificate file and key file for CEF servers (optional).
-
+ :::image type="content" source="media/how-to-work-with-alerts-sensor/configure-encryption.png" alt-text="Configure your encryption for your forwarding rule."::: | Syslog text message output fields | Description |
Enter the following parameters:
| Protocol | TCP or UDP | | Message | Sensor: The sensor name.<br /> Alert: The title of the alert.<br /> Type: The type of the alert. Can be **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**.<br /> Severity: The severity of the alert. Can be **Warning**, **Minor**, **Major**, or **Critical**.<br /> Source: The source device name.<br /> Source IP: The source device IP address.<br /> Destination: The destination device name.<br /> Destination IP: The IP address of the destination device.<br /> Message: The message of the alert.<br /> Alert group: The alert group associated with the alert. | - | Syslog object output | Description | |--|--|
-| Date and Time | Date and time that the syslog server machine received the information. |
-| Priority | User.Alert |
-| Hostname | Sensor IP |
-| Message | Sensor name: The name of the appliance. <br /> Alert time: The time that the alert was detected: Can vary from the time of the syslog server machine, and depends on the time-zone configuration of the forwarding rule. <br /> Alert Title:  The title of the alert. <br /> Alert message: The message of the alert. <br /> Alert severity: The severity of the alert: **Warning**, **Minor**, **Major**, or **Critical**. <br /> Alert type: **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**. <br /> Protocol: The protocol of the alert. <br /> **Source_MAC**: IP address, name, vendor, or OS of the source device. <br /> Destination_MAC: IP address, name, vendor, or OS of the destination. If data is missing, the value will be **N/A**. <br /> alert_group: The alert group associated with the alert. |
-
+| Date and Time | Date and time that the syslog server machine received the information. |
+| Priority | User.Alert |
+| Hostname | Sensor IP |
+| Message | Sensor name: The name of the appliance. <br /> Alert time: The time that the alert was detected: Can vary from the time of the syslog server machine, and depends on the time-zone configuration of the forwarding rule. <br /> Alert Title:  The title of the alert. <br /> Alert message: The message of the alert. <br /> Alert severity: The severity of the alert: **Warning**, **Minor**, **Major**, or **Critical**. <br /> Alert type: **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**. <br /> Protocol: The protocol of the alert. <br /> **Source_MAC**: IP address, name, vendor, or OS of the source device. <br /> Destination_MAC: IP address, name, vendor, or OS of the destination. If data is missing, the value will be **N/A**. <br /> alert_group: The alert group associated with the alert. |
| Syslog CEF output format | Description | |--|--| | Date and time | Date and time that the syslog server machine received the information. |
-| Priority | User.Alert |
+| Priority | User.Alert |
| Hostname | Sensor IP address | | Message | CEF:0 <br />Azure Defender for IoT <br />Sensor name: The name of the sensor appliance. <br />Sensor version <br />Alert Title: The title of the alert. <br />msg: The message of the alert. <br />protocol: The protocol of the alert. <br />severity: **Warning**, **Minor**, **Major**, or **Critical**. <br />type: **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**. <br /> start: The time that the alert was detected. <br />Might vary from the time of the syslog server machine, and depends on the time-zone configuration of the forwarding rule. <br />src_ip: IP address of the source device. <br />dst_ip: IP address of the destination device.<br />cat: The alert group associated with the alert. | | Syslog LEEF output format | Description | |--|--|
-| Date and time | Date and time that the syslog server machine received the information. |
-| Priority | User.Alert |
-| Hostname | Sensor IP |
-| Message | Sensor name: The name of the Azure Defender for IoT appliance. <br />LEEF:1.0 <br />Azure Defender for IoT <br />Sensor <br />Sensor version <br />Azure Defender for IoT Alert <br /> Title:  The title of the alert. <br />msg: The message of the alert. <br />protocol: The protocol of the alert.<br />severity: **Warning**, **Minor**, **Major**, or **Critical**. <br />type: The type of the alert: **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**. <br />start: The time of the alert. It may be different from the time of the syslog server machine. (This depends on the time-zone configuration.) <br />src_ip: IP address of the source device.<br />dst_ip: IP address of the destination device. <br />cat: The alert group associated with the alert. |
+| Date and time | Date and time that the syslog server machine received the information. |
+| Priority | User.Alert |
+| Hostname | Sensor IP |
+| Message | Sensor name: The name of the Azure Defender for IoT appliance. <br />LEEF:1.0 <br />Azure Defender for IoT <br />Sensor <br />Sensor version <br />Azure Defender for IoT Alert <br /> Title:  The title of the alert. <br />msg: The message of the alert. <br />protocol: The protocol of the alert.<br />severity: **Warning**, **Minor**, **Major**, or **Critical**. <br />type: The type of the alert: **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**. <br />start: The time of the alert. It may be different from the time of the syslog server machine. (This depends on the time-zone configuration.) <br />src_ip: IP address of the source device.<br />dst_ip: IP address of the destination device. <br />cat: The alert group associated with the alert. |
After you enter all the information, select **Submit**. ### Webhook server action
-Send alert information to a webhook server. Working with webhook servers lets you set up integrations that subscribe to alert events with Defender for IoT. When an alert event is triggered, the management console sends an HTTP POST payload to the webhook's configured URL. Webhooks can be used to update an external SIEM system, SOAR systems, Incident management systems, etc.
+Send alert information to a webhook server. Working with webhook servers lets you set up integrations that subscribe to alert events with Defender for IoT. When an alert event is triggered, the management console sends an HTTP POST payload to the webhook's configured URL. Webhooks can be used to update an external SIEM system, SOAR systems, Incident management systems, etc.
**To define to a webhook action:**
Send alert information to a webhook server. Working with webhook servers lets yo
1. Select **Save**.
+### Webhook extended
+
+Webhook extended can be used to send extra data to the endpoint. The extended feature includes all of the information in the Webhook alert and adds the following information to the report (a sample receiver sketch follows the list):
+
+- sensorID
+- sensorName
+- zoneID
+- zoneName
+- siteID
+- siteName
+- sourceDeviceAddress
+- destinationDeviceAddress
+- remediationSteps
+- handled
+- additionalInformation
+
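+For testing on the receiving side, a minimal endpoint such as the following can log the extended payload (an illustration only; it assumes the alert arrives as a JSON body, which you should confirm in your environment):
+
+```python
+import json
+from http.server import BaseHTTPRequestHandler, HTTPServer
+
+class AlertHandler(BaseHTTPRequestHandler):
+    def do_POST(self):
+        length = int(self.headers.get("Content-Length", 0))
+        alert = json.loads(self.rfile.read(length) or b"{}")
+        # Extended fields listed above, such as sensorName and remediationSteps, should appear here.
+        print(json.dumps(alert, indent=2))
+        self.send_response(200)
+        self.end_headers()
+
+HTTPServer(("0.0.0.0", 8080), AlertHandler).serve_forever()
+```
+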
+**To define a webhook extended action**:
+
+1. In the management console, select **Forwarding** from the left-hand pane.
+
+1. Add a forwarding rule by selecting the :::image type="icon" source="media/how-to-forward-alert-information-to-partners/add-icon.png" border="false"::: button.
+
+1. Add a meaningful name for the forwarding alert.
+
+1. Select a severity level.
+
+1. Select **Add**.
+
+1. In the **Select Type** drop-down window, select **Webhook Extended**.
+
+ :::image type="content" source="media/how-to-forward-alert-information-to-partners/webhook-extended.png" alt-text="Select the webhook extended option from the select type drop down options menu.":::
+
+1. Add the endpoint data URL in the URL field.
+
+1. (Optional) Customize the HTTP header with a key and value definition. Add extra headers by selecting the :::image type="icon" source="media/how-to-forward-alert-information-to-partners/add-header.png" border="false"::: button.
+
+1. Select **Save**.
+
+Once the Webhook Extended forwarding rule has been configured, you can test it from the **Forwarding** screen on the management console.
+
+**To test the Webhook Extended forwarding rule**:
+
+1. In the management console, select **Forwarding** from the left-hand pane.
+
+1. Select the **run** button to test your alert.
+
+ :::image type="content" source="media/how-to-forward-alert-information-to-partners/run-button.png" alt-text="Select the run button to test your forwarding rule.":::
+
+You'll know the forwarding rule is working when the Success notification appears.
### NetWitness action

Send alert information to a NetWitness server.
defender-for-iot How To Manage Individual Sensors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/how-to-manage-individual-sensors.md
Title: Manage individual sensors
-description: Learn how to manage individual sensors, including managing activation files, performing backups, and updating a standalone sensor.
Previously updated : 05/26/2021
+description: Learn how to manage individual sensors, including managing activation files and certificates, performing backups, and updating a standalone sensor.
Last updated : 08/25/2021
You might need to upload a new activation file for an onboarded sensor when:
- You want to assign a new Defender for IoT hub to a cloud-connected sensor.
-To add a new activation file:
+**To add a new activation file:**
1. Go to the **Sensor Management** page.
You'll receive an error message if the activation file could not be uploaded. Th
## Manage certificates
-Following sensor installation, a local self-signed certificate is generated and used to access the sensor web application. When logging in to the sensor for the first time, Administrator users are prompted to provide an SSL/TLS certificate. For more information about first-time setup, see [Sign in and activate a sensor](how-to-activate-and-set-up-your-sensor.md).
+Following sensor installation, a local self-signed certificate is generated and used to access the sensor web application. When logging in to the sensor for the first time, Administrator users are prompted to provide an SSL/TLS certificate.
-This article provides information on updating certificates, working with certificate CLI commands, and supported certificates and certificate parameters.
+Sensor administrators might need to update certificates that were uploaded after the initial sign-in. This can happen, for example, if a certificate has expired.
-### About certificates
-
-Azure Defender for IoT uses SSL/TLS certificates to:
--- Meet specific certificate and encryption requirements requested by your organization by uploading the CA-signed certificate.--- Allow validation between the management console and connected sensors, and between a management console and a High Availability management console. Validations is evaluated against a Certificate Revocation List (CRL), and the certificate expiration date. *If validation fails, communication between the management console and the sensor is halted and a validation error is presented in the console*. This option is enabled by default after installation.--- Third party Forwarding rules, for example alert information sent to SYSLOG, Splunk or ServiceNow; or communications with Active Directory are validated.-
-### About CRL servers
-
-When validation is on, the appliance should be able to establish connection to the CRL server defined by the certificate. By default, the certificate will reference the CRL URL on HTTP port 80. Some organizational security policies may block access to this port. If your organization does not have access to port 80, you can:
-1. Define another URL and a specific port in the certificate.
-- The URL should be defined as http:// rather than https://.-- Verify that the destination CRL server can listen on the port you defined.
-1. Use a proxy server that will access the CRL on port 80.
-1. Not carry out CRL validation. In this case, remove the CRL URL reference in the certificate.
-
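One way to sanity check the CRL reachability described above is to read the CRL Distribution Point out of the certificate and then probe that URL over HTTP. The sketch below assumes a certificate file and CRL URL that are placeholders; substitute the values from your own certificate.

```bash
# Minimal sketch: find the CRL Distribution Point in a certificate and confirm
# the CRL URL answers over HTTP. File name and URL are placeholders.
openssl x509 -in sensor.crt -noout -text | grep -A 2 "CRL Distribution"

# Then confirm the URL printed above is reachable on the port it specifies:
curl -I "http://crl.contoso.com/root.crl"
```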
-### About SSL/TLS certificates
-
-The Defender for IoT sensor and on-premises management console use SSL and TLS certificates for the following functions:
-
-
-
-
-Once installed, the appliance generates a local self-signed certificate to allow preliminary access to the web console.
-
- > [!NOTE]
- > For integrations and forwarding rules where the appliance is the client and initiator of the session, specific certificates are used and are not related to the system certificates.
- >
- >In these cases, the certificates are typically received from the server, or use asymmetric encryption where a specific certificate will be provided to set up the integration.
-
-Appliances may use unique certificate files. If you need to replace a certificate, you have uploaded;
--- From version 10.0, the certificate can be replaced from the System Settings menu.--- For versions previous to 10.0, the SSL certificate can be replaced using the command-line tool.-
-### Update certificates
-
-Sensor Administrator users can update certificates.
-
-To update a certificate:
+**To update a certificate:**
1. Select **System Settings**.

1. Select **SSL/TLS Certificates**.
-1. Delete or edit the certificate and add a new one.
-
- - Add a certificate name.
-
- - Upload a CRT file and key file and enter a passphrase.
- - Upload a PEM file if necessary.
-
-To change the validation setting:
-
-1. Enable or Disable the **Enable Certificate Validation** toggle.
-
-1. Select **Save**.
-
-If the option is enabled and validation fails, communication between the management console and the sensor is halted and a validation error is presented in the console.
-
-### Certificate Support
-
-The following certificates are supported:
--- Private and Enterprise Key Infrastructure (Private PKI)--- Public Key Infrastructure (Public PKI) --- Locally generated on the appliance (locally self-signed). -
-> [!IMPORTANT]
-> We don't recommend using a self-signed certificates. This type of connection is not secure and should be used for test environments only. Since, the owner of the certificate can't be validated, and the security of your system can't be maintained, self-signed certificates should never be used for production networks.
-
-### Supported SSL Certificates
-
-The following parameters are supported.
-
-**Certificate CRT**
--- The primary certificate file for your domain name--- Signature Algorithm = SHA256RSA-- Signature Hash Algorithm = SHA256-- Valid from = Valid past date-- Valid To = Valid future date-- Public Key = RSA 2048 bits (Minimum) or 4096 bits-- CRL Distribution Point = URL to .crl file-- Subject CN = URL, can be a wildcard certificate; for example, Sensor.contoso.<span>com, or *.contoso.<span>com-- Subject (C)ountry = defined, for example, US-- Subject (OU) Org Unit = defined, for example, Contoso Labs-- Subject (O)rganization = defined, for example, Contoso Inc.-
-**Key File**
--- The key file generated when you created CSR.--- RSA 2048 bits (Minimum) or 4096 bits.-
- > [!Note]
- > Using a key length of 4096bits:
- > - The SSL handshake at the start of each connection will be slower.
- > - There's an increase in CPU usage during handshakes.
-
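As a sketch of how a certificate matching the parameters above might be requested, the following OpenSSL command generates a 2048-bit RSA key and a CSR whose subject uses the example country, organization, organizational unit, and CN values listed. The file names and subject values are placeholders; submit the resulting CSR to your CA for signing.

```bash
# Minimal sketch: generate a 2048-bit key and a CSR with the example subject
# fields from the list above. File names and subject values are placeholders.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout sensor.key -out sensor.csr \
  -subj "/C=US/O=Contoso Inc./OU=Contoso Labs/CN=sensor.contoso.com"
```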
-**Certificate Chain**
--- The intermediate certificate file (if any) that was supplied by your CA--- The CA certificate that issued the server's certificate should be first in the file, followed by any others up to the root. -- Can include Bag attributes.-
-**Passphrase**
-- One key supported.
+ :::image type="content" source="media/how-to-manage-individual-sensors/certificate-upload.png" alt-text="Upload a certificate":::
-- Set up when you're importing the certificate.
+1. In the SSL/TLS Certificates dialog box, delete the existing certificate and add a new one.
-Certificates with other parameters might work, but Microsoft doesn't support them.
-
-#### Encryption key artifacts
-
-**.pem – certificate container file**
-
-Privacy Enhanced Mail (PEM) files were the general file type used to secure emails. Nowadays, PEM files are used with certificates and use x509 ASN1 keys.
-
-The container file is defined in RFCs 1421 to 1424, a container format that may include the public certificate only. For example, Apache installs, a CA certificate, files, ETC, SSL, or CERTS. This can include an entire certificate chain including public key, private key, and root certificates.
-
-It may also encode a CSR as the PKCS10 format, which can be translated into PEM.
-
-**.cert .cer .crt – certificate container file**
-
-A `.pem`, or `.der` formatted file with a different extension. The file is recognized by Windows Explorer as a certificate. The `.pem` file is not recognized by Windows Explorer.
-
-**.key – Private Key File**
-
-A key file is in the same format as a PEM file, but it has a different extension.
-
-#### Other commonly available key artifacts
-
-**.csr - certificate signing request**.
-
-This file is used for submission to certificate authorities. The actual format is PKCS10, which is defined in RFC 2986, and may include some, or all of the key details of the requested certificate. For example, subject, organization, and state. It is the public key of the certificate that gets signed by the CA, and receives a certificate in return.
-
-The returned certificate is the public certificate, which includes the public key but not the private key.
-
-**.pkcs12 .pfx .p12 – password container**.
-
-Originally defined by RSA in the Public-Key Cryptography Standards (PKCS), the 12-variant was originally enhanced by Microsoft, and later submitted as RFC 7292.
-
-This container format requires a password that contains both public and private certificate pairs. Unlike `.pem` files, this container is fully encrypted. 
-
-You can use OpenSSL to turn this into a `.pem` file with both public and private keys: `openssl pkcs12 -in file-to-convert.p12 -out converted-file.pem -nodes` 
-
-**.der – binary encoded PEM**.
-
-The way to encode ASN.1 syntax in binary, is through a `.pem` file, which is just a Base64 encoded `.der` file.
-
-OpenSSL can convert these files to a `.pem`: `openssl x509 -inform der -in to-convert.der -out converted.pem`.
-
-Windows will recognize these files as certificate files. By default, Windows will export certificates as `.der` formatted files with a different extension.
-
-**.crl - certificate revocation list**.
-Certificate authorities produce these files as a way to de-authorize certificates before their expiration.
-
-##### CLI commands
-
-Use the `cyberx-xsense-certificate-import` CLI command to import certificates. To use this tool, you need to upload certificate files to the device, by using tools such as WinSCP or Wget.
-
-The command supports the following input flags:
--- `-h`: Shows the command-line help syntax.--- `--crt`: Path to a certificate file (.crt extension).--- `--key`: \*.key file. Key length should be a minimum of 2,048 bits.--- `--chain`: Path to a certificate chain file (optional).--- `--pass`: Passphrase used to encrypt the certificate (optional).--- `--passphrase-set`: Default = `False`, unused. Set to `True` to use the previous passphrase supplied with the previous certificate (optional).-
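An example invocation using the flags listed above might look like the following sketch. The file paths and passphrase are placeholders; adjust them to wherever you uploaded the certificate files on the appliance.

```bash
# Minimal sketch: import a certificate with the documented flags after the
# files have been copied to the appliance. Paths and passphrase are placeholders.
cyberx-xsense-certificate-import \
  --crt /path/to/sensor.crt \
  --key /path/to/sensor.key \
  --chain /path/to/chain.pem \
  --pass "example-passphrase"
```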
-When you're using the CLI command:
--- Verify that the certificate files are readable on the appliance.--- Verify that the domain name and IP in the certificate match the configuration that the IT department has planned.-
-### Use OpenSSL to manage certificates
-
-Manage your certificates with the following commands:
-
-| Description | CLI Command |
-|--|--|
-| Generate a new private key and Certificate Signing Request | `openssl req -out CSR.csr -new -newkey rsa:2048 -nodes -keyout privateKey.key` |
-| Generate a self-signed certificate | `openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout privateKey.key -out certificate.crt` |
-| Generate a certificate signing request (CSR) for an existing private key | `openssl req -out CSR.csr -key privateKey.key -new` |
-| Generate a certificate signing request based on an existing certificate | `openssl x509 -x509toreq -in certificate.crt -out CSR.csr -signkey privateKey.key` |
-| Remove a passphrase from a private key | `openssl rsa -in privateKey.pem -out newPrivateKey.pem` |
-
-If you need to check the information within a Certificate, CSR or Private Key, use these commands;
+ - Add a certificate name.
+ - Upload a CRT file and key file.
+ - Upload a PEM file if necessary.
-| Description | CLI Command |
-|--|--|
-| Check a Certificate Signing Request (CSR) | `openssl req -text -noout -verify -in CSR.csr` |
-| Check a private key | `openssl rsa -in privateKey.key -check` |
-| Check a certificate | `openssl x509 -in certificate.crt -text -noout` |
+If the upload fails, contact your security or IT administrator, or review the information in [About Certificates](how-to-deploy-certificates.md).
-If you receive an error that the private key doesn't match the certificate, or that a certificate that you installed to a site is not trusted, use these commands to fix the error:
+**To change the certificate validation setting:**
-| Description | CLI Command |
-|--|--|
-| Check an MD5 hash of the public key to ensure that it matches with what is in a CSR or private key | 1. `openssl x509 -noout -modulus -in certificate.crt | openssl md5` <br /> 2. `openssl rsa -noout -modulus -in privateKey.key | openssl md5` <br /> 3. `openssl req -noout -modulus -in CSR.csr | openssl md5 ` |
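The three commands in the table above can also be chained into a small script that compares the modulus hashes automatically; matching output means the certificate and private key belong together. File names in this sketch are placeholders.

```bash
# Minimal sketch: compare the modulus hash of the certificate and the private
# key (same commands as the table above). File names are placeholders.
CRT_HASH=$(openssl x509 -noout -modulus -in certificate.crt | openssl md5)
KEY_HASH=$(openssl rsa -noout -modulus -in privateKey.key | openssl md5)

if [ "$CRT_HASH" = "$KEY_HASH" ]; then
  echo "Certificate and private key match."
else
  echo "Certificate and private key do NOT match."
fi
```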
+1. Enable or disable the **Enable Cer