Updates from: 04/13/2021 03:06:02
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Use Scim To Provision Users And Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md
Previously updated : 03/22/2021 Last updated : 04/12/2021
The SCIM standard defines a schema for managing users and groups.
The **core** user schema only requires three attributes (all other attributes are optional):
- `id`, service provider defined identifier
- `externalId`, client defined identifier
+- `userName`, a unique identifier for the user (generally maps to the Azure AD user principal name)
- `meta`, *read-only* metadata maintained by the service provider

In addition to the **core** user schema, the SCIM standard defines an **enterprise** user extension with a model for extending the user schema to meet your application's needs.
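To make the core schema concrete, here is a minimal sketch of a SCIM user resource built in PowerShell and rendered as JSON. The sample identifier, names, and domain are placeholders; only the attribute names listed above and the SCIM 2.0 core User schema URN are standard.

```powershell
# Minimal sketch of a SCIM core user resource; the values below are placeholders.
$scimUser = [ordered]@{
    schemas    = @('urn:ietf:params:scim:schemas:core:2.0:User')
    id         = '48af03ac-28ad-4153-9658-a1b4ab9d6e6f'   # service provider defined identifier
    externalId = 'bjensen'                                # client defined identifier
    userName   = 'bjensen@contoso.com'                    # unique identifier, generally the Azure AD UPN
    meta       = @{ resourceType = 'User' }               # read-only metadata maintained by the service provider
}
$scimUser | ConvertTo-Json -Depth 3
```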
active-directory Concept Authentication Passwordless https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-authentication-passwordless.md
The following providers offer FIDO2 security keys of different form factors that
| KONA I | [https://konai.com/business/security/fido](https://konai.com/business/security/fido) |
| Excelsecu | [https://www.excelsecu.com/productdetail/esecufido2secu.html](https://www.excelsecu.com/productdetail/esecufido2secu.html) |
| Token2 Switzerland | [https://www.token2.swiss/shop/product/token2-t2f2-alu-fido2-u2f-and-totp-security-key](https://www.token2.swiss/shop/product/token2-t2f2-alu-fido2-u2f-and-totp-security-key) |
-| Go-Trust ID | [https://www.gotrustid.com/](https://www.gotrustid.com/) |
+| GoTrustID Inc. | [https://www.gotrustid.com/idem-key](https://www.gotrustid.com/idem-key) |
| Kensington | [https://www.kensington.com/solutions/product-category/why-biometrics/](https://www.kensington.com/solutions/product-category/why-biometrics/) |

> [!NOTE]
active-directory How To Gmsa Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/cloud-sync/how-to-gmsa-cmdlets.md
This document will cover the following cmdlets:
`Set-AADCloudSyncRestrictedPermissions`
-`Ste-AADCloudSyncPermissions`
+`Set-AADCloudSyncPermissions`
## How to use the cmdlets:
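The digest above only records the corrected cmdlet name. As a sketch, you can confirm the cmdlets are available and review their parameters on the server where the provisioning agent's PowerShell module is installed before running them:

```powershell
# Sketch: confirm the cloud sync permission cmdlets are available and inspect their parameters.
Get-Command -Name Set-AADCloudSyncRestrictedPermissions, Set-AADCloudSyncPermissions

# Show full help, including parameter sets and built-in examples, before running either cmdlet.
Get-Help Set-AADCloudSyncPermissions -Full
```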
active-directory Tutorial Single Forest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/cloud-sync/tutorial-single-forest.md
You can use the environment you create in this tutorial for testing or for getti
- For certificate validation, unblock the following URLs: **mscrl.microsoft.com:80**, **crl.microsoft.com:80**, **ocsp.msocsp.com:80**, and **www\.microsoft.com:80**. Since these URLs are used for certificate validation with other Microsoft products, you may already have these URLs unblocked.

## Install the Azure AD Connect provisioning agent
-1. Sign in to the domain joined server. If you are using the [Basic AD and Azure environment](tutorial-basic-ad-azure.md) tutorial, it would be DC1.
+1. Sign in to the domain joined server. If you are using the [Basic A D and Azure environment](tutorial-basic-ad-azure.md) tutorial, it would be DC1.
2. Sign in to the Azure portal using cloud-only global admin credentials.
3. On the left, select **Azure Active Directory**, click **Azure AD Connect**, and in the center select **Manage cloud sync**.
You can use the environment you create in this tutorial for testing or for getti
![Screenshot that shows the "Microsoft Azure A D Connect Provisioning Agent Package" splash screen.](media/how-to-install/install-1.png)
7. Once this operation completes, the configuration wizard will launch. Sign in with your Azure AD global administrator account. Note that if you have IE enhanced security enabled this will block the sign-in. If this is the case, close the installation, disable IE enhanced security in Server Manager, and click the **AAD Connect Provisioning Agent Wizard** to restart the installation.
-8. On the **Connect Active Directory** screen, click **Add directory** and then sign in with your Active Directory domain administrator account. NOTE: The domain administrator account should not have password change requirements. In case the password expires or changes, you will need to re-configure the agent with the new credentials. This operation will add your on-premises directory. Click **Next**.
+8. On the **Connect Active Directory** screen, click **Add directory** and then sign in with your Active Directory domain administrator account. NOTE: The domain administrator account should not have password change requirements. If the password expires or changes, you will need to re-configure the agent with the new credentials. This operation will add your on-premises directory. Click **Next**.
![Screenshot of the "Connect Active Directory" screen.](media/how-to-install/install-3a.png)
To verify the agent is being seen by Azure follow these steps:
![Azure portal](media/how-to-install/install-6.png)</br>
3. On the **Azure AD Connect cloud sync** screen click **Review all agents**.
-![Azure AD Provisioning](media/how-to-install/install-7.png)</br>
+![Azure A D Provisioning](media/how-to-install/install-7.png)</br>
4. On the **On-premises provisioning agents** screen, you will see the agents you have installed. Verify that the agent in question is there and is marked **active**.

![Provisioning agents](media/how-to-install/verify-1.png)</br>
To verify that the agent is running follow these steps:
4. Select **Manage cloud sync**

![Screenshot showing "Manage cloud sync" link.](media/how-to-configure/manage-1.png)

5. Click **New Configuration**
-![Screenshot of Azure AD Connect cloud sync screen with "New configuration" link highlighted.](media/tutorial-single-forest/configure-1.png)
+![Screenshot of Azure A D Connect cloud sync screen with "New configuration" link highlighted.](media/tutorial-single-forest/configure-1.png)
7. On the configuration screen, enter a **Notification email**, move the selector to **Enable** and click **Save**.

![Screenshot of Configure screen with Notification email filled in and Enable selected.](media/how-to-configure/configure-2.png)

1. The configuration status should now be **Healthy**.
-![Screenshot of Azure AD Connect cloud sync screen showing Healthy status.](media/how-to-configure/manage-4.png)
+![Screenshot of Azure A D Connect cloud sync screen showing Healthy status.](media/how-to-configure/manage-4.png)
## Verify users are created and synchronization is occurring
-You will now verify that the users that you had in our on-premises directory have been synchronized and now exist in our Azure AD tenant. Be aware that this may take a few hours to complete. To verify users are synchronized do the following.
+You will now verify that the users that you had in your on-premises directory have been synchronized and now exist in your Azure AD tenant. Be aware that this may take a few hours to complete. To verify users are synchronized do the following.
1. Browse to the [Azure portal](https://portal.azure.com) and sign in with an account that has an Azure subscription.
2. On the left, select **Azure Active Directory**.
3. Under **Manage**, select **Users**.
-4. Verify that you see the new users in our tenant</br>
+4. Verify that you see the new users in your tenant</br>
-## Test signing in with one of our users
+## Test signing in with one of your users
1. Browse to [https://myapps.microsoft.com](https://myapps.microsoft.com)
-2. Sign in with a user account that was created in our new tenant. You will need to sign in using the following format: (user@domain.onmicrosoft.com). Use the same password that the user uses to sign in on-premises.</br>
+2. Sign in with a user account that was created in your tenant. You will need to sign in using the following format: (user@domain.onmicrosoft.com). Use the same password that the user uses to sign in on-premises.</br>
![Verify](media/tutorial-single-forest/verify-1.png)</br>
-You have now successfully setup a hybrid identity environment that you can use to test and familiarize yourself with what Azure has to offer.
+You have now successfully configured a hybrid identity environment using Azure AD Connect cloud sync.
## Next steps

- [What is provisioning?](what-is-provisioning.md)
-- [What is Azure AD Connect cloud provisioning?](what-is-cloud-sync.md)
+- [What is Azure AD Connect cloud provisioning?](what-is-cloud-sync.md)
active-directory What Is Cloud Sync https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/cloud-sync/what-is-cloud-sync.md
# What is Azure AD Connect cloud sync?

Azure AD Connect cloud sync is a new offering from Microsoft designed to meet and accomplish your hybrid identity goals for synchronization of users, groups and contacts to Azure AD. It accomplishes this by using the Azure AD cloud provisioning agent instead of the Azure AD Connect application. However, it can be used alongside Azure AD Connect sync and it provides the following benefits:
-- Support for synchronizing to an Azure AD tenant from a multi-forest disconnected Active Directory forest environment: The common scenarios include merger & acquisition, where the acquired company's AD forests are isolated from the parent company's AD forests and companies that have historically had multiple AD forests.
+- Support for synchronizing to an Azure AD tenant from a multi-forest disconnected Active Directory forest environment: The common scenarios include merger & acquisition (where the acquired company's AD forests are isolated from the parent company's AD forests), and companies that have historically had multiple AD forests.
- Simplified installation with light-weight provisioning agents: The agents act as a bridge from AD to Azure AD, with all the sync configuration managed in the cloud.
- Multiple provisioning agents can be used to simplify high availability deployments, particularly critical for organizations relying upon password hash synchronization from AD to Azure AD.
- Support for large groups with up to 50K members. It is recommended to use only the OU scoping filter when synchronizing large groups.
Azure AD Connect cloud sync is new offering from Microsoft designed to meet and
![What is Azure AD Connect](media/what-is-cloud-sync/architecture-1.png)

## How is Azure AD Connect cloud sync different from Azure AD Connect sync?
-With Azure AD Connect cloud sync, provisioning from AD to Azure AD is orchestrated in Microsoft Online Services. An organization only needs to deploy, in their on-premises and IaaS-hosted environment, a lightweight agent that acts as a bridge between Azure AD and AD. The provisioning configuration is stored in Azure AD and managed as part of the service.
+With Azure AD Connect cloud sync, provisioning from AD to Azure AD is orchestrated in Microsoft Online Services. An organization only needs to deploy, in their on-premises or IaaS-hosted environment, a light-weight agent that acts as a bridge between Azure AD and AD. The provisioning configuration is stored in Azure AD and managed as part of the service.
## Azure AD Connect cloud sync video

The following short video provides an excellent overview of Azure AD Connect cloud sync:
active-directory Developer Glossary https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/developer-glossary.md
Use the following comments section to provide feedback and help to refine and sh
[AAD-Tokens-Claims]: access-tokens.md
[AZURE-portal]: https://portal.azure.com
[AAD-RBAC]: ../../role-based-access-control/role-assignments-portal.md
-[JWT]: https://tools.ietf.org/html/draft-ietf-oauth-json-web-token-32
+[JWT]: https://tools.ietf.org/html/rfc7519
[Microsoft-Graph]: https://developer.microsoft.com/graph
[O365-Perm-Ref]: /graph/permissions-reference
[OAuth2-Access-Token-Scopes]: https://tools.ietf.org/html/rfc6749#section-3.3
Use the following comments section to provide feedback and help to refine and sh
[OAuth2-Role-Def]: https://tools.ietf.org/html/rfc6749#page-6
[OpenIDConnect]: https://openid.net/specs/openid-connect-core-1_0.html
[OpenIDConnect-AuthZ-Endpoint]: https://openid.net/specs/openid-connect-core-1_0.html#AuthorizationEndpoint
-[OpenIDConnect-ID-Token]: https://openid.net/specs/openid-connect-core-1_0.html#IDToken
+[OpenIDConnect-ID-Token]: https://openid.net/specs/openid-connect-core-1_0.html#IDToken
active-directory V2 Permissions And Consent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-permissions-and-consent.md
Content-Type: application/json
"grant_type": "authorization_code", "client_id": "6731de76-14a6-49ae-97bc-6eba6914391e", "scope": "https://outlook.office.com/mail.read https://outlook.office.com/mail.send",
- "code": "AwABAAAAvPM1KaPlrEqdFSBzjqfTGBCmLdgfSTLEMPGYuNHSUYBrq..."
+ "code": "AwABAAAAvPM1KaPlrEqdFSBzjqfTGBCmLdgfSTLEMPGYuNHSUYBrq...",
"redirect_uri": "https://localhost/myapp", "client_secret": "zc53fwe80980293klaj9823" // NOTE: Only required for web apps }
active-directory Azureadjoin Plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/azureadjoin-plan.md
Select **"Yes** if you require users to perform MFA while joining devices to A
![Require multi-factor Auth to join devices](./media/azureadjoin-plan/03.png)
-**Recommendation:** Use the user action [Register or join devices](/conditional-access/concept-conditional-access-cloud-apps#user-actions) in Conditional Access for enforcing MFA for joining devices.
+**Recommendation:** Use the user action [Register or join devices](/azure/active-directory/conditional-access/concept-conditional-access-cloud-apps#user-actions) in Conditional Access for enforcing MFA for joining devices.
## Configure your mobility settings
active-directory Invitation Email Elements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/invitation-email-elements.md
Previously updated : 10/20/2020 Last updated : 04/12/2021
The subject of the email follows this pattern:
We use a LinkedIn-like pattern for the From address. This pattern should make it clear that although the email comes from invites@microsoft.com, the invitation is from another organization. The format is: Microsoft Invitations <invites@microsoft.com> or Microsoft invitations on behalf of &lt;tenantname&gt; <invites@microsoft.com>.
+> [!NOTE]
+> For the Azure service operated by 21Vianet in China, the sender address is Invites@oe.21vianet.com.
### Reply To

The reply-to email is set to the inviter's email when available, so that replying to the email sends an email back to the inviter.
active-directory Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/troubleshoot.md
Previously updated : 02/12/2021 Last updated : 04/12/2021 tags: active-directory
External users can be added only to "assigned" or "Security" groups and
The invitee should check with their ISP or spam filter to ensure that the following address is allowed: Invites@microsoft.com
+> [!NOTE]
+> For the Azure service operated by 21Vianet in China, the sender address is Invites@oe.21vianet.com.
## I notice that the custom message does not get included with invitation messages at times

To comply with privacy laws, our APIs do not include custom messages in the email invitation when:
active-directory How To Connect Sso Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sso-faq.md
Follow these steps on the on-premises server where you are running Azure AD Conn
**Step 1. Get list of AD forests where Seamless SSO has been enabled**

1. First, download, and install [Azure AD PowerShell](/powershell/azure/active-directory/overview).
- 2. Navigate to the `%programfiles%\Microsoft Azure Active Directory Connect` folder.
+ 2. Navigate to the `$env:programfiles\Microsoft Azure Active Directory Connect` folder.
3. Import the Seamless SSO PowerShell module using this command: `Import-Module .\AzureADSSO.psd1`.
4. Run PowerShell as an Administrator. In PowerShell, call `New-AzureADSSOAuthenticationContext`. This command should give you a popup to enter your tenant's Global Administrator credentials.
5. Call `Get-AzureADSSOStatus | ConvertFrom-Json`. This command provides you the list of AD forests (look at the "Domains" list) on which this feature has been enabled.
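Put together, steps 2 through 5 above look like the following sketch, assuming Azure AD Connect is installed in its default folder:

```powershell
# Sketch: list the AD forests on which Seamless SSO is enabled (steps 2-5 above).
Set-Location "$env:ProgramFiles\Microsoft Azure Active Directory Connect"
Import-Module .\AzureADSSO.psd1

# Prompts for your tenant's Global Administrator credentials.
New-AzureADSSOAuthenticationContext

# Look at the "Domains" list in the output.
Get-AzureADSSOStatus | ConvertFrom-Json
```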
Follow these steps on the on-premises server where you are running Azure AD Conn
Run the following steps on the on-premises server where you are running Azure AD Connect:

1. First, download, and install [Azure AD PowerShell](/powershell/azure/active-directory/overview).
- 2. Navigate to the `%programfiles%\Microsoft Azure Active Directory Connect` folder.
+ 2. Navigate to the `$env:ProgramFiles\Microsoft Azure Active Directory Connect` folder.
3. Import the Seamless SSO PowerShell module using this command: `Import-Module .\AzureADSSO.psd1`.
4. Run PowerShell as an Administrator. In PowerShell, call `New-AzureADSSOAuthenticationContext`. This command should give you a popup to enter your tenant's Global Administrator credentials.
5. Call `Enable-AzureADSSO -Enable $false`.
Follow these steps on the on-premises server where you are running Azure AD Conn
Follow tasks 1 through 4 below if you have disabled Seamless SSO using Azure AD Connect. If you have disabled Seamless SSO using PowerShell instead, jump ahead to task 5 below.

1. First, download, and install [Azure AD PowerShell](/powershell/azure/active-directory/overview).
- 2. Navigate to the `%programfiles%\Microsoft Azure Active Directory Connect` folder.
+ 2. Navigate to the `$env:ProgramFiles\Microsoft Azure Active Directory Connect` folder.
3. Import the Seamless SSO PowerShell module using this command: `Import-Module .\AzureADSSO.psd1`.
4. Run PowerShell as an Administrator. In PowerShell, call `New-AzureADSSOAuthenticationContext`. This command should give you a popup to enter your tenant's Global Administrator credentials.
5. Call `Get-AzureADSSOStatus | ConvertFrom-Json`. This command provides you the list of AD forests (look at the "Domains" list) on which this feature has been enabled.
active-directory How To Connect Staged Rollout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-staged-rollout.md
The following scenarios are not supported for staged rollout:
- Windows 10 Hybrid Join or Azure AD Join primary refresh token acquisition for all versions, when the user's on-premises UPN is not routable. This scenario will fall back to the WS-Trust endpoint while in staged rollout mode, but will stop working when staged migration is complete and user sign-on is no longer relying on the federation server.

>[!NOTE]
- >You still need to make the final cutover from federated to cloud authentication by using Azure AD Connect or PowerShell. Staged rollout doesn't switch domains from federated to managed. For more information about domain cutover, see [Migrate from federation to password hash synchronization](plan-migrate-adfs-password-hash-sync.md#step-3-change-the-sign-in-method-to-password-hash-synchronization-and-enable-seamless-sso) and [Migrate from federation to pass-through authentication](plan-migrate-adfs-password-hash-sync.md#step-3-change-the-sign-in-method-to-password-hash-synchronization-and-enable-seamless-sso)
+ >You still need to make the final cutover from federated to cloud authentication by using Azure AD Connect or PowerShell. Staged rollout doesn't switch domains from federated to managed. For more information about domain cutover, see [Migrate from federation to password hash synchronization](plan-migrate-adfs-password-hash-sync.md#step-3-change-the-sign-in-method-to-password-hash-synchronization-and-enable-seamless-sso) and [Migrate from federation to pass-through authentication](plan-migrate-adfs-pass-through-authentication.md#step-2-change-the-sign-in-method-to-pass-through-authentication-and-enable-seamless-sso).
## Get started with staged rollout
active-directory Whatis Azure Ad Connect https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/whatis-azure-ad-connect.md
Integrating your on-premises directories with Azure AD makes your users more pro
* Provides the newest capabilities for your scenarios. Azure AD Connect replaces older versions of identity integration tools such as DirSync and Azure AD Sync. For more information, see [Hybrid Identity directory integration tools comparison](plan-hybrid-identity-design-considerations-tools-comparison.md). ## Why use Azure AD Connect Health?
-When with Azure AD, your users are more productive because there's a common identity to access both cloud and on-premises resources. Ensuring the environment is reliable, so that users can access these resources, becomes a challenge. Azure AD Connect Health helps monitor and gain insights into your on-premises identity infrastructure thus ensuring the reliability of this environment. It is as simple as installing an agent on each of your on-premises identity servers.
+When authenticating with Azure AD, your users are more productive because there's a common identity to access both cloud and on-premises resources. Ensuring the environment is reliable, so that users can access these resources, becomes a challenge. Azure AD Connect Health helps monitor and gain insights into your on-premises identity infrastructure thus ensuring the reliability of this environment. It is as simple as installing an agent on each of your on-premises identity servers.
Azure AD Connect Health for AD FS supports AD FS 2.0 on Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2 and Windows Server 2016. It also supports monitoring the AD FS proxy or web application proxy servers that provide authentication support for extranet access. With an easy and quick installation of the Health Agent, Azure AD Connect Health for AD FS provides you a set of key capabilities.
active-directory Application Proxy Configure Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-configure-custom-domain.md
To publish your app through Application Proxy with a custom domain:
![Click to upload a certificate](./media/application-proxy-configure-custom-domain/certificate.png)
-7. On the **SSL certificate** page, browse to and select your PFX certificate file. Enter the password for the certificate, and select **Upload Certificate**. For more information about certificates, see the [Certificates for custom domains](#certificates-for-custom-domains) section. If the certificate is not valid or there is a problem with the password you will see an error message. The [Application Proxy FAQ](application-proxy-faq.md#application-configuration) contains some troubleshooting steps you can try.
+7. On the **SSL certificate** page, browse to and select your PFX certificate file. Enter the password for the certificate, and select **Upload Certificate**. For more information about certificates, see the [Certificates for custom domains](#certificates-for-custom-domains) section. If the certificate is not valid or there is a problem with the password you will see an error message. The [Application Proxy FAQ](application-proxy-faq.yml#application-configuration) contains some troubleshooting steps you can try.
![Upload Certificate](./media/application-proxy-configure-custom-domain/ssl-certificate.png)
active-directory Application Proxy Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-faq.md
- Title: Azure Active Directory Application Proxy frequently asked questions
-description: Learn answers to frequently asked questions (FAQ) about using Azure AD Application Proxy to publish internal, on-premises applications to remote users.
------- Previously updated : 07/23/2020-----
-# Active Directory (Azure AD) Application Proxy frequently asked questions
-
-This page answers frequently asked questions about Azure Active Directory (Azure AD) Application Proxy.
-
-## Enabling Azure AD Application Proxy
-
-### What license is required to use Azure AD Application Proxy?
-
-To use Azure AD Application Proxy, you must have an Azure AD Premium P1 or P2 license. For more information about licensing, see [Azure Active Directory Pricing](https://azure.microsoft.com/pricing/details/active-directory/)
-
-### What happens to Azure AD Application Proxy in my tenant, if my license expires?
-If your license expires, Application Proxy will automatically be disabled. Your application information will be saved for up to one year.
-
-### Why is the "Enable Application Proxy button grayed out?
-
-Make sure you have at least an Azure AD Premium P1 or P2 license and an Azure AD Application Proxy Connector installed. After you successfully install your first connector, the Azure AD Application Proxy service will be enabled automatically.
-
-## Connector configuration
-
-### Why is my connector still using an older version and not auto-upgraded to latest version?
-
-This may be because the updater service is not working correctly, or because there are no new updates available for the service to install.
-
-The updater service is healthy if it's running and there are no errors recorded in the event log (Applications and Services logs -> Microsoft -> AadApplicationProxy -> Updater -> Admin).
-
-> [!IMPORTANT]
-> Only major versions are released for auto-upgrade. We recommend updating your connector manually only if it's necessary. For example, you cannot wait for a major release, because you must fix a known problem or you want to use a new feature. For more information on new releases, the type of the release (download, auto-upgrade), bug fixes and new features see, [Azure AD Application Proxy: Version release history](application-proxy-release-version-history.md).
-
-To manually upgrade a connector:
-- Download the latest version of the connector. (You will find it under Application Proxy on the Azure Portal. You can also find the link at [Azure AD Application Proxy: Version release history](application-proxy-release-version-history.md).)
-- The installer restarts the Azure AD Application Proxy Connector services. In some cases, a reboot of the server might be required if the installer cannot replace all files. Therefore we recommend closing all applications (i.e. Event Viewer) before you start the upgrade.
-- Run the installer. The upgrade process is quick and does not require providing any credentials and the connector will not be re-registered.
-
-### Can Application Proxy Connector services run in a different user context than the default?
-
-No, this scenario isn't supported. The default settings are:
-- Microsoft AAD Application Proxy Connector - WAPCSvc - Network Service
-- Microsoft AAD Application Proxy Connector Updater - WAPCUpdaterSvc - NT Authority\System
-
-### Can a guest user with the Global Administrator or the Application Administrator role register the connector for the (guest) tenant?
-
-No, currently, this isn't possible. The registration attempt is always made on the user's home tenant.
-
-### My back-end application is hosted on multiple web servers and requires user session persistence (stickiness). How can I achieve session persistence? 
-
-For recommendations, see [High availability and load balancing of your Application Proxy connectors and applications](application-proxy-high-availability-load-balancing.md).
-
-### Is TLS termination (TLS/HTTPS inspection or acceleration) on traffic from the connector servers to Azure supported?
-
-The Application Proxy Connector performs certificate-based authentication to Azure. TLS Termination (TLS/HTTPS inspection or acceleration) breaks this authentication method and isn't supported. Traffic from the connector to Azure must bypass any devices that are performing TLS Termination.
-
-### Is TLS 1.2 required for all connections?
-Yes. To provide the best-in-class encryption to our customers, the Application Proxy service limits access to only TLS 1.2 protocols. These changes were gradually rolled out and effective since August 31, 2019. Make sure that all your client-server and browser-server combinations are updated to use TLS 1.2 to maintain connection to Application Proxy service. These include clients your users are using to access applications published through Application Proxy. See Preparing for [TLS 1.2 in Office 365](/microsoft-365/compliance/prepare-tls-1.2-in-office-365) for useful references and resources.
-
-### Can I place a forward proxy device between the connector server(s) and the back-end application server?
-Yes, this scenario is supported starting from the connector version 1.5.1526.0. See [Work with existing on-premises proxy servers](application-proxy-configure-connectors-with-proxy-servers.md).
-
-### Should I create a dedicated account to register the connector with Azure AD Application Proxy?
-
-There's no reason to. Any global admin or application administrator account will work. The credentials entered during installation aren't used after the registration process. Instead, a certificate is issued to the connector, which is used for authentication from that point on.
-
-### How can I monitor the performance of the Azure AD Application Proxy connector?
-
-There are Performance Monitor counters that are installed along with the connector. To view them:
-
-1. Select **Start**, type "Perfmon", and press ENTER.
-2. Select **Performance Monitor** and click the green **+** icon.
-3. Add the **Microsoft AAD Application Proxy Connector** counters you want to monitor.
-
-### Does the Azure AD Application Proxy connector have to be on the same subnet as the resource?
-
-The connector isn't required to be on the same subnet. However, it needs name resolution (DNS, hosts file) to the resource and the necessary network connectivity (routing to the resource, ports open on the resource, etc.). For recommendations, see [Network topology considerations when using Azure Active Directory Application Proxy](application-proxy-network-topology.md).
-
-### What versions of Windows Server can I install a connector on?
-
-Application Proxy requires Windows Server 2012 R2 or later. There is currently a limitation on HTTP2 for Windows Server 2019. In order to successfully use the connector on Windows Server 2019, you will need to add the following registry key and restart the server:
-
-```
-HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\WinHttp\EnableDefaultHttp2 (DWORD) Value: 0
-```
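As a sketch, the same registry value can be added from an elevated PowerShell session instead of editing the registry by hand; the value name and data come from the answer above, and the server still needs to be restarted afterward.

```powershell
# Sketch: add the EnableDefaultHttp2 value described above, then restart the server.
$path = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\WinHttp'
New-ItemProperty -Path $path -Name 'EnableDefaultHttp2' -Value 0 -PropertyType DWord -Force
# Restart-Computer   # uncomment to reboot immediately
```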
-
-## Application configuration
-
-### I am receiving an error about an invalid certificate or possible wrong password
-
-After you uploaded the SSL certificate, you receive the message "Invalid certificate, possible wrong password" on the portal.
-
-Here are some tips for troubleshooting this error:
-- Check for problems with the certificate. Install it on your local computer. If you don't experience any issues then the certificate is good.
-- Ensure that the password does not contain any special characters. For testing, the password should only contain the characters 0-9, A-Z, and a-z.
-- If the certificate was created with Microsoft Software Key Storage Provider, the RSA algorithm must be used.
-
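As a sketch, you can check the certificate file and its password locally before uploading; the file path below is a placeholder.

```powershell
# Sketch: verify that a PFX file can be opened with the supplied password.
$password = Read-Host -Prompt 'Certificate password' -AsSecureString
Get-PfxData -FilePath '.\customdomain.pfx' -Password $password
```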
-### What is the length of the default and "long" back-end timeout? Can the timeout be extended?
-
-The default length is 85 seconds. The "long" setting is 180 seconds. The timeout limit can't be extended.
-
-### Can a service principal manage Application Proxy using PowerShell or Microsoft Graph APIs?
-
-No, this is currently not supported.
-
-### What happens if I delete CWAP_AuthSecret (the client secret) in the app registration?
-
-The client secret, also called *CWAP_AuthSecret*, is automatically added to the application object (app registration) when the Azure AD Application Proxy app is created.
-
-The client secret is valid for one year. A new one-year client secret is automatically created before the current valid client secret expires. Three CWAP_AuthSecret client secrets are kept in the application object at all times.
-
-> [!IMPORTANT]
-> Deleting CWAP_AuthSecret breaks pre-authentication for Azure AD Application Proxy. Don't delete CWAP_AuthSecret.
-
-### How do I change the landing page my application loads?
-
-From the Application Registrations page, you can change the homepage URL to the desired external URL of the landing page. The specified page will load when the application is launched from My Apps or the Office 365 Portal. For configuration steps, see [Set a custom home page for published apps by using Azure AD Application Proxy](./application-proxy-configure-custom-home-page.md)
-
-### Can only IIS-based applications be published? What about web applications running on non-Windows web servers? Does the connector have to be installed on a server with IIS installed?
-
-No, there's no IIS requirement for applications that are published. You can publish web applications running on servers other than Windows Server. However, you might not be able to use pre-authentication with a non-Windows Server, depending on if the web server supports Negotiate (Kerberos authentication). IIS isn't required on the server where the connector is installed.
-
-### Can I configure Application Proxy to add the HSTS header?
-Application Proxy does not automatically add the HTTP Strict-Transport-Security header to HTTPS responses, but it will maintain the header if it is in the original response sent by the published application. Providing a setting to enable this functionality is on the roadmap. If you are interested in a preview that enables adding this to responses, reach out to aadapfeedback@microsoft.com for details.
-
-## Integrated Windows Authentication
-
-### When should I use the PrincipalsAllowedToDelegateToAccount method when setting up Kerberos Constrained Delegation (KCD)?
-
-The PrincipalsAllowedToDelegateToAccount method is used when connector servers are in a different domain from the web application service account. It requires the use of Resource-based Constrained Delegation.
-If the connector servers and the web application service account are in the same domain, you can use Active Directory Users and Computers to configure the delegation settings on each of the connector machine accounts, allowing them to delegate to the target SPN.
-
-If the connector servers and the web application service account are in different domains, Resource-based delegation is used. The delegation permissions are configured on the target web server and web application service account. This method of Constrained Delegation is relatively new. The method was introduced in Windows Server 2012, which supports cross-domain delegation by allowing the resource (web service) owner to control which machine and service accounts can delegate to it. There's no UI to assist with this configuration, so you'll need to use PowerShell.
-For more information, see the whitepaper [Understanding Kerberos Constrained Delegation with Application Proxy](https://aka.ms/kcdpaper).
-
-### Does NTLM authentication work with Azure AD Application Proxy?
-
-NTLM authentication can't be used as a pre-authentication or single sign-on method. NTLM authentication can be used only when it can be negotiated directly between the client and the published web application. Using NTLM authentication usually causes a sign-in prompt to appear in the browser.
-
-### Can I use the logon identity "On-premises user principal name" or "On-premises SAM account name" in a B2B IWA single sign-on scenario?
-
-No, this won't work, because a guest user in Azure AD doesn't have the attribute that is required by any of the logon identities mentioned above.
-
-In this case there will be a fallback to "User principal name". For more details on the B2B scenario please read [Grant B2B users in Azure AD access to your on-premises applications](../external-identities/hybrid-cloud-to-on-premises.md).
-
-## Pass-through authentication
-
-### Can I use Conditional Access Policies for applications published with pass-through authentication?
-
-Conditional Access Policies are only enforced for successfully pre-authenticated users in Azure AD. Pass-through authentication doesn't trigger Azure AD authentication, so Conditional Access Policies can't be enforced. With pass-through authentication, MFA policies must be implemented on the on-premises server, if possible, or by enabling pre-authentication with Azure AD Application Proxy.
-
-### Can I publish a web application with client certificate authentication requirement?
-
-No, this scenario isn't supported because Application Proxy will terminate TLS traffic.
-
-## Remote Desktop Gateway publishing
-
-### How can I publish Remote Desktop Gateway over Azure AD Application Proxy?
-
-Refer to [Publish Remote Desktop with Azure AD Application Proxy](application-proxy-integrate-with-remote-desktop-services.md).
-
-### Can I use Kerberos Constrained Delegation (Single Sign-On - Windows Integrated Authentication) in the Remote Desktop Gateway publishing scenario?
-
-No, this scenario isn't supported.
-
-### My users don't use Internet Explorer 11 and the pre-authentication scenario doesn't work for them. Is this expected?
-
-Yes, it's expected. The pre-authentication scenario requires an ActiveX control, which isn't supported in third-party browsers.
-
-### Is the Remote Desktop Web Client (HTML5) supported?
-
-Yes, this scenario is currently in public preview. Refer to [Publish Remote Desktop with Azure AD Application Proxy](application-proxy-integrate-with-remote-desktop-services.md).
-
-### After I configured the pre-authentication scenario, I realized that the user has to authenticate twice: first on the Azure AD sign-in form, and then on the RDWeb sign-in form. Is this expected? How can I reduce this to one sign-in?
-
-Yes, it's expected. If the user's computer is Azure AD joined, the user signs in to Azure AD automatically. The user needs to provide their credentials only on the RDWeb sign-in form.
-
-## SharePoint publishing
-
-### How can I publish SharePoint over Azure AD Application Proxy?
-
-Refer to [Enable remote access to SharePoint with Azure AD Application Proxy](application-proxy-integrate-with-sharepoint-server.md).
-
-### Can I use the SharePoint mobile app (iOS/ Android) to access a published SharePoint server?
-
-The [SharePoint mobile app](/sharepoint/administration/supporting-the-sharepoint-mobile-apps-online-and-on-premises) does not support Azure Active Directory pre-authentication currently.
-
-## Active Directory Federation Services (AD FS) publishing
-
-### Can I use Azure AD Application Proxy as AD FS proxy (like Web Application Proxy)?
-
-No. Azure AD Application Proxy is designed to work with Azure AD and doesn't fulfill the requirements to act as an AD FS proxy.
-
-## WebSocket
-
-### Does WebSocket support work for applications other than QlikSense and Remote Desktop Web Client (HTML5)?
-
-Currently, WebSocket protocol support is still in public preview and it may not work for other applications. Some customers have had mixed success using WebSocket protocol with other applications. If you test such scenarios, we would love to hear your results. Please send us your feedback at aadapfeedback@microsoft.com.
-
-Features (Eventlogs, PowerShell and Remote Desktop Services) in Windows Admin Center (WAC) do not work through Azure AD Application Proxy presently.
-
-## Link translation
-
-### Does using Link translation affect performance?
-
-Yes. Link translation affects performance. The Application Proxy service scans the application for hardcoded links and replaces them with their respective, published external URLs before presenting them to the user.
-
-For best performance, we recommend using identical internal and external URLs by configuring [custom domains](./application-proxy-configure-custom-domain.md). If using custom domains isn't possible, you can improve link translation performance by using the My Apps Secure Sign in Extension or Microsoft Edge Browser on mobile. See [Redirect hardcoded links for apps published with Azure AD Application Proxy](application-proxy-configure-hard-coded-link-translation.md).
-
-## Wildcards
-
-### How do I use wildcards to publish two applications with the same custom domain name but with different protocols, one for HTTP and one for HTTPS?
-
-This scenario isn't supported directly. Your options for this scenario are:
-
-1. Publish both the HTTP and HTTPS URLs as separate applications with a wildcard, but give each of them a different custom domain. This configuration will work since they have different external URLs.
-
-2. Publish the HTTPS URL through a wildcard application. Publish the HTTP applications separately using these Application Proxy PowerShell cmdlets:
- - [Application Proxy Application Management](/powershell/module/azuread/#application_proxy_application_management&preserve-view=true)
- - [Application Proxy Connector Management](/powershell/module/azuread/#application_proxy_connector_management&preserve-view=true)
active-directory Application Sign In Unexpected User Consent Prompt https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-sign-in-unexpected-user-consent-prompt.md
Many applications that integrate with Azure Active Directory require permissions
This results in a consent prompt being shown the first time an application is used, which is often a one-time operation.
+> [!VIDEO https://www.youtube.com/embed/a1AjdvNDda4]
## Scenarios in which users see consent prompts
Additional prompts can be expected in various scenarios:
- [Apps, permissions, and consent in Azure Active Directory (v1.0 endpoint)](../develop/quickstart-register-app.md)
-- [Scopes, permissions, and consent in the Azure Active Directory (v2.0 endpoint)](../develop/v2-permissions-and-consent.md)
+- [Scopes, permissions, and consent in the Azure Active Directory (v2.0 endpoint)](../develop/v2-permissions-and-consent.md)
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/whats-new-docs.md
Welcome to what's new in Azure Active Directory application management documenta
- [Troubleshoot Kerberos constrained delegation configurations for Application Proxy](application-proxy-back-end-kerberos-constrained-delegation-how-to.md)
- [Quickstart: Set up SAML-based single sign-on (SSO) for an application in your Azure Active Directory (Azure AD) tenant](add-application-portal-setup-sso.md)
- [Azure Active Directory application management: What's new](whats-new-docs.md)
-- [Active Directory (Azure AD) Application Proxy frequently asked questions](application-proxy-faq.md)
+- [Active Directory (Azure AD) Application Proxy frequently asked questions](application-proxy-faq.yml)
- [Troubleshoot problems signing in to an application from Azure AD My Apps](application-sign-in-other-problem-access-panel.md)
- [Tutorial: Add an on-premises application for remote access through Application Proxy in Azure Active Directory](application-proxy-add-on-premises-application.md)
- [Optimize traffic flow with Azure Active Directory Application Proxy](application-proxy-network-topology.md)
Welcome to what's new in Azure Active Directory application management documenta
- [Application management best practices](application-management-fundamentals.md)
- [Integrating Azure Active Directory with applications getting started guide](plan-an-application-integration.md)
- [What is application management?](what-is-application-management.md)
-- [Active Directory (Azure AD) Application Proxy frequently asked questions](application-proxy-faq.md)
+- [Active Directory (Azure AD) Application Proxy frequently asked questions](application-proxy-faq.yml)
- [Tutorial: Add an on-premises application for remote access through Application Proxy in Azure Active Directory](application-proxy-add-on-premises-application.md)
- [Work with existing on-premises proxy servers](application-proxy-configure-connectors-with-proxy-servers.md)
- [Develop line-of-business apps for Azure Active Directory](../develop/v2-overview.md)
active-directory Appneta Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/appneta-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with AppNeta Performance Monitor | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and AppNeta Performance Monitor.
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with AppNeta Performance Manager | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and AppNeta Performance Manager.
Last updated 12/28/2020
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with AppNeta Performance Monitor
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with AppNeta Performance Manager
-In this tutorial, you'll learn how to integrate AppNeta Performance Monitor with Azure Active Directory (Azure AD). When you integrate AppNeta Performance Monitor with Azure AD, you can:
+In this tutorial, you'll learn how to integrate AppNeta Performance Manager with Azure Active Directory (Azure AD). When you integrate AppNeta Performance Manager with Azure AD, you can:
-* Control in Azure AD who has access to AppNeta Performance Monitor.
-* Enable your users to be automatically signed-in to AppNeta Performance Monitor with their Azure AD accounts.
+* Control in Azure AD who has access to AppNeta Performance Manager.
+* Enable your users to be automatically signed-in to AppNeta Performance Manager with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
In this tutorial, you'll learn how to integrate AppNeta Performance Monitor with
To get started, you need the following items:

* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* AppNeta Performance Monitor single sign-on (SSO) enabled subscription.
+* AppNeta Performance Manager single sign-on (SSO) enabled subscription.
## Scenario description

In this tutorial, you configure and test Azure AD SSO in a test environment.
-* AppNeta Performance Monitor supports **SP** initiated SSO
+* AppNeta Performance Manager supports **SP** initiated SSO
-* AppNeta Performance Monitor supports **Just In Time** user provisioning
+* AppNeta Performance Manager supports **Just In Time** user provisioning
> [!NOTE]
> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-## Adding AppNeta Performance Monitor from the gallery
+## Adding AppNeta Performance Manager from the gallery
-To configure the integration of AppNeta Performance Monitor into Azure AD, you need to add AppNeta Performance Monitor from the gallery to your list of managed SaaS apps.
+To configure the integration of AppNeta Performance Manager into Azure AD, you need to add AppNeta Performance Manager from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **AppNeta Performance Monitor** in the search box.
-1. Select **AppNeta Performance Monitor** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **AppNeta Performance Manager** in the search box.
+1. Select **AppNeta Performance Manager** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD SSO for AppNeta Performance Monitor
+## Configure and test Azure AD SSO for AppNeta Performance Manager
-Configure and test Azure AD SSO with AppNeta Performance Monitor using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in AppNeta Performance Monitor.
+Configure and test Azure AD SSO with AppNeta Performance Manager using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in AppNeta Performance Manager.
-To configure and test Azure AD SSO with AppNeta Performance Monitor, perform the following steps:
+To configure and test Azure AD SSO with AppNeta Performance Manager, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure AppNeta Performance Monitor SSO](#configure-appneta-performance-monitor-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create AppNeta Performance Monitor test user](#create-appneta-performance-monitor-test-user)** - to have a counterpart of B.Simon in AppNeta Performance Monitor that is linked to the Azure AD representation of user.
+1. **[Configure AppNeta Performance Manager SSO](#configure-appneta-performance-manager-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create AppNeta Performance Manager test user](#create-appneta-performance-manager-test-user)** - to have a counterpart of B.Simon in AppNeta Performance Manager that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works.

## Configure Azure AD SSO

Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal, on the **AppNeta Performance Monitor** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **AppNeta Performance Manager** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://<subdomain>.pm.appneta.com`

> [!NOTE]
- > The Sign-on URL value is not real. Update this value with the actual Sign-On URL. Contact [AppNeta Performance Monitor Client support team](mailto:support@appneta.com) to get this value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > The Sign-on URL value is not real. Update this value with the actual Sign-On URL. Contact [AppNeta Performance Manager Client support team](mailto:support@appneta.com) to get this value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-1. AppNeta Performance Monitor application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+1. AppNeta Performance Manager application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
![image](common/edit-attribute.png)
-1. In addition to above, AppNeta Performance Monitor application expects few more attributes to be passed back in SAML response which are shown below. These attributes are also pre populated but you can review them as per your requirement.
+1. In addition to above, AppNeta Performance Manager application expects few more attributes to be passed back in SAML response which are shown below. These attributes are also pre populated but you can review them as per your requirement.
| Name | Source Attribute |
| -- | -- |
Follow these steps to enable Azure AD SSO in the Azure portal.
![The Certificate download link](common/metadataxml.png)
-1. On the **Set up AppNeta Performance Monitor** section, copy the appropriate URL(s) based on your requirement.
+1. On the **Set up AppNeta Performance Manager** section, copy the appropriate URL(s) based on your requirement.
![Copy configuration URLs](common/copy-configuration-urls.png)
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to AppNeta Performance Monitor.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to AppNeta Performance Manager.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **AppNeta Performance Monitor**.
+1. In the applications list, select **AppNeta Performance Manager**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
1. If you have set up the roles as explained above, you can select it from the **Select a role** dropdown.
1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure AppNeta Performance Monitor SSO
+## Configure AppNeta Performance Manager SSO
-To configure single sign-on on **AppNeta Performance Monitor** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [AppNeta Performance Monitor support team](mailto:support@appneta.com). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on **AppNeta Performance Manager** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [AppNeta Performance Manager support team](mailto:support@appneta.com). They set this setting to have the SAML SSO connection set properly on both sides.
-### Create AppNeta Performance Monitor test user
+### Create AppNeta Performance Manager test user
-In this section, a user called Britta Simon is created in AppNeta Performance Monitor. AppNeta Performance Monitor supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in AppNeta Performance Monitor, a new one is created after authentication.
+In this section, a user called Britta Simon is created in AppNeta Performance Manager. AppNeta Performance Manager supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in AppNeta Performance Manager, a new one is created after authentication.
> [!Note]
-> If you need to create a user manually, contact [AppNeta Performance Monitor support team](mailto:support@appneta.com).
+> If you need to create a user manually, contact [AppNeta Performance Manager support team](mailto:support@appneta.com).
## Test SSO

In this section, you test your Azure AD single sign-on configuration with the following options.
-* Click on **Test this application** in Azure portal. This will redirect to AppNeta Performance Monitor Sign-on URL where you can initiate the login flow.
+* Click on **Test this application** in Azure portal. This will redirect to AppNeta Performance Manager Sign-on URL where you can initiate the login flow.
-* Go to AppNeta Performance Monitor Sign-on URL directly and initiate the login flow from there.
+* Go to AppNeta Performance Manager Sign-on URL directly and initiate the login flow from there.
-* You can use Microsoft My Apps. When you click the AppNeta Performance Monitor tile in the My Apps, this will redirect to AppNeta Performance Monitor Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+* You can use Microsoft My Apps. When you click the AppNeta Performance Manager tile in the My Apps, this will redirect to AppNeta Performance Manager Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure AppNeta Performance Monitor you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
+Once you configure AppNeta Performance Manager you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
analysis-services Analysis Services Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-monitor.md
Use this table to determine which metrics are best for your monitoring scenario.
|RowsWrittenPerSec|Processing: Rows written per sec|CountPerSecond|Average|Rate of rows written during processing.| |qpu_metric|QPU|Count|Average|QPU. Range 0-100 for S1, 0-200 for S2 and 0-400 for S4| |QueryPoolBusyThreads|Query Pool Busy Threads|Count|Average|Number of busy threads in the query thread pool.|
-|SuccessfullConnectionsPerSec|Successful Connections Per Sec|CountPerSecond|Average|Rate of successful connection completions.|
+|SuccessfullConnectionsPerSec|Successfull Connections Per Sec|CountPerSecond|Average|Rate of successful connection completions.|
|CommandPoolBusyThreads|Threads: Command pool busy threads|Count|Average|Number of busy threads in the command thread pool.| |CommandPoolIdleThreads|Threads: Command pool idle threads|Count|Average|Number of idle threads in the command thread pool.| |LongParsingBusyThreads|Threads: Long parsing busy threads|Count|Average|Number of busy threads in the long parsing thread pool.|
app-service Configure Language Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-python.md
The following table describes the production settings that are relevant to Azure
| Django setting | Instructions for Azure | | | |
-| `SECRET_KEY` | Store the value in an App Service setting as described on [Access app settings as environment variables](#access-app-settings-as-environment-variables). You can alternately [store the value as a "secrete" in Azure Key Vault](../key-vault/secrets/quick-create-python.md). |
+| `SECRET_KEY` | Store the value in an App Service setting as described on [Access app settings as environment variables](#access-app-settings-as-environment-variables). You can alternately [store the value as a "secret" in Azure Key Vault](../key-vault/secrets/quick-create-python.md). A sketch of creating this app setting follows the table. |
| `DEBUG` | Create a `DEBUG` setting on App Service with the value 0 (false), then load the value as an environment variable. In your development environment, create a `DEBUG` environment variable with the value 1 (true). | | `ALLOWED_HOSTS` | In production, Django requires that you include app's URL in the `ALLOWED_HOSTS` array of *settings.py*. You can retrieve this URL at runtime with the code, `os.environ['WEBSITE_HOSTNAME']`. App Service automatically sets the `WEBSITE_HOSTNAME` environment variable to the app's URL. | | `DATABASES` | Define settings in App Service for the database connection and load them as environment variables to populate the [`DATABASES`](https://docs.djangoproject.com/en/3.1/ref/settings/#std:setting-DATABASES) dictionary. You can alternately store the values (especially the username and password) as [Azure Key Vault secrets](../key-vault/secrets/quick-create-python.md). |
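As a hedged illustration of the `SECRET_KEY` guidance above, the following Azure PowerShell sketch (app and resource group names are hypothetical) creates the app setting while preserving the settings that already exist, because `Set-AzWebApp -AppSettings` replaces the whole collection. In *settings.py*, the value is then read at runtime with `os.environ['SECRET_KEY']`.

```azurepowershell
# Sketch only: merge a new SECRET_KEY into the existing app settings (names are hypothetical).
$app = Get-AzWebApp -ResourceGroupName "myResourceGroup" -Name "my-django-app"
$settings = @{}
foreach ($s in $app.SiteConfig.AppSettings) { $settings[$s.Name] = $s.Value }
$settings["SECRET_KEY"] = "<generated-secret-value>"
# -AppSettings replaces the full collection, which is why the existing values are copied first.
Set-AzWebApp -ResourceGroupName "myResourceGroup" -Name "my-django-app" -AppSettings $settings
```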
If you're encountering this error with the sample in [Tutorial: Deploy a Django
> [Tutorial: Deploy from private container repository](tutorial-custom-container.md?pivots=container-linux) > [!div class="nextstepaction"]
-> [App Service Linux FAQ](faq-app-service-linux.md)
+> [App Service Linux FAQ](faq-app-service-linux.md)
application-gateway Application Gateway Faq Md https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/application-gateway-faq-md.md
- Title: Frequently asked questions about Azure Application Gateway
-description: Find answers to frequently asked questions about Azure Application Gateway.
---- Previously updated : 05/26/2020----
-# Frequently asked questions about Application Gateway
--
-The following are common questions asked about Azure Application Gateway.
-
-## General
-
-### What is Application Gateway?
-
-Azure Application Gateway provides an application delivery controller (ADC) as a service. It offers various layer 7 load-balancing capabilities for your applications. This service is highly available, scalable, and fully managed by Azure.
-
-### What features does Application Gateway support?
-
-Application Gateway supports autoscaling, TLS offloading, and end-to-end TLS, a web application firewall (WAF), cookie-based session affinity, URL path-based routing, multisite hosting, and other features. For a full list of supported features, see [Introduction to Application Gateway](./overview.md).
-
-### How do Application Gateway and Azure Load Balancer differ?
-
-Application Gateway is a layer 7 load balancer, which means it works only with web traffic (HTTP, HTTPS, WebSocket, and HTTP/2). It supports capabilities such as TLS termination, cookie-based session affinity, and round robin for load-balancing traffic. Load Balancer load-balances traffic at layer 4 (TCP or UDP).
-
-### What protocols does Application Gateway support?
-
-Application Gateway supports HTTP, HTTPS, HTTP/2, and WebSocket.
-
-### How does Application Gateway support HTTP/2?
-
-See [HTTP/2 support](./configuration-listeners.md#http2-support).
-
-### What resources are supported as part of a backend pool?
-
-See [supported backend resources](./application-gateway-components.md#backend-pools).
-
-### In what regions is Application Gateway available?
-
-Application Gateway v1 (Standard and WAF) is available in all regions of global Azure. It's also available in [Azure China 21Vianet](https://www.azure.cn/) and [Azure Government](https://azure.microsoft.com/overview/clouds/government/).
-
-For Application Gateway v2 (Standard_v2 and WAF_v2) availability, see [supported regions for Application Gateway v2](./application-gateway-autoscaling-zone-redundant.md#supported-regions)
-
-### Is this deployment dedicated for my subscription, or is it shared across customers?
-
-Application Gateway is a dedicated deployment in your virtual network.
-
-### Does Application Gateway support HTTP-to-HTTPS redirection?
-
-Redirection is supported. See [Application Gateway redirect overview](./redirect-overview.md).
-
-### In what order are listeners processed?
-
-See the [order of listener processing](./configuration-listeners.md#order-of-processing-listeners).
-
-### Where do I find the Application Gateway IP and DNS?
-
-If you're using a public IP address as an endpoint, you'll find the IP and DNS information on the public IP address resource. Or find it in the portal, on the overview page for the application gateway. If you're using internal IP addresses, find the information on the overview page.
-
-For the v2 SKU, open the public IP resource and select **Configuration**. The **DNS name label (optional)** field is available to configure the DNS name.
-
-### What are the settings for Keep-Alive timeout and TCP idle timeout?
-
-*Keep-Alive timeout* governs how long the Application Gateway will wait for a client to send another HTTP request on a persistent connection before reusing it or closing it. *TCP idle timeout* governs how long a TCP connection is kept open in case of no activity.
-
-The *Keep-Alive timeout* in the Application Gateway v1 SKU is 120 seconds and in the v2 SKU it's 75 seconds. The *TCP idle timeout* is a 4-minute default on the frontend virtual IP (VIP) of both v1 and v2 SKU of Application Gateway. You can configure the TCP idle timeout value on v1 and v2 Application Gateways to be anywhere between 4 minutes and 30 minutes. For both v1 and v2 Application Gateways, you'll need to navigate to the public IP of the Application Gateway and change the TCP idle timeout under the "Configuration" blade of the public IP on Portal. You can set the TCP idle timeout value of the public IP through PowerShell by running the following commands:
-
-```azurepowershell-interactive
-$publicIP = Get-AzPublicIpAddress -Name MyPublicIP -ResourceGroupName MyResourceGroup
-$publicIP.IdleTimeoutInMinutes = "15"
-Set-AzPublicIpAddress -PublicIpAddress $publicIP
-```
-
-### Does the IP or DNS name change over the lifetime of the application gateway?
-
-In Application Gateway V1 SKU, the VIP can change if you stop and start the application gateway. But the DNS name associated with the application gateway doesn't change over the lifetime of the gateway. Because the DNS name doesn't change, you should use a CNAME alias and point it to the DNS address of the application gateway. In Application Gateway V2 SKU, you can set the IP address as static, so IP and DNS name will not change over the lifetime of the application gateway.
-
-### Does Application Gateway support static IP?
-
-Yes, the Application Gateway v2 SKU supports static public IP addresses. The v1 SKU supports static internal IPs.
-
-### Does Application Gateway support multiple public IPs on the gateway?
-
-An application gateway supports only one public IP address.
-
-### How large should I make my subnet for Application Gateway?
-
-See [Application Gateway subnet size considerations](./configuration-infrastructure.md#size-of-the-subnet).
-
-### Can I deploy more than one Application Gateway resource to a single subnet?
-
-Yes. In addition to multiple instances of a given Application Gateway deployment, you can provision another unique Application Gateway resource to an existing subnet that contains a different Application Gateway resource.
-
-A single subnet can't support both v2 and v1 Application Gateway SKUs.
-
-### Does Application Gateway v2 support user-defined routes (UDR)?
-
-Yes, but only specific scenarios. For more information, see [Application Gateway infrastructure configuration](configuration-infrastructure.md#supported-user-defined-routes).
-
-### Does Application Gateway support x-forwarded-for headers?
-
-Yes. See [Modifications to a request](./how-application-gateway-works.md#modifications-to-the-request).
-
-### How long does it take to deploy an application gateway? Will my application gateway work while it's being updated?
-
-New Application Gateway v1 SKU deployments can take up to 20 minutes to provision. Changes to instance size or count aren't disruptive, and the gateway remains active during this time.
-
-Most deployments that use the v2 SKU take around 6 minutes to provision. However it can take longer depending on the type of deployment. For example, deployments across multiple Availability Zones with many instances can take more than 6 minutes.
-
-### Can I use Exchange Server as a backend with Application Gateway?
-
-No. Application Gateway doesn't support email protocols such as SMTP, IMAP, and POP3.
-
-### Is there guidance available to migrate from the v1 SKU to the v2 SKU?
-
-Yes. For details see, [Migrate Azure Application Gateway and Web Application Firewall from v1 to v2](migrate-v1-v2.md).
-
-### Will the Application Gateway v1 SKU continue to be supported?
-
-Yes. The Application Gateway v1 SKU will continue to be supported. However, it is strongly recommended that you move to v2 to take advantage of the feature updates in that SKU. For more information, see [Autoscaling and Zone-redundant Application Gateway v2](application-gateway-autoscaling-zone-redundant.md).
-
-### Does Application Gateway V2 support proxying requests with NTLM authentication?
-
-No. Application Gateway V2 doesn't support proxying requests with NTLM authentication.
-
-### Does Application Gateway affinity cookie support SameSite attribute?
-Yes. The [Chromium browser](https://www.chromium.org/Home) [v80 update](https://chromiumdash.appspot.com/schedule) introduced a mandate that HTTP cookies without the SameSite attribute be treated as SameSite=Lax. This means that the Application Gateway affinity cookie won't be sent by the browser in a third-party context.
-
-To support this scenario, Application Gateway injects another cookie called *ApplicationGatewayAffinityCORS* in addition to the existing *ApplicationGatewayAffinity* cookie. These cookies are similar, but the *ApplicationGatewayAffinityCORS* cookie has two more attributes added to it: *SameSite=None; Secure*. These attributes maintain sticky sessions even for cross-origin requests. See the [cookie based affinity section](configuration-http-settings.md#cookie-based-affinity) for more information.
-
-## Performance
-
-### How does Application Gateway support high availability and scalability?
-
-The Application Gateway v1 SKU supports high-availability scenarios when you've deployed two or more instances. Azure distributes these instances across update and fault domains to ensure that instances don't all fail at the same time. The v1 SKU supports scalability by adding multiple instances of the same gateway to share the load.
-
-The v2 SKU automatically ensures that new instances are spread across fault domains and update domains. If you choose zone redundancy, the newest instances are also spread across availability zones to offer zonal failure resiliency.
-
-### How do I achieve a DR scenario across datacenters by using Application Gateway?
-
-Use Traffic Manager to distribute traffic across multiple application gateways in different datacenters.
-
-### Does Application Gateway support autoscaling?
-
-Yes, the Application Gateway v2 SKU supports autoscaling. For more information, see [Autoscaling and Zone-redundant Application Gateway](application-gateway-autoscaling-zone-redundant.md).
-
-### Does manual or automatic scale up or scale down cause downtime?
-
-No. Instances are distributed across upgrade domains and fault domains.
-
-### Does Application Gateway support connection draining?
-
-Yes. You can set up connection draining to change members within a backend pool without disruption. For more information, see [connection draining section of Application Gateway](features.md#connection-draining).
-
-### Can I change instance size from medium to large without disruption?
-
-Yes.
-
-## Configuration
-
-### Is Application Gateway always deployed in a virtual network?
-
-Yes. Application Gateway is always deployed in a virtual network subnet. This subnet can contain only application gateways. For more information, see [virtual network and subnet requirements](./configuration-infrastructure.md#virtual-network-and-dedicated-subnet).
-
-### Can Application Gateway communicate with instances outside of its virtual network or outside of its subscription?
-
-As long as you have IP connectivity, Application Gateway can communicate with instances outside of the virtual network that it's in. Application Gateway can also communicate with instances outside of the subscription it's in. If you plan to use internal IPs as backend pool members, use [virtual network peering](../virtual-network/virtual-network-peering-overview.md) or [Azure VPN Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md).
-
-### Can I deploy anything else in the application gateway subnet?
-
-No. But you can deploy other application gateways in the subnet.
-
-### Are network security groups supported on the application gateway subnet?
-
-See [Network security groups in the Application Gateway subnet](./configuration-infrastructure.md#network-security-groups).
-
-### Does the application gateway subnet support user-defined routes?
-
-See [User-defined routes supported in the Application Gateway subnet](./configuration-infrastructure.md#supported-user-defined-routes).
-
-### Are service endpoint policies supported in the Application Gateway subnet?
-
-No. [Service endpoint policies](../virtual-network/virtual-network-service-endpoint-policies-overview.md) for storage accounts are not supported in the Application Gateway subnet, and configuring them will block Azure infrastructure traffic.
-
-### What are the limits on Application Gateway? Can I increase these limits?
-
-See [Application Gateway limits](../azure-resource-manager/management/azure-subscription-service-limits.md#application-gateway-limits).
-
-### Can I simultaneously use Application Gateway for both external and internal traffic?
-
-Yes. Application Gateway supports one internal IP and one external IP per application gateway.
-
-### Does Application Gateway support virtual network peering?
-
-Yes. Virtual network peering helps load-balance traffic in other virtual networks.
-
-### Can I talk to on-premises servers when they're connected by ExpressRoute or VPN tunnels?
-
-Yes, as long as traffic is allowed.
-
-### Can one backend pool serve many applications on different ports?
-
-Microservice architecture is supported. To probe on different ports, you need to configure multiple HTTP settings.
-
-### Do custom probes support wildcards or regex on response data?
-
-No.
-
-### How are routing rules processed in Application Gateway?
-
-See [Order of processing rules](./configuration-request-routing-rules.md#order-of-processing-rules).
-
-### For custom probes, what does the Host field signify?
-
-The Host field specifies the name to send the probe to when you've configured multisite on Application Gateway. Otherwise use '127.0.0.1'. This value is different from the virtual machine host name. Its format is \<protocol\>://\<host\>:\<port\>\<path\>.
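For example, a minimal Azure PowerShell sketch of a custom probe that sends a host name for a multisite configuration (all values here are hypothetical):

```azurepowershell
# Sketch only: a custom probe for a site hosted behind a multisite listener.
$probe = New-AzApplicationGatewayProbeConfig -Name "contosoProbe" -Protocol Http `
  -HostName "www.contoso.com" -Path "/health" -Interval 30 -Timeout 30 -UnhealthyThreshold 3
```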
-
-### Can I allow Application Gateway access to only a few source IP addresses?
-
-Yes. See [restrict access to specific source IPs](./configuration-infrastructure.md#allow-access-to-a-few-source-ips).
-
-### Can I use the same port for both public-facing and private-facing listeners?
-
-No.
-
-### Does Application Gateway support IPv6?
-
-Application Gateway v2 does not currently support IPv6. It can operate in a dual stack VNet using only IPv4, but the gateway subnet must be IPv4-only. Application Gateway v1 does not support dual stack VNets.
-
-### How do I use Application Gateway V2 with only private frontend IP address?
-
-Application Gateway V2 currently does not support only private IP mode. It supports the following combinations
-* Private IP and Public IP
-* Public IP only
-
-But if you'd like to use Application Gateway V2 with only private IP, you can follow the process below:
-1. Create an Application Gateway with both public and private frontend IP address
-2. Do not create any listeners for the public frontend IP address. Application Gateway will not listen to any traffic on the public IP address if no listeners are created for it.
-3. Create and attach a [Network Security Group](../virtual-network/network-security-groups-overview.md) for the Application Gateway subnet with the following configuration in the order of priority:
-
- a. Allow traffic from Source as **GatewayManager** service tag and Destination as **Any** and Destination port as **65200-65535**. This port range is required for Azure infrastructure communication. These ports are protected (locked down) by certificate authentication. External entities, including the Gateway user administrators, can't initiate changes on those endpoints without appropriate certificates in place
-
- b. Allow traffic from Source as **AzureLoadBalancer** service tag and Destination and destination port as **Any**
-
- c. Deny all inbound traffic from Source as **Internet** service tag and Destination and destination port as **Any**. Give this rule the *least priority* in the inbound rules
-
- d. Keep the default rules like allowing VirtualNetwork inbound so that the access on private IP address is not blocked
-
- e. Outbound internet connectivity can't be blocked. Otherwise, you will face issues with logging, metrics, etc.
-
-Sample NSG configuration for private IP only access:
-![Application Gateway V2 NSG Configuration for private IP access only](./media/application-gateway-faq/appgw-privip-nsg.png)
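A hedged Azure PowerShell sketch of the inbound rules described above (resource names, location, and priorities are hypothetical; adjust them to your environment). The default `AllowVnetInBound` and `AllowAzureLoadBalancerInBound` rules remain in place, so access over the private IP address isn't blocked:

```azurepowershell
# Sketch only: inbound rules for an Application Gateway v2 subnet exposed on its private IP only.
$gwManager = New-AzNetworkSecurityRuleConfig -Name "AllowGatewayManager" -Priority 100 `
  -Direction Inbound -Access Allow -Protocol Tcp -SourceAddressPrefix GatewayManager -SourcePortRange * `
  -DestinationAddressPrefix * -DestinationPortRange 65200-65535
$azureLb = New-AzNetworkSecurityRuleConfig -Name "AllowAzureLoadBalancer" -Priority 110 `
  -Direction Inbound -Access Allow -Protocol * -SourceAddressPrefix AzureLoadBalancer -SourcePortRange * `
  -DestinationAddressPrefix * -DestinationPortRange *
$denyInternet = New-AzNetworkSecurityRuleConfig -Name "DenyInternetInbound" -Priority 4000 `
  -Direction Inbound -Access Deny -Protocol * -SourceAddressPrefix Internet -SourcePortRange * `
  -DestinationAddressPrefix * -DestinationPortRange *
New-AzNetworkSecurityGroup -Name "appgw-private-nsg" -ResourceGroupName "MyResourceGroup" -Location "eastus" `
  -SecurityRules $gwManager, $azureLb, $denyInternet
```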
-
-## Configuration - TLS
-
-### What certificates does Application Gateway support?
-
-Application Gateway supports self-signed certificates, certificate authority (CA) certificates, Extended Validation (EV) certificates, multi-domain (SAN) certificates, and wildcard certificates.
-
-### What cipher suites does Application Gateway support?
-
-Application Gateway supports the following cipher suites.
-
-- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
-- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
-- TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
-- TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
-- TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
-- TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
-- TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
-- TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
-- TLS_DHE_RSA_WITH_AES_256_CBC_SHA
-- TLS_DHE_RSA_WITH_AES_128_CBC_SHA
-- TLS_RSA_WITH_AES_256_GCM_SHA384
-- TLS_RSA_WITH_AES_128_GCM_SHA256
-- TLS_RSA_WITH_AES_256_CBC_SHA256
-- TLS_RSA_WITH_AES_128_CBC_SHA256
-- TLS_RSA_WITH_AES_256_CBC_SHA
-- TLS_RSA_WITH_AES_128_CBC_SHA
-- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
-- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
-- TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384
-- TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
-- TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
-- TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
-- TLS_DHE_DSS_WITH_AES_256_CBC_SHA256
-- TLS_DHE_DSS_WITH_AES_128_CBC_SHA256
-- TLS_DHE_DSS_WITH_AES_256_CBC_SHA
-- TLS_DHE_DSS_WITH_AES_128_CBC_SHA
-- TLS_RSA_WITH_3DES_EDE_CBC_SHA
-- TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA
-
-For information on how to customize TLS options, see [Configure TLS policy versions and cipher suites on Application Gateway](application-gateway-configure-ssl-policy-powershell.md).
-
-### Does Application Gateway support reencryption of traffic to the backend?
-
-Yes. Application Gateway supports TLS offload and end-to-end TLS, which reencrypt traffic to the backend.
-
-### Can I configure TLS policy to control TLS protocol versions?
-
-Yes. You can configure Application Gateway to deny TLS1.0, TLS1.1, and TLS1.2. By default, SSL 2.0 and 3.0 are already disabled and aren't configurable.
-
-### Can I configure cipher suites and policy order?
-
-Yes. In Application Gateway, you can [configure cipher suites](application-gateway-ssl-policy-overview.md). To define a custom policy, enable at least one of the following cipher suites.
-
-* TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
-* TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
-* TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
-* TLS_RSA_WITH_AES_128_GCM_SHA256
-* TLS_RSA_WITH_AES_256_CBC_SHA256
-* TLS_RSA_WITH_AES_128_CBC_SHA256
-
-Application Gateway uses SHA256 for backend management.
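As a hedged illustration (gateway and resource group names are hypothetical), a custom TLS policy that uses some of the cipher suites listed above can be applied with Azure PowerShell along these lines:

```azurepowershell
# Sketch only: apply a custom TLS policy that enforces TLS 1.2 and a reduced cipher suite list.
$gw = Get-AzApplicationGateway -Name "MyAppGateway" -ResourceGroupName "MyResourceGroup"
Set-AzApplicationGatewaySslPolicy -ApplicationGateway $gw -PolicyType Custom -MinProtocolVersion TLSv1_2 `
  -CipherSuite "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_RSA_WITH_AES_128_GCM_SHA256"
# Commit the change back to the gateway.
Set-AzApplicationGateway -ApplicationGateway $gw
```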
-
-### How many TLS/SSL certificates does Application Gateway support?
-
-Application Gateway supports up to 100 TLS/SSL certificates.
-
-### How many authentication certificates for backend reencryption does Application Gateway support?
-
-Application Gateway supports up to 100 authentication certificates.
-
-### Does Application Gateway natively integrate with Azure Key Vault?
-
-Yes, the Application Gateway v2 SKU supports Key Vault. For more information, see [TLS termination with Key Vault certificates](key-vault-certs.md).
-
-### How do I configure HTTPS listeners for .com and .net sites?
-
-For multiple domain-based (host-based) routing, you can create multisite listeners, set up listeners that use HTTPS as the protocol, and associate the listeners with the routing rules. For more information, see [Hosting multiple sites by using Application Gateway](./multiple-site-overview.md).
-
-### Can I use special characters in my .pfx file password?
-
-No, use only alphanumeric characters in your .pfx file password.
-
-### My EV certificate is issued by DigiCert and my intermediate certificate has been revoked. How do I renew my certificate on Application Gateway?
-
-Certificate Authority (CA) Browser members recently published reports detailing multiple certificates issued by CA vendors that are used by our customers, Microsoft, and the greater technology community that were out of compliance with industry standards for publicly trusted CAs. The reports regarding the non-compliant CAs can be found here: 
-
-* [Bug 1649951](https://bugzilla.mozilla.org/show_bug.cgi?id=1649951)
-* [Bug 1650910](https://bugzilla.mozilla.org/show_bug.cgi?id=1650910)
-
-As per the industry’s compliance requirements, CA vendors began revoking non-compliant CAs and issuing compliant CAs which requires customers to have their certificates re-issued. Microsoft is partnering closely with these vendors to minimize the potential impact to Azure Services, **however your self-issued certificates or certificates used in “Bring Your Own Certificate” (BYOC) scenarios are still at risk of being unexpectedly revoked**.
-
-To check if certificates utilized by your application have been revoked reference [DigiCert’s Announcement](https://knowledge.digicert.com/alerts/DigiCert-ICA-Replacement) and the [Certificate Revocation Tracker](https://misissued.com/#revoked). If your certificates have been revoked, or will be revoked, you will need to request new certificates from the CA vendor utilized in your applications. To avoid your application’s availability being interrupted due to certificates being unexpectedly revoked, or to update a certificate which has been revoked, please refer to our Azure updates post for remediation links of various Azure services that support BYOC: https://azure.microsoft.com/updates/certificateauthorityrevocation/
-
-For Application Gateway specific information, see below -
-
-If you are using a certificate issued by one of the revoked ICAs, your application's availability might be interrupted and depending on your application, you may receive a variety of error messages including but not limited to:
-
-1. Invalid certificate/revoked certificate
-2. Connection timed out
-3. HTTP 502
-
-To avoid any interruption to your application due to this issue, or to re-issue a CA which has been revoked, you need to take the following actions:
-
-1. Contact your certificate provider on how to re-issue your certificates
-2. Once reissued, update your certificates on the Azure Application Gateway/WAF with the complete [chain of trust](/windows/win32/seccrypto/certificate-chains) (leaf, intermediate, root certificate). Based on where you are using your certificate, either on the listener or the HTTP settings of the Application Gateway, follow the steps below to update the certificates and check the documentation links mentioned for more information.
-3. Update your backend application servers to use the re-issued certificate. Depending on the backend server that you are using, your certificate update steps may vary. Please check for the documentation from your vendor.
-
-To update the certificate in your listener:
-
-1. In the [Azure portal](https://portal.azure.com/), open your Application Gateway resource
-2. Open the listener settings that's associated with your certificate
-3. Click "Renew or edit selected certificate"
-4. Upload your new PFX certificate with the password and click Save
-5. Access the website and verify if the site is working as expected
-For more information, check documentation [here](./renew-certificates.md).
-
-If you are referencing certificates from Azure Key Vault in your Application Gateway listener, we recommend the following steps for a quick change:
-
-1. In the [Azure portal](https://portal.azure.com/), navigate to the Azure Key Vault that has been associated with the Application Gateway
-2. Add/import the reissued certificate in your store. See documentation [here](../key-vault/certificates/quick-create-portal.md) for more information on how-to.
-3. Once the certificate has been imported, navigate to your Application Gateway listener settings and under "Choose a certificate from Key Vault", click on the "Certificate" drop-down and choose the recently added certificate
-4. Click Save
-For more information on TLS termination on Application Gateway with Key Vault certificates, check documentation [here](./key-vault-certs.md).
--
-To update the certificate in your HTTP Settings:
-
-If you are using V1 SKU of the Application Gateway/WAF service, then you would have to upload the new certificate as your backend authentication certificate.
-1. In the [Azure portal](https://portal.azure.com/), open your Application Gateway resource
-2. Open the HTTP settings that's associated with your certificate
-3. Click on "Add certificate", upload the reissued certificate, and click Save
-4. You can remove the old certificate later by clicking the "…" options button next to the old certificate, selecting Delete, and clicking Save.
-For more information, check documentation [here](./end-to-end-ssl-portal.md#add-authenticationtrusted-root-certificates-of-back-end-servers).
-
-If you are using the V2 SKU of the Application Gateway/WAF service, you don't have to upload the new certificate in the HTTP settings since V2 SKU uses "trusted root certificates" and no action needs to be taken here.
-
-## Configuration - mutual authentication
-
-### What is mutual authentication?
-
-Mutual authentication is two-way authentication between a client and a server. Mutual authentication with Application Gateway currently allows the gateway to verify the client sending the request, which is client authentication. Typically, the client is the only one that authenticates the Application Gateway. Because Application Gateway can now also authenticate the client, it becomes mutual authentication where Application Gateway and the client are mutually authenticating each other.
-
-### Is mutual authentication available between Application Gateway and its backend pools?
-
-No, mutual authentication is currently only between the frontend client and the Application Gateway. Backend mutual authentication is currently not supported.
-
-## Configuration - ingress controller for AKS
-
-### What is an Ingress Controller?
-
-Kubernetes allows creation of `deployment` and `service` resource to expose a group of pods internally in the cluster. To expose the same service externally, an [`Ingress`](https://kubernetes.io/docs/concepts/services-networking/ingress/) resource is defined which provides load balancing, TLS termination and name-based virtual hosting.
-To satisfy this `Ingress` resource, an Ingress Controller is required which listens for any changes to `Ingress` resources and configures the load balancer policies.
-
-The Application Gateway Ingress Controller (AGIC) allows [Azure Application Gateway](https://azure.microsoft.com/services/application-gateway/) to be used as the ingress for an [Azure Kubernetes Service](https://azure.microsoft.com/services/kubernetes-service/) also known as an AKS cluster.
-
-### Can a single ingress controller instance manage multiple Application Gateways?
-
-Currently, one instance of Ingress Controller can only be associated to one Application Gateway.
-
-### Why is my AKS cluster with kubenet not working with AGIC?
-
-AGIC tries to automatically associate the route table resource to the Application Gateway subnet but may fail to do so due to lack of permissions from the AGIC. If AGIC is unable to associate the route table to the Application Gateway subnet, there will be an error in the AGIC logs saying so, in which case you'll have to manually associate the route table created by the AKS cluster to the Application Gateway's subnet. For more information, see [Supported user-defined routes](configuration-infrastructure.md#supported-user-defined-routes).
-
-### Can I connect my AKS cluster and Application Gateway in separate virtual networks?
-
-Yes, as long as the virtual networks are peered and they don't have overlapping address spaces. If you're running AKS with kubenet, then be sure to associate the route table generated by AKS to the Application Gateway subnet.
-
-### What features are not supported on the AGIC add-on?
-
-Please see the differences between AGIC deployed through Helm versus deployed as an AKS add-on [here](ingress-controller-overview.md#difference-between-helm-deployment-and-aks-add-on)
-
-### When should I use the add-on versus the Helm deployment?
-
-Please see the differences between AGIC deployed through Helm versus deployed as an AKS add-on [here](ingress-controller-overview.md#difference-between-helm-deployment-and-aks-add-on), especially the tables documenting which scenario(s) are supported by AGIC deployed through Helm as opposed to an AKS add-on. In general, deploying through Helm will allow you to test out beta features and release candidates before an official release.
-
-### Can I control which version of AGIC will be deployed with the add-on?
-
-No, AGIC add-on is a managed service which means Microsoft will automatically update the add-on to the latest stable version.
-
-## Diagnostics and logging
-
-### What types of logs does Application Gateway provide?
-
-Application Gateway provides three logs:
-
-* **ApplicationGatewayAccessLog**: The access log contains each request submitted to the application gateway frontend. The data includes the caller's IP, URL requested, response latency, return code, and bytes in and out. It contains one record per application gateway.
-* **ApplicationGatewayPerformanceLog**: The performance log captures performance information for each application gateway. Information includes the throughput in bytes, total requests served, failed request count, and healthy and unhealthy backend instance count.
-* **ApplicationGatewayFirewallLog**: For application gateways that you configure with WAF, the firewall log contains requests that are logged through either detection mode or prevention mode.
-
-All logs are collected every 60 seconds. For more information, see [Backend health, diagnostics logs, and metrics for Application Gateway](application-gateway-diagnostics.md).
-
-### How do I know if my backend pool members are healthy?
-
-Verify health by using the PowerShell cmdlet `Get-AzApplicationGatewayBackendHealth` or the portal. For more information, see [Application Gateway diagnostics](application-gateway-diagnostics.md).
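For example (resource names are hypothetical):

```azurepowershell
# Sketch only: list backend pool members and their current health status.
Get-AzApplicationGatewayBackendHealth -Name "MyAppGateway" -ResourceGroupName "MyResourceGroup"
```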
-
-### What's the retention policy for the diagnostic logs?
-
-Diagnostic logs flow to the customer's storage account. Customers can set the retention policy based on their preference. Diagnostic logs can also be sent to an event hub or Azure Monitor logs. For more information, see [Application Gateway diagnostics](application-gateway-diagnostics.md).
-
-### How do I get audit logs for Application Gateway?
-
-In the portal, on the menu blade of an application gateway, select **Activity Log** to access the audit log.
-
-### Can I set alerts with Application Gateway?
-
-Yes. In Application Gateway, alerts are configured on metrics. For more information, see [Application Gateway metrics](./application-gateway-metrics.md) and [Receive alert notifications](../azure-monitor/alerts/alerts-overview.md).
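A hedged Azure PowerShell sketch of a metric alert rule (the metric, names, and threshold are illustrative, and `$appGwId` and `$actionGroupId` are assumed to hold existing resource IDs):

```azurepowershell
# Sketch only: alert when more than 10 failed requests are observed in a 5-minute window.
$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "FailedRequests" `
  -TimeAggregation Total -Operator GreaterThan -Threshold 10
Add-AzMetricAlertRuleV2 -Name "appgw-failed-requests" -ResourceGroupName "MyResourceGroup" `
  -TargetResourceId $appGwId -WindowSize 00:05:00 -Frequency 00:05:00 `
  -Condition $criteria -ActionGroupId $actionGroupId -Severity 3
```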
-
-### How do I analyze traffic statistics for Application Gateway?
-
-You can view and analyze access logs in several ways. Use Azure Monitor logs, Excel, Power BI, and so on.
-
-You can also use a Resource Manager template that installs and runs the popular [GoAccess](https://goaccess.io/) log analyzer for Application Gateway access logs. GoAccess provides valuable HTTP traffic statistics such as unique visitors, requested files, hosts, operating systems, browsers, and HTTP status codes. For more information, in GitHub, see the [Readme file in the Resource Manager template folder](https://aka.ms/appgwgoaccessreadme).
-
-### What could cause backend health to return an unknown status?
-
-Usually, you see an unknown status when access to the backend is blocked by a network security group (NSG), custom DNS, or user-defined routing (UDR) on the application gateway subnet. For more information, see [Backend health, diagnostics logging, and metrics for Application Gateway](application-gateway-diagnostics.md).
-
-### Are NSG flow logs supported on NSGs associated to Application Gateway v2 subnet?
-
-Due to current platform limitations, if you have an NSG on the Application Gateway v2 (Standard_v2, WAF_v2) subnet and if you have enabled NSG flow logs on it, you will see nondeterministic behavior and this scenario is currently not supported.
-
-### Where does Application Gateway store customer data?
-
-Application Gateway does not move or store customer data out of the region it's deployed in.
-
-## Next steps
-
-To learn more about Application Gateway, see [What is Azure Application Gateway?](overview.md).
application-gateway Configure Keyvault Ps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/configure-keyvault-ps.md
$identity = New-AzUserAssignedIdentity -Name "appgwKeyVaultIdentity" `
### Create a key vault, policy, and certificate to be used by the application gateway ```azurepowershell
-$keyVault = New-AzKeyVault -Name $kv -ResourceGroupName $rgname -Location $location -EnableSoftDelete
+$keyVault = New-AzKeyVault -Name $kv -ResourceGroupName $rgname -Location $location
Set-AzKeyVaultAccessPolicy -VaultName $kv -PermissionsToSecrets get -ObjectId $identity.PrincipalId $policy = New-AzKeyVaultCertificatePolicy -ValidityInMonths 12 `
$certificate = Add-AzKeyVaultCertificate -VaultName $kv -Name "cert1" -Certifica
$certificate = Get-AzKeyVaultCertificate -VaultName $kv -Name "cert1" $secretId = $certificate.SecretId.Replace($certificate.Version, "") ```
-> [!NOTE]
-> The -EnableSoftDelete flag must be used for TLS termination to function properly. If you're configuring [Key Vault soft-delete through the Portal](../key-vault/general/soft-delete-overview.md#soft-delete-behavior), the retention period must be kept at 90 days, the default value. Application Gateway doesn't support a different retention period yet.
### Create a virtual network
azure-arc Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/azure-rbac.md
description: "Use Azure RBAC for authorization checks on Azure Arc enabled Kubernetes clusters"
-# Azure RBAC for Azure Arc enabled Kubernetes clusters
+# Integrate Azure Active Directory with Azure Arc enabled Kubernetes clusters
-Kubernetes [ClusterRoleBinding and RoleBinding](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) object types help to define authorization in Kubernetes natively. With Azure RBAC, you can use Azure Active Directory and role assignments in Azure to control authorization checks on the cluster. This implies you can now use Azure role assignments to granularly control who can read, write, delete your Kubernetes objects such as Deployment, Pod and Service
+Kubernetes [ClusterRoleBinding and RoleBinding](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) object types help to define authorization in Kubernetes natively. Using this feature, you can use Azure Active Directory and role assignments in Azure to control authorization checks on the cluster. This means you can now use Azure role assignments to granularly control who can read, write, and delete Kubernetes objects such as Deployment, Pod, and Service.
A conceptual overview of this feature is available in [Azure RBAC - Azure Arc enabled Kubernetes](conceptual-azure-rbac.md) article.
Owners of the Azure Arc enabled Kubernetes resource can either use built-in role
| Role | Description | |||
-| Azure Arc Kubernetes Viewer | Allows read-only access to see most objects in a namespace. This role doesn't allow viewing secrets. This is because `read` permission on secrets would enable access to `ServiceAccount` credentials in the namespace, which would in turn allow API access using that `ServiceAccount` (a form of privilege escalation). |
-| Azure Arc Kubernetes Writer | Allows read/write access to most objects in a namespace. This role doesn't allow viewing or modifying roles or role bindings. However, this role allows accessing secrets and running pods as any `ServiceAccount` in the namespace, so it can be used to gain the API access levels of any `ServiceAccount` in the namespace. |
-| Azure Arc Kubernetes Admin | Allows admin access. Intended to be granted within a namespace using a RoleBinding. If used in a RoleBinding, allows read/write access to most resources in a namespace, including the ability to create roles and role bindings within the namespace. This role doesn't allow write access to resource quota or to the namespace itself. |
-| Azure Arc Kubernetes Cluster Admin | Allows super-user access to execute any action on any resource. When used in a ClusterRoleBinding, it gives full control over every resource in the cluster and in all namespaces. When used in a RoleBinding, it gives full control over every resource in the role binding's namespace, including the namespace itself.|
+| [Azure Arc Kubernetes Viewer](../../role-based-access-control/built-in-roles.md#azure-arc-kubernetes-viewer) | Allows read-only access to see most objects in a namespace. This role doesn't allow viewing secrets. This is because `read` permission on secrets would enable access to `ServiceAccount` credentials in the namespace, which would in turn allow API access using that `ServiceAccount` (a form of privilege escalation). |
+| [Azure Arc Kubernetes Writer](../../role-based-access-control/built-in-roles.md#azure-arc-kubernetes-writer) | Allows read/write access to most objects in a namespace. This role doesn't allow viewing or modifying roles or role bindings. However, this role allows accessing secrets and running pods as any `ServiceAccount` in the namespace, so it can be used to gain the API access levels of any `ServiceAccount` in the namespace. |
+| [Azure Arc Kubernetes Admin](../../role-based-access-control/built-in-roles.md#azure-arc-kubernetes-admin) | Allows admin access. Intended to be granted within a namespace using a RoleBinding. If used in a RoleBinding, allows read/write access to most resources in a namespace, including the ability to create roles and role bindings within the namespace. This role doesn't allow write access to resource quota or to the namespace itself. |
+| [Azure Arc Kubernetes Cluster Admin](../../role-based-access-control/built-in-roles.md#azure-arc-kubernetes-cluster-admin) | Allows super-user access to execute any action on any resource. When used in a ClusterRoleBinding, it gives full control over every resource in the cluster and in all namespaces. When used in a RoleBinding, it gives full control over every resource in the role binding's namespace, including the namespace itself.|
You can create role assignments scoped to the Arc enabled Kubernetes cluster on the `Access Control (IAM)` blade of the cluster resource on Azure portal. You can also use Azure CLI commands, as shown below:
azure-arc Extensions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/extensions.md
A conceptual overview of this feature is available in [Cluster extensions - Azur
| Extension | Description | | | -- | | [Azure Monitor](../../azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md?toc=/azure/azure-arc/kubernetes/toc.json) | Provides visibility into the performance of workloads deployed on the Kubernetes cluster. Collects memory and CPU utilization metrics from controllers, nodes, and containers. |
-| [Azure Defender](../../security-center/defender-for-kubernetes-azure-arc.md?toc=/azure/azure-arc/kubernetes/toc.json) | Gathers audit log data from control plane nodes of the Kubernetes cluster. Provides recommendations and threat alerts based on gathered data. |
+| [Azure Defender](../../security-center/defender-for-kubernetes-azure-arc.md?toc=/azure/azure-arc/kubernetes/toc.json) | Gathers information related to security like audit log data from the Kubernetes cluster. Provides recommendations and threat alerts based on gathered data. |
## Usage of cluster extensions
az k8s-extension list --cluster-name <clusterName> --resource-group <resourceGro
] ```
-### Update an existing extension instance
-
-Update an extension instance on a cluster with `k8s-extension update`, passing in the values to update. This command only updates the `auto-upgrade-minor-version`, `release-train`, and `version` properties. For example:
--- **Update release train:**-
- ```azurecli
- az k8s-extension update --name azuremonitor-containers --cluster-type connectedClusters --cluster-name <clusterName> --resource-group <resourceGroupName> --release-train Preview
- ```
--- **Turn off auto-upgrade and pin extension instance to a specific version:**-
- ```azurecli
- az k8s-extension update --name azuremonitor-containers --cluster-type connectedClusters --cluster-name <clusterName> --resource-group <resourceGroupName> --auto-upgrade-minor-version false --version 2.2.2
- ```
--- **Turn on auto-upgrade for the extension instance:**-
- ```azurecli
- az k8s-extension update --name azuremonitor-containers --cluster-type connectedClusters --cluster-name <clusterName> --resource-group <resourceGroupName> --auto-upgrade-minor-version true
- ```
-
-> [!NOTE]
-> The `version` parameter can be set only when `--auto-upgrade-minor-version` is set to `false`.
-
### Delete extension instance

Delete an extension instance on a cluster with `k8s-extension delete`, passing in values for the mandatory parameters.
azure-arc Plan At Scale Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/plan-at-scale-deployment.md
+
+ Title: How to plan and deploy Azure Arc enabled Kubernetes
++ Last updated : 04/12/2021+++
+description: Onboard large number of clusters to Azure Arc enabled Kubernetes for configuration management
++
+# Plan and deploy Azure Arc enabled Kubernetes
+
+Deployment of an IT infrastructure service or business application is a challenge for any company. To prevent any unwelcome surprises or unplanned costs, you need to plan thoroughly to ensure you're as ready as possible. Such a plan should identify the design and deployment criteria that need to be met to complete the tasks.
+
+For the deployment to continue smoothly, your plan should establish a clear understanding of:
+
+* Roles and responsibilities.
+* Inventory of all Kubernetes clusters.
+* Networking requirements.
+* The skill set and training required to enable successful deployment and on-going management.
+* Acceptance criteria and how you track its success.
+* Tools or methods to be used to automate the deployments.
+* Identified risks and mitigation plans to avoid delays and disruptions.
+* How to avoid disruption during deployment.
+* What's the escalation path when a significant issue occurs?
+
+The purpose of this article is to ensure you're prepared for a successful deployment of Azure Arc enabled Kubernetes across multiple production clusters in your environment.
+
+## Prerequisites
+
+* An existing Kubernetes cluster. If you don't have one, you can create a cluster using one of these options:
+ - [Kubernetes in Docker (KIND)](https://kind.sigs.k8s.io/)
+ - Create a Kubernetes cluster using Docker for [Mac](https://docs.docker.com/docker-for-mac/#kubernetes) or [Windows](https://docs.docker.com/docker-for-windows/#kubernetes)
+ - Self-managed Kubernetes cluster using [Cluster API](https://cluster-api.sigs.k8s.io/user/quick-start.html)
+
+* Your machines have connectivity from your on-premises network or other cloud environment to resources in Azure, either directly or through a proxy server. More details can be found under [network prerequisites](quickstart-connect-cluster.md#meet-network-requirements).
+
+* A `kubeconfig` file pointing to the cluster you want to connect to Azure Arc.
+* 'Read' and 'Write' permissions for the user or service principal creating the Azure Arc enabled Kubernetes resource type of `Microsoft.Kubernetes/connectedClusters`.
+
+## Pilot
+
+Before deploying to all production clusters, evaluate the deployment process with a pilot before adopting it broadly in your environment. For the pilot, identify a representative sampling of clusters that aren't critical to your company's ability to conduct business. Be sure to allow enough time to run the pilot and assess its impact: we recommend approximately 30 days.
+
+Establish a formal plan describing the scope and details of the pilot. The following sample plan should help you get started.
+
+* **Goals** - Describes the business and technical drivers that led to the decision that a pilot is necessary.
+* **Selection criteria** - Specifies the criteria used to select which aspects of the solution will be demonstrated via a pilot.
+* **Scope** - Covers solution components, expected schedule, duration of the pilot, and number of clusters to target.
+* **Success criteria and metrics** - Define the pilot's success criteria and specific measures used to determine level of success.
+* **Training plan** - Describes the plan for training system engineers, administrators, and others who are new to Azure and its services during the pilot.
+* **Transition plan** - Describes the strategy and criteria used to guide transition from pilot to production.
+* **Rollback** - Describes the procedures for rolling back a pilot to pre-deployment state.
+* **Risks** - List all identified risks for conducting the pilot and associated with production deployment.
+
+## Phase 1: Build a foundation
+
+In this phase, system engineers or administrators perform core activities such as creating resource groups, tags, and role assignments so that the Azure Arc enabled Kubernetes resources can then be created and operated.
+
+|Task |Detail |Duration |
+|--|-||
+| [Create a resource group](../../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups) | A dedicated resource group to include only Azure Arc enabled Kubernetes resources and centralize management and monitoring of these resources. | One hour |
+| Apply [Tags](../../azure-resource-manager/management/tag-resources.md) to help organize machines. | Evaluate and develop an IT-aligned [tagging strategy](/azure/cloud-adoption-framework/decision-guides/resource-tagging/). This can help reduce the complexity of managing your Azure Arc enabled Kubernetes resources and simplify making management decisions. | One day |
+| Identify [configurations](tutorial-use-gitops-connected-cluster.md) for GitOps | Identify the application or baseline configurations such as `PodSecurityPolicy`, `NetworkPolicy` that you want to deploy to your clusters | One day |
+| [Develop an Azure Policy](../../governance/policy/overview.md) governance plan | Determine how you'll implement governance of Azure Arc enabled Kubernetes clusters at the subscription or resource group scope with Azure Policy. | One day |
+| Configure [Role based access control](../../role-based-access-control/overview.md) (RBAC) | Develop an access plan to identify who has read/write/all permissions on your clusters | One day |
+
+## Phase 2: Deploy Azure Arc enabled Kubernetes
+
+In this phase, we connect your Kubernetes clusters to Azure:
+
+|Task |Detail |Duration |
+|--|-||
+| [Connect your first Kubernetes cluster to Azure Arc](quickstart-connect-cluster.md) | As part of connecting your first cluster to Azure Arc, set up your onboarding environment with all the required tools such as Azure CLI, Helm, and `connectedk8s` extension for Azure CLI. A PowerShell alternative is sketched after this table. | 15 minutes |
+| [Create service principal](create-onboarding-service-principal.md) | Create a service principal to connect Kubernetes clusters non-interactively using Azure CLI or PowerShell. | One hour |
++
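The quickstart linked in the table above uses Azure CLI. As a hedged alternative sketch, the same connection can be made with the Az.ConnectedKubernetes PowerShell module (the cluster and resource group names are hypothetical, and the current kubeconfig context must point at the target cluster):

```azurepowershell
# Sketch only: onboard the cluster that the current kubeconfig context points to.
Install-Module -Name Az.ConnectedKubernetes -Scope CurrentUser
New-AzConnectedKubernetes -ClusterName "my-k8s-cluster" -ResourceGroupName "arc-clusters-rg" -Location "eastus"
```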
+## Phase 3: Manage and operate
+
+In this phase, we deploy applications and baseline configurations to your Kubernetes clusters.
+
+|Task |Detail |Duration |
+|--|-||
+|[Create configurations](tutorial-use-gitops-connected-cluster.md) on your clusters | Create configurations for deploying your applications on your Azure Arc enabled Kubernetes resource. | 15 minutes |
+|[Use Azure Policy](use-azure-policy.md) for at-scale enforcement of configurations | Create policy assignments to automate the deployment of baseline configurations across all your clusters under a subscription or resource group scope. | 15 minutes |
+| [Upgrade Azure Arc agents](agent-upgrade.md) | If you have disabled auto-upgrade of agents on your clusters, update your agents manually to the latest version to make sure you have the most recent security and bug fixes. | 15 minutes |
+
+## Next steps
+
+* Use our quickstart to [connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md).
+* [Create configurations](./tutorial-use-gitops-connected-cluster.md) on your Azure Arc enabled Kubernetes cluster.
+* [Use Azure Policy to apply configurations at scale](./use-azure-policy.md).
azure-monitor Alerts Common Schema Definitions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-common-schema-definitions.md
Title: Alert schema definitions in Azure Monitor
description: Understanding the common alert schema definitions for Azure Monitor Previously updated : 09/22/2020 Last updated : 04/12/2021
Any alert instance describes the resource that was affected and the cause of the
## Alert context
-### Metric alerts
+### Metric alerts (excluding availability tests)
#### `monitoringService` = `Platform`
Any alert instance describes the resource that was affected and the cause of the
} ```
+### Metric alerts (availability tests)
+
+#### `monitoringService` = `Platform`
+
+**Sample values**
+```json
+{
+ "alertContext": {
+ "properties": null,
+ "conditionType": "WebtestLocationAvailabilityCriteria",
+ "condition": {
+ "windowSize": "PT5M",
+ "allOf": [
+ {
+ "metricName": "Failed Location",
+ "metricNamespace": null,
+ "operator": "GreaterThan",
+ "threshold": "2",
+ "timeAggregation": "Sum",
+ "dimensions": [],
+ "metricValue": 5,
+ "webTestName": "myAvailabilityTest-myApplication"
+ }
+ ],
+ "windowStartTime": "2019-03-22T13:40:03.064Z",
+ "windowEndTime": "2019-03-22T13:45:03.064Z"
+ }
+ }
+}
+```
+ ### Log alerts > [!NOTE]
azure-monitor Alerts Troubleshoot Metric https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-troubleshoot-metric.md
description: Common issues with Azure Monitor metric alerts and possible solutio
Previously updated : 03/15/2021 Last updated : 04/12/2021 # Troubleshooting problems in Azure Monitor metric alerts
If you're looking to alert on a specific metric but can't see it when creati
If you're looking to alert on [specific dimension values of a metric](./alerts-metric-overview.md#using-dimensions), but cannot find these values, note the following: 1. It might take a few minutes for the dimension values to appear under the **Dimension values** list
-1. The displayed dimension values are based on metric data collected in the last day
-1. If the dimension value isn't yet emitted or isn't shown, you can use the 'Add custom value' option to add a custom dimension value
-1. If you'd like to alert on all possible values of a dimension (including future values), choose the 'Select all current and future values' option
+2. The displayed dimension values are based on metric data collected in the last day
+3. If the dimension value isn't yet emitted or isn't shown, you can use the 'Add custom value' option to add a custom dimension value
+4. If you'd like to alert on all possible values of a dimension (including future values), choose the 'Select all current and future values' option
+5. Custom metrics dimensions of Application Insights resources are turned off by default. To turn on the collection of dimensions for these custom metrics, see [here](../app/pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation).
## Metric alert rules still defined on a deleted resource
azure-monitor Asp Net Troubleshoot No Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/asp-net-troubleshoot-no-data.md
Last updated 05/21/2020
* SDK channel keeps telemetry in buffer, and sends them in batches. If the application is shutting down, you may need to explicitly call [Flush()](api-custom-events-metrics.md#flushing-data). Behavior of `Flush()` depends on the actual [channel](telemetry-channels.md#built-in-telemetry-channels) used.
+## Request count collected by Application Insights SDK does not match the IIS log count for my application
+
+Internet Information Services (IIS) logs the count of all requests reaching IIS, which can inherently differ from the total requests reaching an application. Because of this, it isn't guaranteed that the request count collected by the SDKs will match the total IIS log count.
+ ## No data from my server *I installed my app on my web server, and now I don't see any telemetry from it. It worked OK on my dev machine.*
azure-monitor Javascript Angular Plugin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/javascript-angular-plugin.md
The Angular plugin for the Application Insights JavaScript SDK enables: - Tracking of router changes
+- Tracking uncaught exceptions
> [!WARNING] > Angular plugin is NOT ECMAScript 3 (ES3) compatible.
export class AppComponent {
} ```
+To track uncaught exceptions, set up ApplicationinsightsAngularpluginErrorService in `app.module.ts`:
+
+```js
+// ErrorHandler and NgModule come from Angular core and are needed for this snippet to compile.
+import { ErrorHandler, NgModule } from '@angular/core';
+import { ApplicationinsightsAngularpluginErrorService } from '@microsoft/applicationinsights-angularplugin-js';
+
+@NgModule({
+ ...
+ providers: [
+ {
+ provide: ErrorHandler,
+ useClass: ApplicationinsightsAngularpluginErrorService
+ }
+ ]
+ ...
+})
+export class AppModule { }
+```
+ ## Next steps - To learn more about the JavaScript SDK, see the [Application Insights JavaScript SDK documentation](javascript.md)
azure-monitor Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/customer-managed-keys.md
The following rules apply:
- The Log Analytics cluster storage accounts generate unique encryption key for every storage account, which is known as the AEK. - The AEK is used to derive DEKs, which are the keys that are used to encrypt each block of data written to disk. - When you configure your key in Key Vault and reference it in the cluster, Azure Storage sends requests to your Azure Key Vault to wrap and unwrap the AEK to perform data encryption and decryption operations.-- Your KEK never leaves your Key Vault and in the case of an HSM key, it never leaves the hardware.
+- Your KEK never leaves your Key Vault.
- Azure Storage uses the managed identity that's associated with the *Cluster* resource to authenticate and access to Azure Key Vault via Azure Active Directory. ### Customer-Managed key provisioning steps
Select the current version of your key in Azure Key Vault to get the key identif
Update KeyVaultProperties in cluster with key identifier details.
+>[!NOTE]
+>Key rotation supports two modes: auto-rotation and explicit key version update. See [Key rotation](#key-rotation) to determine the best approach for you.
+ The operation is asynchronous and can take a while to complete. # [Azure portal](#tab/portal)
The cluster's storage periodically checks your Key Vault to attempt to unwrap th
## Key rotation
-Customer-managed key rotation requires an explicit update to the cluster with the new key version in Azure Key Vault. [Update cluster with Key identifier details](#update-cluster-with-key-identifier-details). If you don't update the new key version in the cluster, the Log Analytics cluster storage will keep using your previous key for encryption. If you disable or delete your old key before updating the new key in the cluster, you will get into [key revocation](#key-revocation) state.
+Key rotation has two modes:
+- Auto-rotation - when you update your cluster with ```"keyVaultProperties"``` but omit the ```"keyVersion"``` property, or set it to ```""```, storage will automatically use the latest key version (see the sketch after this list).
+- Explicit key version update - when you update your cluster and provide a key version in the ```"keyVersion"``` property, any new key version requires an explicit ```"keyVaultProperties"``` update in the cluster, see [Update cluster with Key identifier details](#update-cluster-with-key-identifier-details). If you generate a new key version in Key Vault but don't update it in the cluster, the Log Analytics cluster storage will keep using your previous key. If you disable or delete your old key before updating the new key in the cluster, you will get into a [key revocation](#key-revocation) state.
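+
+As a rough sketch only (the key vault URI and key name are placeholders, and the layout assumes the usual `keyVaultProperties` shape referenced in this article), auto-rotation is selected by leaving `keyVersion` empty:
+
+```json
+"keyVaultProperties": {
+  "keyVaultUri": "https://my-key-vault.vault.azure.net",
+  "keyName": "my-kek",
+  "keyVersion": ""
+}
+```
+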
All your data remains accessible after the key rotation operation, since data is always encrypted with the Account Encryption Key (AEK), while the AEK is now encrypted with your new Key Encryption Key (KEK) version in Key Vault.
azure-monitor Manage Cost Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/manage-cost-storage.md
To get you started, here are the recommended settings for the alert querying the
- Target: Select your Log Analytics resource - Criteria: - Signal name: Custom log search
- - Search query: `_LogOperation | where Operation == "Data Collection Status" | where Detail contains "OverQuota"`
+ - Search query: `_LogOperation | where Operation == "Data collection Status" | where Detail contains "OverQuota"`
- Based on: Number of results - Condition: Greater than - Threshold: 0
azure-netapp-files Azure Netapp Files Faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-faqs.md
na ms.devlang: na Previously updated : 04/06/2021 Last updated : 04/12/2021 # FAQs About Azure NetApp Files
This article answers frequently asked questions (FAQs) about Azure NetApp Files.
## Networking FAQs
-### Does the NFS data path go over the Internet?
+### Does the data path for NFS or SMB go over the Internet?
-No. The NFS data path does not go over the Internet. Azure NetApp Files is an Azure native service that is deployed into the Azure Virtual Network (VNet) where the service is available. Azure NetApp Files uses a delegated subnet and provisions a network interface directly on the VNet.
+No. The data path for NFS or SMB does not go over the Internet. Azure NetApp Files is an Azure native service that is deployed into the Azure Virtual Network (VNet) where the service is available. Azure NetApp Files uses a delegated subnet and provisions a network interface directly on the VNet.
See [Guidelines for Azure NetApp Files network planning](./azure-netapp-files-network-topologies.md) for details.
azure-resource-manager Microsoft Solutions Armapicontrol https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/managed-applications/microsoft-solutions-armapicontrol.md
The control's output is not displayed to the user. Instead, the result of the op
## Remarks - The `request.method` property specifies the HTTP method. Only `GET` or `POST` are allowed.-- The `request.path` property specifies relative path of the URL. It can be a static path or can be constructed dynamically by referring output values of the other controls.
+- The `request.path` property specifies a URL that must be a relative path to an ARM endpoint. It can be a static path or can be constructed dynamically by referring output values of the other controls.
+
+ For example, an ARM call into `Microsoft.Network/expressRouteCircuits` resource provider:
+
+ ```json
+ "path": "<subid>/resourceGroup/<resourceGroupName>/providers/Microsoft.Network/expressRouteCircuits/<routecircuitName>/?api-version=2020-05-01"
+ ```
+ - The `request.body` property is optional. Use it to specify a JSON body that is sent with the request. The body can be static content or constructed dynamically by referring to output values from other controls. ## Example
For an example of using the ArmApiControl to check the availability of a resourc
## Next steps
-* For an introduction to creating UI definitions, see [Getting started with CreateUiDefinition](create-uidefinition-overview.md).
-* For a description of common properties in UI elements, see [CreateUiDefinition elements](create-uidefinition-elements.md).
+- For an introduction to creating UI definitions, see [Getting started with CreateUiDefinition](create-uidefinition-overview.md).
+- For a description of common properties in UI elements, see [CreateUiDefinition elements](create-uidefinition-elements.md).
azure-resource-manager Test Createuidefinition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/managed-applications/test-createuidefinition.md
After [creating the createUiDefinition.json file](create-uidefinition-overview.m
## Prerequisites
-* A **createUiDefinition.json** file. If you don't have this file, copy the [sample file](https://github.com/Azure/azure-quickstart-templates/blob/master/100-marketplace-sample/createUiDefinition.json).
+* A **createUiDefinition.json** file. If you don't have this file, copy the [sample file](https://github.com/Azure/azure-quickstart-templates/blob/master/demos/100-marketplace-sample/createUiDefinition.json).
* An Azure subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
Now that you've verified your portal interface is working as expected, it's time
## Next steps
-After validating your portal interface, learn about making your [Azure managed application available in the Marketplace](../../marketplace/create-new-azure-apps-offer.md).
+After validating your portal interface, learn about making your [Azure managed application available in the Marketplace](../../marketplace/create-new-azure-apps-offer.md).
azure-resource-manager Deployment Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/deployment-models.md
Title: Resource Manager and classic deployment description: Describes the differences between the Resource Manager deployment model and the classic (or Service Management) deployment model. Previously updated : 02/06/2020 Last updated : 04/12/2021 # Azure Resource Manager vs. classic deployment: Understand deployment models and the state of your resources
Last updated 02/06/2020
In this article, you learn about Azure Resource Manager and classic deployment models. The Resource Manager and classic deployment models represent two different ways of deploying and managing your Azure solutions. You work with them through two different API sets, and the deployed resources can contain important differences. The two models aren't compatible with each other. This article describes those differences.
-To simplify the deployment and management of resources, Microsoft recommends that you use Resource Manager for all new resources. If possible, Microsoft recommends that you redeploy existing resources through Resource Manager.
+To simplify the deployment and management of resources, Microsoft recommends that you use Resource Manager for all new resources. If possible, Microsoft recommends that you redeploy existing resources through Resource Manager. If you've used Cloud Services, you can migrate your solution to [Cloud Services (extended support)](../../cloud-services-extended-support/overview.md).
If you're new to Resource Manager, you may want to first review the terminology defined in the [Azure Resource Manager overview](overview.md).
When Resource Manager was added, all resources were retroactively added to defau
There are three scenarios to be aware of:
-1. Cloud Services doesn't support Resource Manager deployment model.
+1. [Cloud Services (classic)](../../cloud-services/cloud-services-choose-me.md) doesn't support the Resource Manager deployment model. [Cloud Services (extended support)](../../cloud-services-extended-support/overview.md) supports the Resource Manager deployment model.
2. Virtual machines, storage accounts, and virtual networks support both Resource Manager and classic deployment models. 3. All other Azure services support Resource Manager.
Here are the components and their relationships for classic deployment:
The classic solution for hosting a virtual machine includes:
-* A required cloud service that acts as a container for hosting virtual machines (compute). Virtual machines are automatically provided with a network interface card and an IP address assigned by Azure. Additionally, the cloud service contains an external load balancer instance, a public IP address, and default endpoints to allow remote desktop and remote PowerShell traffic for Windows-based virtual machines and Secure Shell (SSH) traffic for Linux-based virtual machines.
+* Cloud Services (classic) acts as a container for hosting virtual machines (compute). Virtual machines are automatically provided with a network interface card and an IP address assigned by Azure. Additionally, the cloud service contains an external load balancer instance, a public IP address, and default endpoints to allow remote desktop and remote PowerShell traffic for Windows-based virtual machines and Secure Shell (SSH) traffic for Linux-based virtual machines.
* A required storage account that stores the virtual hard disks for a virtual machine, including the operating system, temporary, and additional data disks (storage). * An optional virtual network that acts as an additional container, in which you can create a subnetted structure and choose the subnet on which the virtual machine is located (network).
azure-resource-manager Bicep Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/bicep-install.md
az bicep upgrade
To install a specific version: ```bash
-az bicep install --version v0.3.126
+az bicep install --version v0.3.255
``` > [!IMPORTANT]
bicep --help
```sh # Add the tap for bicep
-brew tap azure/bicep https://github.com/azure/bicep
+brew tap azure/bicep
# Install the tool
-brew install azure/bicep/bicep
+brew install bicep
``` ##### macOS manual install
azure-resource-manager Bicep Tutorial Create First Bicep https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/bicep-tutorial-create-first-bicep.md
Title: Tutorial - Create & deploy Azure Resource Manager Bicep files description: Create your first Bicep file for deploying Azure resources. In the tutorial, you learn about the Bicep file syntax and how to deploy a storage account. Previously updated : 03/17/2021 Last updated : 04/12/2021
Okay, you're ready to start learning about Bicep.
```bicep resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' = {
- name: '{provide-unique-name}'
+ name: '{provide-unique-name}' // must be globally unique
location: 'eastus' sku: { name: 'Standard_LRS'
Okay, you're ready to start learning about Bicep.
If you decide to change the API version for a resource, make sure you evaluate the properties for that version and adjust your Bicep file appropriately.
+ For more information, see [Bicep structure](./bicep-file.md).
+
+ There is a comment for the name property. Use `//` for single-line comments or `/* ... */` for multi-line comments.
+ 1. Replace `{provide-unique-name}` including the curly braces `{}` with a unique storage account name. > [!IMPORTANT]
azure-resource-manager Create Visual Studio Deployment Project https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/create-visual-studio-deployment-project.md
Title: Create & deploy Visual Studio resource group projects description: Use Visual Studio to create an Azure resource group project and deploy the resources to Azure. Previously updated : 10/16/2019 Last updated : 04/12/2021 # Creating and deploying Azure resource groups through Visual Studio
In this section, you create an Azure Resource Group project with a **Web app** t
| | | | Deploy-AzureResourceGroup.ps1 |A PowerShell script that runs PowerShell commands to deploy to Azure Resource Manager. Visual Studio uses this PowerShell script to deploy your template. | | WebSite.json |The Resource Manager template that defines the infrastructure you want deploy to Azure, and the parameters you can provide during deployment. It also defines the dependencies between the resources so Resource Manager deploys the resources in the correct order. |
- | WebSite.parameters.json |A parameters file that has values needed by the template. You pass in parameter values to customize each deployment. |
+ | WebSite.parameters.json |A parameters file that has values needed by the template. You pass in parameter values to customize each deployment. Notice that **Build Action** is set to **Content**. If you add more parameter files, make sure the build action is set to **Content**. |
- All resource group deployment projects have these basic files. Other projects may have additional files to support other functionality.
+ All resource group deployment projects have these basic files. Other projects may have more files to support other functionality.
## Customize Resource Manager template
It should look like:
"packageUri": "[concat(parameters('_artifactsLocation'), parameters('ExampleAppPackageFolder'), '/', parameters('ExampleAppPackageFileName'), parameters('_artifactsLocationSasToken'))]", ```
-Notice in the preceding example there is no `'/',` between **parameters('_artifactsLocation')** and **parameters('ExampleAppPackageFolder')**.
+Notice in the preceding example there's no `'/',` between **parameters('_artifactsLocation')** and **parameters('ExampleAppPackageFolder')**.
Rebuild the project. Building the project makes sure the files you need to deploy are added to the staging folder.
azure-resource-manager Deployment Script Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deployment-script-template.md
Property value details:
- `kind`: Specify the type of script. Currently, Azure PowerShell and Azure CLI scripts are supported. The values are **AzurePowerShell** and **AzureCLI**. - `forceUpdateTag`: Changing this value between template deployments forces the deployment script to re-execute. If you use the `newGuid()` or the `utcNow()` functions, both functions can only be used in the default value for a parameter. To learn more, see [Run script more than once](#run-script-more-than-once). - `containerSettings`: Specify the settings to customize Azure Container Instance. Deployment script requires a new Azure Container Instance. You can't specify an existing Azure Container Instance. However, you can customize the container group name by using `containerGroupName`. If not specified, the group name is automatically generated.-- `storageAccountSettings`: Specify the settings to use an existing storage account. If `containerGroupName` is not specified, a storage account is automatically created. See [Use an existing storage account](#use-existing-storage-account).
+- `storageAccountSettings`: Specify the settings to use an existing storage account. If `storageAccountName` is not specified, a storage account is automatically created. See [Use an existing storage account](#use-existing-storage-account).
- `azPowerShellVersion`/`azCliVersion`: Specify the module version to be used. See a list of [supported Azure PowerShell versions](https://mcr.microsoft.com/v2/azuredeploymentscripts-powershell/tags/list). See a list of [supported Azure CLI versions](https://mcr.microsoft.com/v2/azure-cli/tags/list). >[!IMPORTANT]
The following template shows how to pass values between two `deploymentScripts`
In the first resource, you define a variable called `$DeploymentScriptOutputs`, and use it to store the output values. To access the output value from another resource within the template, use: ```json
-reference('<ResourceName>').output.text
+reference('<ResourceName>').outputs.text
``` ## Work with outputs from CLI script
In this article, you learned how to use deployment scripts. To walk through a de
> [Tutorial: Use deployment scripts in Azure Resource Manager templates](./template-tutorial-deployment-script.md) > [!div class="nextstepaction"]
-> [Learn module: Extend ARM templates by using deployment scripts](/learn/modules/extend-resource-manager-template-deployment-scripts/)
+> [Learn module: Extend ARM templates by using deployment scripts](/learn/modules/extend-resource-manager-template-deployment-scripts/)
azure-resource-manager Parameter Files https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/parameter-files.md
Title: Create parameter file description: Create parameter file for passing in values during deployment of an Azure Resource Manager template Previously updated : 09/01/2020 Last updated : 04/12/2021 # Create Resource Manager parameter file
For more information, see [Deploy resources with ARM templates and Azure PowerSh
> [!NOTE] > It's not possible to use a parameter file with the custom template blade in the portal.
+If you're using the [Azure Resource Group project in Visual Studio](create-visual-studio-deployment-project.md), make sure the parameter file has its **Build Action** set to **Content**.
+ ## File name The general convention for naming the parameter file is to add **.parameters** to the template name. For example, if your template is named **azuredeploy.json**, your parameter file is named **azuredeploy.parameters.json**. This naming convention helps you see the connection between the template and the parameters.
azure-resource-manager Quickstart Create Bicep Use Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/quickstart-create-bicep-use-visual-studio-code.md
Title: Create Bicep files - Visual Studio Code description: Use Visual Studio Code and the Bicep extension to Bicep files for deploy Azure resources Previously updated : 03/26/2021 Last updated : 04/12/2021
The resource declaration has four components:
- **resource type** (Microsoft.Storage/storageAccounts@2019-06-01): It is composed of the resource provider (Microsoft.Storage), resource type (storageAccounts), and apiVersion (2019-06-01). Each resource provider publishes its own API versions, so this value is specific to the type. You can find more types and apiVersions for various Azure resources from [ARM template reference](/azure/templates/). - **properties** (everything inside = {...}): Specify the properties for the resource type. Every resource has a `name` property. Most resources also have a `location` property, which sets the region where the resource is deployed. The other properties vary by resource type and API version.
+For more information, see [Bicep structure](./bicep-file.md).
+
+There is a comment for the name property. Use `//` for single-line comments or `/* ... */` for multi-line comments.
+ ## Completion and validation One of the most powerful capabilities of the extension is its integration with Azure schemas. Azure schemas provide the extension with validation and resource-aware completion capabilities. Let's modify the storage account to see validation and completion in action.
azure-resource-manager Template Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-functions-resource.md
The possible uses of list* are shown in the following table.
| Microsoft.DataFactory/datafactories/gateways | listauthkeys | | Microsoft.DataFactory/factories/integrationruntimes | [listauthkeys](/rest/api/datafactory/integrationruntimes/listauthkeys) | | Microsoft.DataLakeAnalytics/accounts/storageAccounts/Containers | [listSasTokens](/rest/api/datalakeanalytics/storageaccounts/listsastokens) |
-| Microsoft.DataShare/accounts/shares | [listSynchronizations](/rest/api/datashare/shares/listsynchronizations) |
-| Microsoft.DataShare/accounts/shareSubscriptions | [listSourceShareSynchronizationSettings](/rest/api/datashare/sharesubscriptions/listsourcesharesynchronizationsettings) |
-| Microsoft.DataShare/accounts/shareSubscriptions | [listSynchronizationDetails](/rest/api/datashare/sharesubscriptions/listsynchronizationdetails) |
-| Microsoft.DataShare/accounts/shareSubscriptions | [listSynchronizations](/rest/api/datashare/sharesubscriptions/listsynchronizations) |
+| Microsoft.DataShare/accounts/shares | [listSynchronizations](/rest/api/datashare/2020-09-01/shares/listsynchronizations) |
+| Microsoft.DataShare/accounts/shareSubscriptions | [listSourceShareSynchronizationSettings](/rest/api/datashare/2020-09-01/sharesubscriptions/listsourcesharesynchronizationsettings) |
+| Microsoft.DataShare/accounts/shareSubscriptions | [listSynchronizationDetails](/rest/api/datashare/2020-09-01/sharesubscriptions/listsynchronizationdetails) |
+| Microsoft.DataShare/accounts/shareSubscriptions | [listSynchronizations](/rest/api/datashare/2020-09-01/sharesubscriptions/listsynchronizations) |
| Microsoft.Devices/iotHubs | [listkeys](/rest/api/iothub/iothubresource/listkeys) | | Microsoft.Devices/iotHubs/iotHubKeys | [listkeys](/rest/api/iothub/iothubresource/getkeysforkeyname) | | Microsoft.Devices/provisioningServices/keys | [listkeys](/rest/api/iot-dps/iotdpsresource/listkeysforkeyname) |
The possible uses of list* are shown in the following table.
| Microsoft.DevTestLab/labs/virtualMachines | [ListApplicableSchedules](/rest/api/dtl/virtualmachines/listapplicableschedules) | | Microsoft.DocumentDB/databaseAccounts | [listConnectionStrings](/rest/api/cosmos-db-resource-provider/2021-03-01-preview/databaseaccounts/listconnectionstrings) | | Microsoft.DocumentDB/databaseAccounts | [listKeys](/rest/api/cosmos-db-resource-provider/2021-03-01-preview/databaseaccounts/listkeys) |
-| Microsoft.DocumentDB/databaseAccounts/notebookWorkspaces | [listConnectionInfo](/rest/api/cosmos-db-resource-provider/2021-03-01-preview/notebookworkspaces/listconnectioninfo) |
+| Microsoft.DocumentDB/databaseAccounts/notebookWorkspaces | [listConnectionInfo](/rest/api/cosmos-db-resource-provider/2021-03-15/notebookworkspaces/listconnectioninfo) |
| Microsoft.DomainRegistration | [listDomainRecommendations](/rest/api/appservice/domains/listrecommendations) | | Microsoft.DomainRegistration/topLevelDomains | [listAgreements](/rest/api/appservice/topleveldomains/listagreements) | | Microsoft.EventGrid/domains | [listKeys](/rest/api/eventgrid/version2020-06-01/domains/listsharedaccesskeys) |
azure-resource-manager Template User Defined Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-user-defined-functions.md
Title: User-defined functions in templates description: Describes how to define and use user-defined functions in an Azure Resource Manager template (ARM template). Previously updated : 02/11/2021 Last updated : 04/12/2021 # User-defined functions in ARM template
When defining a user function, there are some restrictions:
* The function can only use parameters that are defined in the function. When you use the [parameters](template-functions-deployment.md#parameters) function within a user-defined function, you're restricted to the parameters for that function. * The function can't call other user-defined functions. * The function can't use the [reference](template-functions-resource.md#reference) function or any of the [list](template-functions-resource.md#list) functions.
+* The function can't use the [dateTimeAdd](template-functions-date.md#datetimeadd) function.
* Parameters for the function can't have default values. ## Next steps
azure-sql High Availability Sla https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/high-availability-sla.md
A failover can be initiated using PowerShell, REST API, or Azure CLI:
|Deployment type|PowerShell|REST API| Azure CLI| |:|:|:|:| |Database|[Invoke-AzSqlDatabaseFailover](/powershell/module/az.sql/invoke-azsqldatabasefailover)|[Database failover](/rest/api/sql/databases/failover)|[az rest](/cli/azure/reference-index#az-rest) may be used to invoke a REST API call from Azure CLI|
-|Elastic pool|[Invoke-AzSqlElasticPoolFailover](/powershell/module/az.sql/invoke-azsqlelasticpoolfailover)|[Elastic pool failover](/rest/api/sql/elasticpools/failover)|[az rest](/cli/azure/reference-index#az-rest) may be used to invoke a REST API call from Azure CLI|
+|Elastic pool|[Invoke-AzSqlElasticPoolFailover](/powershell/module/az.sql/invoke-azsqlelasticpoolfailover)|[Elastic pool failover](/javascript/api/@azure/arm-sql/elasticpools#failover_string__string__string__msRest_RequestOptionsBase)|[az rest](/cli/azure/reference-index#az-rest) may be used to invoke a REST API call from Azure CLI|
|Managed Instance|[Invoke-AzSqlInstanceFailover](/powershell/module/az.sql/Invoke-AzSqlInstanceFailover/)|[Managed Instances - Failover](/rest/api/sql/managed%20instances%20-%20failover/failover)|[az sql mi failover](/cli/azure/sql/mi/#az-sql-mi-failover)| > [!IMPORTANT]
azure-sql Performance Guidelines Best Practices Checklist https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist.md
There is typically a trade-off between optimizing for costs and optimizing for p
The following is a quick checklist of VM size best practices for running your SQL Server on Azure VM: -- Use VM sizes with 4 or more vCPU like the [Standard_M8-4ms](/../../virtual-machines/m-series), the [E4ds_v4](../../../virtual-machines/edv4-edsv4-series.md#edv4-series), or the [DS12_v2](../../../virtual-machines/dv2-dsv2-series-memory.md#dsv2-series-11-15) or higher.
+- Use VM sizes with 4 or more vCPU like the [Standard_M8-4ms](../../../virtual-machines/m-series.md), the [E4ds_v4](../../../virtual-machines/edv4-edsv4-series.md#edv4-series), or the [DS12_v2](../../../virtual-machines/dv2-dsv2-series-memory.md#dsv2-series-11-15) or higher.
- Use [memory optimized](../../../virtual-machines/sizes-memory.md) virtual machine sizes for the best performance of SQL Server workloads. - The [DSv2 11-15](../../../virtual-machines/dv2-dsv2-series-memory.md), [Edsv4](../../../virtual-machines/edv4-edsv4-series.md) series, the [M-](../../../virtual-machines/m-series.md), and the [Mv2-](../../../virtual-machines/mv2-series.md) series offer the optimal memory-to-vCore ratio required for OLTP workloads. Both M series VMs offer the highest memory-to-vCore ratio required for mission critical workloads and are also ideal for data warehouse workloads. - Consider a higher memory-to-vCore ratio for mission critical and data warehouse workloads.
azure-vmware Configure Alerts For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/configure-alerts-for-azure-vmware-solution.md
Last updated 04/02/2021
# Configure Azure Alerts in Azure VMware Solution
-In this article, you'll learn how to configure [Azure Action Groups](/azure/azure-monitor/alerts/action-groups) in [Microsoft Azure Alerts](/azure/azure-monitor/alerts/alerts-overvie) to receive notifications of triggered events that you define. You'll also learn about using [Azure Monitor Metrics](/azure/azure-monitor/essentials/data-platform-metrics) to gain deeper insights into your Azure VMware Solution private cloud.
+In this article, you'll learn how to configure [Azure Action Groups](/azure/azure-monitor/alerts/action-groups) in [Microsoft Azure Alerts](/azure/azure-monitor/alerts/alerts-overview) to receive notifications of triggered events that you define. You'll also learn about using [Azure Monitor Metrics](/azure/azure-monitor/essentials/data-platform-metrics) to gain deeper insights into your Azure VMware Solution private cloud.
## Supported metrics and activities
cdn Cdn Map Content To Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-map-content-to-custom-domain.md
Title: 'Tutorial: Add a custom domain to your endpoint'
description: Use this tutorial to add a custom domain to an Azure Content Delivery Network endpoint so that your domain name is visible in your URL. -+ Previously updated : 02/04/2020- Last updated : 04/12/2021+ #Customer intent: As a website owner, I want to add a custom domain to my CDN endpoint so that my users can use my custom domain to access my content.
After you've completed the registration of your custom domain, verify that the c
If you no longer want to associate your endpoint with a custom domain, remove the custom domain by doing the following steps:
-1. In your CDN profile, select the endpoint with the custom domain that you want to remove.
+1. Go to your DNS provider, delete the CNAME record for the custom domain, or update the CNAME record for the custom domain to a non-Azure CDN endpoint.
-2. From the **Endpoint** page, under Custom domains, right-click the custom domain that you want to remove, then select **Delete** from the context menu. Select **Yes**.
+ > [!Important]
+ > To prevent dangling DNS entries and the security risks they create, starting from April 9th 2021, Azure CDN requires removal of the CNAME records to Azure CDN endpoints before the resources can be deleted. Resources include Azure CDN custom domains, Azure CDN profiles/endpoints, or Azure resource groups that have Azure CDN custom domain(s) enabled.
+
+2. In your CDN profile, select the endpoint with the custom domain that you want to remove.
+
+3. From the **Endpoint** page, under Custom domains, right-click the custom domain that you want to remove, then select **Delete** from the context menu. Select **Yes**.
The custom domain is disassociated from your endpoint.
If you no longer want to associate your endpoint with a custom domain, remove th
If you no longer want to associate your endpoint with a custom domain, remove the custom domain by doing the following steps:
-1. Use [Remove-AzCdnCustomDomain](/powershell/module/az.cdn/remove-azcdncustomdomain) to remove the custom domain from the endpoint:
+1. Go to your DNS provider, delete the CNAME record for the custom domain, or update the CNAME record for the custom domain to a non-Azure CDN endpoint.
+
+ > [!Important]
+ > To prevent dangling DNS entries and the security risks they create, starting from April 9th 2021, Azure CDN requires removal of the CNAME records to Azure CDN endpoints before the resources can be deleted. Resources include Azure CDN custom domains, Azure CDN profiles/endpoints, or Azure resource groups that have Azure CDN custom domain(s) enabled.
+
+2. Use [Remove-AzCdnCustomDomain](/powershell/module/az.cdn/remove-azcdncustomdomain) to remove the custom domain from the endpoint:
* Replace **myendpoint8675** with your CDN endpoint name. * Replace **www.contoso.com** with your custom domain name. * Replace **myCDN** with your CDN profile name. * Replace **myResourceGroupCDN** with your resource group name. -
-```azurepowershell-interactive
- $parameters = @{
- CustomDomainName = 'www.contoso.com'
- EndPointName = 'myendpoint8675'
- ProfileName = 'myCDN'
- ResourceGroupName = 'myResourceGroupCDN'
- }
- Remove-AzCdnCustomDomain @parameters
-```
-
+ ```azurepowershell-interactive
+ $parameters = @{
+ CustomDomainName = 'www.contoso.com'
+ EndPointName = 'myendpoint8675'
+ ProfileName = 'myCDN'
+ ResourceGroupName = 'myResourceGroupCDN'
+ }
+ Remove-AzCdnCustomDomain @parameters
+ ```
+ ## Next steps In this tutorial, you learned how to:
certification How To Indirectly Connected Devices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/certification/how-to-indirectly-connected-devices.md
# Device bundles and indirectly connected devices
-To support devices that interact with Azure through a device, SaaS or PaaS offerings, our submission portal (https://www.certify.azure.com), and device catalog (https://devicecatalog.azure.com) enable concepts of bundling and dependencies to promote and enable these device combinations access to our Azure Certified Device program.
+To support devices that interact with Azure through another device or through SaaS or PaaS offerings, our submission portal (https://certify.azure.com/) and device catalog (https://devicecatalog.azure.com) use the concepts of bundling and dependencies to give these device combinations access to our Azure Certified Device program.
Depending on your product line and services offered, your situation may require a combination of these steps:
certification Program Requirements Azure Certified Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/certification/program-requirements-azure-certified-device.md
Promise of Azure Certified Device certification are:
| **Applies To** | Leaf device/Edge device | | **OS** | Agnostic | | **Validation Type** | Automated |
-| **Validation** | Device must send any telemetry schemas to IoT Hub. Microsoft provides the [portal workflow](https://certify.azure.come) to execute the tests. Device to cloud (required): **1.** Validates that the device can send message to AICS managed IoT Hub **2.** User must specify the number and frequency of messages. **3.** AICS validates the telemetry is received by the Hub instance |
+| **Validation** | Device must send any telemetry schemas to IoT Hub. Microsoft provides the [portal workflow](https://certify.azure.com/) to execute the tests. Device to cloud (required): **1.** Validates that the device can send message to AICS managed IoT Hub **2.** User must specify the number and frequency of messages. **3.** AICS validates the telemetry is received by the Hub instance |
| **Resources** | [Certification steps](./overview.md) (has all the additional resources) | **[Required] DPS: The purpose of test is to check the device implements and supports IoT Hub Device Provisioning Service with one of the three attestation methods**
Promise of Azure Certified Device certification are:
| **OS** | Agnostic | | **Validation Type** | Automated | | **Validation** | Device must send any telemetry schemas to IoT Hub. Microsoft provides the [portal workflow](https://certify.azure.com) to execute the tests. Device twin property (if implemented) **1.** AICS validates the read/write-able property in device twin JSON **2.** User has to specify the JSON payload to be changed **3.** AICS validates the specified desired properties sent from IoT Hub and ACK message received by the device |
-| **Resources** | **a)** [Certification steps](./overview.md) (has all the additional resources) **b)** [Use device twins with IoT Hub](../iot-hub/iot-hub-devguide-device-twins.md) |
+| **Resources** | **a)** [Certification steps](./overview.md) (has all the additional resources) **b)** [Use device twins with IoT Hub](../iot-hub/iot-hub-devguide-device-twins.md) |
certification Program Requirements Pnp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/certification/program-requirements-pnp.md
Promise of IoT Plug and Play certification are:
| **OS** | Agnostic | | **Validation Type** | Automated | | **Validation** | Device must implement easy transfer of DPS ID Scope ownership without needing to recompile the embedded code. Microsoft provides the [portal workflow](https://certify.azure.com) to execute the tests to validate that the device supports DPS **1.** User must select one of the attestation methods (X.509, TPM and SAS key) **2.** Depending on the attestation method, user needs to take corresponding action such as **a)** Upload X.509 cert to AICS managed DPS scope **b)** Implement SAS key or endorsement key into the device |
-| **Resources** | **a)** [Device provisioning service overview](../iot-dps/about-iot-dps.md), **b)** [Sample config file for DPS ID Scope transfer](https://github.com/Azure/azure-iot-sdk-c/tree/public-preview-pnp/digitaltwin_client/samples/digitaltwin_sample_ll_device/sample_config) |
+| **Resources** | **a)** [Device provisioning service overview](../iot-dps/about-iot-dps.md), **b)** [Sample config file for DPS ID Scope transfer](https://github.com/Azure/azure-iot-sdk-c/tree/public-preview-pnp/serializer/samples/devicetwin_simplesample) |
**[Required] DTDL v2: The purpose of test to ensure defined device models and interfaces are compliant with the Digital Twins Definition Language v2.**
Promise of IoT Plug and Play certification are:
| **Applies To** | Any device | | **OS** | Agnostic | | **Validation Type** | Automated |
-| **Validation** | [Portal workflow](https://certify.azure.com) validates the device code implements [device info interface](https://repo.azureiotrepository.com/Models/dtmi:azure:DeviceManagement:DeviceInformation;1?api-version=2020-05-01-previewureiot:DeviceManagement:DeviceInformation:1) **1.** Checks the values are emitted by the device code to IoT Hub **2.** Checks the interface is implemented in the DCM (this implementation will change in DTDL v2) **3.** Checks properties are not write-able (read only) **4.** Checks the schema type is string and/or long and not null |
+| **Validation** | [Portal workflow](https://certify.azure.com) validates the device code implements device info interface **1.** Checks the values are emitted by the device code to IoT Hub **2.** Checks the interface is implemented in the DCM (this implementation will change in DTDL v2) **3.** Checks properties are not write-able (read only) **4.** Checks the schema type is string and/or long and not null |
| **Resources** | [Microsoft defined interface](../iot-pnp/overview-iot-plug-and-play-preview-updates.md) | | **Azure Recommended** | N/A |
Promise of IoT Plug and Play certification are:
| **OS** | Agnostic | | **Validation Type** | Automated | | **Validation** | Device must send any telemetry schemas to IoT Hub. Microsoft provides the [portal workflow](https://certify.azure.com) to execute the tests. Device twin property (if implemented): **1.** AICS validates the read/write-able property in device twin JSON **2.** User has to specify the JSON payload to be changed **3.** AICS validates the specified desired properties sent from IoT Hub and ACK message received by the device |
-| **Resources** | **1.** [Certification steps](./overview.md) (has all the additional resources), **2.** [Use device twins with IoT Hub](../iot-hub/iot-hub-devguide-device-twins.md) |
+| **Resources** | **1.** [Certification steps](./overview.md) (has all the additional resources), **2.** [Use device twins with IoT Hub](../iot-hub/iot-hub-devguide-device-twins.md) |
cognitive-services Best Practices Multivariate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/concepts/best-practices-multivariate.md
+
+ Title: Best practices for using the Anomaly Detector Multivariate API
+
+description: Best practices for using the Anomaly Detector Multivariate API's to apply anomaly detection to your time series data.
++++++ Last updated : 04/01/2021+
+keywords: anomaly detection, machine learning, algorithms
++
+# Multivariate time series Anomaly Detector best practices
+
+This article provides guidance on recommended practices to follow when using the multivariate Anomaly Detector APIs.
+
+## How to prepare data for training
+
+To use the Anomaly Detector multivariate APIs, we need to train our own model before using detection. Data used for training is a batch of time series; each time series should be in CSV format with two columns, timestamp and value. All of the time series should be zipped into one zip file and uploaded to Azure Blob storage (a packaging sketch is shown below). By default the file name will be used to represent the variable for the time series. Alternatively, an extra meta.json file can be included in the zip file if you wish the name of the variable to be different from the .csv file name. Once we generate a [blob SAS (shared access signature) URL](../../../storage/common/storage-sas-overview.md), we can use it for training.
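+
+The following is a minimal, hypothetical packaging sketch (the variable names, column header, and values are illustrative only, not an official sample), showing one CSV per variable zipped together:
+
+```python
+# Hypothetical sketch: write one CSV per variable and zip them for training.
+# The "timestamp,value" header and file names are assumptions for illustration.
+import zipfile
+
+series = {
+    "vibration": [("2021-01-01T00:00:00Z", 0.12), ("2021-01-01T00:01:00Z", 0.15)],
+    "rotation": [("2021-01-01T00:00:00Z", 30.1), ("2021-01-01T00:01:00Z", 30.4)],
+}
+
+with zipfile.ZipFile("training-data.zip", "w") as zf:
+    for name, points in series.items():
+        rows = ["timestamp,value"] + [f"{ts},{val}" for ts, val in points]
+        zf.writestr(f"{name}.csv", "\n".join(rows) + "\n")
+
+# Upload training-data.zip to Azure Blob storage and generate a SAS URL for training.
+```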
+
+## Data quality and quantity
+
+The Anomaly Detector multivariate API uses state-of-the-art deep neural networks to learn normal patterns from historical data and predicts whether future values are anomalies. The quality and quantity of training data is important to train an optimal model. As the model learns normal patterns from historical data, the training data should represent the overall normal state of the system. It is hard for the model to learn these types of patterns if the training data is full of anomalies. Also, the model has millions of parameters and it needs a minimum number of data points to learn an optimal set of parameters. The general rule is that you need to provide at least 15,000 data points per variable to properly train the model. The more data, the better the model.
+
+It is common for many time series to have missing values, which may affect the performance of trained models. The missing ratio of each time series should be kept under a reasonable value. A time series with 90% of its values missing provides little information about the normal patterns of the system. Even worse, the model may consider filled values as normal patterns, which are usually straight segments or constant values. When new data flows in, the data might be detected as anomalies.
+
+A recommended maximum missing value threshold is 20%, but a higher threshold might be acceptable under some circumstances. For example, suppose you have a time series with one-minute granularity and another time series with hourly granularity. Each hour there are 60 data points for the minute-level data and only 1 data point for the hourly data, which means that the missing ratio for the hourly data is 98.33%. However, it is fine to fill the hourly data with that single value if the hourly time series does not typically fluctuate too much.
+
+## Parameters
+
+### Sliding window
+
+Multivariate anomaly detection takes a segment of data points of length `slidingWindow` as input and decides if the next data point is an anomaly. The larger the sample length, the more data will be considered for a decision. You should keep two things in mind when choosing a proper value for `slidingWindow`: properties of input data, and the trade-off between training/inference time and potential performance improvement. `slidingWindow` is an integer between 28 and 2880. You may decide how many data points are used as inputs based on whether your data is periodic, and the sampling rate for your data.
+
+When your data is periodic, you may include 1 - 3 cycles as an input, and when your data is sampled at a high frequency (small granularity) like minute-level or second-level data, you may select more data as an input. Another issue is that longer inputs may cause longer training/inference time, and there is no guarantee that more input points will lead to performance gains. On the other hand, too few data points may make it difficult for the model to converge to an optimal solution. For example, it is hard to detect anomalies when the input data only has two points.
+
+### Align mode
+
+The parameter `alignMode` is used to indicate how you want to align multiple time series on time stamps. This is because many time series have missing values and we need to align them on the same time stamps before further processing. There are two options for this parameter, `inner join` and `outer join`. `inner join` means we will report detection results on timestamps on which **every time series** has a value, while `outer join` means we will report detection results on time stamps for **any time series** that has a value. **The `alignMode` will also affect the input sequence of the model**, so choose a suitable `alignMode` for your scenario because the results might be significantly different.
+
+Here we show an example to explain the different `alignMode` values.
+
+#### Series1
+
+|timestamp | value|
+-| --|
+|`2020-11-01`| 1
+|`2020-11-02`| 2
+|`2020-11-04`| 4
+|`2020-11-05`| 5
+
+#### Series2
+
+timestamp | value
+ | -
+`2020-11-01`| 1
+`2020-11-02`| 2
+`2020-11-03`| 3
+`2020-11-04`| 4
+
+#### Inner join two series
+
+timestamp | Series1 | Series2
+-| - | -
+`2020-11-01`| 1 | 1
+`2020-11-02`| 2 | 2
+`2020-11-04`| 4 | 4
+
+#### Outer join two series
+
+timestamp | series1 | series2
+ | - | -
+`2020-11-01`| 1 | 1
+`2020-11-02`| 2 | 2
+`2020-11-03`| NA | 3
+`2020-11-04`| 4 | 4
+`2020-11-05`| 5 | NA
+
+### Fill not available (NA)
+
+After variables are aligned on timestamp by outer join, there might be some `Not Available` (`NA`) values in some of the variables. You can specify a method to fill these NA values. The options for `fillNAMethod` are `Linear`, `Previous`, `Subsequent`, `Zero`, and `Fixed`.
+
+| Option | Method |
+| - | -|
+| Linear | Fill NA values by linear interpolation |
+| Previous | Propagate last valid value to fill gaps. Example: `[1, 2, nan, 3, nan, 4]` -> `[1, 2, 2, 3, 3, 4]` |
+| Subsequent | Use next valid value to fill gaps. Example: `[1, 2, nan, 3, nan, 4]` -> `[1, 2, 3, 3, 4, 4]` |
+| Zero | Fill NA values with 0. |
+| Fixed | Fill NA values with a specified valid value that should be provided in `paddingValue`. |
+
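+As a rough illustration of how these parameters fit together (the exact request schema should be confirmed against the API reference; the SAS URL, times, and values below are placeholders), a training request body might look like this:
+
+```json
+{
+  "slidingWindow": 200,
+  "alignPolicy": {
+    "alignMode": "Outer",
+    "fillNAMethod": "Linear",
+    "paddingValue": 0
+  },
+  "source": "<SAS URL to your zipped training data>",
+  "startTime": "2021-01-01T00:00:00Z",
+  "endTime": "2021-01-02T12:00:00Z",
+  "displayName": "sample_model"
+}
+```
+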
+## Model analysis
+
+### Training latency
+
+Multivariate Anomaly Detection training can be time-consuming, especially when you have a large quantity of timestamps used for training. Therefore, part of the training process is asynchronous. Typically, users submit a training task through the Train Model API, then get the model status through the `Get Multivariate Model API`. Here we demonstrate how to estimate the remaining time before training completes. In the Get Multivariate Model API response, there is an item named `diagnosticsInfo`. In this item, there is a `modelState` element. To calculate the remaining time, we need to use `epochIds` and `latenciesInSeconds`. An epoch represents one complete cycle through the training data. Every 10 epochs, we will output status information. In total, we will train for 100 epochs; the latency indicates how long an epoch takes. With this information, we know the remaining time left to train the model, as in the sketch below.
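+
+A minimal sketch of that calculation, assuming `model_state` is the `modelState` object described above (the helper name and the fixed total of 100 epochs are illustrative assumptions):
+
+```python
+# Hypothetical helper: estimate remaining training time from modelState.
+def estimate_remaining_seconds(model_state, total_epochs=100):
+    epoch_ids = model_state.get("epochIds", [])            # e.g. [10, 20, 30]
+    latencies = model_state.get("latenciesInSeconds", [])  # seconds per reported epoch
+    if not epoch_ids or not latencies:
+        return None  # no status reported yet
+    avg_latency = sum(latencies) / len(latencies)
+    return (total_epochs - epoch_ids[-1]) * avg_latency
+```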
+
+### Model performance
+
+Multivariate Anomaly Detection is an unsupervised model, so the best way to evaluate it is to check the anomaly results manually. In the Get Multivariate Model response, we provide some basic info to analyze model performance. In the `modelState` element returned by the Get Multivariate Model API, we can use `trainLosses` and `validationLosses` to evaluate whether the model has been trained as expected. In most cases, the two losses will decrease gradually. Another piece of information to analyze model performance against is in `variableStates`. The variable states list is ranked by `filledNARatio` in descending order. The larger the ratio, the worse the performance; usually we need to reduce this `NA ratio` as much as possible. `NA` could be caused by missing values or unaligned variables from a timestamp perspective. A quick-check sketch follows.
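+
+As a rough, hypothetical sketch (the `"variable"` key name inside each variable state is an assumption; `trainLosses` and `filledNARatio` are taken from the description above):
+
+```python
+# Hypothetical quick checks on the Get Multivariate Model response.
+def summarize_model(model_state, variable_states):
+    train_losses = model_state["trainLosses"]
+    decreasing = all(b <= a for a, b in zip(train_losses, train_losses[1:]))
+    print("train losses non-increasing:", decreasing)
+    if variable_states:
+        worst = variable_states[0]  # list is ranked by filledNARatio, descending
+        print("worst variable:", worst.get("variable"), "filledNARatio:", worst.get("filledNARatio"))
+```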
+
+## Next steps
+
+- [Quickstarts](../quickstarts/client-libraries-multivariate.md).
+- [Learn about the underlying algorithms that power Anomaly Detector Multivariate](https://arxiv.org/abs/2009.02040)
cognitive-services Multivariate Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/concepts/multivariate-architecture.md
+
+ Title: Predicative maintenance architecture for using the Anomaly Detector Multivariate API
+
+description: Reference architecture for using the Anomaly Detector Multivariate APIs to apply anomaly detection to your time series data for predictive maintenance.
++++++ Last updated : 04/01/2021+
+keywords: anomaly detection, machine learning, algorithms
++
+# Predictive maintenance solution with Anomaly Detector multivariate
+
+Many different industries need predictive maintenance solutions to reduce risks and gain actionable insights through processing data from their equipment. Predictive maintenance evaluates the condition of equipment by performing online monitoring. The goal is to perform maintenance before the equipment degrades or breaks down.
+
+Monitoring the health status of equipment can be challenging, as each component inside the equipment can generate dozens of signals, for example vibration, orientation, and rotation. This can be even more complex when those signals have an implicit relationship, and need to be monitored and analyzed together. Defining different rules for those signals and correlating them with each other manually can be costly. Anomaly Detector's multivariate feature allows:
+
+* Multiple correlated signals to be monitored together, and the inter-correlations between them are accounted for in the model.
+* In each captured anomaly, the contribution rank of different signals can help with anomaly explanation, and incident root cause analysis.
+* The multivariate anomaly detection model is built in an unsupervised manner. Models can be trained specifically for different types of equipment.
+
+Here, we provide a reference architecture for a predictive maintenance solution based on Anomaly Detector multivariate.
+
+## Reference architecture
+
+[ ![Architectural diagram that starts at sensor data being collected at the edge with a piece of industrial equipment and tracks the processing/analysis pipeline to an end output of an incident alert being generated after Anomaly Detector runs.](../media/multivariate-architecture/multivariate-architecture.png) ](../media/multivariate-architecture/multivariate-architecture.png#lightbox)
+
+In the above architecture, streaming events coming from sensor data will be stored in Azure Data Lake and then processed by a data transforming module to be converted into a time-series format. Meanwhile, the streaming event will trigger real-time detection with the trained model. In general, there will be a module to manage the multivariate model life cycle, like *Bridge Service* in this architecture.
+
+**Model training**: Before using Anomaly Detector multivariate to detect anomalies for a component or piece of equipment, we need to train a model on the specific signals (time series) generated by this entity. The *Bridge Service* will fetch historical data and submit a training job to the Anomaly Detector, and then keep the Model ID in the *Model Meta* storage.
+
+**Model validation**: Training time of a certain model can vary based on the training data volume. The *Bridge Service* could query model status and diagnostic info on a regular basis. Validating model quality could be necessary before putting it online. If there are labels in the scenario, those labels can be used to verify the model quality. Otherwise, the diagnostic info can be used to evaluate the model quality, and you can also perform detection on historical data with the trained model and evaluate the result to backtest the validity of the model.
+
+**Model inference**: Online detection will be performed with the valid model, and the result ID can be stored in the *Inference table*. Both the training process and the inference process are done in an asynchronous manner. In general, a detection task can be completed within seconds. Signals used for detection should be the same ones that have been used for training. For example, if we use vibration, orientation, and rotation for training, in detection the three signals should be included as an input.
+
+**Incident alerting**: The detection results can be queried with result IDs. Each result contains the severity of each anomaly and a contribution rank. The contribution rank can be used to understand why this anomaly happened, and which signal caused this incident. Different thresholds can be set on the severity to generate alerts and notifications to be sent to field engineers to conduct maintenance work.
+
+## Next steps
+
+- [Quickstarts](../quickstarts/client-libraries-multivariate.md).
+- [Best Practices](../concepts/best-practices-multivariate.md): This article is about recommended patterns to use with the multivariate APIs.
cognitive-services Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/concepts/troubleshoot.md
+
+ Title: Troubleshooting the Anomaly Detector Multivariate API
+
+description: Learn how to remediate common error codes when using the Anomaly Detector API
++++++ Last updated : 04/01/2021+
+keywords: anomaly detection, machine learning, algorithms
++
+# Troubleshooting the multivariate API
+
+This article provides guidance on how to troubleshoot and remediate common HTTP error messages when using the multivariate API.
+
+### Multivariate error codes
+
+| Method | HTTP error code | Error message | Action to take |
+|-|--|--||
+| Train a Multivariate Anomaly Detection Model | 400 | `The 'source' field is required in the request.` | The key word "source" has not been specified correctly. The format should be `"{\"source\": \"<SAS URL>\"}"` |
+| Train a Multivariate Anomaly Detection Model | 400 | `The source field must be a valid sas blob url` | The source field must be a valid blob container sas url. |
+| Train a Multivariate Anomaly Detection Model | 400 | `The 'startTime' field is required in the request.` | Add startTime in the request |
+| Train a Multivariate Anomaly Detection Model | 400 | `The 'endTime' field is required in the request.` | Add endTime in the request. |
+| Train a Multivariate Anomaly Detection Model | 400 | `Invalid Timestamp format.` | The timestamp in the CSV file zipped in the source URL is in an invalid format, or "startTime"/"endTime" is in an invalid format. |
+| Train a Multivariate Anomaly Detection Model | 400 | `The displayName length exceeds maximum allowed length 24.` | DisplayName is an optional parameter used to distinguish different models. A valid displayName must be no longer than 24 characters. |
+| Train a Multivariate Anomaly Detection Model | 400 | `The 'slidingWindow' field must be an integer between 28 and 2880.` | Sliding window must be in a valid range. |
+| Train a Multivariate Anomaly Detection Model | 401 | `Unable to download blobs on the Azure Blob storage account.` | The URL does not have the right permissions or the list flag is not set. The customer should re-create the SAS URL and make sure the read and list flags are checked (for example, using Storage Explorer). |
+| Train a Multivariate Anomaly Detection Model | 413 | `Unable to process the dataset. Number of variables exceed the limit (300).` | The data in the blob container exceeds the current limit of 300 variables. The customer has to reduce the number of variables. |
+| Train a Multivariate Anomaly Detection Model | 413 | `Valid Timestamps in the dataset exceeds the limit (1 million points), please change startTime or endTime parameters.` | The maximum number of points that can be used for training is 1 million. Customers can reduce the number of variables or change the startTime or endTime parameters. |
+| Train a Multivariate Anomaly Detection Model | 413 | `Unable to process dataset. Size of dataset exceeds size limit (2GB).` | The data in the blob container exceeds the current limit of 2 GB. The customer has to point to a blob with less data. |
+| Detect Multivariate Anomaly | 404 | `The model does not exist.` | The model ID is invalid. Customers need to train a model before using it. |
+| Detect Multivariate Anomaly | 400 | `The model is not ready yet.` | The model is not ready yet. Customers need to call the Get Multivariate Model API to check the model status. |
+| Detect Multivariate Anomaly | 400 | `The 'source' field is required in the request.` | The key word "source" has not been specified correctly. The format should be `{"source": "<SAS URL>"}` |
+| Detect Multivariate Anomaly | 400 | `The source field must be a valid sas blob url` | The source field must be a valid blob container sas url. |
+| Detect Multivariate Anomaly | 400 | `The 'startTime' field is required in the request.` | Add startTime in the request |
+| Detect Multivariate Anomaly | 400 | `The 'endTime' field is required in the request.` | Add endTime in the request. |
+| Detect Multivariate Anomaly | 400 | `Invalid Timestamp format.` | The timestamp in the CSV file zipped in the source URL is in an invalid format, or "startTime"/"endTime" is in an invalid format. |
+| Detect Multivariate Anomaly | 400 | `The corresponding file of the variable does not exist.` | One variable was used in training, but it cannot be found when the customer uses the corresponding model to do detection. Customers need to add this variable and then resubmit the detection request. |
+| Detect Multivariate Anomaly | 413 | `Unable to process the dataset. Number of variables exceed the limit (300).` | The data in the blob container exceeds the current limit of 300 variables. The customer has to reduce the number of variables. |
+| Detect Multivariate Anomaly | 413 | `The limit timestamps of one detection request is 2880, please change startTime or endTime parameters.` | The maximum number of timestamps that can be detected in one request is 2880. Customers need to change the startTime or endTime and then resubmit the detection request. |
+| Detect Multivariate Anomaly | 413 | `Unable to process dataset. Size of dataset exceeds size limit (2GB).` | The data in the blob container exceeds the current limit of 2 GB. The customer has to point to a blob with less data. |
+| Get Multivariate Model | 404 | `Model with 'id=<input model ID>' not found.` | The ID is not a valid model ID. Use GET models to find all valid model Ids. |
+| Get Multivariate Anomaly Detection Result | 404 | `Result with 'id=<input result ID>' not found.` | The ID is not a valid result ID. Resubmit your detection request. |
+| Delete Multivariate Model | 404 | `Location for model with 'id=<input model ID>' not found.` | The ID is not a valid model ID. Use GET models to find all valid model Ids. |
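+
+As a quick sanity check against the 400-level rows above, the following hedged Python sketch builds a training request body that satisfies the documented constraints (a container SAS URL in `source`, both `startTime` and `endTime`, a `slidingWindow` between 28 and 2880, and a `displayName` no longer than 24 characters) and surfaces the returned error when a call fails. The endpoint, key, and SAS URL are placeholders.
+
+```python
+import requests
+
+# Placeholder values - substitute your own resource endpoint, key, and container SAS URL.
+train_url = ("https://<your-resource-name>.cognitiveservices.azure.com"
+             "/anomalydetector/v1.1-preview/multivariate/models")
+headers = {"Ocp-Apim-Subscription-Key": "<your-key>", "Content-Type": "application/json"}
+
+body = {
+    "source": "https://<account>.blob.core.windows.net/<container>?<sas-token>",
+    "startTime": "2021-01-01T00:00:00Z",
+    "endTime": "2021-01-02T12:00:00Z",
+    "slidingWindow": 200,          # must be an integer between 28 and 2880
+    "displayName": "sample-model"  # must be no longer than 24 characters
+}
+
+resp = requests.post(train_url, headers=headers, json=body)
+if not resp.ok:
+    # Map the HTTP status code and message back to the table above to find the remediation.
+    print(resp.status_code, resp.text)
+```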
cognitive-services Overview Multivariate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/overview-multivariate.md
+
+ Title: What is the Anomaly Detector Multivariate API?
+
+description: Overview of new Anomaly Detector public preview multivariate APIs.
++++++ Last updated : 04/01/2021+
+keywords: anomaly detection, machine learning, algorithms
++
+# Multivariate time series Anomaly Detection (public preview)
+
+The first release of the Azure Cognitive Services Anomaly Detector allowed you to build metrics monitoring solutions using the easy-to-use [univariate time series Anomaly Detector APIs](overview.md). By allowing analysis of time series individually, Anomaly Detector univariate provides simplicity and scalability.
+
+The new **multivariate anomaly detection** APIs further enable developers by easily integrating advanced AI for detecting anomalies from groups of metrics, without the need for machine learning knowledge or labeled data. Dependencies and inter-correlations between up to 300 different signals are now automatically counted as key factors. This new capability helps you to proactively protect your complex systems such as software applications, servers, factory machines, spacecraft, or even your business, from failures.
+
+Imagine 20 sensors from an auto engine generating 20 different signals like vibration, temperature, fuel pressure, etc. The readings of those signals individually may not tell you much about system level issues, but together they can represent the health of the engine. When the interaction of those signals deviates outside the usual range, the multivariate anomaly detection feature can sense the anomaly like a seasoned expert. The underlying AI models are trained and customized using your data such that it understands the unique needs of your business. With the new APIs in Anomaly Detector, developers can now easily integrate the multivariate time series anomaly detection capabilities into predictive maintenance solutions, AIOps monitoring solutions for complex enterprise software, or business intelligence tools.
+
+## When to use **multivariate** versus **univariate**
+
+Use the univariate anomaly detection APIs if your goal is to detect anomalies out of a normal pattern on each individual time series purely based on its own historical data. Examples: you want to detect daily revenue anomalies based on revenue data itself, or you want to detect a CPU spike purely based on CPU data.
+- `POST /anomalydetector/v1.0/timeseries/last/detect`
+- `POST /anomalydetector/v1.0/timeseries/batch/detect`
+- `POST /anomalydetector/v1.0/timeseries/changepoint/detect`
+
+![Time series line graph with a single variable's fluctuating values captured by a blue line with anomalies identified by orange circles](./media/anomaly_detection2.png)
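+
+For reference, a univariate call is a single synchronous request. The hedged Python sketch below posts a short series to the last-point endpoint listed above; the endpoint, key, and sample values are placeholders, and the series is truncated for brevity (the service expects at least 12 points).
+
+```python
+import requests
+
+# Placeholder endpoint and key for an Anomaly Detector resource.
+endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"
+headers = {"Ocp-Apim-Subscription-Key": "<your-key>", "Content-Type": "application/json"}
+
+body = {
+    "granularity": "daily",
+    "series": [
+        {"timestamp": "2021-03-01T00:00:00Z", "value": 32.1},
+        {"timestamp": "2021-03-02T00:00:00Z", "value": 33.8},
+        # ...add the remaining daily points here; a real series needs 12 or more.
+        {"timestamp": "2021-03-13T00:00:00Z", "value": 112.0},
+    ],
+}
+
+resp = requests.post(f"{endpoint}/anomalydetector/v1.0/timeseries/last/detect",
+                     headers=headers, json=body)
+result = resp.json()
+print(result.get("isAnomaly"), result.get("expectedValue"))
+```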
+
+Use the multivariate anomaly detection APIs below if your goal is to detect system-level anomalies from a group of time series data, particularly when any individual time series won't tell you much and you have to look at all signals (a group of time series) holistically to determine a system-level issue. Example: you have an expensive physical asset like an aircraft, equipment on an oil rig, or a satellite. Each of these assets has tens or hundreds of different types of sensors. You would have to look at all those time series signals from those sensors to decide whether there is a system-level issue.
+
+- `POST /anomalydetector/v1.1-preview/multivariate/models`
+- `GET /anomalydetector/v1.1-preview/multivariate/models[?$skip][&$top]`
+- `GET /anomalydetector/v1.1-preview/multivariate/models/{modelId}`
+- `POST /anomalydetector/v1.1-preview/multivariate/models/{modelId}/detect`
+- `GET /anomalydetector/v1.1-preview/multivariate/results/{resultId}`
+- `DELETE /anomalydetector/v1.1-preview/multivariate/models/{modelId}`
+- `GET /anomalydetector/v1.1-preview/multivariate/models/{modelId}/export`
+
+![Multiple time series line graphs for variables of: vibration, temperature, pressure, velocity, rotation speed with anomalies highlighted in orange](./media/multivariate-graph.png)
+
+## Region support
+
+The public preview of Anomaly Detector multivariate is currently available in three regions: West US 2, East US 2, and West Europe.
+
+## Algorithms
+
+- [Multivariate time series Anomaly Detection via Graph Attention Network](https://arxiv.org/abs/2009.02040)
+
+## Join the Anomaly Detector community
+
+- Join the [Anomaly Detector Advisors group on Microsoft Teams](https://aka.ms/AdAdvisorsJoin)
+
+## Next steps
+
+- [Quickstarts](./quickstarts/client-libraries-multivariate.md).
+- [Best Practices](./concepts/best-practices-multivariate.md): This article is about recommended patterns to use with the multivariate APIs.
cognitive-services Client Libraries Multivariate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/quickstarts/client-libraries-multivariate.md
+
+ Title: 'Quickstart: Anomaly detection using the Anomaly Detector client library for multivariate anomaly detection'
+
+description: The Anomaly Detector multivariate offers client libraries to detect abnormalities in your data series either as a batch or on streaming data.
+++
+zone_pivot_groups: anomaly-detector-quickstart-multivariate
++++ Last updated : 04/01/2020+
+keywords: anomaly detection, algorithms
++
+# Quickstart: Use the Anomaly Detector multivariate client library
++++++++++++
cognitive-services Call Read Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/Vision-API-How-to-Topics/call-read-api.md
This guide assumes you have already <a href="https://portal.azure.com/#create/Mi
## Submit data to the service
-The Read API's [Read call](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-preview-3/operations/5d986960601faab4bf452005) takes an image or PDF document as the input and extracts text asynchronously.
+The Read API's [Read call](https://centraluseuap.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) takes an image or PDF document as the input and extracts text asynchronously.
-`https://{endpoint}/vision/v3.2-preview.3/read/analyze[?language][&pages][&readingOrder]`
+`https://{endpoint}/vision/v3.2/read/analyze[?language][&pages][&readingOrder]`
The call returns with a response header field called `Operation-Location`. The `Operation-Location` value is a URL that contains the Operation ID to be used in the next step.

|Response header| Example value |
|:--|:-|
-|Operation-Location | `https://cognitiveservice/vision/v3.1/read/analyzeResults/49a36324-fc4b-4387-aa06-090cfbf0064f` |
+|Operation-Location | `https://cognitiveservice/vision/v3.2/read/analyzeResults/49a36324-fc4b-4387-aa06-090cfbf0064f` |
> [!NOTE] > **Billing**
The call returns with a response header field called `Operation-Location`. The `
### Language specification
-The [Read](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-ga/operations/5d986960601faab4bf452005) call has an optional request parameter for language. Read supports auto language identification and multilingual documents, so only provide a language code if you would like to force the document to be processed as that specific language.
+The [Read](https://centraluseuap.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) call has an optional request parameter for language. Read supports auto language identification and multilingual documents, so only provide a language code if you would like to force the document to be processed as that specific language.
### Natural reading order output (Latin languages only)
-With the [Read 3.2 preview API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-preview-3/operations/5d986960601faab4bf452005), you can specify the order in which the text lines are output with the `readingOrder` query parameter. Use `natural` for a more human-friendly reading order output as shown in the following example. This feature is only supported for Latin languages.
+Specify the order in which the text lines are output with the `readingOrder` query parameter. Use `natural` for a more human-friendly reading order output as shown in the following example. This feature is only supported for Latin languages.
:::image border type="content" source="../Images/ocr-reading-order-example.png" alt-text="OCR Reading order example"::: ### Select page(s) or page ranges for text extraction
-With the [Read 3.2 preview API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-preview-3/operations/5d986960601faab4bf452005), for large multi-page documents, use the `pages` query parameter to specify page numbers or page ranges to extract text from only those pages. The following example shows a document with 10 pages, with text extracted for both cases - all pages (1-10) and selected pages (3-6).
+For large multi-page documents, use the `pages` query parameter to specify page numbers or page ranges to extract text from only those pages. The following example shows a document with 10 pages, with text extracted for both cases - all pages (1-10) and selected pages (3-6).
:::image border type="content" source="../Images/ocr-select-pages.png" alt-text="Selected pages output"::: ## Get results from the service
-The second step is to call [Get Read Results](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-ga/operations/5d9869604be85dee480c8750) operation. This operation takes as input the operation ID that was created by the Read operation.
+The second step is to call [Get Read Results](https://centraluseuap.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d9869604be85dee480c8750) operation. This operation takes as input the operation ID that was created by the Read operation.
-`https://{endpoint}/vision/v3.2-preview.3/read/analyzeResults/{operationId}`
+`https://{endpoint}/vision/v3.2/read/analyzeResults/{operationId}`
It returns a JSON response that contains a **status** field with the following possible values.
It returns a JSON response that contains a **status** field with the following p
You call this operation iteratively until it returns with the **succeeded** value. Use an interval of 1 to 2 seconds to avoid exceeding the requests per second (RPS) rate. > [!NOTE]
-> The free tier limits the request rate to 20 calls per minute. The paid tier allows 10 requests per second (RPS) that can be increased upon request. Use the Azure support channel or your account team to request a higher request per second (RPS) rate.
+> The free tier limits the request rate to 20 calls per minute. The paid tier allows 10 requests per second (RPS) that can be increased upon request. Note your Azure resource identifier and region, and open an Azure support ticket or contact your account team to request a higher request per second (RPS) rate.
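+
+The two-step pattern above can be scripted end to end. The following hedged Python sketch submits an image URL to the Read call, polls the `Operation-Location` URL every couple of seconds, and prints the recognized lines once the status is `succeeded`; the endpoint, key, and image URL are placeholders.
+
+```python
+import time
+import requests
+
+# Placeholder endpoint, key, and image URL - replace with your Computer Vision resource values.
+endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"
+headers = {"Ocp-Apim-Subscription-Key": "<your-key>", "Content-Type": "application/json"}
+
+# Step 1: submit the document; the service replies with an Operation-Location header.
+submit = requests.post(
+    f"{endpoint}/vision/v3.2/read/analyze",
+    headers=headers,
+    params={"readingOrder": "natural"},
+    json={"url": "https://<your-storage>/sample-handwritten-note.jpg"},
+)
+submit.raise_for_status()
+operation_url = submit.headers["Operation-Location"]
+
+# Step 2: poll every 1-2 seconds until the operation leaves the running states.
+while True:
+    result = requests.get(operation_url, headers=headers).json()
+    if result["status"] not in ("notStarted", "running"):
+        break
+    time.sleep(2)
+
+if result["status"] == "succeeded":
+    for page in result["analyzeResult"]["readResults"]:
+        for line in page["lines"]:
+            print(line["text"])
+```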
When the **status** field has the `succeeded` value, the JSON response contains the extracted text content from your image or document. The JSON response maintains the original line groupings of recognized words. It includes the extracted text lines and their bounding box coordinates. Each text line includes all extracted words with their coordinates and confidence scores.
See the following example of a successful JSON response:
```json { "status": "succeeded",
- "createdDateTime": "2020-05-28T05:13:21Z",
- "lastUpdatedDateTime": "2020-05-28T05:13:22Z",
+ "createdDateTime": "2021-02-04T06:32:08.2752706+00:00",
+ "lastUpdatedDateTime": "2021-02-04T06:32:08.7706172+00:00",
"analyzeResult": {
- "version": "3.1.0",
+ "version": "3.2",
"readResults": [ { "page": 1,
- "angle": 0.8551,
- "width": 2661,
- "height": 1901,
+ "angle": 2.1243,
+ "width": 502,
+ "height": 252,
"unit": "pixel", "lines": [ { "boundingBox": [
- 67,
- 646,
- 2582,
- 713,
- 2580,
- 876,
- 67,
- 821
+ 58,
+ 42,
+ 314,
+ 59,
+ 311,
+ 123,
+ 56,
+ 121
],
- "text": "The quick brown fox jumps",
+ "text": "Tabs vs",
+ "appearance": {
+ "style": {
+ "name": "handwriting",
+ "confidence": 0.96
+ }
+ },
"words": [ { "boundingBox": [
- 143,
- 650,
- 435,
- 661,
- 436,
- 823,
- 144,
- 824
+ 68,
+ 44,
+ 225,
+ 59,
+ 224,
+ 122,
+ 66,
+ 123
+ ],
+ "text": "Tabs",
+ "confidence": 0.933
+ },
+ {
+ "boundingBox": [
+ 241,
+ 61,
+ 314,
+ 72,
+ 314,
+ 123,
+ 239,
+ 122
],
- "text": "The",
- "confidence": 0.958
+ "text": "vs",
+ "confidence": 0.977
} ] }
See the following example of a successful JSON response:
``` ### Handwritten classification for text lines (Latin languages only)
-The [Read 3.2 preview API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-preview-3/operations/5d986960601faab4bf452005) response includes classifying whether each text line is of handwriting style or not, along with a confidence score. This feature is only supported for Latin languages. The following example shows the handwritten classification for the text in the image.
+The response includes classifying whether each text line is of handwriting style or not, along with a confidence score. This feature is only supported for Latin languages. The following example shows the handwritten classification for the text in the image.
:::image border type="content" source="../Images/ocr-handwriting-classification.png" alt-text="OCR handwriting classification example"::: ## Next steps
-To try out the REST API, go to the [Read API Reference](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-preview-3/operations/5d986960601faab4bf452005).
+To try out the REST API, go to the [Read API Reference](https://centraluseuap.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005).
cognitive-services Computer Vision How To Install Containers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/computer-vision-how-to-install-containers.md
Previously updated : 03/02/2021 Last updated : 04/09/2021 keywords: on-premises, OCR, Docker, container
-# Install Read OCR Docker containers (Preview)
+# Install Read OCR Docker containers
[!INCLUDE [container hosting on the Microsoft Container Registry](../containers/includes/gated-container-hosting.md)]
Containers enable you to run the Computer Vision APIs in your own environment. C
The *Read* OCR container allows you to extract printed and handwritten text from images and documents with support for JPEG, PNG, BMP, PDF, and TIFF file formats. For more information, see the [Read API how-to guide](Vision-API-How-to-Topics/call-read-api.md).
-## Read 3.2-preview container
+## Read 3.2 container
-> [!NOTE]
-> The Read 3.0-preview container has been deprecated.
-
-The Read 3.2-preview OCR container provides:
+The Read 3.2 OCR container provides:
* New models for enhanced accuracy. * Support for multiple languages within the same document. * Support for a total of 73 languages. See the full list of [OCR-supported languages](./language-support.md#optical-character-recognition-ocr).
If you don't have an Azure subscription, create a [free account](https://azure.m
Fill out and submit the [request form](https://aka.ms/csgate) to request approval to run the container. [!INCLUDE [Gathering required container parameters](../containers/includes/container-gathering-required-parameters.md)]
Container images for Read are available.
| Container | Container Registry / Repository / Image Name | |--|| | Read 2.0-preview | `mcr.microsoft.com/azure-cognitive-services/vision/read:2.0-preview` |
-| Read 3.2-preview | `mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-preview.2` |
+| Read 3.2 | `mcr.microsoft.com/azure-cognitive-services/vision/read:3.2` |
Use the [`docker pull`](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image. ### Docker pull for the Read OCR container
-# [Version 3.2-preview](#tab/version-3-2)
+# [Version 3.2](#tab/version-3-2)
```bash
-docker pull mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-preview.2
+docker pull mcr.microsoft.com/azure-cognitive-services/vision/read:3.2
``` # [Version 2.0-preview](#tab/version-2)
Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/)
[Examples](computer-vision-resource-container-config.md#example-docker-run-commands) of the `docker run` command are available.
-# [Version 3.2-preview](#tab/version-3-2)
+# [Version 3.2](#tab/version-3-2)
```bash docker run --rm -it -p 5000:5000 --memory 18g --cpus 8 \
-mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-preview.2 \
+mcr.microsoft.com/azure-cognitive-services/vision/read:3.2 \
Eula=accept \ Billing={ENDPOINT_URI} \ ApiKey={API_KEY}
To find your connection string:
The container provides REST-based query prediction endpoint APIs.
-# [Version 3.2-preview](#tab/version-3-2)
+# [Version 3.2](#tab/version-3-2)
-Use the host, `http://localhost:5000`, for container APIs. You can view the Swagger path at: `http://localhost:5000/swagger/vision-v3.2-preview-read/swagger.json`.
+Use the host, `http://localhost:5000`, for container APIs. You can view the Swagger path at: `http://localhost:5000/swagger/vision-v3.2-read/swagger.json`.
# [Version 2.0-preview](#tab/version-2)
Use the host, `http://localhost:5000`, for container APIs. You can view the Swag
### Asynchronous read
-# [Version 3.2-preview](#tab/version-3-2)
+# [Version 3.2](#tab/version-3-2)
+You can use the `POST /vision/v3.2/read/analyze` and `GET /vision/v3.2/read/operations/{operationId}` operations in concert to asynchronously read an image, similar to how the Computer Vision service uses those corresponding REST operations. The asynchronous POST method will return an `operationId` that is used as the identifier for the HTTP GET request.
The `operation-location` is the fully qualified URL and is accessed via an HTTP
You can use the following operation to synchronously read an image.
-# [Version 3.2-preview](#tab/version-3-2)
+# [Version 3.2](#tab/version-3-2)
`POST /vision/v3.2/read/syncAnalyze`
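+
+As a hedged illustration, the snippet below sends a local image to the synchronous endpoint of a container mapped to port 5000 (as in the `docker run` examples above) using Python; the file name is a placeholder, and the response is assumed to mirror the asynchronous `analyzeResult` payload.
+
+```python
+import requests
+
+# The container listens on whatever host port you mapped in `docker run` (5000 above).
+container_host = "http://localhost:5000"
+
+with open("sample-invoice.jpg", "rb") as image:  # placeholder local file
+    resp = requests.post(
+        f"{container_host}/vision/v3.2/read/syncAnalyze",
+        headers={"Content-Type": "application/octet-stream"},
+        data=image,
+    )
+resp.raise_for_status()
+
+# Response shape assumed to match the cloud service's analyzeResult structure.
+for page in resp.json().get("analyzeResult", {}).get("readResults", []):
+    for line in page.get("lines", []):
+        print(line.get("text"))
+```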
For more information about these options, see [Configure containers](./computer-
In this article, you learned concepts and workflow for downloading, installing, and running Computer Vision containers. In summary: * Computer Vision provides a Linux container for Docker, encapsulating Read.
-* Container images are downloaded from the "Container Preview" container registry in Azure.
+* The Read container image requires an approved application (via the request form) before you can run it.
* Container images run in Docker. * You can use either the REST API or SDK to call operations in Read OCR containers by specifying the host URI of the container. * You must specify billing information when instantiating a container.
cognitive-services Computer Vision Resource Container Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/computer-vision-resource-container-config.md
Previously updated : 11/23/2020 Last updated : 04/09/2021
This setting can be found in the following place:
* Azure portal: **Cognitive Services** Overview, labeled `Endpoint`
-Remember to add the `vision/v1.0` routing to the endpoint URI as shown in the following table.
+Remember to add the `vision/<version>` routing to the endpoint URI as shown in the following table.
|Required| Name | Data type | Description | |--||--|-|
-|Yes| `Billing` | String | Billing endpoint URI<br><br>Example:<br>`Billing=https://westcentralus.api.cognitive.microsoft.com/vision/v1.0` |
+|Yes| `Billing` | String | Billing endpoint URI<br><br>Example:<br>`Billing=https://westcentralus.api.cognitive.microsoft.com/vision/v3.2` |
## Eula setting
Replace {_argument_name_} with your own values:
The following Docker examples are for the Read OCR container.
-# [Version 3.2-preview](#tab/version-3-2)
+# [Version 3.2](#tab/version-3-2)
### Basic example ```bash docker run --rm -it -p 5000:5000 --memory 18g --cpus 8 \
-mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-preview.1 \
+mcr.microsoft.com/azure-cognitive-services/vision/read:3.2 \
Eula=accept \ Billing={ENDPOINT_URI} \ ApiKey={API_KEY}
ApiKey={API_KEY}
```bash docker run --rm -it -p 5000:5000 --memory 18g --cpus 8 \
-mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-preview.1 \
+mcr.microsoft.com/azure-cognitive-services/vision/read:3.2 \
Eula=accept \ Billing={ENDPOINT_URI} \ ApiKey={API_KEY}
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/language-support.md
Some features of Computer Vision support multiple languages; any features not me
Computer Vision's OCR APIs support several languages. They do not require you to specify a language code. See the [Optical Character Recognition (OCR) overview](overview-ocr.md) for more information.
-|Language| Language code | OCR API | Read 3.0/3.1 | Read v3.2 preview |
+|Language| Language code | OCR API | Read 3.0/3.1 | Read v3.2 |
|:--|:-:|:--:|::|::| |Afrikaans|`af`| | |Γ£ö | |Albanian |`sq`| | |Γ£ö |
Computer Vision's OCR APIs support several languages. They do not require you to
|Danish | `da` |Γ£ö | |Γ£ö | |Dutch | `nl` |Γ£ö |Γ£ö |Γ£ö | |English | `en` |Γ£ö |Γ£ö |Γ£ö |
-|Estonian |`crh`| | |Γ£ö |
+|Estonian |`et`| | |Γ£ö |
|Fijian |`fj`| | |Γ£ö | |Filipino |`fil`| | |Γ£ö | |Finnish | `fi` |Γ£ö | |Γ£ö |
cognitive-services Overview Ocr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/overview-ocr.md
# What is Optical character recognition?
-The Optical character recognition (OCR) service allows you to extract printed or handwritten text from images, such as photos of license plates or containers with serial numbers, as well as from documents&mdash;invoices, bills, financial reports, articles, and more. It uses deep learning based models and works with text on a variety of surfaces and backgrounds.
+The Optical character recognition (OCR) service allows you to extract printed or handwritten text from images, such as photos of street signs and products, as well as from documents&mdash;invoices, bills, financial reports, articles, and more. It uses deep learning based models and works with text on a variety of surfaces and backgrounds.
The OCR APIs support extracting printed text in [several languages](./language-support.md). Follow a [quickstart](./quickstarts-sdk/client-library.md) to get started.
The **Read** call takes images and documents as its input. They have the followi
## Read API
-The Computer Vision [Read API](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-g)) that extracts printed text (in several languages), handwritten text (English only), digits, and currency symbols from images and multi-page PDF documents. It's optimized to extract text from text-heavy images and multi-page PDF documents with mixed languages. It supports detecting both printed and handwritten text in the same image or document.
+The Computer Vision [Read API](https://centraluseuap.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) is Azure's latest OCR technology ([learn what's new](./whats-new.md)) that extracts printed text (in several languages), handwritten text (English only), digits, and currency symbols from images and multi-page PDF documents. It's optimized to extract text from text-heavy images and multi-page PDF documents with mixed languages. It supports detecting both printed and handwritten text in the same image or document.
![How OCR converts images and documents into structured output with extracted text](./Images/how-ocr-works.svg)
For on-premise deployment, the [Read Docker container (preview)](./computer-visi
## OCR API
-The legacy [OCR API](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-g#optical-character-recognition-ocr) for a list of supported languages.
-
-## RecognizeText API
+The legacy [OCR API](https://centraluseuap.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f20d) uses an older recognition model, supports only images, and executes synchronously, returning immediately with the detected text. See the OCR column of [supported languages](./language-support.md#optical-character-recognition-ocr) for a list of supported languages.
> [!WARNING] > The Computer Vision 2.0 RecognizeText operations are in the process of being deprecated in favor of the new [Read API](#read-api) covered in this article. Existing customers should [transition to using Read operations](upgrade-api-versions.md).
As with all of the Cognitive Services, developers using the Computer Vision serv
## Next steps
-- Get started with the [OCR REST API or client library quickstarts](./quickstarts-sdk/client-library.md).
-- Learn about the [Read 3.1 REST API](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-ga/operations/5d986960601faab4bf452005).
-- Learn about the [Read 3.2 public preview REST API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-preview-3/operations/5d986960601faab4bf452005) with support for a total of 73 languages.
+- Get started with the [OCR (Read) REST API or client library quickstarts](./quickstarts-sdk/client-library.md).
+- Learn about the [Read 3.2 REST API](https://centraluseuap.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005).
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/whats-new.md
Learn what's new in the service. These items may be release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with the service.
+## April 2021
+
+### Computer Vision v3.2 GA
+
+The Computer Vision API v3.2 is now generally available with the following updates:
+* Improved image tagging model: analyzes visual content and generates relevant tags based on objects, actions and content displayed in the image. This is available through the [Tag Image API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-preview-3/operations/56f91f2e778daf14a499f200). See the Image Analysis [how-to guide](https://docs.microsoft.com/azure/cognitive-services/computer-vision/vision-api-how-to-topics/howtocallvisionapi) and [overview](https://docs.microsoft.com/azure/cognitive-services/computer-vision/overview-image-analysis) to learn more.
+* Updated content moderation model: detects presence of adult content and provides flags to filter images containing adult, racy and gory visual content. This is available through the [Analyze API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-preview-3/operations/56f91f2e778daf14a499f21b). See the Image Analysis [how-to guide](https://docs.microsoft.com/azure/cognitive-services/computer-vision/vision-api-how-to-topics/howtocallvisionapi) and [overview](https://docs.microsoft.com/azure/cognitive-services/computer-vision/overview-image-analysis) to learn more.
+* [OCR (Read) available for 73 languages](./language-support.md#optical-character-recognition-ocr) including Simplified and Traditional Chinese, Japanese, Korean, and Latin languages.
+* [OCR (Read)](./overview-ocr.md) also available as a [Distroless container](./computer-vision-how-to-install-containers.md?tabs=version-3-2) for on-premise deployment.
+
+> [!div class="nextstepaction"]
+> [See Computer Vision v3.2 GA](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005)
+ ## March 2021 ### Computer Vision 3.2 Public Preview update
Follow an [Extract text quickstart](https://github.com/Azure-Samples/cognitive-s
## Cognitive Service updates
-[Azure update announcements for Cognitive Services](https://azure.microsoft.com/updates/?product=cognitive-services)
+[Azure update announcements for Cognitive Services](https://azure.microsoft.com/updates/?product=cognitive-services)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/language-support.md
Translator supports the following languages for text to text translation.
| Chinese Traditional | `zh-Hant` | | Croatian | `hr` | | Czech | `cs` |
-| Dari | `prs` |
| Danish | `da` |
+| Dari | `prs` |
| Dutch | `nl` | | English | `en` | | Estonian | `et` |
The following languages are available for customization to or from English using
| Hungarian | `hu` | | Icelandic | `is` | | Indonesian| `id` |
+| Inuktitut| `iu` |
| Irish | `ga` | | Italian | `it` | | Japanese | `ja` |
cognitive-services Cognitive Services Container Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/cognitive-services-container-support.md
Previously updated : 12/16/2020 Last updated : 04/12/2021 keywords: on-premises, Docker, container, Kubernetes #Customer intent: As a potential customer, I want to know more about how Cognitive Services provides and supports Docker containers for each service.
Install and explore the functionality provided by containers in Azure Cognitive
[ta-containers-language]: text-analytics/how-tos/text-analytics-how-to-install-containers.md?tabs=language [ta-containers-sentiment]: text-analytics/how-tos/text-analytics-how-to-install-containers.md?tabs=sentiment [ta-containers-health]: text-analytics/how-tos/text-analytics-how-to-install-containers.md?tabs=health
-[request-access]: https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbRyQZ7B8Cg2FEjpibPziwPcZUNlQ4SEVORFVLTjlBSzNLRlo0UzRRVVNPVy4u
+[request-access]: https://aka.ms/csgate
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/whats-new.md
The Text Analytics API is updated on an ongoing basis. To stay up-to-date with r
* [C#](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/textanalytics/Azure.AI.TextAnalytics) * [Python](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/) * [Java](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/textanalytics/azure-ai-textanalytics)
- * [JavaScript](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/textanalytics/ai-text-analytics/samples/javascript)
+ * [JavaScript](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/textanalytics/ai-text-analytics/samples/v5/javascript)
> [!div class="nextstepaction"] > [Learn more about Text Analytics API v3.1-Preview.4](https://westcentralus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1-preview-4/operations/Languages)
communication-services Sdk Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/sdk-options.md
Publishing locations for individual SDK packages are detailed below.
| Chat | [npm](https://www.npmjs.com/package/@azure/communication-chat) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Chat) | [PyPi](https://pypi.org/project/azure-communication-chat/) | [Maven](https://search.maven.org/search?q=a:azure-communication-chat) | [GitHub](https://github.com/Azure/azure-sdk-for-ios/releases) | [Maven](https://search.maven.org/search?q=a:azure-communication-chat) | - | | SMS | [npm](https://www.npmjs.com/package/@azure/communication-sms) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Sms) | [PyPi](https://pypi.org/project/azure-communication-sms/) | [Maven](https://search.maven.org/artifact/com.azure/azure-communication-sms) | - | - | - | | Calling | [npm](https://www.npmjs.com/package/@azure/communication-calling) | - | - | - | [GitHub](https://github.com/Azure/Communication/releases) | [Maven](https://search.maven.org/artifact/com.azure.android/azure-communication-calling/) | - |
-| Reference Documentation | [docs](https://azure.github.io/azure-sdk-for-js/communication.html) | [docs](https://azure.github.io/azure-sdk-for-net/communication.html) | - | [docs](http://azure.github.io/azure-sdk-for-java/communication.html) | [docs](/objectivec/communication-services/calling/) | [docs](/java/api/com.azure.communication.calling) | - |
+| Reference Documentation | [docs](https://azure.github.io/azure-sdk-for-js/communication.html) | [docs](https://azure.github.io/azure-sdk-for-net/communication.html) | - | [docs](http://azure.github.io/azure-sdk-for-java/communication.html) | [docs](/objectivec/communication-services/calling/) | [docs](/java/api/com.azure.android.communication.calling) | - |
## REST API Throttles
communication-services Teams Embed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/ui-framework/teams-embed.md
The Teams Embed provides most features supported in Teams meetings, including:
- In-meeting experience for configuring audio and video devices - [Video Backgrounds](https://support.microsoft.com/office/change-your-background-for-a-teams-meeting-f77a2381-443a-499d-825e-509a140f4780): allowing participants to blur or replace their backgrounds - [Multiple options for the video gallery](https://support.microsoft.com/office/using-video-in-microsoft-teams-3647fc29-7b92-4c26-8c2d-8a596904cdae) large gallery, together mode, focus, pinning, and spotlight-- [Content Sharing](https://support.microsoft.comoffice/share-content-in-a-meeting-in-teams-fcc2bf59-aecd-4481-8f99-ce55dd836ce8#ID0EABAAA=Mobile): allowing participants to share their screen
+- [Content Sharing](https://support.microsoft.com/en-us/office/share-content-in-a-meeting-in-teams-fcc2bf59-aecd-4481-8f99-ce55dd836ce8): allowing participants to share their screen
For more information about this UI compared to other Azure Communication SDKs, see the [UI SDK concept introduction](ui-sdk-overview.md).
container-registry Container Registry Get Started Docker Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-get-started-docker-cli.md
docker login myregistry.azurecr.io
Both commands return `Login Succeeded` once completed. > [!NOTE]
->* You might want to use Visual Studio Code with Docker extention for a faster and more convenient login.
+>* You might want to use Visual Studio Code with Docker extension for a faster and more convenient login.
> [!TIP] > Always specify the fully qualified registry name (all lowercase) when you use `docker login` and when you tag images for pushing to your registry. In the examples in this article, the fully qualified name is *myregistry.azurecr.io*.
cosmos-db Mongodb Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-introduction.md
## Wire protocol compatibility
-Azure Cosmos DB implements the wire protocol for MongoDB. This implementation allows transparent compatibility with native MongoDB client SDKs, drivers, and tools. Azure Cosmos DB does host the MongoDB database engine. The details of the supported features by MongoDB can be found here:
+Azure Cosmos DB implements the wire protocol for MongoDB. This implementation allows transparent compatibility with native MongoDB client SDKs, drivers, and tools. Azure Cosmos DB does not host the MongoDB database engine. The details of the supported features by MongoDB can be found here:
- [Azure Cosmos DB's API for Mongo DB version 4.0](mongodb-feature-support-40.md) - [Azure Cosmos DB's API for Mongo DB version 3.6](mongodb-feature-support-36.md)
cost-management-billing Understand Work Scopes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/understand-work-scopes.md
Azure supports three scopes for resource management. Each scope supports managin
- [**Management groups**](../../governance/management-groups/overview.md) - Hierarchical containers, up to eight levels, to organize Azure subscriptions.
- Resource type: [Microsoft.Management/managementGroups](/rest/api/resources/managementgroups)
+ Resource type: [Microsoft.Management/managementGroups](/rest/api/managementgroups/)
- **Subscriptions** - Primary containers for Azure resources.
Cost Management is currently supported in [Azure Global](https://management.azur
## Next steps -- If you haven't already completed the first quickstart for Cost Management, read it at [Start analyzing costs](quick-acm-cost-analysis.md).
+- If you haven't already completed the first quickstart for Cost Management, read it at [Start analyzing costs](quick-acm-cost-analysis.md).
cost-management-billing Programmatically Create Subscription Preview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/programmatically-create-subscription-preview.md
POST https://management.azure.com<invoiceSectionId>/providers/Microsoft.Subscrip
| `skuId` | Yes | String | The sku ID that determines the type of Azure plan. | | `owners` | No | String | The Object ID of any user or service principal to add as an Azure RBAC Owner on the subscription when it's created. | | `costCenter` | No | String | The cost center associated with the subscription. It shows up in the usage CSV file. |
-| `managementGroupId` | No | String | The ID of the management group to which the subscription will be added. To get the list of management groups, see [Management Groups - List API](/rest/api/resources/managementgroups/list). Use the ID of a management group from the API. |
+| `managementGroupId` | No | String | The ID of the management group to which the subscription will be added. To get the list of management groups, see [Management Groups - List API](/rest/api/managementgroups/entities/list). Use the ID of a management group from the API. |
In the response, you get back a `subscriptionCreationResult` object for monitoring. When the subscription creation is finished, the `subscriptionCreationResult` object returns a `subscriptionLink` object, which has the subscription ID.
cost-management-billing Prepare Buy Reservation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/reservations/prepare-buy-reservation.md
Previously updated : 07/24/2020 Last updated : 04/12/2021
You can scope a reservation to a subscription or resource groups. Setting the sc
You have three options to scope a reservation, depending on your needs: -- **Single resource group scope**ΓÇö Applies the reservation discount to the matching resources in the selected resource group only.-- **Single subscription scope**ΓÇö Applies the reservation discount to the matching resources in the selected subscription.
+- **Single resource group scope** ΓÇö Applies the reservation discount to the matching resources in the selected resource group only.
+- **Single subscription scope** ΓÇö Applies the reservation discount to the matching resources in the selected subscription.
- **Shared scope** ΓÇö Applies the reservation discount to matching resources in eligible subscriptions that are in the billing context. - For Enterprise Agreement customers, the billing context is the enrollment. The reservation shared scope would include multiple Active Directory tenants in an enrollment. - For Microsoft Customer Agreement customers, the billing scope is the billing profile.
You have three options to scope a reservation, depending on your needs:
While applying reservation discounts on your usage, Azure processes the reservation in the following order:
-1. Reservations that are scoped to a resource group
-2. Single scope reservations
-3. Shared scope reservations
+1. Reservations with a single resource group scope
+2. Reservations with a single subscription scope
+3. Reservations with a shared scope (multiple subscriptions), described previously
You can always update the scope after you buy a reservation. To do so, go to the reservation, click **Configuration**, and rescope the reservation. Rescoping a reservation isn't a commercial transaction. Your reservation term isn't changed. For more information about updating the scope, see [Update the scope after you purchase a reservation](manage-reserved-vm-instance.md#change-the-reservation-scope).
data-factory Concepts Data Flow Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-data-flow-monitoring.md
Title: Monitoring mapping data flows
description: How to visually monitor mapping data flows in Azure Data Factory - Previously updated : 11/22/2020 Last updated : 04/11/2021 # Monitor Data Flows
You can also see detailed timing for each partition transformation step if you o
} ```
-### Post processing time
+### Sink processing time
When you select a sink transformation icon in your map, the slide-in panel on the right will show an additional data point called "post processing time" at the bottom. This is the amount of time spent executing your job on the Spark cluster *after* your data has been loaded, transformed, and written. This time can include closing connection pools, driver shutdown, deleting files, coalescing files, etc. When you perform actions in your flow like "move files" and "output to single file", you will likely see an increase in the post processing time value.+
+* Write stage duration: The time to write the data to a staging location for Synapse SQL
+* Table operation SQL duration: The time spent moving data from temp tables to target table
+* Pre SQL duration & Post SQL duration: The time spent running pre/post SQL commands
+* Pre commands duration & post commands duration: The time spent running any pre/post operations for file-based sources/sinks, for example moving or deleting files after processing.
+* Merge duration: The time spent merging files. Merge files are used for file-based sinks when writing to a single file or when "File name as column data" is used. If significant time is spent in this metric, you should avoid using these options.
## Error rows
data-factory Concepts Data Flow Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-data-flow-overview.md
Previously updated : 12/10/2020 Last updated : 04/11/2021 # Mapping data flows in Azure Data Factory
Mapping data flow integrates with existing Azure Data Factory monitoring capabil
The Azure Data Factory team has created a [performance tuning guide](concepts-data-flow-performance.md) to help you optimize the execution time of your data flows after building your business logic.
-## Available regions
-
-=======
-Mapping data flows are available in the following regions in ADF:
-
-| Azure region | Data flows in ADF |
-| | -- |
-| Australia Central | |
-| Australia Central 2 | |
-| Australia East | Γ£ô |
-| Australia Southeast | Γ£ô |
-| Brazil South | Γ£ô |
-| Canada Central | Γ£ô |
-| Central India | Γ£ô |
-| Central US | Γ£ô |
-| China East | |
-| China East 2 | |
-| China Non-Regional | |
-| China North | |
-| China North 2 | |
-| East Asia | Γ£ô |
-| East US | Γ£ô |
-| East US 2 | Γ£ô |
-| France Central | Γ£ô |
-| France South | |
-| Germany Central (Sovereign) | |
-| Germany Non-Regional (Sovereign) | |
-| Germany North (Public) | |
-| Germany Northeast (Sovereign) | |
-| Germany West Central (Public) | |
-| Japan East | Γ£ô |
-| Japan West | |
-| Korea Central | Γ£ô |
-| Korea South | |
-| North Central US | Γ£ô |
-| North Europe | Γ£ô |
-| Norway East | |
-| Norway West | |
-| South Africa North | Γ£ô |
-| South Africa West | |
-| South Central US | |
-| South India | |
-| Southeast Asia | Γ£ô |
-| Switzerland North | |
-| Switzerland West | |
-| UAE Central | |
-| UAE North | |
-| UK South | Γ£ô |
-| UK West | |
-| US DoD Central | |
-| US DoD East | |
-| US Gov Arizona | |
-| US Gov Non-Regional | |
-| US Gov Texas | |
-| US Gov Virginia | |
-| West Central US | |
-| West Europe | Γ£ô |
-| West India | |
-| West US | Γ£ô |
-| West US 2 | Γ£ô |
- ## Next steps * Learn how to create a [source transformation](data-flow-source.md).
data-factory Concepts Data Flow Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-data-flow-performance.md
Previously updated : 03/15/2021 Last updated : 04/10/2021 # Mapping data flows performance and tuning guide
A best practice is to start small and scale up to meet your performance needs.
### Time to live
-By default, every data flow activity spins up a new cluster based upon the IR configuration. Cluster start-up time takes a few minutes and data processing can't start until it is complete. If your pipelines contain multiple **sequential** data flows, you can enable a time to live (TTL) value. Specifying a time to live value keeps a cluster alive for a certain period of time after its execution completes. If a new job starts using the IR during the TTL time, it will reuse the existing cluster and start up time will greatly reduced. After the second job completes, the cluster will again stay alive for the TTL time.
+By default, every data flow activity spins up a new Spark cluster based upon the Azure IR configuration. Cold cluster start-up time takes a few minutes and data processing can't start until it is complete. If your pipelines contain multiple **sequential** data flows, you can enable a time to live (TTL) value. Specifying a time to live value keeps a cluster alive for a certain period of time after its execution completes. If a new job starts using the IR during the TTL time, it will reuse the existing cluster and start-up time will be greatly reduced. After the second job completes, the cluster will again stay alive for the TTL time.
-Only one job can run on a single cluster at a time. If there is an available cluster, but two data flows start, only one will use the live cluster. The second job will spin up its own isolated cluster.
+You can additionally minimize the startup time of warm clusters by setting the "Quick re-use" option in the Azure Integration Runtime under Data Flow Properties. Setting this to true tells ADF not to tear down the existing cluster after each job and instead to re-use it, essentially keeping the compute environment you've set in your Azure IR alive for up to the period of time specified in your TTL. This option makes for the shortest start-up time of your data flow activities when executing from a pipeline.
-If most of your data flows execute in parallel, it is not recommended that you enable TTL.
+However, if most of your data flows execute in parallel, it is not recommended that you enable TTL for the IR that you use for those activities. Only one job can run on a single cluster at a time. If there is an available cluster, but two data flows start, only one will use the live cluster. The second job will spin up its own isolated cluster.
> [!NOTE] > Time to live is not available when using the auto-resolve integration runtime
+
+> [!NOTE]
+> Quick re-use of existing clusters is a feature in the Azure Integration Runtime that is currently in public preview
## Optimizing sources
If your data flows execute in parallel, its recommended to not enable the Azure
### Execute data flows sequentially
-If you execute your data flow activities in sequence, it is recommended that you set a TTL in the Azure IR configuration. ADF will reuse the compute resources resulting in a faster cluster start up time. Each activity will still be isolated receive a new Spark context for each execution.
+If you execute your data flow activities in sequence, it is recommended that you set a TTL in the Azure IR configuration. ADF will reuse the compute resources, resulting in a faster cluster start-up time. Each activity will still be isolated and will receive a new Spark context for each execution. To reduce the time between sequential activities even more, set the "quick re-use" checkbox on the Azure IR to tell ADF to re-use the existing cluster.
-Running jobs sequentially will likely take the longest time to execute end-to-end, but provides a clean separation of logical operations.
+> [!NOTE]
+> Quick re-use of existing clusters is a feature in the Azure Integration Runtime that is currently in public preview
### Overloading a single data flow
data-factory Control Flow Execute Data Flow Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-execute-data-flow-activity.md
Previously updated : 01/03/2021 Last updated : 04/11/2021 # Data Flow activity in Azure Data Factory
The Core Count and Compute Type properties can be set dynamically to adjust to t
### Data Flow integration runtime
-Choose which Integration Runtime to use for your Data Flow activity execution. By default, Data Factory will use the auto-resolve Azure Integration runtime with four worker cores and no time to live (TTL). This IR has a general purpose compute type and runs in the same region as your factory. You can create your own Azure Integration Runtimes that define specific regions, compute type, core counts, and TTL for your data flow activity execution.
+Choose which Integration Runtime to use for your Data Flow activity execution. By default, Data Factory will use the auto-resolve Azure Integration runtime with four worker cores. This IR has a general purpose compute type and runs in the same region as your factory. For operationalized pipelines, it is highly recommended that you create your own Azure Integration Runtimes that define specific regions, compute type, core counts, and TTL for your data flow activity execution.
-For pipeline executions, the cluster is a job cluster, which takes several minutes to start up before execution starts. If no TTL is specified, this start-up time is required on every pipeline run. If you specify a TTL, a warm cluster pool will stay active for the time specified after the last execution, resulting in shorter start-up times. For example, if you have a TTL of 60 minutes and run a data flow on it once an hour, the cluster pool will stay active. For more information, see [Azure integration runtime](concepts-integration-runtime.md).
+A minimum compute type of General Purpose (compute optimized is not recommended for large workloads) with an 8+8 (16 total v-cores) configuration and a 10-minute TTL is the minimum recommendation for most production workloads. By setting a small TTL, the Azure IR can maintain a warm cluster that will not incur the several minutes of start time for a cold cluster. You can speed up the execution of your data flows even more by selecting "Quick re-use" on the Azure IR data flow configurations. For more information, see [Azure integration runtime](concepts-integration-runtime.md).
![Azure Integration Runtime](media/data-flow/ir-new.png "Azure Integration Runtime")
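+
+If you prefer to script this instead of using the UI, the hedged sketch below creates such an Azure IR with the `azure-mgmt-datafactory` Python SDK (General Purpose compute, 16 total cores, 10-minute TTL). The credential values and resource names are placeholders, and the exact SDK model class names are assumptions to verify against the SDK version you use.
+
+```python
+from azure.common.credentials import ServicePrincipalCredentials
+from azure.mgmt.datafactory import DataFactoryManagementClient
+from azure.mgmt.datafactory.models import (
+    IntegrationRuntimeResource,
+    ManagedIntegrationRuntime,
+    IntegrationRuntimeComputeProperties,
+    IntegrationRuntimeDataFlowProperties,
+)
+
+# Placeholder credentials and names - replace with your own values.
+credentials = ServicePrincipalCredentials(
+    client_id="<application-id>", secret="<client-secret>", tenant="<tenant-id>")
+adf_client = DataFactoryManagementClient(credentials, "<subscription-id>")
+
+# An Azure IR sized per the guidance above: General Purpose, 8+8 cores, 10-minute TTL.
+ir = IntegrationRuntimeResource(
+    properties=ManagedIntegrationRuntime(
+        compute_properties=IntegrationRuntimeComputeProperties(
+            location="AutoResolve",
+            data_flow_properties=IntegrationRuntimeDataFlowProperties(
+                compute_type="General", core_count=16, time_to_live=10))))
+
+adf_client.integration_runtimes.create_or_update(
+    "<resource-group>", "<factory-name>", "DataFlowAzureIR", ir)
+```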
data-factory Quickstart Create Data Factory Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/quickstart-create-data-factory-python.md
ms.devlang: python Previously updated : 04/06/2021 Last updated : 04/12/2021
Pipelines can ingest data from disparate data stores. Pipelines process or trans
* [Azure Storage Explorer](https://storageexplorer.com/) (optional).
-* [An application in Azure Active Directory](../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal). Make note of the following values to use in later steps: **application ID**, **authentication key**, and **tenant ID**. Assign application to the **Contributor** role by following instructions in the same article.
+* [An application in Azure Active Directory](../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal). Assign the application to the **Contributor** role by following instructions in the same article. Make note of the following values as shown in the article to use in later steps: **application ID** (service principal ID below), **authentication key** (client secret below), and **tenant ID**.
## Create and upload an input file
You define a dataset that represents the source data in Azure Blob. This Blob da
rg_name, df_name, dsOut_name, dsOut_azure_blob) print_item(dsOut) ```
+ > [!NOTE]
+ > To pass parameters to the pipeline, add them to the JSON string params_for_pipeline shown below in the format **{ "ParameterName1" : "ParameterValue1" }** for each of the parameters needed in the pipeline. To pass parameters to a data flow, create a pipeline parameter to hold the parameter name/value, and then consume the pipeline parameter in the data flow parameter in the format **@pipeline().parameters.parametername**.
+ ## Create a pipeline
databox-online Azure Stack Edge Gpu Back Up Virtual Machine Disks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-back-up-virtual-machine-disks.md
+
+ Title: Back up VM disks on Azure Stack Edge Pro GPU device via PowerShell
+description: Describes how to back up data on virtual machine disks running on your Azure Stack Edge Pro GPU device.
++++++ Last updated : 04/12/2021+
+#Customer intent: As an IT admin, I need to understand how to back up the data on virtual machine disks running on my Azure Stack Edge Pro GPU device so that I can restore the VMs if needed.
++
+# Back up VM disks on Azure Stack Edge Pro GPU via Azure PowerShell
++
+This article describes how to create backups of virtual machine disks on an Azure Stack Edge Pro GPU device using Azure PowerShell.
+
+> [!IMPORTANT]
+> This procedure is meant to be used for VMs that are stopped. To back up running VMs, we recommend that you use a third-party backup tool.
+
+## Workflow
+
+The following steps summarize the high-level workflow to back up a VM disk on your device:
+
+1. Stop the VM.
+1. Take a snapshot of the VM disk.
+1. Copy the snapshot to a local storage account as a VHD.
+1. Upload the VHD to an external target.
+
+## Prerequisites
+
+Before you back up VMs, make sure that:
+
+- You have access to a client that you'll use to connect to your device.
+ - Your client runs a [Supported OS](azure-stack-edge-gpu-system-requirements.md#supported-os-for-clients-connected-to-device).
+ - Your client is configured to connect to the local Azure Resource Manager of your device as per the instructions in [Connect to Azure Resource Manager for your device](azure-stack-edge-gpu-connect-resource-manager.md).
+
+## Verify connection to local Azure Resource Manager
++
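The verification is essentially a sign-in against the device's local Azure Resource Manager endpoint. Here is a minimal sketch, assuming the local environment has not been added on the client yet; the environment name and the management endpoint are placeholders for your device:

```powershell
# Add the device's local Azure Resource Manager endpoint as an environment (one time).
Add-AzureRmEnvironment -Name <Environment Name> -ARMEndpoint "https://management.<device name>.<DNS domain>"

# Sign in as the local EdgeArmUser account. The tenant ID is the fixed GUID used by the device.
Login-AzureRMAccount -EnvironmentName <Environment Name> -TenantId c0257de7-538f-415c-993a-1b87a031879d
```

If the sign-in fails because of a forgotten password, reset the Azure Resource Manager password first and retry.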
+## Back up a VM Disk
+
+1. Get a list of the VMs running on your device. Identify the VM that you want to stop.
+
+ ```powershell
+ Get-AzureRMVM
+ ```
+
+ Here is an example output:
+
+ ```powershell
+ PS C:\Users\user> Get-AzureRMVM
+
+ ResourceGroupName Name Location VmSize OsType NIC ProvisioningState Zone
+ -- - -- -- -
+ MYASERG myasewindowsvm1 dbelocal Standard_D1_v2 Linux myasewindowsvm1nic Succeeded
+ MYASERG1 myaselinuxvm1 dbelocal Standard_D1_v2 Linux myaselinuxvm1nic Succeeded
+ MYASERG2 myasetestvm1 dbelocal Standard_D1_v2 Linux myasetestvm1nic Succeeded
+
+ PS C:\Users\user>
+ ```
+
+1. Stop the VM.
+
+ ```powershell
+    Stop-AzureRMVM -ResourceGroupName <Resource group name> -Name <VM name>
+ ```
+
+ Here is an example output:
+
+ ```powershell
+ PS C:\Users\user> Stop-AzureRMVM -ResourceGroupName myaserg2 -Name myasetestvm1
+
+ Virtual machine stopping operation
+ This cmdlet will stop the specified virtual machine. Do you want to continue?
+ [Y] Yes [N] No [S] Suspend [?] Help (default is "Y"): Y
+
+ OperationId :
+ Status : Succeeded
+ StartTime : 4/9/2021 8:43:47 AM
+ EndTime : 4/9/2021 8:44:27 AM
+ Error :
+
+ PS C:\Users\user>
+ ```
+ You can also stop the VM from the Azure portal.
+
+
+2. Take a snapshot of the VM disk and save the snapshot to a local resource group. You can use this procedure for both OS and data disks.
+
+ 1. Get the list of disks on your device, or in a specific resource group. Make a note of the name of the disk to back up.
+
+ ```powershell
+ Get-AzureRMDisk -ResourceGroupName <Resource group name>
+ ```
+ Here is an example output:
+
+ ```powershell
+ PS C:\Users\user> $Disk = Get-AzureRMDisk -ResourceGroupName myaserg2
+ PS C:\Users\user> $Disk.Name
+ myasetestdisk1
+ myasetestvm1_disk1_0ed91809927f4023b7aceb6eeca51c05
+ PS C:\Users\user>
+ ```
+ 1. Create a local resource group to serve as the destination for the VM snapshot.
+
+ ```powershell
+ PS C:\Users\user> New-AzureRmResourceGroup -ResourceGroupName myaserg3 -Location dbelocal
+
+ ResourceGroupName : myaserg3
+ Location : dbelocal
+ ProvisioningState : Succeeded
+ Tags :
+ ResourceId : /subscriptions/992601bc-b03d-4d72-598e-d24eac232122/resourceGroups/myaserg3
+
+ PS C:\Users\user>
+ ```
+
+ 1. Set some parameters.
+
+ ```powershell
+ $DiskResourceGroup = <Disk resource group>
+ $DiskName = <Disk name>
+ $SnapshotName = <Snapshot name>
+ $DestinationRG = <Snapshot destination resource group>
+ ```
+
+    1. Set the snapshot configuration and take the snapshot.
+
+ ```powershell
+ $Disk = Get-AzureRmDisk -ResourceGroupName $DiskResourceGroup -DiskName $DiskName
+ $SnapshotConfig = New-AzureRmSnapshotConfig -SourceUri $Disk.Id -CreateOption Copy -Location 'dbelocal'
+ $Snapshot = New-AzureRmSnapshot -Snapshot $SnapshotConfig -SnapshotName $SnapshotName -ResourceGroupName $DestinationRG
+ ```
+ Verify that the snapshot is created in the destination resource group.
+
+ ```powershell
+ Get-AzureRMSnapshot -ResourceGroupName $DestinationRG
+ ```
+ Here is an example output:
+
+ ```powershell
+ PS C:\Users\user> $DiskResourceGroup = "myaserg2"
+ PS C:\Users\user> $DiskName = "myasetestdisk1"
+ PS C:\Users\user> $SnapshotName = "myasetestdisk1_ss"
+ PS C:\Users\user> $DestinationRG = "myaserg3"
+ PS C:\Users\user> $Disk = Get-AzureRmDisk -ResourceGroupName $DiskResourceGroup -DiskName $DiskName
+ PS C:\Users\user> $SnapshotConfig = New-AzureRmSnapshotConfig -SourceUri $Disk.Id -CreateOption Copy -Location 'dbelocal'
+ PS C:\Users\user> $Snapshot=New-AzureRmSnapshot -Snapshot $SnapshotConfig -SnapshotName $SnapshotName -ResourceGroupName $DestinationRG
+ PS C:\Users\user> Get-AzureRMSnapshot -ResourceGroupName $DestinationRG
+
+ ResourceGroupName : myaserg3
+ ManagedBy :
+ Sku : Microsoft.Azure.Management.Compute.Models.DiskSku
+ TimeCreated : 4/9/2021 4:23:21 PM
+ OsType :
+ CreationData : Microsoft.Azure.Management.Compute.Models.CreationData
+ DiskSizeGB : 10
+ EncryptionSettings :
+ ProvisioningState : Succeeded
+ Id : /subscriptions/992601bc-b03d-4d72-598e-d24eac232122/resourceGroups/myaserg3/providers/Microsoft.Compute/snapshots/myasetestdisk1_ss
+ Name : myasetestdisk1_ss
+ Type : Microsoft.Compute/snapshots
+ Location : DBELocal
+ Tags : {}
+
+ PS C:\Users\user>
+ ```
+
+## Copy the snapshot into a local storage account
+
+ Copy the snapshots to a local storage account on your device.
+
+1. Set some parameters.
+
+ ```powershell
+ $StorageAccountRG = <Local storage account resource group>
+ $StorageAccountName = <Storage account name>
+    $StorageEndpointSuffix = <Endpoint suffix in format: DeviceName.DnsDomain.com>
+ $DestStorageContainer = <Destination storage container>
+ $DestFileName = <Blob file name>
+ ```
+
+1. Create a local storage account on your device.
+
+ ```powershell
+ New-AzureRmStorageAccount -Name <Storage account name> -ResourceGroupName <Storage account resource group> -Location DBELocal -SkuName Standard_LRS
+ ```
+
+ Here is an example output:
+
+ ```powershell
+ PS C:\Users\user> New-AzureRmStorageAccount -Name myasesa4 -ResourceGroupName myaserg4 -Location DBELocal -SkuName Standard_LRS
+ StorageAccountName ResourceGroupName Location SkuName Kind AccessTier CreationTime ProvisioningState EnableHttpsTrafficOnly
+ -- -- - - - --
+ myasesa4 myaserg4 DBELocal StandardLRS Storage 4/9/2021 6:02:56 PM Succeeded False
+
+ PS C:\Users\user>
+ ```
+
+1. Create a container in the local storage account that you created.
+
+ ```powershell
+ $keys = Get-AzureRmStorageAccountKey -ResourceGroupName $StorageAccountRG -Name $StorageAccountName
+ $keyValue = $keys[0].Value
+ $storageContext = New-AzureStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $keyValue -Protocol Http -Endpoint $StorageEndpointSuffix;
+ $container = New-AzureStorageContainer -Name $DestStorageContainer -Context $storageContext -Permission Container -ErrorAction Ignore;
+ ```
+ Here is an example output:
+
+ ```powershell
+ PS C:\Users\user> $StorageAccountName = "myasesa4"
+ PS C:\Users\user> $StorageAccountRG = "myaserg4"
+ PS C:\Users\user> $DestStorageContainer = "myasecont2"
+ PS C:\Users\user> $StorageEndpointSuffix = "myasegpudev.wdshcsso.com"
+ PS C:\Users\user> $DestFileName = "testfile1"
+
+ PS C:\Users\user> $keys = Get-AzureRmStorageAccountKey -ResourceGroupName $StorageAccountRG -Name $StorageAccountName
+ PS C:\Users\user> $keyValue = $keys[0].Value
+ PS C:\Users\user> $storageContext = New-AzureStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $keyValue -Protocol Http -Endpoint $StorageEndpointSuffix;
+ PS C:\Users\user> $storagecontext
+ StorageAccountName : myasesa4
+ BlobEndPoint : http://myasesa4.blob.myasegpudev.wdshcsso.com/
+ TableEndPoint : http://myasesa4.table.myasegpudev.wdshcsso.com/
+ QueueEndPoint : http://myasesa4.queue.myasegpudev.wdshcsso.com/
+ FileEndPoint : http://myasesa4.file.myasegpudev.wdshcsso.com/
+ Context : Microsoft.WindowsAzure.Commands.Storage.AzureStorageContext
+ Name :
+ StorageAccount : BlobEndpoint=http://myasesa4.blob.myasegpudev.wdshcsso.com/;QueueEndpoint=http://myasesa4.que
+ ue.myasegpudev.wdshcsso.com/;TableEndpoint=http://myasesa4.table.myasegpudev.wdshcsso.com/;Fi
+ leEndpoint=http://myasesa4.file.myasegpudev.wdshcsso.com/;AccountName=myasesa4;AccountKey=[ke
+ y hidden]
+ EndPointSuffix : myasegpudev.wdshcsso.com/
+ ConnectionString : BlobEndpoint=http://myasesa4.blob.myasegpudev.wdshcsso.com/;QueueEndpoint=http://myasesa4.que
+ ue.myasegpudev.wdshcsso.com/;TableEndpoint=http://myasesa4.table.myasegpudev.wdshcsso.com/;Fi
+ leEndpoint=http://myasesa4.file.myasegpudev.wdshcsso.com/;AccountName=myasesa4;AccountKey=GSK
+ FuTCJi5Dby6A6C1F4jB4bYS/gBNslb7/FAdlh/0VWUg3Vxd1kHsbwN8sw85pMqdKG1AajoeiwzhievHPnlQ==
+ ExtendedProperties : {}
+
+ PS C:\Users\user> $container = New-AzureStorageContainer -Name $DestStorageContainer -Context $storageContext -Permission Container -ErrorAction Ignore;
+ PS C:\Users\user> $container
+ Blob End Point: http://myasesa4.blob.myasegpudev.wdshcsso.com/
+
+ Name PublicAccess LastModified
+ -
+ myasecont2 Container 4/12/2021 4:46:03 PM +00:00
+
+ PS C:\Users\user>
+ ```
+
+ You can also use Azure Storage Explorer to [Create a local storage account](azure-stack-edge-gpu-deploy-virtual-machine-templates.md#create-a-storage-account) and then [Create a container in the local storage account](azure-stack-edge-gpu-deploy-virtual-machine-templates.md#use-storage-explorer-for-upload) on your device.
+++
+1. Download the snapshot into the local storage account.
+
+ ```powershell
+ $sassnapshot = Grant-AzureRmSnapshotAccess -ResourceGroupName $DestinationRG -SnapshotName $SnapshotName -Access 'Read' -DurationInSecond 3600
+    $destContext = New-AzureStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $keyValue
+ Start-AzureStorageBlobCopy -AbsoluteUri $sassnapshot.AccessSAS -DestContainer $DestStorageContainer -DestContext $destContext -DestBlob $DestFileName
+ ```
+
+ Here is an example output:
+
+ ```powershell
+ PS C:\Users\user> $sassnapshot= Grant-AzureRmSnapshotAccess -ResourceGroupName $DestinationRG -SnapshotName $SnapshotName -Access 'Read' -DurationInSecond 3600
+ PS C:\Users\user> $sassnapshot
+
+ AccessSAS : https://md-2.blob.myasegpudev.wdshcsso.com/3d95ae10d9924e6fb84de408d071f22d/abcd.vhd?sv=2017-04-17&sr=
+ b&si=2535bf98-f87f-4738-9142-594e3c1150fc&sk=system-1&sig=4wrtYzWg6ePWBdrXlodrVgT76q7PIueCbw3bbShKCGs%3D
+
+ PS C:\Users\user> $destContext = New-AzureStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $keyValue
+ PS C:\Users\user> $destContext
+
+ StorageAccountName : myasesa4
+ BlobEndPoint : https://myasesa4.blob.myasegpudev.wdshcsso.com/
+ TableEndPoint : https://myasesa4.table.myasegpudev.wdshcsso.com/
+ QueueEndPoint : https://myasesa4.queue.myasegpudev.wdshcsso.com/
+ FileEndPoint : https://myasesa4.file.myasegpudev.wdshcsso.com/
+ Context : Microsoft.WindowsAzure.Commands.Storage.AzureStorageContext
+ Name :
+ StorageAccount : BlobEndpoint=https://myasesa4.blob.myasegpudev.wdshcsso.com/;QueueEndpoint=https://myasesa4.q
+ ueue.myasegpudev.wdshcsso.com/;TableEndpoint=https://myasesa4.table.myasegpudev.wdshcsso.com/;FileEndpoint=https://myasesa4.file.myasegpudev.wdshcsso.com/;AccountName=myasesa4;AccountKey=[key hidden] EndPointSuffix : myasegpudev.wdshcsso.com/ ConnectionString : BlobEndpoint=https://myasesa4.blob.myasegpudev.wdshcsso.com/;QueueEndpoint=https://myasesa4.q
+ ueue.myasegpudev.wdshcsso.com/;TableEndpoint=https://myasesa4.table.myasegpudev.wdshcsso.com/
+ ;FileEndpoint=https://myasesa4.file.myasegpudev.wdshcsso.com/;AccountName=myasesa4;AccountKey
+ =GSKFuTCJi5Dby6A6C1F4jB4bYS/gBNslb7/FAdlh/0VWUg3Vxd1kHsbwN8sw85pMqdKG1AajoeiwzhievHPnlQ==
+ ExtendedProperties : {}
+
+ PS C:\Users\user> Start-AzureStorageBlobCopy -AbsoluteUri $sassnapshot.AccessSAS -DestContainer $DestStorageContainer -DestContext $destContext -DestBlob $DestFileName
+
+
+ Container Uri: https://myasesa4.blob.myasegpudev.wdshcsso.com/myasecont2
+
+ Name BlobType Length ContentType LastModified AccessTier Snapshot Time
+ - -- -- - -
+ testfile1 BlockBlob -1 2021-04-12 17:01:58Z
+
+ PS C:\Users\user>
+ ```
+
+## Download VHD to external target
+
+To move your backups to an external location, you can use Azure Storage Explorer or AzCopy.
+
+- Use the following AzCopy command to download the VHD to an external target. For one way to generate the `<SAS query string>`, see the sketch after this list.
+
+ ```powershell
+ azcopy copy "https://<local storage account name>.blob.<device name>.<DNS domain>/<container name>/<filename><SAS query string>" <destination target>
+ ```
+
+- To set up and use Azure Storage Explorer with Azure Stack Edge, see the instructions contained in [Use Storage Explorer for upload](azure-stack-edge-gpu-deploy-virtual-machine-templates.md#use-storage-explorer-for-upload).
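If you don't already have a SAS query string for the copied VHD, here is a minimal sketch that generates a read-only SAS URI and passes it to AzCopy. It reuses the `$destContext`, `$DestStorageContainer`, and `$DestFileName` values set earlier; the one-day expiry and the destination folder are only examples:

```powershell
# Generate a read-only SAS URI for the copied VHD blob.
$sasUri = New-AzureStorageBlobSASToken -Container $DestStorageContainer `
    -Blob $DestFileName `
    -Permission r `
    -ExpiryTime (Get-Date).AddDays(1) `
    -Context $destContext `
    -FullUri

# Use the full URI (blob URL plus SAS query string) as the AzCopy source.
azcopy copy "$sasUri" "C:\VhdBackups\"
```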
+
+## Next steps
+
+[Deploy virtual machines on your Azure Stack Edge Pro GPU device using templates](azure-stack-edge-gpu-deploy-virtual-machine-templates.md).
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Cli Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-cli-python.md
Before you begin creating and managing a VM on your Azure Stack Edge Pro device
Your Azure Resource Manager Client ID is hard-coded. Your Azure Resource Manager Tenant ID and Azure Resource Manager Subscription ID are both present in the output of `az login` command you ran earlier. The Azure Resource Manager Client secret is the Azure Resource Manager password that you set.
- For more information, see [Azure Resource Manager password](/azure/azure-stack-edge-gpu-set-azure-resource-manager-password).
+ For more information, see [Azure Resource Manager password](/azure/databox-online/azure-stack-edge-gpu-set-azure-resource-manager-password).
5. Change the profile to version 2019-03-01-hybrid. To change the profile version, run the following command:
databox-online Azure Stack Edge Gpu Manage Virtual Machine Tags Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-manage-virtual-machine-tags-powershell.md
Before you can deploy a VM on your device via PowerShell, make sure that:
## Verify connection to local Azure Resource Manager
-Make sure that the following steps can be used to access the device from your client.
-
-Verify that your client can connect to the local Azure Resource Manager.
-
-1. Call local device APIs to authenticate:
-
- ```powershell
- login-AzureRMAccount -EnvironmentName <Environment Name> -TenantId c0257de7-538f-415c-993a-1b87a031879d
- ```
-
-1. Provide the username `EdgeArmUser` and the password to connect via Azure Resource Manager. If you do not recall the password, [Reset the password for Azure Resource Manager](azure-stack-edge-gpu-set-azure-resource-manager-password.md) and use this password to sign in.
## Add a tag to a VM
databox-online Azure Stack Edge Gpu Technical Specifications Compliance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-technical-specifications-compliance.md
Previously updated : 03/01/2021 Last updated : 04/12/2021
The hardware components of your Azure Stack Edge Pro with an onboard Graphics Pr
The Azure Stack Edge Pro device has the following specifications for compute and memory:
-| Specification | Value |
-|-|-|
-| CPU | 2 X Intel Xeon Silver 4214 (Cascade Lake) CPU<br> 24 physical cores (12 per CPU)<br>48 logical cores (vCPUs) (24 per CPU) |
-| Memory | 128 (8x16 GB) GB RAM <br> Dell Compatible 16 GB PC4-23400 DDR4-2933Mhz 2Rx8 1.2v ECC Registered RDIMM |
+| Specification | Value |
+|-|--|
+| CPU type | Dual Intel Xeon Silver 4214 (Cascade Lake) CPU |
+| CPU: raw | 24 total cores, 48 total vCPUs |
+| CPU: usable | 40 vCPUs |
+| Memory type | Dell Compatible 16 GB PC4-23400 DDR4-2933Mhz 2Rx8 1.2v ECC Registered RDIMM |
+| Memory: raw | 128 GB RAM (8 x 16 GB) |
+| Memory: usable | 102 GB RAM |
## Compute acceleration specifications
The Azure Stack Edge Pro device has two 100-240 V power supply units (PSUs) with
| Specification | 750 W PSU | |-|-|
-| Maximum output power | 750 W |
+| Maximum output power | 750 W |
| Frequency | 50/60 Hz | | Voltage range selection | Auto ranging: 100-240 V AC | | Hot pluggable | Yes |
Your Azure Stack Edge Pro device has six network interfaces, PORT1- PORT6.
| Specification | Description | |-|-|
-| Network interfaces | **2 X 1 GbE interfaces** ΓÇô 1 management interface Port 1 is used for initial setup and is static by default. After the initial setup is complete, you can use the interface for data with any IP address. However, on reset, the interface reverts back to static IP. <br>The other interface Port 2 is user configurable, can be used for data transfer, and is DHCP by default. <br>**4 X 25 GbE interfaces** ΓÇô These data interfaces, Port 3 through Port 6, can be configured by user as DHCP (default) or static. They can also operate as 10 GbE interfaces. |
+| Network interfaces | **2 X 1 GbE interfaces** - 1 management interface. Port 1 is used for initial setup and is static by default. After the initial setup is complete, you can use the interface for data with any IP address. However, on reset, the interface reverts back to static IP. <br>The other interface, Port 2, is user configurable, can be used for data transfer, and is DHCP by default. <br>**4 X 25-GbE interfaces** - These data interfaces, Port 3 through Port 6, can be configured by the user as DHCP (default) or static. They can also operate as 10-GbE interfaces. |
Your Azure Stack Edge Pro device has the following network hardware:
-* **Custom Microsoft Qlogic Cavium 25G NDC adapter** - Port 1 through port 4.
+* **Custom Microsoft `Qlogic` Cavium 25G NDC adapter** - Port 1 through port 4.
* **Mellanox dual port 25G ConnectX-4 channel network adapter** - Port 5 and port 6. Here are the details for the Mellanox card:
Here are the details for the Mellanox card:
For a full list of supported cables, switches, and transceivers for these network cards, go to: -- [Qlogic Cavium 25G NDC adapter interoperability matrix](https://www.marvell.com/documents/xalflardzafh32cfvi0z/).
+- [`Qlogic` Cavium 25G NDC adapter interoperability matrix](https://www.marvell.com/documents/xalflardzafh32cfvi0z/).
- [Mellanox dual port 25G ConnectX-4 channel network adapter compatible products](https://docs.mellanox.com/display/ConnectX4LxFirmwarev14271016/Firmware+Compatible+Products). ## Storage specifications
databox-online Azure Stack Edge Gpu Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-troubleshoot.md
Here are the errors that may show up during the configuration of Azure Resource
|Add-AzureRmEnvironment: An error occurred while sending the request.<br>At line:1 char:1<br>+ Add-AzureRmEnvironment -Name Az3 -ARMEndpoint "https://management.dbe ...|This error means that your Azure Stack Edge Pro device is not reachable or configured properly. Verify that the Edge device and the client are configured correctly. For guidance, see the **General issues** row in this table.| |Service returned error. Check InnerException for more details: The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel. | This error is likely due to one or more bring your own certificate steps incorrectly performed. You can find guidance [here](./azure-stack-edge-gpu-connect-resource-manager.md#step-2-create-and-install-certificates). | |Operation returned an invalid status code 'ServiceUnavailable' <br> Response status code does not indicate success: 503 (Service Unavailable). | This error could be the result of any of these conditions.<li>ArmStsPool is in stopped state.</li><li>Either of the Azure Resource Manager/Security token services websites are down.</li><li>The Azure Resource Manager cluster resource is down.</li><br><strong>Note:</strong> Restarting the appliance might fix the issue, but you should collect the support package so that you can debug it further.|
-|AADSTS50126: Invalid username or password.<br>Trace ID: 29317da9-52fc-4ba0-9778-446ae5625e5a<br>Correlation ID: 1b9752c4-8cbf-4304-a714-8a16527410f4<br>Timestamp: 2019-11-15 09:21:57Z: The remote server returned an error: (400) Bad Request.<br>At line:1 char:1 |This error could be the result of any of these conditions.<li>For an invalid username and password, validate that the customer has changed the password from Azure portal by following the steps [here](/azure/azure-stack-edge-gpu-set-azure-resource-manager-password) and then by using the correct password.<li>For an invalid tenant ID, the tenant ID is a fixed GUID and should be set to `c0257de7-538f-415c-993a-1b87a031879d`</li>|
+|AADSTS50126: Invalid username or password.<br>Trace ID: 29317da9-52fc-4ba0-9778-446ae5625e5a<br>Correlation ID: 1b9752c4-8cbf-4304-a714-8a16527410f4<br>Timestamp: 2019-11-15 09:21:57Z: The remote server returned an error: (400) Bad Request.<br>At line:1 char:1 |This error could be the result of any of these conditions.<li>For an invalid username and password, validate that the customer has changed the password from Azure portal by following the steps [here](/azure/databox-online/azure-stack-edge-gpu-set-azure-resource-manager-password) and then by using the correct password.<li>For an invalid tenant ID, the tenant ID is a fixed GUID and should be set to `c0257de7-538f-415c-993a-1b87a031879d`</li>|
|connect-AzureRmAccount: AADSTS90056: The resource is disabled or does not exist. Check your app's code to ensure that you have specified the exact resource URL for the resource you are trying to access.<br>Trace ID: e19bdbc9-5dc8-4a74-85c3-ac6abdfda115<br>Correlation ID: 75c8ef5a-830e-48b5-b039-595a96488ff9 Timestamp: 2019-11-18 07:00:51Z: The remote server returned an error: (400) Bad |The resource endpoints used in the `Add-AzureRmEnvironment` command are incorrect.| |Unable to get endpoints from the cloud.<br>Please ensure you have network connection. Error detail: HTTPSConnectionPool(host='management.dbg-of4k6suvm.microsoftdatabox.com', port=30005): Max retries exceeded with url: /metadata/endpoints?api-version=2015-01-01 (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')],)",),)) |This error appears mostly in a Mac/Linux environment, and is due to the following issues:<li>A PEM format certificate wasn't added to the python certificate store.</li> |
databox-online Azure Stack Edge Mini R Technical Specifications Compliance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-mini-r-technical-specifications-compliance.md
Previously updated : 03/01/2021 Last updated : 04/12/2021 # Azure Stack Edge Mini R technical specifications
The hardware components of your Microsoft Azure Stack Edge Mini R device adhere
The Azure Stack Edge Mini R device has the following specifications for compute and memory:
-| Specification | Value |
-|-||
-| CPU | 16-core CPU, Intel Xeon-D 1577 |
-| Memory | 48 GB RAM (2400 MT/s) |
+| Specification | Value |
+|-||
+| CPU type | Intel Xeon-D 1577 |
+| CPU: raw | 16 total cores, 32 total vCPUs |
+| CPU: usable | 24 vCPUs |
+| Memory type | 16 GB 2400 MT/s SODIMM |
+| Memory: raw | 48 GB RAM (3 x 16 GB) |
+| Memory: usable | 32 GB RAM |
## Compute acceleration specifications A Vision Processing Unit (VPU) is included on every Azure Stack Edge Mini R device that enables Kubernetes, deep neural network and computer vision based applications.
-| Specification | Value |
-|-||
-| Compute Acceleration card | Intel Movidius Myriad X VPU <br> For more information, see [Intel Movidius Myriad X VPU](https://www.movidius.com/MyriadX) |
+| Specification | Value |
+|||
+| Compute Acceleration card | Intel Movidius Myriad X VPU <br> For more information, see [Intel Movidius Myriad X VPU](https://www.movidius.com/MyriadX) |
## Storage specifications The Azure Stack Edge Mini R device has 1 data disk and 1 boot disk (that serves as operating system storage). The following table shows the details for the storage capacity of the device.
-| Specification | Value |
-|--|--|
-| Number of solid-state drives (SSDs) | 2 X 1 TB disks <br> One data disk and one boot disk |
-| Single SSD capacity | 1 TB |
-| Total capacity (data only) | 1 TB |
-| Total usable capacity* | ~ 750 GB |
+| Specification | Value |
+|--|--|
+| Number of solid-state drives (SSDs) | 2 X 1 TB disks <br> One data disk and one boot disk |
+| Single SSD capacity | 1 TB |
+| Total capacity (data only) | 1 TB |
+| Total usable capacity* | ~ 750 GB |
**Some space is reserved for internal use.*
The Azure Stack Edge Mini R device has the following specifications for network:
|Specification |Value | |||
-|Network interfaces |2 x 10 Gbe SFP+ <br> Shown as PORT 3 and PORT 4 in the local UI |
-|Network interfaces |2 x 1 Gbe RJ45 <br> Shown as PORT 1 and PORT 2 in the local UI |
+|Network interfaces |2 x 10 GbE SFP+ <br> Shown as PORT 3 and PORT 4 in the local UI |
+|Network interfaces |2 x 1 GbE RJ45 <br> Shown as PORT 1 and PORT 2 in the local UI |
|Wi-Fi |802.11ac |
The Azure Stack Edge Mini R device also includes an onboard battery that is char
An additional [Type 2590 battery](https://www.bren-tronics.com/bt-70791ck.html) can be used in conjunction with the onboard battery to extend the use of the device between the charges. This battery should be compliant with all the safety, transportation, and environmental regulations applicable in the country of use.
-| Specification | Value |
-|-|-|
-| Onboard battery capacity | 73 WHr |
+| Specification | Value |
+|--|-|
+| Onboard battery capacity | 73 Wh |
## Enclosure dimensions and weight specifications
The following table lists the dimensions of the device and the USP with the rugg
| Enclosure | Millimeters | Inches | |-||-|
-| Height | 68 | 2.68 |
-| Width | 208 | 8.19 |
-| Length | 259 | 10.20 |
+| Height | 68 | 2.68 |
+| Width | 208 | 8.19 |
+| Length | 259 | 10.20 |
### Enclosure weight The following table lists the weight of the device including the battery.
-| Enclosure | Weight |
-|--||
-| Total weight of the device | 7 lbs. |
+| Enclosure | Weight |
+|--||
+| Total weight of the device | 7 lbs |
## Enclosure environment specifications
databox-online Azure Stack Edge Pro R Technical Specifications Compliance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-pro-r-technical-specifications-compliance.md
Previously updated : 03/24/2021 Last updated : 04/12/2021 # Azure Stack Edge Pro R technical specifications
The hardware components of your Azure Stack Edge Pro R device adhere to the tech
## Compute, memory specifications
-The Azure Stack Edge Pro R device have the following specifications for compute and memory:
-
-| Specification | Value |
-|||
-| CPU | 2 X Intel Xeon Silver 4114 CPU<br>20 phsyical cores (10 per CPU)<br>40 logical cores (vCPUs) (20 per CPU) |
-| Memory | 256 GB RAM (2666 MT/s) |
+The Azure Stack Edge Pro R device has the following specifications for compute and memory:
+
+| Specification | Value |
+|-||
+| CPU type | Dual Intel Xeon Silver 4114 CPU |
+| CPU: raw | 20 total cores, 40 total vCPUs |
+| CPU: usable | 32 vCPUs |
+| Memory type | Dell Compatible 16 GB RDIMM, 2666 MT/s, Dual rank |
+| Memory: raw | 256 GB RAM (16 x 16 GB) |
+| Memory: usable | 230 GB RAM |
## Compute acceleration specifications A Graphics Processing Unit (GPU) is included on every device that enables Kubernetes, deep learning, and machine learning scenarios.
-| Specification | Value |
+| Specification | Value |
|-|-|
-| GPU | One nVidia T4 GPU <br> For more information, see [NVIDIA T4](https://www.nvidia.com/en-us/data-center/tesla-t4/).|
+| GPU | One NVIDIA T4 GPU <br> For more information, see [NVIDIA T4](https://www.nvidia.com/en-us/data-center/tesla-t4/). |
## Power supply unit specifications The Azure Stack Edge Pro R device has two 100-240 V Power supply units (PSUs) with high-performance fans. The two PSUs provide a redundant power configuration. If a PSU fails, the device continues to operate normally on the other PSU until the failed module is replaced. The following table lists the technical specifications of the PSUs.
-| Specification | 550 W PSU |
-|-|-|
-| Maximum output power | 550 W |
-| Heat dissipation (maximum) | 2891 BTU/hr |
-| Frequency | 50/60 Hz |
-| Voltage range selection | Auto ranging: 115-230 V AC |
-| Hot pluggable | Yes |
+| Specification | 550 W PSU |
+|-|-|
+| Maximum output power | 550 W |
+| Heat dissipation (maximum) | 2891 BTU/hr |
+| Frequency | 50/60 Hz |
+| Voltage range selection | Auto ranging: 115-230 V AC |
+| Hot pluggable | Yes |
## Network specifications
-The Azure Stack Edge Pro R device has four network interfaces, PORT1 - PORT4.
+The Azure Stack Edge Pro R device has four network interfaces, PORT1 - PORT4.
-|Specification |Description |
+|Specification |Description |
|-|-|
-|Network interfaces |**2 x 1 Gbe RJ45** <br> PORT 1 is used as management interface for initial setup and is static by default. After the initial setup is complete, you can use the interface for data with any IP address. However, on reset, the interface reverts back to static IP. <br>The other interface PORT 2 is user configurable, can be used for data transfer, and is DHCP by default. |
-|Network interfaces |**2 x 25 Gbe SFP28** <br> These data interfaces PORT 3 and PORT 4 can be configured as DHCP (default) or static. |
+|Network interfaces |**2 x 1 GbE RJ45** <br> PORT 1 is used as the management interface for initial setup and is static by default. After the initial setup is complete, you can use the interface for data with any IP address. However, on reset, the interface reverts to static IP. <br>The other interface, PORT 2, which is user-configurable, can be used for data transfer, and is DHCP by default. |
+|Network interfaces |**2 x 25 GbE SFP28** <br> These data interfaces on PORT 3 and PORT 4 can be configured as DHCP (default) or static. |
Your Azure Stack Edge Pro R device has the following network hardware:
Your Azure Stack Edge Pro R device has the following network hardware:
| Parameter | Description | |-|-| | Model | ConnectX®-4 Lx EN network interface card |
-| Model Description | 25GbE dual-port SFP28; PCIe3.0 x8; ROHS R6 |
+| Model Description | 25 GbE dual-port SFP28; PCIe3.0 x8; ROHS R6 |
| Device Part Number (XR2) | MCX4421A-ACAN | | PSID (R640) | MT_2420110034 |--> <!-- confirm w/ Ravi what is this-->
-For a full list of supported cables, switches, and transceivers for these network cards, go to: [Mellanox dual port 25G ConnectX-4 channel network adapter compatible products](https://docs.mellanox.com/display/ConnectX4LxFirmwarev14271016/Firmware+Compatible+Products).
+For a full list of supported cables, switches, and transceivers for these network cards, go to [Mellanox dual port 25G ConnectX-4 channel network adapter compatible products](https://docs.mellanox.com/display/ConnectX4LxFirmwarev14271016/Firmware+Compatible+Products).
## Storage specifications
-The Azure Stack Edge Pro R devices have 8 data disks and 2 M.2 SATA disks that serve as operating system disks. For more information, go to [M.2 SATA disks](https://en.wikipedia.org/wiki/M.2).
+Azure Stack Edge Pro R devices have eight data disks and two M.2 SATA disks that serve as operating system disks. For more information, go to [M.2 SATA disks](https://en.wikipedia.org/wiki/M.2).
#### Storage for 1-node device
-The following table has the details for the storage capacity of the 1-node device.
+The following table has details for the storage capacity of the 1-node device.
| Specification | Value | |--|--| | Number of solid-state drives (SSDs) | 8 | | Single SSD capacity | 8 TB | | Total capacity | 64 TB |
-| Total usable capacity* | ~ 42 TB |
+| Total usable capacity* | ~ 42 TB |
**Some space is reserved for internal use.*
The weight of the device depends on the configuration of the enclosure.
| Enclosure | Weight | |--||
-| Total weight of 1-node device + rugged case with end caps | ~114 lbs. |
+| Total weight of 1-node device + rugged case with end caps | ~114 lbs |
<!--#### For the 4-node system
databox-online Azure Stack Edge Technical Specifications Compliance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-technical-specifications-compliance.md
Previously updated : 03/02/2020 Last updated : 04/12/2021 # Azure Stack Edge Pro technical specifications
The hardware components of your Microsoft Azure Stack Edge Pro device adhere to
The Azure Stack Edge Pro device has the following specifications for compute and memory:
-| Specification | Value |
-|-|-|
-| CPU | 2 X 10 core CPU Intel Xeon Silver 4114 2.2G |
-| Memory | 128 GB RAM (8x 16GB RDIMM) |
+| Specification | Value |
+|-|--|
+| CPU type | Dual Intel Xeon Silver 4114 2.2 GHz |
+| CPU: raw | 20 total cores, 40 total vCPUs |
+| CPU: usable | 32 vCPUs |
+| Memory type | 8 x 16 GB RDIMM |
+| Memory: raw | 128 GB RAM (8 x 16 GB) |
+| Memory: usable | 102 GB RAM |
+ ## FPGA specifications A Field Programmable Gate Array (FPGA) is included on every Azure Stack Edge Pro device that enables Machine Learning (ML) scenarios.
-| Specification | Value |
+| Specification | Value |
|-|-| | FPGA | Intel Arria 10 <br> Available Deep Neural Network (DNN) models are the same as those [supported by cloud FPGA instances](../machine-learning/how-to-deploy-fpga-web-service.md#fpga-support-in-azure).|
The Azure Stack Edge Pro device has two 100-240 V Power supply units (PSUs) with
| Specification | 750 W PSU | |-|-|
-| Maximum output power | 750 W |
+| Maximum output power | 750 W |
| Frequency | 50/60 Hz | | Voltage range selection | Auto ranging: 100-240 V AC | | Hot pluggable | Yes |
Your Azure Stack Edge Pro device has 6 network interfaces, PORT1- PORT6.
|-|-| | Network interfaces | 2 X 1 GbE interfaces - 1 management, not user configurable, used for initial setup. The other interface is user configurable, can be used for data transfer, and is DHCP by default. <br>2 X 25 GbE interfaces - These can also operate as 10 GbE interfaces. These data interfaces can be configured by user as DHCP (default) or static. <br> 2 X 25 GbE interfaces - These data interfaces can be configured by user as DHCP (default) or static. |
-The Network Adapters used are:
+The Network Adapters used are:
| Specification | Description | |-|-|
The Azure Stack Edge Pro devices have 9 X 2.5" NVMe SSDs, each with a capacity o
| Number of solid-state drives (SSDs) | 8 | | Single SSD capacity | 1.6 TB | | Total capacity | 12.8 TB |
-| Total usable capacity* | ~ 12.5 TB |
+| Total usable capacity* | ~ 12.5 TB |
**Some space is reserved for internal use.*
The following tables list the various enclosure specifications for dimensions an
The following table lists the dimensions of the enclosure in millimeters and inches.
-| Enclosure | Millimeters | Inches |
-|-||-|
-| Height | 44.45 | 1.75" |
-| Width | 434.1 | 17.09" |
-| Length | 740.4 | 29.15" |
+| Enclosure | Millimeters | Inches |
+|-|--|-|
+| Height | 44.45 | 1.75" |
+| Width | 434.1 | 17.09" |
+| Length | 740.4 | 29.15" |
The following table lists the dimensions of the shipping package in millimeters and inches.
-| Package | Millimeters | Inches |
+| Package | Millimeters | Inches |
|-||-|
-| Height | 311.2 | 12.25" |
-| Width | 642.8 | 25.31" |
-| Length | 1,051.1 | 41.38" |
+| Height | 311.2 | 12.25" |
+| Width | 642.8 | 25.31" |
+| Length | 1,051.1 | 41.38" |
### Enclosure weight
digital-twins How To Manage Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-model.md
description: See how to create, edit, and delete a model within Azure Digital Twins. Previously updated : 3/12/2020 Last updated : 4/07/2021
# Manage Azure Digital Twins models
-You can manage the [models](concepts-models.md) that your Azure Digital Twins instance knows about using the [**DigitalTwinModels APIs**](/rest/api/digital-twins/dataplane/models), the [.NET (C#) SDK](/dotnet/api/overview/azure/digitaltwins/client), or the [Azure Digital Twins CLI](how-to-use-cli.md).
+You can manage the [models](concepts-models.md) of your Azure Digital Twins instance using the [**DigitalTwinModels APIs**](/rest/api/digital-twins/dataplane/models), the [.NET (C#) SDK](/dotnet/api/overview/azure/digitaltwins/client), or the [Azure Digital Twins CLI](how-to-use-cli.md).
Management operations include upload, validation, retrieval, and deletion of models.
event-grid Consume Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/consume-private-endpoints.md
To deliver events to event hubs in your Event Hubs namespace using managed ident
To deliver events to Service Bus queues or topics in your Service Bus namespace using managed identity, follow these steps: 1. Enable system-assigned identity: [system topics](enable-identity-system-topics.md), [custom topics, and domains](enable-identity-custom-topics-domains.md).
-1. [Add the identity to the **Azure Service Bus Data Sender**](/service-bus-messaging/service-bus-managed-service-identity#azure-built-in-roles-for-azure-service-bus) role on the Service Bus namespace
+1. [Add the identity to the **Azure Service Bus Data Sender**](../service-bus-messaging/service-bus-managed-service-identity.md#azure-built-in-roles-for-azure-service-bus) role on the Service Bus namespace (see the sketch after this list).
1. [Enable the **Allow trusted Microsoft services to bypass this firewall** setting on your Service Bus namespace](../service-bus-messaging/service-bus-service-endpoints.md#trusted-microsoft-services). 1. [Configure the event subscription](managed-service-identity.md) that uses a Service Bus queue or topic as an endpoint to use the system-assigned identity.
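As an illustration of the role-assignment step, here is a minimal sketch using Azure PowerShell. The subscription, resource group, and namespace names are placeholders, and `<principal ID>` stands for the object ID of the topic's or domain's system-assigned identity:

```powershell
# Grant the Event Grid identity the "Azure Service Bus Data Sender" role
# on the target Service Bus namespace.
$namespaceId = "/subscriptions/<subscription ID>/resourceGroups/<resource group>/providers/Microsoft.ServiceBus/namespaces/<namespace name>"

New-AzRoleAssignment -ObjectId "<principal ID>" `
    -RoleDefinitionName "Azure Service Bus Data Sender" `
    -Scope $namespaceId
```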
firewall Active Ftp Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/active-ftp-support.md
Previously updated : 03/05/2021 Last updated : 04/12/2021
With Active FTP, the FTP server initiates the data connection to the designated
By default, Active FTP support is disabled on Azure Firewall to protect against FTP bounce attacks using the FTP `PORT` command. However, you can enable Active FTP when you deploy using Azure PowerShell, the Azure CLI, or an Azure ARM template.
+To support active mode FTP, the following TCP ports need to be opened:
+
+- FTP server's port 21 from anywhere (client initiates connection)
+- FTP server's port 21 to ports > 1023 (server responds to client's control port)
+- FTP server's port 20 to ports > 1023 on clients (server initiates data connection to client's data port)
+- FTP server's port 20 from ports > 1023 on clients (client sends ACKs to server's data port)
## Azure PowerShell
frontdoor Front Door Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-custom-domain.md
Title: Tutorial - Add custom domain to your Azure Front Door configuration
description: In this tutorial, you learn how to onboard a custom domain to Azure Front Door. documentationcenter: ''-+ editor: '' na ms.devlang: na Previously updated : 09/24/2020- Last updated : 04/12/2021+ #Customer intent: As a website owner, I want to add a custom domain to my Front Door configuration so that my users can use my custom domain to access my content.
In this tutorial, you learn how to:
* Before you can complete the steps in this tutorial, you must first create a Front Door. For more information, see [Quickstart: Create a Front Door](quickstart-create-front-door.md).
-* If you do not already have a custom domain, you must first purchase one with a domain provider. For example, see [Buy a custom domain name](../app-service/manage-custom-dns-buy-domain.md).
+* If you don't already have a custom domain, you must first purchase one with a domain provider. For example, see [Buy a custom domain name](../app-service/manage-custom-dns-buy-domain.md).
-* If you are using Azure to host your [DNS domains](../dns/dns-overview.md), you must delegate the domain provider's domain name system (DNS) to an Azure DNS. For more information, see [Delegate a domain to Azure DNS](../dns/dns-delegate-domain-azure-dns.md). Otherwise, if you are using a domain provider to handle your DNS domain, proceed to [Create a CNAME DNS record](#create-a-cname-dns-record).
+* If you're using Azure to host your [DNS domains](../dns/dns-overview.md), you must delegate the domain provider's domain name system (DNS) to an Azure DNS. For more information, see [Delegate a domain to Azure DNS](../dns/dns-delegate-domain-azure-dns.md). Otherwise, if you're using a domain provider to handle your DNS domain, continue to [Create a CNAME DNS record](#create-a-cname-dns-record).
## Create a CNAME DNS record
A custom domain and its subdomain can be associated with only a single Front Doo
## Map the temporary afdverify subdomain
-When you map an existing domain that is in production, there are special considerations. While you are registering your custom domain in the Azure portal, a brief period of downtime for the domain can occur. To avoid interruption of web traffic, first map your custom domain to your Front Door default frontend host with the Azure afdverify subdomain to create a temporary CNAME mapping. With this method, users can access your domain without interruption while the DNS mapping occurs.
+When you map an existing domain that is in production, there are special considerations. While you're registering your custom domain in the Azure portal, a brief period of downtime for the domain can occur. To avoid interruption of web traffic, first map your custom domain to your Front Door default frontend host with the Azure afdverify subdomain to create a temporary CNAME mapping. With this method, users can access your domain without interruption while the DNS mapping occurs.
-Otherwise, if you are using your custom domain for the first time and no production traffic is running on it, you can directly map your custom domain to your Front Door. Proceed to [Map the permanent custom domain](#map-the-permanent-custom-domain).
+Otherwise, if you're using your custom domain for the first time and no production traffic is running on it, you can directly map your custom domain to your Front Door. Continue to [Map the permanent custom domain](#map-the-permanent-custom-domain).
To create a CNAME record with the afdverify subdomain:
To create a CNAME record with the afdverify subdomain:
||-|| | afdverify.www.contoso.com | CNAME | afdverify.contoso-frontend.azurefd.net |
- - Source: Enter your custom domain name, including the afdverify subdomain, in the following format: afdverify._&lt;custom domain name&gt;_. For example, afdverify.www.contoso.com. If you are mapping a wildcard domain, like \*.contoso.com, the source value is the same as it would be without the wildcard: afdverify.contoso.com.
+ - Source: Enter your custom domain name, including the afdverify subdomain, in the following format: afdverify._&lt;custom domain name&gt;_. For example, afdverify.www.contoso.com. If you're mapping a wildcard domain, like \*.contoso.com, the source value is the same as it would be without the wildcard: afdverify.contoso.com.
- Type: Enter *CNAME*.
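If the domain's DNS zone is hosted in Azure DNS, here is a minimal sketch of creating the temporary afdverify CNAME record with Azure PowerShell. The zone name, resource group, and record values follow the contoso example above and are placeholders for your own domain:

```powershell
# Create the temporary afdverify CNAME record in an Azure DNS zone.
New-AzDnsRecordSet -Name "afdverify.www" `
    -RecordType CNAME `
    -ZoneName "contoso.com" `
    -ResourceGroupName "<DNS zone resource group>" `
    -Ttl 3600 `
    -DnsRecords (New-AzDnsRecordConfig -Cname "afdverify.contoso-frontend.azurefd.net")
```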
After you've registered your custom domain, you can then add it to your Front Do
1. Sign in to the [Azure portal](https://portal.azure.com/) and browse to the Front Door containing the frontend host that you want to map to a custom domain.
-2. On the **Front Door designer** page, click on '+' to add a custom domain.
+2. On the **Front Door designer** page, select '+' to add a custom domain.
3. Specify **Custom domain**. 4. For **Frontend host**, the frontend host to use as the destination domain of your CNAME record is pre-filled and is derived from your Front Door: *&lt;default hostname&gt;*.azurefd.net. It cannot be changed.
-5. For **Custom hostname**, enter your custom domain, including the subdomain, to use as the source domain of your CNAME record. For example, www\.contoso.com or cdn.contoso.com. Do not use the afdverify subdomain name.
+5. For **Custom hostname**, enter your custom domain, including the subdomain, to use as the source domain of your CNAME record. For example, www\.contoso.com or cdn.contoso.com. Don't use the afdverify subdomain name.
6. Select **Add**.
After you've registered your custom domain, you can then add it to your Front Do
## Verify the custom domain
-After you have completed the registration of your custom domain, verify that the custom domain references your default Front Door frontend host.
+After you've completed the registration of your custom domain, verify that the custom domain references your default Front Door frontend host.
In your browser, navigate to the address of the file by using the custom domain. For example, if your custom domain is robotics.contoso.com, the URL to the cached file should be similar to the following URL: http:\//robotics.contoso.com/my-public-container/my-file.jpg. Verify that the result is the same as when you access the Front Door directly at *&lt;Front Door host&gt;*.azurefd.net. ## Map the permanent custom domain
-If you have verified that the afdverify subdomain has been successfully mapped to your Front Door (or if you are using a new custom domain that is not in production), you can then map the custom domain directly to your default Front Door frontend host.
+If you've verified that the afdverify subdomain has been successfully mapped to your Front Door (or if you're using a new custom domain that isn't in production), you can then map the custom domain directly to your default Front Door frontend host.
To create a CNAME record for your custom domain:
To create a CNAME record for your custom domain:
5. If you previously created a temporary afdverify subdomain CNAME record, delete it.
-6. If you are using this custom domain in production for the first time, follow the steps for [Associate the custom domain with your Front Door](#associate-the-custom-domain-with-your-front-door) and [Verify the custom domain](#verify-the-custom-domain).
+6. If you're using this custom domain in production for the first time, follow the steps for [Associate the custom domain with your Front Door](#associate-the-custom-domain-with-your-front-door) and [Verify the custom domain](#verify-the-custom-domain).
For example, the procedure for the GoDaddy domain registrar is as follows:
For example, the procedure for the GoDaddy domain registrar is as follows:
8. Select **Delete** to delete the CNAME record. - ## Clean up resources
-In the preceding steps, you added a custom domain to a Front Door. If you no longer want to associate your Front Door with a custom domain, you can remove the custom domain by performing these steps:
+In the preceding steps, you added a custom domain to a Front Door. If you no longer want to associate your Front Door with a custom domain, you can remove the custom domain by doing these steps:
-1. In your Front Door designer, select the custom domain that you want to remove.
+1. Go to your DNS provider and delete the CNAME record for the custom domain, or update the CNAME record to point to a non-Front Door endpoint.
-2. Click Delete from the context menu for the custom domain.
+ > [!Important]
+ > To prevent dangling DNS entries and the security risks they create, starting from April 9th, 2021, Azure Front Door requires removal of the CNAME records to Front Door endpoints before the resources can be deleted. Resources include Front Door custom domains, Front Door endpoints, or Azure resource groups that have Front Door custom domain(s) enabled.
- The custom domain is disassociated from your endpoint.
+2. In your Front Door designer, select the custom domain that you want to remove.
+3. Select **Delete** from the context menu for the custom domain. The custom domain is now disassociated from your endpoint.
## Next steps
governance Protect Resource Hierarchy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/management-groups/how-to/protect-resource-hierarchy.md
Title: How to protect your resource hierarchy - Azure Governance description: Learn how to protect your resource hierarchy with hierarchy settings that include setting the default management group. Previously updated : 02/05/2021 Last updated : 04/09/2021 # How to protect your resource hierarchy
hierarchy:
To turn the setting back off, use the same endpoint and set **requireAuthorizationForGroupCreation** to a value of **false**.
+## PowerShell sample
+
+PowerShell does not have an 'Az' cmdlet to set the default management group or to require authorization for group creation. As a workaround, you can call the REST API with the PowerShell sample below:
+
+```powershell
+$root_management_group_id = "Enter the ID of root management group"
+$default_management_group_id = "Enter the ID of default management group (or use the same ID of the root management group)"
+
+$body = '{
+ "properties": {
+ "defaultManagementGroup": "/providers/Microsoft.Management/managementGroups/' + $default_management_group_id + '",
+ "requireAuthorizationForGroupCreation": true
+ }
+}'
+
+$token = (Get-AzAccessToken).Token
+$headers = @{"Authorization"= "Bearer $token"; "Content-Type"= "application/json"}
+$uri = "https://management.azure.com/providers/Microsoft.Management/managementGroups/$root_management_group_id/settings/default?api-version=2020-02-01"
+
+Invoke-RestMethod -Method PUT -Uri $uri -Headers $headers -Body $body
+```
+ ## Next steps To learn more about management groups, see:
hdinsight Hdinsight For Vscode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-for-vscode.md
The tool also supports the **Spark SQL** query:
> [!NOTE] >
-> ["Ms-python >=2020.5.78807 version is not supported on this extension"](#issues-changed) has been resolved. Please update the **ms-python** to the **latest version** now.
+> [ms-toolsai.jupyter >2021.3.684299474 version is not supported on this extension](#known-issues) is a known issue. Please keep using the Synapse kernel by sticking to Microsoft Jupyter 2021.3.684299474.
## Submit PySpark batch job
Submit a job to an HDInsight cluster using Data Lake Storage Gen2. You're prompt
From the menu bar, go to **View** > **Command Palette**, and then enter **Azure: Sign Out**.
-## Issues Changed
+## Known Issues
-For this issue "ms-python >=2020.5.78807 version is not supported on this extension" has been resolved, please update the **ms-python** to the **latest version** now.
+ Versions of ms-toolsai.jupyter later than 2021.3.684299474 are not supported on this extension. Please keep using the Synapse kernel by sticking to Microsoft Jupyter 2021.3.684299474:
+
+ 1. Disable auto updating extension.
+
+ ![disable auto updating extension](./media/hdinsight-for-vscode/disable-auto-updating-extension.png)
+
+2. Install a selected version of Microsoft Jupyter.
+
+ ![selected version of microsoft jupyter](./media/hdinsight-for-vscode/selected-version-of-microsoft-jupyter.png)
+
+3. Install Microsoft Jupyter version 2021.3.684299474
## Next steps
iot-edge Tutorial Nested Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-nested-iot-edge.md
For the **lower layer device**, the diagnostics image needs to be manually passe
sudo iotedge check --diagnostics-image-name <parent_device_fqdn_or_ip>:8000/azureiotedge-diagnostics:1.2 ```
-On your **top layer device**, expect to see an output with several passing evaluations and at least one warning. The check for the `latest security daemon` will warn you that another IoT Edge version is the latest stable version, because IoT Edge version 1.2 is in public preview. You may see additional warnings about logs policies and, depending on your network, DNS policies.
+On your **top layer device**, expect to see an output with several passing evaluations. You may see some warnings about logs policies and, depending on your network, DNS policies.
<!-- Add pic after GA --> <!-- KEEP! A sample output of the `iotedge check` is shown below: -->
The module deployments to your devices were automatically generated when the dev
In the [Azure Cloud Shell](https://shell.azure.com/), you can take a look at the **top layer device's** deployment JSON to understand what modules were deployed to your device: ```bash
- cat ~/nestedIotEdgeTutorial/templates/tutorial/deploymentTopLayer.json
+ cat ~/nestedIotEdgeTutorial/iotedge_config_cli_release/templates/tutorial/deploymentTopLayer.json
In addition to the runtime modules **IoT Edge Agent** and **IoT Edge Hub**, the **top layer device** receives the **Docker registry** module and **IoT Edge API Proxy** module.
If you'd like a look at how to create a deployment like this through the Azure p
In the [Azure Cloud Shell](https://shell.azure.com/), you can take a look at the **lower layer device's** deployment JSON to understand what modules were deployed to your device: ```bash
- cat ~/nestedIotEdgeTutorial/templates/tutorial/deploymentLowerLayer.json
+ cat ~/nestedIotEdgeTutorial/iotedge_config_cli_release/templates/tutorial/deploymentLowerLayer.json
You can see under `systemModules` that the **lower layer device's** runtime modules are set to pull from `$upstream:8000`, instead of `mcr.microsoft.com`, as the **top layer device** did. The **lower layer device** sends Docker image requests to the **IoT Edge API Proxy** module on port 8000, as it cannot directly pull the images from the cloud. The other module deployed to the **lower layer device**, the **Simulated Temperature Sensor** module, also makes its image request to `$upstream:8000`.
iot-hub-device-update Device Update Agent Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-agent-provisioning.md
The Device Update agent can also be configured without the IoT Identity service
> [!Important] > Do not add quotes around the connection string.
-
- - connection_string= "<ADD CONNECTION STRING HERE>"
+ ```shell
+ - connection_string=<ADD CONNECTION STRING HERE>
+ ```
1. Enter and save.
iot-hub Iot Hub Ha Dr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-ha-dr.md
Once the failover operation for the IoT hub completes, all operations from the d
> > - If you use Azure Functions or Azure Stream Analytics to connect the built-in Events endpoint, you might need to perform a **Restart**. This is because during failover previous offsets are no longer valid. >
-> - When routing to storage, we recommend listing the blobs or files and then iterating over them, to ensure all blobs or files are read without making any assumptions of partition. The partition range could potentially change during a Microsoft-initiated failover or manual failover. You can use the [List Blobs API](/rest/api/storageservices/list-blobs) to enumerate the list of blobs or [List ADLS Gen2 API](/rest/api/storageservices/datalakestoragegen2/filesystem/listpaths) for the list of files. To learn more, see [Azure Storage as a routing endpoint](iot-hub-devguide-messages-d2c.md#azure-storage-as-a-routing-endpoint).
+> - When routing to storage, we recommend listing the blobs or files and then iterating over them, to ensure all blobs or files are read without making any assumptions of partition. The partition range could potentially change during a Microsoft-initiated failover or manual failover. You can use the [List Blobs API](/rest/api/storageservices/list-blobs) to enumerate the list of blobs or [List ADLS Gen2 API](/rest/api/storageservices/datalakestoragegen2/filesystem/list) for the list of files. To learn more, see [Azure Storage as a routing endpoint](iot-hub-devguide-messages-d2c.md#azure-storage-as-a-routing-endpoint).
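As an illustration, here is a minimal sketch that lists every blob in the routing container and downloads each one without assuming a fixed partition layout. It assumes the Az.Storage module; the account, key, container, and destination values are placeholders:

```powershell
# Enumerate all routed blobs and download each one; no assumption is made
# about which partition a blob belongs to.
$ctx = New-AzStorageContext -StorageAccountName "<storage account>" -StorageAccountKey "<account key>"

Get-AzStorageBlob -Container "<routing container>" -Context $ctx | ForEach-Object {
    Get-AzStorageBlobContent -Blob $_.Name `
        -Container "<routing container>" `
        -Destination "C:\routed-messages" `
        -Context $ctx
}
```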
## Microsoft-initiated failover
machine-learning Azure Machine Learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/azure-machine-learning-release-notes.md
__RSS feed__: Get notified when this page is updated by copying and pasting the
### Azure Machine Learning Studio Notebooks Experience (February Update) + **New features** + [Native Terminal (GA)](./how-to-access-terminal.md). Users will now have access to an integrated terminal as well as Git operations via the integrated terminal.
- + [Notebook Snippets (preview)](https://azure.github.io/azureml-web/docs/vs-code-snippets/snippets). Common Azure ML code excerpts are now available at your fingertips. Navigate to the code snippets panel, accessible via the toolbar, or activate the in-code snippets menu using Ctrl + Space.
+ + Notebook Snippets (preview). Common Azure ML code excerpts are now available at your fingertips. Navigate to the code snippets panel, accessible via the toolbar, or activate the in-code snippets menu using Ctrl + Space.
+ [Keyboard Shortcuts](./how-to-run-jupyter-notebooks.md#useful-keyboard-shortcuts). Full parity with keyboard shortcuts available in Jupyter. + Indicate Cell parameters. Shows users which cells in a notebook are parameter cells and can run parameterized notebooks via [Papermill](https://github.com/nteract/papermill) on the Compute Instance. + Terminal and Kernel session
machine-learning Migrate Execute R Script https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/migrate-execute-r-script.md
Title: 'ML Studio (classic): Migrate to Azure Machine Learning - Execute R Scrip
description: Rebuild Studio (classic) Execute R script modules to run on Azure Machine Learning. -+
machine-learning Migrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/migrate-overview.md
Title: 'ML Studio (classic): Migrate to Azure Machine Learning'
description: Migrate from Studio (classic) to Azure Machine Learning for a modernized data science platform. -+
machine-learning Migrate Rebuild Experiment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/migrate-rebuild-experiment.md
Title: 'ML Studio (classic): Migrate to Azure Machine Learning - Rebuild experim
description: Rebuild Studio (classic) experiments in Azure Machine Learning designer. -+
machine-learning Migrate Rebuild Integrate With Client App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/migrate-rebuild-integrate-with-client-app.md
Title: 'ML Studio (classic): Migrate to Azure Machine Learning - Consume pipelin
description: Integrate pipeline endpoints with client applications in Azure Machine Learning. -+
machine-learning Migrate Rebuild Web Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/migrate-rebuild-web-service.md
Title: 'ML Studio (classic): Migrate to Azure Machine Learning - Rebuild web ser
description: Rebuild Studio (classic) web services as pipeline endpoints in Azure Machine Learning -+
machine-learning Migrate Register Dataset https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/migrate-register-dataset.md
Title: 'ML Studio (classic): Migrate to Azure Machine Learning - Rebuild dataset
description: Rebuild Studio (classic) datasets in Azure Machine Learning designer -+
machine-learning Concept Data Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-data-encryption.md
The `hbi_workspace` flag controls the amount of [data Microsoft collects for dia
* Cleans up your local scratch disk between runs * Securely passes credentials for your storage account, container registry, and SSH account from the execution layer to your compute clusters using your key vault * Enables IP filtering to ensure the underlying batch pools cannot be called by any external services other than AzureMachineLearningService
-* Please note compute instances are not supported in HBI workspace
+* Compute instances are supported in HBI workspace
### Azure Blob storage
Each workspace has an associated system-assigned managed identity that has the s
* [Connect to Azure storage](how-to-access-data.md) * [Get data from a datastore](how-to-create-register-datasets.md) * [Connect to data](how-to-connect-data-ui.md)
-* [Train with datasets](how-to-train-with-datasets.md)
+* [Train with datasets](how-to-train-with-datasets.md)
machine-learning Concept Deep Learning Vs Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-deep-learning-vs-machine-learning.md
Previously updated : 01/14/2020 Last updated : 04/12/2021
With the appropriate data transformation, a neural network can understand text,
Text analytics based on deep learning methods involves analyzing large quantities of text data (for example, medical documents or expenses receipts), recognizing patterns, and creating organized and concise information out of it.
-Companies use deep learning to perform text analysis to detect insider trading and compliance with government regulations. Another common example is insurance fraud: text analytics has often been used to analyze large amounts of documents to recognize the chances of an insurance claim being fraud.
+Companies use deep learning to perform text analysis to detect insider trading and compliance with government regulations. Another common example is insurance fraud: text analytics has often been used to analyze large amounts of documents to recognize the chances of an insurance claim being fraud.
## Artificial neural networks
The following sections explore most popular artificial neural network typologies
The feedforward neural network is the most simple type of artificial neural network. In a feedforward network, information moves in only one direction from input layer to output layer. Feedforward neural networks transform an input by putting it through a series of hidden layers. Every layer is made up of a set of neurons, and each layer is fully connected to all neurons in the layer before. The last fully connected layer (the output layer) represents the generated predictions.
-### Recurrent neural network
+### Recurrent neural network (RNN)
Recurrent neural networks are a widely used artificial neural network. These networks save the output of a layer and feed it back to the input layer to help predict the layer's outcome. Recurrent neural networks have great learning abilities. They're widely used for complex tasks such as time series forecasting, learning handwriting, and recognizing language.
-### Convolutional neural network
+### Convolutional neural network (CNN)
A convolutional neural network is a particularly effective artificial neural network, and it presents a unique architecture. Layers are organized in three dimensions: width, height, and depth. The neurons in one layer connect not to all the neurons in the next layer, but only to a small region of the layer's neurons. The final output is reduced to a single vector of probability scores, organized along the depth dimension. Convolutional neural networks have been used in areas such as video recognition, image recognition, and recommender systems.
+### Generative adversarial network (GAN)
+
+Generative adversarial networks are generative models trained to create realistic content such as images. They are made up of two networks, known as the generator and the discriminator, which are trained simultaneously. During training, the generator uses random noise to create new synthetic data that closely resembles real data. The discriminator takes the output from the generator as input and uses real data to determine whether the generated content is real or synthetic. The two networks compete with each other: the generator tries to produce synthetic content that is indistinguishable from real content, while the discriminator tries to correctly classify inputs as real or synthetic. The output is then used to update the weights of both networks to help them better achieve their respective goals.
+
+Generative adversarial networks are used to solve problems like image to image translation and age progression.
+
+### Transformers
+
+Transformers are a model architecture that is suited for solving problems containing sequences such as text or time-series data. They consist of [encoder and decoder layers](https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)#Encoder). The encoder takes an input and maps it to a numerical representation containing information such as context. The decoder uses information from the encoder to produce an output such as translated text. What makes transformers different from other architectures containing encoders and decoders are the attention sub-layers. Attention is the idea of focusing on specific parts of an input based on the importance of their context in relation to other inputs in a sequence. For example, when summarizing a news article, not all sentences are relevant to describe the main idea. By focusing on key words throughout the article, the model can produce a one-sentence summary: the headline.
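The attention idea described here is usually written as the scaled dot-product formula from the Transformer literature (added here for reference; not part of the original article), where $Q$, $K$, and $V$ are the query, key, and value matrices and $d_k$ is the key dimension:

$$
\text{Attention}(Q, K, V) = \text{softmax}\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V
$$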
+
+Transformers have been used to solve natural language processing problems such as translation, text generation, question answering, and text summarization.
+
+Some well known implementations of transformers are:
+
+- Bidirectional Encoder Representations from Transformers (BERT)
+- Generative Pre-trained Transformer 2 (GPT-2)
+- Generative Pre-trained Transformer 3 (GPT-3)
+ ## Next steps The following articles show you more options for using open-source deep learning models in [Azure Machine Learning](./index.yml?WT.mc_id=docs-article-lazzeri):
machine-learning How To Configure Environment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-environment.md
To configure a local development environment or remote VM:
1. Activate your newly created Python virtual environment. 1. Install the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/install).
-1. To to configure your local environment to use your Azure Machine Learning workspace, [create a workspace configuration file](#workspace) or use an existing one.
+1. To configure your local environment to use your Azure Machine Learning workspace, [create a workspace configuration file](#workspace) or use an existing one.
Now that you have your local environment set up, you're ready to start working with Azure Machine Learning. See the [Azure Machine Learning Python getting started guide](tutorial-1st-experiment-sdk-setup-local.md) to get started.
For more information, see [Data Science Virtual Machines](https://azure.microsof
## Next steps - [Train a model](tutorial-train-models-with-aml.md) on Azure Machine Learning with the MNIST dataset.-- See the [Azure Machine Learning SDK for Python reference](/python/api/overview/azure/ml/intro).
+- See the [Azure Machine Learning SDK for Python reference](/python/api/overview/azure/ml/intro).
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-secure-training-vnet.md
To use either a [managed Azure Machine Learning __compute target__](concept-comp
> * The subnet that's specified for the compute instance or cluster must have enough unassigned IP addresses to accommodate the number of VMs that are targeted. If the subnet doesn't have enough unassigned IP addresses, a compute cluster will be partially allocated. > * Check to see whether your security policies or locks on the virtual network's subscription or resource group restrict permissions to manage the virtual network. If you plan to secure the virtual network by restricting traffic, leave some ports open for the compute service. For more information, see the [Required ports](#mlcports) section. > * If you're going to put multiple compute instances or clusters in one virtual network, you might need to request a quota increase for one or more of your resources.
-> * If the Azure Storage Account(s) for the workspace are also secured in a virtual network, they must be in the same virtual network and subnet as the Azure Machine Learning compute instance or cluster.
+> * If the Azure Storage Account(s) for the workspace are also secured in a virtual network, they must be in the same virtual network and subnet as the Azure Machine Learning compute instance or cluster. Configure your storage firewall settings to allow communication to the virtual network and subnet that the compute resides in. Note that selecting the checkbox for "Allow trusted Microsoft services to access this account" is not sufficient to allow communication from the compute.
> * For compute instance Jupyter functionality to work, ensure that web socket communication is not disabled. Please ensure your network allows websocket connections to *.instances.azureml.net and *.instances.azureml.ms. > * When compute instance is deployed in a private link workspace it can be only be accessed from within virtual network. If you are using custom DNS or hosts file please add an entry for `<instance-name>.<region>.instances.azureml.ms` with private IP address of workspace private endpoint. For more information see the [custom DNS](./how-to-custom-dns.md) article. > * The subnet used to deploy compute cluster/instance should not be delegated to any other service like ACI
machine-learning Predictive Maintenance Playbook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/team-data-science-process/predictive-maintenance-playbook.md
Microsoft Azure offers learning paths for the foundational concepts behind PdM t
| Training resource | Availability | |:-|--|
-| [Learning Path for PdM using Trees and Random Forest](https://aischool.microsoft.com/learning-paths/1H5vH5wAYcAy88CoQWQcA8) | Public |
-| [Learning Path for PdM using Deep Learning](https://aischool.microsoft.com/learning-paths/FSIXxYkOGcauo0eUO8qAS) | Public |
-| [AI Developer on Azure](https://azure.microsoft.com/training/learning-paths/azure-ai-developer) | Public |
-| [Microsoft AI School](https://aischool.microsoft.com/learning-paths) | Public |
-| [Azure AI Learning from GitHub](https://github.com/Azure/connectthedots/blob/master/readme.md) | Public |
+| [Microsoft Docs: Data Scientist Role](https://docs.microsoft.com/learn/roles/data-scientist) | Public |
+| [Microsoft Docs: AI Engineer Role](https://docs.microsoft.com/learn/roles/ai-engineer) | Public |
+| [Microsoft Docs: Data Engineer Role](https://docs.microsoft.com/learn/roles/data-engineer) | Public |
+| [Microsoft AI School](https://www.microsoft.com/ai/ai-school) | Public |
| [LinkedIn Learning](https://www.linkedin.com/learning) | Public |
-| [Microsoft AI YouTube Webinars](https://www.youtube.com/watch?v=NvrH7_KKzoM&t=4s) | Public |
+| [Microsoft: Playlists on YouTube for Artificial Intelligence and Analytics](https://www.youtube.com/c/MicrosoftAzure/playlists?view=50&sort=dd&shelf_id=7) | Public |
| [Microsoft AI Show](https://channel9.msdn.com/Shows/AI-Show) | Public |
-| [LearnAI@MS](https://learnanalytics.microsoft.com) | Partners |
+| [AI Platform Overview](https://azure.microsoft.com/overview/ai-platform/) | Public |
+| [AI Lab](https://www.microsoft.com/ai/ai-lab) | Public |
+| [Microsoft AI](https://www.microsoft.com/AI) | Public |
| [Microsoft Partner Network](https://partner.microsoft.com/training/training-center) | Partners |
-In addition, free MOOCS (massive open online courses) on AI are offered online by academic institutions like Stanford and MIT, and other educational companies.
+In addition, free MOOCS (massive open online courses) on AI are offered online by academic institutions like Stanford and MIT, and other educational companies.
mariadb Concepts Backup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/concepts-backup.md
There are two types of restore available:
The estimated time of recovery depends on several factors including the database sizes, the transaction log size, the network bandwidth, and the total number of databases recovering in the same region at the same time. The recovery time is usually less than 12 hours. > [!IMPORTANT]
-> Deleted servers **cannot** be restored. If you delete the server, all databases that belong to the server are also deleted and cannot be recovered.To protect server resources, post deployment, from accidental deletion or unexpected changes, administrators can leverage [management locks](../azure-resource-manager/management/lock-resources.md).
+> Deleted servers can be restored only within **five days** of deletion, after which the backups are deleted. The database backup can be accessed and restored only from the Azure subscription hosting the server. To restore a dropped server, refer to the [documented steps](howto-restore-dropped-server.md). To protect server resources, post deployment, from accidental deletion or unexpected changes, administrators can leverage [management locks](../azure-resource-manager/management/lock-resources.md).
### Point-in-time restore
mariadb Concepts Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/concepts-business-continuity.md
You can use the service's backups to recover a server from various disruptive ev
You can perform a point-in-time-restore to create a copy of your server to a known good point in time. This point in time must be within the backup retention period you have configured for your server. After the data is restored to the new server, you can either replace the original server with the newly restored server or copy the needed data from the restored server into the original server. > [!IMPORTANT]
-> Deleted servers cannot be restored. To protect server resources, post deployment, from accidental deletion or unexpected changes, administrators can leverage [management locks](../azure-resource-manager/management/lock-resources.md).
+> Deleted servers can be restored only within **five days** of deletion, after which the backups are deleted. The database backup can be accessed and restored only from the Azure subscription hosting the server. To restore a dropped server, refer to the [documented steps](howto-restore-dropped-server.md). To protect server resources, post deployment, from accidental deletion or unexpected changes, administrators can leverage [management locks](../azure-resource-manager/management/lock-resources.md).
## Recover from an Azure regional data center outage
mariadb Howto Restore Dropped Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/howto-restore-dropped-server.md
+
+ Title: Restore a deleted Azure Database for MariaDB server
+description: This article describes how to restore a deleted server in Azure Database for MariaDB using the Azure portal.
++++ Last updated : 4/12/2021++
+# Restore a deleted Azure Database for MariaDB server
+
+When a server is deleted, the database server backup can be retained up to five days in the service. The database backup can be accessed and restored only from the Azure subscription where the server originally resided. The following recommended steps can be followed to recover a deleted MariaDB server resource within 5 days from the time of server deletion. The recommended steps will work only if the backup for the server is still available and not deleted from the system.
+
+## Pre-requisites
+To restore a deleted Azure Database for MariaDB server, you need the following:
+- Azure Subscription name hosting the original server
+- Location where the server was created
+
+## Steps to restore
+
+1. Go to the [Activity Log](https://ms.portal.azure.com/#blade/Microsoft_Azure_ActivityLog/ActivityLogBlade) from the Monitor blade in the Azure portal.
+
+2. In the Activity Log, click on **Add filter** as shown and set the following filters:
+
+ - **Subscription** = Your Subscription hosting the deleted server
+ - **Resource Type** = Azure Database for MariaDB servers (Microsoft.DBForMariaDB/servers)
+ - **Operation** = Delete MariaDB Server (Microsoft.DBForMariaDB/servers/delete)
+
+ [![Activity log filtered for delete MariaDB server operation](./media/howto-restore-dropped-server/activity-log.png)](./media/howto-restore-dropped-server/activity-log.png#lightbox)
+
+ 3. Double-click on the Delete MariaDB Server event, select the JSON tab, and note the "resourceId" and "submissionTimestamp" attributes in the JSON output. The resourceId is in the following format: /subscriptions/ffffffff-ffff-ffff-ffff-ffffffffffff/resourceGroups/TargetResourceGroup/providers/Microsoft.DBForMariaDB/servers/deletedserver.
+
+ 4. Go to the [Create Server REST API Page](/rest/api/mariadb/servers/create), click on the "Try It" tab highlighted in green, and log in with your Azure account.
+
+ 5. Provide the resourceGroupName, serverName (the deleted server name), and subscriptionId derived from the resourceId attribute captured in Step 3; the api-version is pre-populated as shown in the image.
+
+ [![Create server using REST API](./media/howto-restore-dropped-server/create-server-from-rest-api.png)](./media/howto-restore-dropped-server/create-server-from-rest-api.png#lightbox)
+
+ 6. Scroll down to the Request Body section and paste the following:
+
+ ```json
+ {
+ "location": "Dropped Server Location",
+ "properties":
+ {
+ "restorePointInTime": "submissionTimestamp - 15 minutes",
+ "createMode": "PointInTimeRestore",
+ "sourceServerId": "resourceId"
+ }
+ }
+ ```
+
+7. Replace the following values in the above request body:
+ * "Dropped server Location" with the Azure region where the deleted server was originally created
+ * "submissionTimestamp", and "resourceId" with the values captured in Step 3.
+ * For "restorePointInTime", specify a value of "submissionTimestamp" minus **15 minutes** to ensure the command does not error out.
+
+8. If you see Response Code 201 or 202, the restore request is successfully submitted. If you prefer the command line, a sketch of the equivalent request using the Azure CLI follows these steps.
+
+9. The server creation can take time depending on the database size and compute resources provisioned on the original server. The restore status can be monitored from Activity log by filtering for
+ - **Subscription** = Your Subscription
+ - **Resource Type** = Azure Database for MariaDB servers (Microsoft.DBForMariaDB/servers)
+ - **Operation** = Update MariaDB Server Create
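If you prefer the command line over the portal's Try It experience, the same request can be sketched with `az rest`. This is illustrative only: the subscription, resource group, server name, location, timestamp, and the assumed `2018-06-01` API version must all be replaced with the values captured in Step 3.

```bash
# Sketch: submit the PointInTimeRestore request for the deleted MariaDB server.
# Every placeholder below must be replaced with values noted from the activity log.
az rest --method put \
    --uri "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DBForMariaDB/servers/<deleted-server-name>?api-version=2018-06-01" \
    --body '{
      "location": "<dropped-server-location>",
      "properties": {
        "restorePointInTime": "<submissionTimestamp minus 15 minutes>",
        "createMode": "PointInTimeRestore",
        "sourceServerId": "<resourceId-from-step-3>"
      }
    }'
```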
+
+## Next steps
+- If you are trying to restore a server within five days, and still receive an error after accurately following the steps discussed earlier, open a support incident for assistance. If you are trying to restore a deleted server after five days, an error is expected since the backup file cannot be found. Do not open a support ticket in this scenario. The support team cannot provide any assistance if the backup is deleted from the system.
+- To prevent accidental deletion of servers, we highly recommend using [Resource Locks](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/preventing-the-disaster-of-accidental-deletion-for-your-mysql/ba-p/825222).
marketplace Azure Partner Customer Usage Attribution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-partner-customer-usage-attribution.md
Previously updated : 03/22/2021 Last updated : 04/12/2021 # Azure customer usage attribution
-Customer usage attribution associates usage from Azure resources in customer subscriptions created while deploying your IP with you as a partner. Forming these associations in internal Microsoft systems brings greater visibility to the Azure footprint running your software. For [Azure Application offers in the commercial marketplace](#commercial-marketplace-azure-apps), this tracking capability helps you align with Microsoft sales teams and gain credit for Microsoft partner programs.
+Customer usage attribution associates usage from Azure resources in customer subscriptions created while deploying your IP with you as a partner. Forming these associations in internal Microsoft systems brings greater visibility to the Azure footprint running your software. For [Azure Application offers in the commercial marketplace](#commercial-marketplace-azure-apps), this tracking capability helps you align with Microsoft sales teams and gain credit for Microsoft partner programs. Customer usage attribution isn't applicable to [Azure virtual machine offers in the commercial marketplace](./azure-vm-create.md). There is nothing a marketplace publisher needs to do for virtual machine offers to ensure their Azure consumption is tracked in end-customer subscriptions.
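For context, attribution for template-based deployments is typically established by adding an empty `Microsoft.Resources/deployments` resource whose name carries a `pid-` prefixed GUID to the Resource Manager template. A minimal sketch is shown below; the GUID and API version are placeholders:

```json
{
  "apiVersion": "2020-06-01",
  "name": "pid-00000000-0000-0000-0000-000000000000",
  "type": "Microsoft.Resources/deployments",
  "properties": {
    "mode": "Incremental",
    "template": {
      "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
      "contentVersion": "1.0.0.0",
      "resources": []
    }
  }
}
```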
Customer usage attribution supports three deployment options:
media-services Manage Multiple Tenants https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/video-indexer/manage-multiple-tenants.md
When using this architecture, a Video Indexer account is created for each tenant
* Harder to manage due to multiple Video Indexer (and associated Media Services) accounts per tenant. > [!TIP]
-> Create an admin user for your system in [Video Indexer Developer Portal](https://api-portal.videoindexer.ai/) and use the Authorization API to provide your tenants the relevant [account access token](https://api-portal.videoindexer.ai/docs/services/operations/operations/Get-Account-Access-Token).
+> Create an admin user for your system in [Video Indexer Developer Portal](https://api-portal.videoindexer.ai/) and use the Authorization API to provide your tenants the relevant [account access token](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Account-Access-Token).
## Single Video Indexer account for all users
When using this architecture, the customer is responsible for tenants isolation.
With this option, customization models (Person, Language, and Brands) can be shared or isolated between tenants by filtering the models by tenant.
-When [uploading videos](https://api-portal.videoindexer.ai/docs/services/operations/operations/Upload-video?), you can specify a different partition attribute per tenant. This will allow isolation in the [search API](https://api-portal.videoindexer.ai/docs/services/operations/operations/Search-videos?). By specifying the partition attribute in the search API you will only get results of the specified partition.
+When [uploading videos](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video), you can specify a different partition attribute per tenant. This will allow isolation in the [search API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Search-Videos). By specifying the partition attribute in the search API you will only get results of the specified partition.
### Considerations
mysql Concepts Azure Advisor Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/concepts-azure-advisor-recommendations.md
+
+ Title: Azure Advisor for MySQL
+description: Learn about Azure Advisor recommendations for MySQL.
++++ Last updated : 04/08/2021+
+# Azure Advisor for MySQL
+Learn about how Azure Advisor is applied to Azure Database for MySQL and get answers to common questions.
+## What is Azure Advisor for MySQL?
+The Azure Advisor system uses telemetry to issue performance and reliability recommendations for your MySQL database.
+Advisor recommendations are split among our MySQL database offerings:
+* Azure Database for MySQL - Single Server
+* Azure Database for MySQL - Flexible Server
+
+Some recommendations are common to multiple product offerings, while other recommendations are based on product-specific optimizations.
+## Where can I view my recommendations?
+Recommendations are available from the **Overview** navigation sidebar in the Azure portal. A preview will appear as a banner notification, and details can be viewed in the **Notifications** section located just below the resource usage graphs.
++
+## Recommendation types
+Azure Database for MySQL prioritizes the following types of recommendations:
+* **Performance**: To improve the speed of your MySQL server. This includes CPU usage, memory pressure, connection pooling, disk utilization, and product-specific server parameters. For more information, see [Advisor Performance recommendations](../advisor/advisor-performance-recommendations.md).
+* **Reliability**: To ensure and improve the continuity of your business-critical databases. This includes storage limit and connection limit recommendations. For more information, see [Advisor Reliability recommendations](../advisor/advisor-high-availability-recommendations.md).
+* **Cost**: To optimize and reduce your overall Azure spending. This includes server right-sizing recommendations. For more information, see [Advisor Cost recommendations](../advisor/advisor-cost-recommendations.md).
+
+## Understanding your recommendations
+* **Daily schedule**: For Azure MySQL databases, we check server telemetry and issue recommendations on a daily schedule. If you make a change to your server configuration, existing recommendations will remain visible until we re-examine telemetry on the following day.
+* **Performance history**: Some of our recommendations are based on performance history. These recommendations will only appear after a server has been operating with the same configuration for 7 days. This allows us to detect patterns of heavy usage (e.g. high CPU activity or high connection volume) over a sustained time period. If you provision a new server or change to a new vCore configuration, these recommendations will be paused temporarily. This prevents legacy telemetry from triggering recommendations on a newly reconfigured server. However, this also means that performance history-based recommendations may not be identified immediately.
+
+## Next steps
+For more information, see [Azure Advisor Overview](../advisor/advisor-overview.md).
mysql How To Connect Tls Ssl https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/how-to-connect-tls-ssl.md
Following are the different configurations of SSL and TLS settings you can have
| Scenario | Server parameter settings | Description | ||--||
-|Disable SSL (encrypted connections) | require_secure_transport = OFF |If your legacy application doesn't support encrypted connections to MySQL server, you can disable enforcement of encrypted connections to your flexible server by setting require_secure_transport=OFF.|
+|Disable SSL enforcement | require_secure_transport = OFF |If your legacy application doesn't support encrypted connections to MySQL server, you can disable enforcement of encrypted connections to your flexible server by setting require_secure_transport=OFF.|
|Enforce SSL with TLS version < 1.2 | require_secure_transport = ON and tls_version = TLSV1 or TLSV1.1| If your legacy application supports encrypted connections but requires TLS version < 1.2, you can enable encrypted connections but configure your flexible server to allow connections with the tls version (v1.0 or v1.1) supported by your application| |Enforce SSL with TLS version = 1.2(Default configuration)|require_secure_transport = ON and tls_version = TLSV1.2| This is the recommended and default configuration for flexible server.| |Enforce SSL with TLS version = 1.3(Supported with MySQL v8.0 and above)| require_secure_transport = ON and tls_version = TLSV1.3| This is useful and recommended for new applications development|
In this article, you will learn how to:
* Verify encryption status for your connection * Connect to your flexible server with encrypted connections using various application frameworks
-## Disable SSL on your flexible server
+## Disable SSL enforcement on your flexible server
If your client application doesn't support encrypted connections, you will need to disable encrypted connections enforcement on your flexible server. To disable encrypted connections enforcement, you will need to set require_secure_transport server parameter to OFF as shown in the screenshot and save the server parameter configuration for it to take effect. require_secure_transport is a **dynamic server parameter** which takes effect immediately and doesn't require server restart to take effect. > :::image type="content" source="./media/how-to-connect-tls-ssl/disable-ssl.png" alt-text="Screenshot showing how to disable SSL with Azure Database for MySQL flexible server.":::
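As a sketch, the same change can be made from the Azure CLI instead of the portal; the resource group and server names below are placeholders, and the command should be verified against your installed CLI version:

```bash
# Sketch: turn off SSL enforcement by setting the dynamic server parameter.
# Because require_secure_transport is dynamic, no server restart is required.
az mysql flexible-server parameter set \
    --resource-group myresourcegroup \
    --server-name mydemoserver \
    --name require_secure_transport \
    --value OFF
```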
mysql Howto Restore Dropped Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/howto-restore-dropped-server.md
Title: Restore a dropped Azure Database for MySQL server
-description: This article describes how to restore a dropped server in Azure Database for MySQL using the Azure portal.
+ Title: Restore a deleted Azure Database for MySQL server
+description: This article describes how to restore a deleted server in Azure Database for MySQL using the Azure portal.
Last updated 10/09/2020
-# Restore a dropped Azure Database for MySQL server
+# Restore a deleted Azure Database for MySQL server
-When a server is dropped, the database server backup can be retained up to five days in the service. The database backup can be accessed and restored only from the Azure subscription where the server originally resided. The following recommended steps can be followed to recover a dropped MySQL server resource within 5 days from the time of server deletion. The recommended steps will work only if the backup for the server is still available and not deleted from the system.
+When a server is deleted, the database server backup can be retained up to five days in the service. The database backup can be accessed and restored only from the Azure subscription where the server originally resided. The following recommended steps can be followed to recover a deleted MySQL server resource within 5 days from the time of server deletion. The recommended steps will work only if the backup for the server is still available and not deleted from the system.
## Pre-requisites
-To restore a dropped Azure Database for MySQL server, you need following:
+To restore a deleted Azure Database for MySQL server, you need the following:
- Azure Subscription name hosting the original server - Location where the server was created
To restore a dropped Azure Database for MySQL server, you need following:
[![Create server using REST API](./media/howto-restore-dropped-server/create-server-from-rest-api.png)](./media/howto-restore-dropped-server/create-server-from-rest-api.png#lightbox)
- 6. Scroll below on Request Body section and paste the following replacing the "Dropped server Location", "submissionTimestamp", and "resourceId". For "restorePointInTime", specify a value of "submissionTimestamp" minus **15 minutes** to ensure the command does not error out.
+ 6. Scroll down to the Request Body section and paste the following:
```json {
To restore a dropped Azure Database for MySQL server, you need following:
} } ```
+7. Replace the following values in the above request body:
+ * "Dropped server Location" with the Azure region where the deleted server was originally created
+ * "submissionTimestamp", and "resourceId" with the values captured in Step 3.
+ * For "restorePointInTime", specify a value of "submissionTimestamp" minus **15 minutes** to ensure the command does not error out.
+
+8. If you see Response Code 201 or 202, the restore request is successfully submitted.
-7. If you see Response Code 201 or 202, the restore request is successfully submitted.
-
-8. The server creation can take time depending on the database size and compute resources provisioned on the original server. The restore status can be monitored from Activity log by filtering for
+9. The server creation can take time depending on the database size and compute resources provisioned on the original server. The restore status can be monitored from Activity log by filtering for
- **Subscription** = Your Subscription - **Resource Type** = Azure Database for MySQL servers (Microsoft.DBforMySQL/servers) - **Operation** = Update MySQL Server Create ## Next steps-- If you are trying to restore a server within five days, and still receive an error after accurately following the steps discussed earlier, open a support incident for assistance. If you are trying to restore a dropped server after five days, an error is expected since the backup file cannot be found. Do not open a support ticket in this scenario. The support team cannot provide any assistance if the backup is deleted from the system. -- To prevent accidental deletion of servers, we highly recommend using [Resource Locks](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/preventing-the-disaster-of-accidental-deletion-for-your-mysql/ba-p/825222).
+- If you are trying to restore a server within five days, and still receive an error after accurately following the steps discussed earlier, open a support incident for assistance. If you are trying to restore a deleted server after five days, an error is expected since the backup file cannot be found. Do not open a support ticket in this scenario. The support team cannot provide any assistance if the backup is deleted from the system.
+- To prevent accidental deletion of servers, we highly recommend using [Resource Locks](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/preventing-the-disaster-of-accidental-deletion-for-your-mysql/ba-p/825222).
networking Networking Partners Msp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/networking/networking-partners-msp.md
Use the links in this section for more information about managed cloud networkin
|[BT](https://www.globalservices.bt.com/en/solutions/products/cloud-connect-azure)|[Network Transformation Consulting: 1-Hr Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/bt-americas-inc.network-transformation-consulting);[BT Cloud Connect Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bt-americas-inc.bt-cca-lh-001?tab=Overview)|[BT Cloud Connect Azure ExpressRoute](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bt-americas-inc.bt-cca-lh-003?tab=Overview)|[BT Cloud Connect Azure VWAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bt-americas-inc.bt-cca-lh-002?tab=Overview)||| |[BUI](https://www.bui.co.za/)|[a2zManaged Cloud Management](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bui.a2zmanagement?tab=Overview)||[BUI Managed Azure vWAN using VMware SD-WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bui.bui_managed_vwan?tab=Overview)||[BUI CyberSoC](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bui.buicybersoc_msp?tab=Overview)| |[Coevolve](https://www.coevolve.com/services/azure-networking-services/)|||[Managed Azure Virtual WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/coevolveptylimited1581027739259.coevolve-managed-azure-vwan?tab=Overview);[Managed VMware SD-WAN Virtual Edge](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/coevolveptylimited1581027739259.managed-vmware-sdwan-edge?tab=Overview)|||
-|[Colt](https://www.colt.net/why-colt/strategic-alliances/microsoft-partnership/msp/)|[Network optimisation on Azure: 2-hr Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/colttechnologyservices.azure_networking)|||||
+|[Colt](https://www.colt.net/why-colt/partner-hub/microsoft/)|[Network optimisation on Azure: 2-hr Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/colttechnologyservices.azure_networking)|||||
|[Equinix](https://www.equinix.com/)|[Cloud Optimized WAN Workshop](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/equinix.cloudoptimizedwan?tab=Overview)|[ExpressRoute Connectivity Strategy Workshop](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/equinix.expressroutestrategy?tab=Overview); [Equinix Cloud Exchange Fabric](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/equinix.equinix_ecx_fabric?tab=Overview)|||| |[Federated Wireless](https://www.federatedwireless.com/caas/)||||[Federated Wireless Connectivity-as-a-Service](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/federatedwireless1580839623708.fw_caas?tab=Overview)| |[HCL](https://www.hcltech.com/)|[HCL Cloud Network Transformation- 1 Day Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/hcl-technologies.clo?tab=Overview)|[1-Hour Briefing of HCL Azure ExpressRoute Service](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/hcl-technologies.hclmanagedazureexpressroute?tab=Overview)|[HCL Azure Virtual WAN Services - 1 Day Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/hcl-technologies.hclmanagedazurevitualwan?search=vWAN&page=1)|[HCL Azure Private LTE offering - 1 Day Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/hcl-technologies.hclazureprivatelteoffering)|
openshift Support Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/openshift/support-lifecycle.md
Previously updated : 08/11/2020 Last updated : 03/06/2021 # Support lifecycle for Azure Red Hat OpenShift 4
Each number in the version indicates general compatibility with the previous ver
* **Minor version**: Released approximately every three months. Minor version upgrades can include feature additions, enhancements, deprecations, removals, bug fixes, security enhancements, and other improvements. * **Patches**: Typically released each week, or as needed. Patch version upgrades can include bug fixes, security enhancements, and other improvements.
-Customers should aim to run the latest minor release of the major version they're running. For example, if your production cluster is on 4.4, and 4.5 is the latest generally available minor version for the 4 series, you should upgrade to 4.5 as soon as you can.
+Customers should aim to run the latest minor release of the major version they're running. For example, if your production cluster is on 4.4, and 4.5 is the latest generally available minor version for the 4 series, you should upgrade to 4.5 as soon as you can.
### Upgrade channels
See the following guide for the [past Red Hat OpenShift Container Platform (upst
**What happens when a user upgrades an OpenShift cluster with a minor version that is not supported?**
-If you are on the N-2 version or older, it means you are outside of support and will be asked to upgrade. When your upgrade from version N-2 to N-1 succeeds, you are back within our support policies. For example:
+If you are on the N-2 version or older, it means you are outside of support and will be asked to upgrade to continue receiving support. When your upgrade from version N-2 to N-1 succeeds, you are back within support. Upgrading from version N-3 or older to a supported version can be challenging and in some cases not possible. We recommend you keep your cluster on the latest OpenShift version to avoid potential upgrade issues.
+For example:
* If the oldest supported Azure Red Hat OpenShift version is 4.4.z and you are on 4.3.z or older, you are outside of support.
-* When the upgrade from 4.3.z to 4.4.z or higher succeeds, you are back within our support policies.
+* When the upgrade from 4.3.z to 4.4.z or higher succeeds, you are back within our support policies.
Reverting your cluster to a previous version, or a rollback, isn't supported. Only upgrading to a newer version is supported. **What does "Outside of Support" mean?**
-"Outside of Support" means that the version you are running is outside of the supported versions list, and you may be asked to upgrade the cluster to a supported version when requesting support, unless you are within the 30-day grace period after version deprecation. Additionally, Azure Red Hat OpenShift does not make any runtime or SLA guarantees for clusters outside of the supported versions list at the end of the 30-day grace period.
+If your ARO cluster is running an OpenShift version that is not on the supported versions list or is using an [unsupported cluster configuration](https://docs.microsoft.com/azure/openshift/support-policies-v4), your cluster is "outside of support". As a result:
+- When opening a support ticket for your cluster, you will be asked to upgrade the cluster to a supported version before receiving support, unless you are within the 30-day grace period after version support ends.
+- Any runtime or SLA guarantees for clusters outside of support are voided.
+- Clusters outside of support will be patched only on a best effort basis.
+- Clusters outside of support will not be monitored.
postgresql Concepts Azure Advisor Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-azure-advisor-recommendations.md
Recommendations are available from the **Overview** navigation sidebar in the Az
:::image type="content" source="./media/concepts-azure-advisor-recommendations/advisor-example.png" alt-text="Screenshot of the Azure portal showing an Azure Advisor recommendation.":::
-## Recommendation Types
+## Recommendation types
Azure Database for PostgreSQL prioritizes the following types of recommendations: * **Performance**: To improve the speed of your PostgreSQL server. This includes CPU usage, memory pressure, connection pooling, disk utilization, and product-specific server parameters. For more information, see [Advisor Performance recommendations](../advisor/advisor-performance-recommendations.md). * **Reliability**: To ensure and improve the continuity of your business-critical databases. This includes storage limits, connection limits, and hyperscale data distribution recommendations. For more information, see [Advisor Reliability recommendations](../advisor/advisor-high-availability-recommendations.md).
Azure Database for PostgreSQL prioritize the following types of recommendations:
* **Daily schedule**: For Azure PostgreSQL databases, we check server telemetry and issue recommendations on a daily schedule. If you make a change to your server configuration, existing recommendations will remain visible until we re-examine telemetry on the following day. * **Performance history**: Some of our recommendations are based on performance history. These recommendations will only appear after a server has been operating with the same configuration for 7 days. This allows us to detect patterns of heavy usage (e.g. high CPU activity or high connection volume) over a sustained time period. If you provision a new server or change to a new vCore configuration, these recommendations will be paused temporarily. This prevents legacy telemetry from triggering recommendations on a newly reconfigured server. However, this also means that performance history-based recommendations may not be identified immediately.
-## Next Steps
+## Next steps
For more information, see [Azure Advisor Overview](../advisor/advisor-overview.md).
remote-rendering Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/remote-rendering/how-tos/conversion/blob-storage.md
To start converting a model, you need to upload it, using one of the following o
For an example of how to upload data for conversion refer to Conversion.ps1 of the [Powershell Example Scripts](../../samples/powershell-example-scripts.md#script-conversionps1).
+> [!Note]
+> When uploading an input model, take care to avoid long file names and/or folder structures in order to avoid [Windows path length limit](https://docs.microsoft.com/windows/win32/fileio/maximum-file-path-limitation) issues on the service.
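For instance, an input model can be uploaded with the Azure CLI as sketched below; the storage account, container, and blob names are placeholders, and the Conversion.ps1 script referenced above remains the documented path:

```bash
# Sketch: upload an input model to the blob container used for conversion.
# Keep the blob path short to stay clear of Windows path length limits.
az storage blob upload \
    --auth-mode login \
    --account-name arrstorage \
    --container-name arrinput \
    --file ./box.fbx \
    --name models/box.fbx
```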
+ ## Get a SAS URI for the converted model This step is similar to [retrieving SAS for the storage containers](#retrieve-sas-for-the-storage-containers). However, this time you need to retrieve a SAS URI for the model file, that was written to the output container.
remote-rendering Layout Files For Conversion https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/remote-rendering/how-tos/conversion/layout-files-for-conversion.md
In order to correctly process an asset, the conversion service needs to be able to find all the input files. These consist of the main asset file being converted and usually some other files referenced by paths within the asset file.
-The request to convert an asset is given two parameters which determine how the conversion service finds these files: The `input.folderPath` (which is optional) and the `input.inputAssetPath`.
+The request to convert an asset is given two parameters which determine how the conversion service finds these files: The `settings.inputLocation.blobPrefix` (which is optional) and the `settings.inputLocation.relativeInputAssetPath`.
They are fully documented in the [Conversion REST API](conversion-rest-api.md) page.
-For the purpose of laying out files, the important thing to note is that the `folderPath` determines complete set of files which are available to the conversion service when processing the asset.
+For the purpose of laying out files, the important thing to note is that the `BlobPrefix` determines the complete set of files which are available to the conversion service when processing the asset.
+
+> [!Note]
+> The service will download all files under the input.BlobPrefix. Ensure file names and paths do not exceed [Windows path length limits](https://docs.microsoft.com/windows/win32/fileio/maximum-file-path-limitation) to avoid issues on the service.
## Placing files so they can be found
security-center Exempt Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/exempt-resource.md
In such cases, you can create an exemption for a recommendation to:
||:--| | Release state: | Preview<br>[!INCLUDE [Legalese](../../includes/security-center-preview-legal-text.md)] | | Pricing: | This is a premium Azure Policy capability that's offered for Azure Defender customers with no additional cost. For other users, charges might apply in the future. |
-| Required roles and permissions: | **Subscription owner** or **Policy contributor** to create an exemption<br>To create a rule, you need permissions to edit policies in Azure Policy.<br>Learn more in [Azure RBAC permissions in Azure Policy](../governance/policy/overview.md#azure-rbac-permissions-in-azure-policy). |
+| Required roles and permissions: | **Owner** or **Resource Policy Contributor** to create an exemption<br>To create a rule, you need permissions to edit policies in Azure Policy.<br>Learn more in [Azure RBAC permissions in Azure Policy](../governance/policy/overview.md#azure-rbac-permissions-in-azure-policy). |
| Limitations: | Exemptions can be created only for recommendations included in Security Center's default initiative, Azure Security Benchmark, or any of the supplied regulatory standard initiatives. Recommendations that are generated from custom initiatives cannot be exempted. Learn more about the relationships between [policies, initiatives, and recommendations](security-policy-concept.md). | | Clouds: | ![Yes](./media/icons/yes-icon.png) Commercial clouds<br>![No](./media/icons/no-icon.png) National/Sovereign (US Gov, China Gov, Other Gov) | | | |
If you try to create an exemption for this recommendation, you'll see one of the
In this article, you learned how to exempt a resource from a recommendation so that it doesn't impact your secure score. For more information about secure score, see: -- [Secure score in Azure Security Center](secure-score-security-controls.md)
+- [Secure score in Azure Security Center](secure-score-security-controls.md)
sentinel Cef Name Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/cef-name-mapping.md
Previously updated : 03/16/2021 Last updated : 04/12/2021 # CEF and CommonSecurityLog field mapping
For more information, see [Connect your external solution using Common Event For
|Device Vendor | DeviceVendor | String that, together with device product and version definitions, uniquely identifies the type of sending device. | |Device Product | DeviceProduct | String that, together with device vendor and version definitions, uniquely identifies the type of sending device. | |Device Version | DeviceVersion | String that, together with device product and vendor definitions, uniquely identifies the type of sending device. |
-|DeviceEventClassID | DeviceEventClassID | String or integer that serves as a unique identifier per event type. |
| destinationDnsDomain | DestinationDnsDomain | The DNS part of the fully qualified domain name (FQDN). | | destinationServiceName | DestinationServiceName | The service that is targeted by the event. For example, `sshd`.| | destinationTranslatedAddress | DestinationTranslatedAddress | Identifies the translated destination referred to by the event in an IP network, as an IPv4 IP address. | | destinationTranslatedPort | DestinationTranslatedPort | Port, after translation, such as a firewall. <br>Valid port numbers: `0` - `65535` | | deviceDirection | <a name="communicationdirection"></a> CommunicationDirection | Any information about the direction the observed communication has taken. Valid values: <br>- `0` = Inbound <br>- `1` = Outbound | | deviceDnsDomain | DeviceDnsDomain | The DNS domain part of the full qualified domain name (FQDN) |
+|DeviceEventClassID | DeviceEventClassID | String or integer that serves as a unique identifier per event type. |
| deviceExternalID | DeviceExternalID | A name that uniquely identifies the device generating the event. | | deviceFacility | DeviceFacility | The facility generating the event.| | deviceInboundInterface | DeviceInboundInterface |The interface on which the packet or data entered the device. |
For more information, see [Connect your external solution using Common Event For
| requestCookies | RequestCookies |Cookies associated with the request. | | requestMethod | RequestMethod | The method used to access a URL. <br><br>Valid values include methods such as `POST`, `GET`, and so on. | | rt | ReceiptTime | The time at which the event related to the activity was received. |
-|Severity | <a name="logseverity"></a> LogSeverity | A string or integer that describes the importance of the event.<br><br> Valid string values: `Unknown` , `Low`, `Medium`, `High`, `Very-High` <br><br>Valid integer values are: `0`-`3` = Low, `4`-`6` = Medium, `7`-`8` = High, `9`-`10` = Very-High |
+|Severity | <a name="logseverity"></a> LogSeverity | A string or integer that describes the importance of the event.<br><br> Valid string values: `Unknown` , `Low`, `Medium`, `High`, `Very-High` <br><br>Valid integer values are:<br> - `0`-`3` = Low <br>- `4`-`6` = Medium<br>- `7`-`8` = High<br>- `9`-`10` = Very-High |
| shost | SourceHostName |Identifies the source that event refers to in an IP network. Format should be a fully qualified domain name (DQDN) associated with the source node, when a node is available. For example, `host` or `host.domain.com`. | | smac | SourceMacAddress | Source MAC address. | | sntdom | SourceNTDomain | The Windows domain name for the source address. |
For more information, see [Connect your external solution using Common Event For
| src | SourceIP |The source that an event refers to in an IP network, as an IPv4 address. | | start | StartTime | The time when the activity that the event refers to started. | | suid | SourceUserID | Identifies the source user by ID. |
+| suser | SourceUserName | Identifies the source user by name. |
| type | EventType | Event type. Valid values include: <br>- `0`: base event <br>- `1`: aggregated <br>- `2`: correlation event <br>- `3`: action event <br><br>**Note**: This event can be omitted for base events. | | | |
-## Unmapped fields
+## Custom fields
+
+The following tables map the names of CEF keys and CommonSecurityLog fields that are available for customers to use for data that does not apply to any of the built-in fields.
+
+### Custom IPv6 address and floating-point fields
+
+The following table maps CEF key and CommonSecurityLog names for the *IPv6 address* and *floating-point* fields available for custom data.
+
+|CEF key name |CommonSecurityLog name |
+|||
+| c6a1 | DeviceCustomIPv6Address1 |
+| c6a1Label | DeviceCustomIPv6Address1Label |
+| c6a2 | DeviceCustomIPv6Address2 |
+| c6a2Label | DeviceCustomIPv6Address2Label |
+| c6a3 | DeviceCustomIPv6Address3 |
+| c6a3Label | DeviceCustomIPv6Address3Label |
+| c6a4 | DeviceCustomIPv6Address4 |
+| c6a4Label | DeviceCustomIPv6Address4Label |
+| | |
+
+### Custom floating-point fields
+
+The following table maps CEF key and CommonSecurityLog names for the *floating-point* fields available for custom data.
+
+|CEF key name |CommonSecurityLog name |
+|||
+| cfp1 | DeviceCustomFloatingPoint1 |
+| cfp1Label | DeviceCustomFloatingPoint1Label |
+| cfp2 | DeviceCustomFloatingPoint2 |
+| cfp2Label | DeviceCustomFloatingPoint2Label |
+| cfp3 | DeviceCustomFloatingPoint3 |
+| cfp3Label | DeviceCustomFloatingPoint3Label |
+| cfp4 | DeviceCustomFloatingPoint4 |
+| cfp4Label | DeviceCustomFloatingPoint4Label |
+| | |
+
+### Custom number fields
+
+The following table maps CEF key and CommonSecurityLog names for the *number* fields available for custom data.
+
+|CEF key name |CommonSecurityLog name |
+|||
+| cn1 | DeviceCustomNumber1 |
+| cn1Label | DeviceCustomNumber1Label |
+| cn2 | DeviceCustomNumber2 |
+| cn2Label | DeviceCustomNumber2Label |
+| cn3 | DeviceCustomNumber3 |
+| cn3Label | DeviceCustomNumber3Label |
+| | |
+
+### Custom string fields
+
+The following table maps CEF key and CommonSecurityLog names for the *string* fields available for custom data.
+
+|CEF key name |CommonSecurityLog name |
+|||
+| cs1 | DeviceCustomString1 <sup>[1](#use-sparingly)</sup> |
+| cs1Label | DeviceCustomString1Label <sup>[1](#use-sparingly)</sup> |
+| cs2 | DeviceCustomString2 <sup>[1](#use-sparingly)</sup> |
+| cs2Label | DeviceCustomString2Label <sup>[1](#use-sparingly)</sup> |
+| cs3 | DeviceCustomString3 <sup>[1](#use-sparingly)</sup> |
+| cs3Label | DeviceCustomString3Label <sup>[1](#use-sparingly)</sup> |
+| cs4 | DeviceCustomString4 <sup>[1](#use-sparingly)</sup> |
+| cs4Label | DeviceCustomString4Label <sup>[1](#use-sparingly)</sup> |
+| cs5 | DeviceCustomString5 <sup>[1](#use-sparingly)</sup> |
+| cs5Label | DeviceCustomString5Label <sup>[1](#use-sparingly)</sup> |
+| cs6 | DeviceCustomString6 <sup>[1](#use-sparingly)</sup> |
+| cs6Label | DeviceCustomString6Label <sup>[1](#use-sparingly)</sup> |
+| flexString1 | FlexString1 |
+| flexString1Label | FlexString1Label |
+| flexString2 | FlexString2 |
+| flexString2Label | FlexString2Label |
+| | |
+
+> [!TIP]
+> <a name="use-sparingly"></a><sup>1</sup> We recommend that you use the **DeviceCustomString** fields sparingly and use more specific, built-in fields when possible.
+>
-The following **CommonSecurityLog** field names don't have mappings in CEF keys:
+### Custom timestamp fields
+
+The following table maps CEF key and CommonSecurityLog names for the *timestamp* fields available for custom data.
+
+|CEF key name |CommonSecurityLog name |
+|||
+| deviceCustomDate1 | DeviceCustomDate1 |
+| deviceCustomDate1Label | DeviceCustomDate1Label |
+| deviceCustomDate2 | DeviceCustomDate2 |
+| deviceCustomDate2Label | DeviceCustomDate2Label |
+| flexDate1 | FlexDate1 |
+| flexDate1Label | FlexDate1Label |
+| | |
+
+### Custom integer data fields
+
+The following table maps CEF key and CommonSecurityLog names for the *integer* fields available for custom data.
+
+|CEF key name |CommonSecurityLog name |
+|||
+| flexNumber1 | FlexNumber1 |
+| flexNumber1Label | FlexNumber1Label |
+| flexNumber2 | FlexNumber2 |
+| flexNumber2Label | FlexNumber2Label |
+| | |
+
+## Enrichment fields
+
+The following **CommonSecurityLog** fields are added by Azure Sentinel to enrich the original events received from the source devices, and don't have mappings in CEF keys:
+
+### Threat intelligence fields
+
+|CommonSecurityLog field name |Description |
+|||
+| **IndicatorThreatType** | The [MaliciousIP](#MaliciousIP) threat type, according to the threat intelligence feed. |
+| <a name="MaliciousIP"></a>**MaliciousIP** | Lists any IP addresses in the message that correlate with the current threat intelligence feed. |
+| **MaliciousIPCountry** | The [MaliciousIP](#MaliciousIP) country, according to the geographic information at the time of the record ingestion. |
+| **MaliciousIPLatitude** | The [MaliciousIP](#MaliciousIP) latitude, according to the geographic information at the time of the record ingestion. |
+| **MaliciousIPLongitude** | The [MaliciousIP](#MaliciousIP) longitude, according to the geographic information at the time of the record ingestion. |
+| **ReportReferenceLink** | Link to the threat intelligence report. |
+| **ThreatConfidence** | The [MaliciousIP](#MaliciousIP) threat confidence, according to the threat intelligence feed. |
+| **ThreatDescription** | The [MaliciousIP](#MaliciousIP) threat description, according to the threat intelligence feed. |
+| **ThreatSeverity** | The threat severity for the [MaliciousIP](#MaliciousIP), according to the threat intelligence feed at the time of the record ingestion. |
+| | |
+
+### Additional enrichment fields
|CommonSecurityLog field name |Description | |||
The following **CommonSecurityLog** field names don't have mappings in CEF keys:
|**RemoteIP** | The remote IP address. <br>This value is based on [CommunicationDirection](#communicationdirection) field, if possible. | |**RemotePort** | The remote port. <br>This value is based on [CommunicationDirection](#communicationdirection) field, if possible. | |**SimplifiedDeviceAction** | Simplifies the [DeviceAction](#deviceaction) value to a static set of values, while keeping the original value in the [DeviceAction](#deviceaction) field. <br>For example: `Denied` > `Deny`. |
+|**SourceSystem** | Always defined as **OpsManager**. |
| | | - ## Next steps For more information, see [Connect your external solution using Common Event Format](connect-common-event-format.md).
sentinel Connect Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-data-sources.md
Alternatively, you can deploy the agent manually on an existing Azure VM, on a V
| Sysmon (Event) | [Connect Sysmon](https://azure.microsoft.com/blog/detecting-in-memory-attacks-with-sysmon-and-azure-security-center)<br> [Connect Windows Events](../azure-monitor/agents/data-sources-windows-events.md) <br> [Get the Sysmon Parser](https://github.com/Azure/Azure-Sentinel/blob/master/Parsers/Sysmon/Sysmon-v10.42-Parser.txt)| &#10007; | Sysmon collection is not installed by default on virtual machines. For more information on how to install the Sysmon Agent, see [Sysmon](/sysinternals/downloads/sysmon). | | ConfigurationData | [Automate VM inventory](../automation/change-tracking/overview.md)| &#10007; | | | ConfigurationChange | [Automate VM tracking](../automation/change-tracking/overview.md) | &#10007; | |
-| F5 BIG-IP | [Connect F5 BIG-IP](https://devcentral.f5.com/s/articles/Integrating-the-F5-BIGIP-with-Azure-Sentinel) | &#10007; | |
+| F5 BIG-IP | [Connect F5 BIG-IP](https://devcentral.f5.com/s/articles/Integrating-the-F5-BIGIP-with-Azure-Sentinel) | &#10003; | |
| McasShadowItReporting | | &#10007; | | | Barracuda_CL | [Connect Barracuda](connect-barracuda.md) | &#10003; | |
Alternatively, you can deploy the agent manually on an existing Azure VM, on a V
## Next steps - To get started with Azure Sentinel, you need a subscription to Microsoft Azure. If you do not have a subscription, you can sign up for a [free trial](https://azure.microsoft.com/free/).-- Learn how to [onboard your data to Azure Sentinel](quickstart-onboard.md), and [get visibility into your data, and potential threats](quickstart-get-visibility.md).
+- Learn how to [onboard your data to Azure Sentinel](quickstart-onboard.md), and [get visibility into your data, and potential threats](quickstart-get-visibility.md).
sentinel Mssp Protect Intellectual Property https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/mssp-protect-intellectual-property.md
+
+ Title: Protecting managed security service provider (MSSP) intellectual property in Azure Sentinel | Microsoft Docs
+description: Learn about how managed security service providers (MSSPs) can protect the intellectual property they've created in Azure Sentinel.
+
+documentationcenter: na
++
+editor: ''
+
+ms.assetid: 10cce91a-421b-4959-acdf-7177d261f6f2
++
+ms.devlang: na
++
+ na
+ Last updated : 04/12/2021+++
+# Protecting MSSP intellectual property in Azure Sentinel
+
+This article describes the methods that managed security service providers (MSSPs) can use to protect intellectual property they've developed in Azure Sentinel, such as Azure Sentinel analytics rules, hunting queries, playbooks, and workbooks.
+
+The method you choose will depend on how each of your customers buys Azure: whether you act as a [Cloud Solutions Provider (CSP)](#cloud-solutions-providers-csp), or the customer has an [Enterprise Agreement (EA)/Pay-as-you-go (PAYG)](#enterprise-agreements-ea--pay-as-you-go-payg) account. The following sections describe each of these methods separately.
+
+## Cloud Solutions Providers (CSP)
+
+If you're reselling Azure as a Cloud Solutions Provider (CSP), you're managing the customer's Azure subscription. Thanks to [Admin-On-Behalf-Of (AOBO)](/partner-center/azure-plan-manage), users in the Admin Agents group from your MSSP tenant are granted Owner access to the customer's Azure subscription, and the customer has no access by default.
+
+If other users from the MSSP tenant, outside of the Admin Agents group, need to access the customer environment, we recommend that you use [Azure Lighthouse](multiple-tenants-service-providers.md). Azure Lighthouse enables you to grant users or groups access to a specific scope, such as a resource group or subscription, using one of the built-in roles.
+
+If you need to provide customer users with access to the Azure environment, we recommend that you grant them access at the *resource group* level, rather than to the entire subscription, so that you can show or hide parts of the environment as needed.
+
+For example:
+
+- You might grant the customer access to several resource groups where their applications are located, but still keep the Azure Sentinel workspace in a separate resource group, where the customer has no access.
+
+- Use this method to enable customers to view selected workbooks and playbooks, which are separate resources that can reside in their own resource group.
+
+Even when you grant access only at the resource group level, customers still have access to log data for the resources they can reach, such as logs from a VM, even without access to Azure Sentinel. For more information, see [Manage access to Azure Sentinel data by resource](resource-context-rbac.md).
+
+> [!TIP]
+> If you need to provide your customers with access to the entire subscription, you may want to see the guidance in [Enterprise Agreements (EA) / Pay-as-you-go (PAYG)](#enterprise-agreements-ea--pay-as-you-go-payg).
+>
+
+### Sample Azure Sentinel CSP architecture
+
+The following image describes how the permissions described in the [previous section](#cloud-solutions-providers-csp) might work when providing access to CSP customers:
++
+In this image:
+
+- The users granted **Owner** access to the CSP subscription are the users in the Admin Agents group, in the MSSP Azure AD tenant.
+- Other groups from the MSSP get access to the customer environment via Azure Lighthouse.
+- Customer access to Azure resources is managed by Azure RBAC at the resource group level.
+
+ This allows MSSPs to hide Azure Sentinel components as needed, like Analytics Rules and Hunting Queries.
+
+For more information, also see the [Azure Lighthouse documentation](/azure/lighthouse/concepts/cloud-solution-provider).
+
+## Enterprise Agreements (EA) / Pay-as-you-go (PAYG)
+
+If your customer is buying directly from Microsoft, the customer already has full access to the Azure environment, and you cannot hide anything that's in the customer's Azure subscription.
+
+Instead, protect the intellectual property you've developed in Azure Sentinel as follows, depending on the type of resource you need to protect:
+
+### Analytics rules and hunting queries
+
+Analytics rules and hunting queries are both contained within Azure Sentinel, and therefore cannot be separated from the Azure Sentinel workspace.
+
+Even if a user only has Azure Sentinel Reader permissions, they'll still be able to view the query. In this case, we recommend hosting your Analytics rules and hunting queries in your own MSSP tenant, instead of the customer tenant.
+
+To do this, you'll need a workspace in your own tenant with Azure Sentinel enabled, and you'll also need access to the customer workspace via [Azure Lighthouse](multiple-tenants-service-providers.md).
+
+To create an analytic rule or hunting query in the MSSP tenant that references data in the customer tenant, you must use the `workspace` statement as follows:
+
+```kql
+workspace('<customer-workspace>').SecurityEvent
+| where EventID == 4625
+```
+
+When adding a `workspace` statement to your analytics rules, consider the following:
+
+- **No alerts in the customer workspace**. Rules created in this manner won't create alerts or incidents in the customer workspace. Both alerts and incidents will exist in your MSSP workspace only.
+
+- **Create separate alerts for each customer**. When you use this method, we also recommend that you use separate alert rules for each customer and detection, as the workspace statement will be different in each case.
+
+ You can add the customer name to the alert rule name to easily identify the customer where the alert is triggered. Separate alerts may result in a large number of rules, which you might want to manage using scripting, or [Azure Sentinel as Code](https://techcommunity.microsoft.com/t5/azure-sentinel/deploying-and-managing-azure-sentinel-as-code/ba-p/1131928).
+
+ For example:
+
+ :::image type="content" source="media/mssp-protect-intellectual-property/mssp-rules-per-customer.png" alt-text="Create separate rules in your MSSP workspace for each customer.":::
+
+- **Create separate MSSP workspaces for each customer**. Creating separate rules for each customer and detection may cause you to reach the maximum number of analytics rules for your workspace (512). If you have many customers and expect to reach this limit, you may want to create a separate MSSP workspace for each customer.
+
+ For example:
+
+ :::image type="content" source="media/mssp-protect-intellectual-property/mssp-rules-and-workspace-per-customer.png" alt-text="Create a workspace and rules in your MSSP tenant for each customer.":::
+
+> [!IMPORTANT]
+> The key to using this method successfully is using automation to manage a large set of rules across your workspaces.
+>
+> For more information, see [Cross-workspace analytics rules](https://techcommunity.microsoft.com/t5/azure-sentinel/what-s-new-cross-workspace-analytics-rules/ba-p/1664211)
+>
+
+### Workbooks
+
+If you have developed an Azure Sentinel workbook that you don't want your customer to copy, host the workbook in your MSSP tenant. Make sure that you have access to your customer workspaces via Azure Lighthouse, and then make sure to modify the workbook to use those customer workspaces.
+
+For example:
++
+For more information, see [Cross-workspace workbooks](extend-sentinel-across-workspaces-tenants.md#cross-workspace-workbooks).
+
+If you want the customer to be able to view the workbook visualizations, while still keeping the code secret, we recommend that you export the workbook to Power BI.
+
+Exporting your workbook to Power BI:
+
+- **Makes the workbook visualizations easier to share**. You can send the customer a link to the Power BI dashboard, where they can view the reported data, without requiring Azure access permissions.
+- **Enables scheduling**. Configure Power BI to send emails periodically that contain a snapshot of the dashboard for that time.
+
+For more information, see [Import Azure Monitor log data into Power BI](/azure/azure-monitor/visualize/powerbi).
+
+### Playbooks
+
+You can protect your playbooks as follows, depending on where the analytic rules that trigger the playbook have been created:
+
+- **Analytics rules created in the MSSP workspace**. Make sure to create your playbooks in the MSSP tenant, and that you get all incident and alert data from the MSSP workspace. You can attach the playbooks whenever you create a new rule in your workspace.
+
+ For example:
+
+ :::image type="content" source="media/mssp-protect-intellectual-property/rules-in-mssp-workspace.png" alt-text="Rules created in the MSSP workspace.":::
+
+- **Analytics rules created in the customer workspace**. Use Azure Lighthouse to attach analytics rules from the customer's workspace to a playbook hosted in your MSSP workspace. In this case, the playbook gets the alert and incident data, and any other customer information, from the customer workspace.
+
+ For example:
+
+ :::image type="content" source="media/mssp-protect-intellectual-property/rules-in-customer-workspace.png" alt-text="Rules created in the customer workspace.":::
+
+In both cases, if the playbook needs to access the customer's Azure environment, use a user or service principal that has that access via Lighthouse.
+
+However, if the playbook needs to access non-Azure resources in the customer's tenant, such as Azure AD, Office 365, or Microsoft 365 Defender, you'll need to create a service principal with appropriate permissions in the customer tenant, and then add that identity in the playbook.
+
+> [!NOTE]
+> If you use automation rules together with your playbooks, you must set the automation rule permissions on the resource group where the playbooks live.
+> For more information, see [Permissions for automation rules to run playbooks](automate-incident-handling-with-automation-rules.md#permissions-for-automation-rules-to-run-playbooks).
+>
+
+## Next steps
+
+For more information, see:
+
+- [Azure Sentinel Technical Playbook for MSSPs](https://cloudpartners.transform.microsoft.com/download?assetname=assets/Azure-Sentinel-Technical-Playbook-for-MSSPs.pdf&download=1)
+- [Manage multiple tenants in Azure Sentinel as an MSSP](multiple-tenants-service-providers.md)
+- [Extend Azure Sentinel across workspaces and tenants](extend-sentinel-across-workspaces-tenants.md)
+- [Tutorial: Visualize and monitor your data](tutorial-monitor-your-data.md)
+- [Tutorial: Set up automated threat responses in Azure Sentinel](tutorial-respond-threats-playbook.md)
sentinel Tutorial Detect Threats Built In https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/tutorial-detect-threats-built-in.md
ms.devlang: na
na Previously updated : 03/19/2021 Last updated : 04/12/2021 # Tutorial: Detect threats out-of-the-box
-Once you have [connected your data sources](quickstart-onboard.md) to Azure Sentinel, you'll want to be notified when something suspicious occurs. That's why Azure Sentinel provides out-of-the-box, built-in templates to help you create threat detection rules. These templates were designed by Microsoft's team of security experts and analysts based on known threats, common attack vectors, and suspicious activity escalation chains. Rules created from these templates will automatically search across your environment for any activity that looks suspicious. Many of the templates can be customized to search for activities, or filter them out, according to your needs. The alerts generated by these rules will create incidents that you can assign and investigate in your environment.
+Once you have [connected your data sources](quickstart-onboard.md) to Azure Sentinel, you'll want to be notified when something suspicious occurs. That's why Azure Sentinel provides out-of-the-box, built-in templates to help you create threat detection rules. These templates were designed by Microsoft's team of security experts and analysts based on known threats, common attack vectors, and suspicious activity escalation chains. Rules created from these templates will automatically search across your environment for any activity that looks suspicious. Many of the templates can be customized to search for activities, or filter them out, according to your needs. The alerts generated by these rules will create incidents that you can assign and investigate in your environment.
This tutorial helps you detect threats with Azure Sentinel:
To view all the out-of-the-box detections, go to **Analytics** and then **Rule t
:::image type="content" source="media/tutorial-detect-built-in/view-oob-detections.png" alt-text="Use built-in detections to find threats with Azure Sentinel":::
-The following template types are available:
+The following sections describe the types of out-of-the-box templates available:
-- **Microsoft security**
-
- Microsoft security templates automatically create Azure Sentinel incidents from the alerts generated in other Microsoft security solutions, in real time. You can use Microsoft security rules as a template to create new rules with similar logic. For more information about security rules, see [Automatically create incidents from Microsoft security alerts](create-incidents-from-alerts.md).
+### Microsoft security
-- **Fusion**
+Microsoft security templates automatically create Azure Sentinel incidents from the alerts generated in other Microsoft security solutions, in real time. You can use Microsoft security rules as a template to create new rules with similar logic.
- Based on Fusion technology, advanced multistage attack detection in Azure Sentinel uses scalable machine learning algorithms that can correlate many low-fidelity alerts and events across multiple products into high-fidelity and actionable incidents. Fusion is enabled by default. Because the logic is hidden and therefore not customizable, you can only create one rule with this template.
+For more information about security rules, see [Automatically create incidents from Microsoft security alerts](create-incidents-from-alerts.md).
- > [!IMPORTANT]
- > Some of the detections in the Fusion rule template are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- >
- > To see which detections are in preview, see [Advanced multistage attack detection in Azure Sentinel](fusion.md).
+### Fusion
-- **Machine learning behavioral analytics**
+Based on Fusion technology, advanced multistage attack detection in Azure Sentinel uses scalable machine learning algorithms that can correlate many low-fidelity alerts and events across multiple products into high-fidelity and actionable incidents. Fusion is enabled by default. Because the logic is hidden and therefore not customizable, you can only create one rule with this template.
- These templates are based on proprietary Microsoft machine learning algorithms, so you cannot see the internal logic of how they work and when they run. Because the logic is hidden and therefore not customizable, you can only create one rule with each template of this type.
+> [!IMPORTANT]
+> Some of the detections in the Fusion rule template are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+> To see which detections are in preview, see [Advanced multistage attack detection in Azure Sentinel](fusion.md).
- > [!IMPORTANT]
- > - The machine learning behavioral analytics rule templates are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- >
- > - By creating and enabling any rules based on the ML behavior analytics templates, **you give Microsoft permission to copy ingested data outside of your Azure Sentinel workspace's geography** as necessary for processing by the machine learning engines and models.
+### Machine learning behavioral analytics
-- **Scheduled**
+These templates are based on proprietary Microsoft machine learning algorithms, so you cannot see the internal logic of how they work and when they run. Because the logic is hidden and therefore not customizable, you can only create one rule with each template of this type.
- Scheduled analytics rules are based on built-in queries written by Microsoft security experts. You can see the query logic and make changes to it. You can use the scheduled rules template and customize the query logic and scheduling settings to create new rules.
+> [!IMPORTANT]
+> - The machine learning behavioral analytics rule templates are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+> - By creating and enabling any rules based on the ML behavior analytics templates, **you give Microsoft permission to copy ingested data outside of your Azure Sentinel workspace's geography** as necessary for processing by the machine learning engines and models.
+
+### Scheduled
+
+Scheduled analytics rules are based on built-in queries written by Microsoft security experts. You can see the query logic and make changes to it. You can use the scheduled rules template and customize the query logic and scheduling settings to create new rules.
+
+> [!TIP]
+> Rule scheduling options include configuring the rule to run every specified number of minutes, hours, or days, with the clock starting when you enable the rule.
+>
+> We recommend being mindful of when you enable a new or edited analytics rule, to ensure that the rule catches the relevant incidents in time. For example, you might want to run a rule in sync with when your SOC analysts begin their workday, and enable the rules then.
+>
## Use out-of-the-box detections 1. In order to use a built-in template, click the template name, and then click the **Create rule** button on the details pane to create a new active rule based on that template. Each template has a list of required data sources. When you open the template, the data sources are automatically checked for availability. If there is an availability issue, the **Create rule** button may be disabled, or you may see a warning to that effect.
-
+ :::image type="content" source="media/tutorial-detect-built-in/use-built-in-template.png" alt-text="Detection rule preview panel":::
-
+ 1. Clicking the **Create rule** button opens the rule creation wizard based on the selected template. All the details are autofilled, and with the **Scheduled** or **Microsoft security** templates, you can customize the logic and other rule settings to better suit your specific needs. You can repeat this process to create additional rules based on the built-in template. After following the steps in the rule creation wizard to the end, you will have finished creating a rule based on the template. The new rules will appear in the **Active rules** tab. For more details on how to customize your rules in the rule creation wizard, see [Tutorial: Create custom analytics rules to detect threats](tutorial-detect-threats-custom.md). ## Next steps
-In this tutorial, you learned how to get started detecting threats using Azure Sentinel.
+In this tutorial, you learned how to get started detecting threats using Azure Sentinel.
To learn how to automate your responses to threats, [Set up automated threat responses in Azure Sentinel](tutorial-respond-threats-playbook.md).
service-bus-messaging Message Deferral https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/message-deferral.md
A simple illustrative example is an order processing sequence in which a payment
Ultimately, deferral aids in reordering messages from the arrival order into an order in which they can be processed, while leaving those messages safely in the message store for which processing needs to be postponed. > [!NOTE]
-> Deferred messages will not be automatically moved to the dead-letter queue [after they expire](./service-bus-dead-letter-queues.md#exceeding-timetolive). This behaviour is by design.
+> Deferred messages will not be automatically moved to the dead-letter queue [after they expire](./service-bus-dead-letter-queues.md#time-to-live). This behavior is by design.
## Message deferral APIs
service-bus-messaging Message Sessions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/message-sessions.md
Title: Azure Service Bus message sessions | Microsoft Docs description: This article explains how to use sessions to enable joint and ordered handling of unbounded sequences of related messages. Previously updated : 01/20/2021 Last updated : 04/12/2021 # Message sessions
Microsoft Azure Service Bus sessions enable joint and ordered handling of unboun
## First-in, first out (FIFO) pattern To realize a FIFO guarantee in Service Bus, use sessions. Service Bus isn't prescriptive about the nature of the relationship between the messages, and also doesn't define a particular model for determining where a message sequence starts or ends.
-Any sender can create a session when submitting messages into a topic or queue by setting the [SessionId](/dotnet/api/microsoft.azure.servicebus.message.sessionid#Microsoft_Azure_ServiceBus_Message_SessionId) property to some application-defined identifier that is unique to the session. At the AMQP 1.0 protocol level, this value maps to the *group-id* property.
+Any sender can create a session when submitting messages into a topic or queue by setting the **session ID** property to some application-defined identifier that is unique to the session. At the AMQP 1.0 protocol level, this value maps to the *group-id* property.
-On session-aware queues or subscriptions, sessions come into existence when there's at least one message with the session's [SessionId](/dotnet/api/microsoft.azure.servicebus.message.sessionid#Microsoft_Azure_ServiceBus_Message_SessionId). Once a session exists, there's no defined time or API for when the session expires or disappears. Theoretically, a message can be received for a session today, the next message in a year's time, and if the **SessionId** matches, the session is the same from the Service Bus perspective.
+On session-aware queues or subscriptions, sessions come into existence when there's at least one message with the session ID. Once a session exists, there's no defined time or API for when the session expires or disappears. Theoretically, a message can be received for a session today, the next message in a year's time, and if the session ID matches, the session is the same from the Service Bus perspective.
-Typically, however, an application has a clear notion of where a set of related messages starts and ends. Service Bus doesn't set any specific rules.
+Typically, however, an application has a clear notion of where a set of related messages starts and ends. Service Bus doesn't set any specific rules. For example, your application could set the **Label** property for the first message to **start**, for intermediate messages to **content**, and for the last message to **end**. The relative position of the content messages can be computed as the current message *SequenceNumber* delta from the **start** message *SequenceNumber*.
-An example of how to delineate a sequence for transferring a file is to set the **Label** property for the first message to **start**, for intermediate messages to **content**, and for the last message to **end**. The relative position of the content messages can be computed as the current message *SequenceNumber* delta from the **start** message *SequenceNumber*.
+You enable the feature by setting the [requiresSession](/azure/templates/microsoft.servicebus/namespaces/queues#property-values) property on the queue or subscription via Azure Resource Manager, or by setting the flag in the portal. It's required before you attempt to use the related API operations.
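+The following minimal sketch is not part of the article; it assumes the `Azure.Messaging.ServiceBus.Administration` client and placeholder values for the connection string and queue name. It shows the same setting being applied programmatically:
+
+```csharp
+using Azure.Messaging.ServiceBus.Administration;
+
+// Sketch only: "<connection-string>" and "<queue-name>" are placeholders.
+var adminClient = new ServiceBusAdministrationClient("<connection-string>");
+
+// RequiresSession corresponds to the requiresSession setting in an ARM template
+// and to the "Enable sessions" check box in the portal.
+await adminClient.CreateQueueAsync(new CreateQueueOptions("<queue-name>")
+{
+    RequiresSession = true
+});
+```
+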
-The session feature in Service Bus enables a specific receive operation, in the form of [MessageSession](/dotnet/api/microsoft.servicebus.messaging.messagesession) in the C# and Java APIs. You enable the feature by setting the [requiresSession](/azure/templates/microsoft.servicebus/namespaces/queues#property-values) property on the queue or subscription via Azure Resource Manager, or by setting the flag in the portal. It's required before you attempt to use the related API operations.
+In the portal, you can enable sessions while creating an entity (queue or subscription) as shown in the following examples.
-In the portal, set the flag with the following check box:
-![Screenshot of the Create queue dialog box with the Enable sessions option selected and outlined in red.][2]
-> [!NOTE]
-> When Sessions are enabled on a queue or a subscription, the client applications can ***no longer*** send/receive regular messages. All messages must be sent as part of a session (by setting the session id) and received by receiving the session.
-The APIs for sessions exist on queue and subscription clients. There's an imperative model that controls when sessions and messages are received, and a handler-based model, similar to *OnMessage*, that hides the complexity of managing the receive loop.
+> [!IMPORTANT]
+> When sessions are enabled on a queue or a subscription, the client applications can ***no longer*** send or receive regular messages. All messages must be sent as part of a session (by setting the session ID) and received by accepting the session.
+
+The APIs for sessions exist on queue and subscription clients. There's an imperative model that controls when sessions and messages are received, and a handler-based model that hides the complexity of managing the receive loop.
+
+For samples, use links in the [Next steps](#next-steps) section.
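+As a minimal sketch (not from the article; it assumes the `Azure.Messaging.ServiceBus` library and placeholder connection string and queue names), sending into a session and receiving from it looks like this:
+
+```csharp
+using Azure.Messaging.ServiceBus;
+
+string connectionString = "<connection-string>";   // placeholder
+string queueName = "<session-enabled-queue>";      // placeholder
+
+await using var client = new ServiceBusClient(connectionString);
+
+// Sender side: stamping the message with a session ID assigns it to that session.
+ServiceBusSender sender = client.CreateSender(queueName);
+await sender.SendMessageAsync(new ServiceBusMessage("step 1") { SessionId = "order-12345" });
+
+// Receiver side: accepting the next available session locks all messages with that session ID.
+ServiceBusSessionReceiver receiver = await client.AcceptNextSessionAsync(queueName);
+ServiceBusReceivedMessage message = await receiver.ReceiveMessageAsync();
+await receiver.CompleteMessageAsync(message);
+```
+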
### Session features
Sessions provide concurrent de-multiplexing of interleaved message streams while
![A diagram showing how the Sessions feature preserves ordered delivery.][1]
-A [MessageSession](/dotnet/api/microsoft.servicebus.messaging.messagesession) receiver is created by the client accepting a session. The client calls [QueueClient.AcceptMessageSession](/dotnet/api/microsoft.servicebus.messaging.queueclient.acceptmessagesession#Microsoft_ServiceBus_Messaging_QueueClient_AcceptMessageSession) or [QueueClient.AcceptMessageSessionAsync](/dotnet/api/microsoft.servicebus.messaging.queueclient.acceptmessagesessionasync#Microsoft_ServiceBus_Messaging_QueueClient_AcceptMessageSessionAsync) in C#. In the reactive callback model, it registers a session handler.
+A session receiver is created by a client accepting a session. When the session is accepted and held by a client, the client holds an exclusive lock on all messages with that session's **session ID** in the queue or subscription. It also holds exclusive locks on all messages with that **session ID** that arrive later.
-When the [MessageSession](/dotnet/api/microsoft.servicebus.messaging.messagesession) object is accepted and while it's held by a client, that client holds an exclusive lock on all messages with that session's [SessionId](/dotnet/api/microsoft.servicebus.messaging.messagesession.sessionid#Microsoft_ServiceBus_Messaging_MessageSession_SessionId) that exist in the queue or subscription, and also on all messages with that **SessionId** that still arrive while the session is held.
-
-The lock is released when **Close** or **CloseAsync** are called, or when the lock expires in cases in which the application is unable to do the close operation. The session lock should be treated like an exclusive lock on a file, meaning that the application should close the session as soon as it no longer needs it and/or doesn't expect any further messages.
+The lock is released when you call the close-related methods on the receiver, or when the lock expires. There are also methods on the receiver to renew the lock. Alternatively, you can use the automatic lock-renewal feature, where you specify the duration for which you want the lock to keep being renewed. The session lock should be treated like an exclusive lock on a file, meaning that the application should close the session as soon as it no longer needs it and/or doesn't expect any further messages.
When multiple concurrent receivers pull from the queue, the messages belonging to a particular session are dispatched to the specific receiver that currently holds the lock for that session. With that operation, an interleaved message stream in one queue or subscription is cleanly de-multiplexed to different receivers and those receivers can also live on different client machines, since the lock management happens service-side, inside Service Bus.
The session state facility enables an application-defined annotation of a messag
From the Service Bus perspective, the message session state is an opaque binary object that can hold data of the size of one message, which is 256 KB for Service Bus Standard, and 1 MB for Service Bus Premium. The processing state relative to a session can be held inside the session state, or the session state can point to some storage location or database record that holds such information.
-The APIs for managing session state, [SetState](/dotnet/api/microsoft.servicebus.messaging.messagesession.setstate#Microsoft_ServiceBus_Messaging_MessageSession_SetState_System_IO_Stream_) and [GetState](/dotnet/api/microsoft.servicebus.messaging.messagesession.getstate#Microsoft_ServiceBus_Messaging_MessageSession_GetState), can be found on the [MessageSession](/dotnet/api/microsoft.servicebus.messaging.messagesession) object in both the C# and Java APIs. A session that had previously no session state set returns a **null** reference for **GetState**. Clearing the previously set session state is done with [SetState(null)](/dotnet/api/microsoft.servicebus.messaging.messagesession.setstate#Microsoft_ServiceBus_Messaging_MessageSession_SetState_System_IO_Stream_).
+The methods for managing session state, SetState and GetState, can be found on the session receiver object. A session that previously had no session state returns a null reference for GetState. The previously set session state can be cleared by passing null to the SetState method on the receiver.
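+As a hedged sketch, in the current `Azure.Messaging.ServiceBus` .NET client these operations correspond to `GetSessionStateAsync` and `SetSessionStateAsync` on the session receiver; the names and payload below are illustrative only:
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Azure.Messaging.ServiceBus;
+
+public static class SessionStateExample
+{
+    // "receiver" is a ServiceBusSessionReceiver for a session that has already been accepted.
+    public static async Task UpdateStateAsync(ServiceBusSessionReceiver receiver)
+    {
+        BinaryData state = await receiver.GetSessionStateAsync();      // null if no state was ever set
+        Console.WriteLine(state?.ToString() ?? "no session state yet");
+
+        // Store application-defined processing state for the session.
+        await receiver.SetSessionStateAsync(BinaryData.FromString("{\"lastProcessedStep\":3}"));
+
+        // Passing null clears the previously set session state.
+        await receiver.SetSessionStateAsync(null);
+    }
+}
+```
+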
Session state remains as long as it isn't cleared up (returning **null**), even if all messages in a session are consumed.
-All existing sessions in a queue or subscription can be enumerated with the **SessionBrowser** method in the Java API and with [GetMessageSessions](/dotnet/api/microsoft.servicebus.messaging.queueclient.getmessagesessions#Microsoft_ServiceBus_Messaging_QueueClient_GetMessageSessions) on the [QueueClient](/dotnet/api/microsoft.servicebus.messaging.queueclient) and [SubscriptionClient](/dotnet/api/microsoft.servicebus.messaging.subscriptionclient) in the .NET Framework client.
- The session state held in a queue or in a subscription counts towards that entity's storage quota. When the application is finished with a session, it is therefore recommended for the application to clean up its retained state to avoid external management cost. ### Impact of delivery count
The [request-reply pattern](https://www.enterpriseintegrationpatterns.com/patter
Multiple applications can send their requests to a single request queue, with a specific header parameter set to uniquely identify the sender application. The receiver application can process the requests coming in the queue and send replies on the session enabled queue, setting the session ID to the unique identifier the sender had sent on the request message. The application that sent the request can then receive messages on the specific session ID and correctly process the replies. > [!NOTE]
-> The application that sends the initial requests should know about the session ID and use `SessionClient.AcceptMessageSession(SessionID)` to lock the session on which it's expecting the response. It's a good idea to use a GUID that uniquely identifies the instance of the application as a session id. There should be no session handler or `AcceptMessageSession(timeout)` on the queue to ensure that responses are available to be locked and processed by specific receivers.
+> The application that sends the initial requests should know about the session ID and use it to accept the session, so that the session on which it expects the response is locked. It's a good idea to use a GUID that uniquely identifies the instance of the application as the session ID. There should be no session handler, or a timeout specified, on the session receiver for the queue, to ensure that responses are available to be locked and processed by specific receivers.
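+A minimal sketch of the requester side (an illustration under assumed queue names, using the `Azure.Messaging.ServiceBus` library, and not taken from the article) might look like this:
+
+```csharp
+using System;
+using Azure.Messaging.ServiceBus;
+
+string replySessionId = Guid.NewGuid().ToString();   // uniquely identifies this requester instance
+
+await using var client = new ServiceBusClient("<connection-string>");   // placeholder
+
+// Send the request and tell the processing service which session to reply on.
+await client.CreateSender("<request-queue>").SendMessageAsync(
+    new ServiceBusMessage("do-work") { ReplyToSessionId = replySessionId });
+
+// Lock the reply session up front so only this instance can receive the response.
+ServiceBusSessionReceiver replyReceiver =
+    await client.AcceptSessionAsync("<session-enabled-reply-queue>", replySessionId);
+ServiceBusReceivedMessage reply = await replyReceiver.ReceiveMessageAsync(TimeSpan.FromSeconds(30));
+```
+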
## Next steps -- See either the [Microsoft.Azure.ServiceBus samples](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/Sessions) or [Microsoft.ServiceBus.Messaging samples](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.ServiceBus.Messaging/Sessions) for an example that uses the .NET Framework client to handle session-aware messages. -
-To learn more about Service Bus messaging, see the following topics:
+- [Azure.Messaging.ServiceBus samples for .NET](/samples/azure/azure-sdk-for-net/azuremessagingservicebus-samples/)
+- [Azure Service Bus client library for Java - Samples](/samples/azure/azure-sdk-for-java/servicebus-samples/)
+- [Azure Service Bus client library for Python - Samples](/samples/azure/azure-sdk-for-python/servicebus-samples/)
+- [Azure Service Bus client library for JavaScript - Samples](/samples/azure/azure-sdk-for-js/service-bus-javascript/)
+- [Azure Service Bus client library for TypeScript - Samples](/samples/azure/azure-sdk-for-js/service-bus-typescript/)
+- [Microsoft.Azure.ServiceBus samples for .NET](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/) (Sessions and SessionState samples)
-* [Service Bus queues, topics, and subscriptions](service-bus-queues-topics-subscriptions.md)
-* [Get started with Service Bus queues](service-bus-dotnet-get-started-with-queues.md)
-* [How to use Service Bus topics and subscriptions](service-bus-dotnet-how-to-use-topics-subscriptions.md)
+To learn more about Service Bus messaging, see [Service Bus queues, topics, and subscriptions](service-bus-queues-topics-subscriptions.md).
[1]: ./media/message-sessions/sessions.png
-[2]: ./media/message-sessions/queue-sessions.png
service-bus-messaging Message Transfers Locks Settlement https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/message-transfers-locks-settlement.md
Title: Azure Service Bus message transfers, locks, and settlement description: This article provides an overview of Azure Service Bus message transfers, locks, and settlement operations. Previously updated : 06/23/2020 Last updated : 04/12/2021
The central capability of a message broker such as Service Bus is to accept messages into a queue or topic and hold them available for later retrieval. *Send* is the term that is commonly used for the transfer of a message into the message broker. *Receive* is the term commonly used for the transfer of a message to a retrieving client.
-When a client sends a message, it usually wants to know whether the message has been properly transferred to and accepted by the broker or whether some sort of error occurred. This positive or negative acknowledgment settles the client and the broker understanding about the transfer state of the message and is thus referred to as *settlement*.
+When a client sends a message, it usually wants to know whether the message has been properly transferred to and accepted by the broker or whether some sort of error occurred. This positive or negative acknowledgment settles the client's and the broker's understanding of the transfer state of the message, and is therefore referred to as *settlement*.
Likewise, when the broker transfers a message to a client, the broker and client want to establish an understanding of whether the message has been successfully processed and can therefore be removed, or whether the message delivery or processing failed, and thus the message might have to be delivered again.
Likewise, when the broker transfers a message to a client, the broker and client
Using any of the supported Service Bus API clients, send operations into Service Bus are always explicitly settled, meaning that the API operation waits for an acceptance result from Service Bus to arrive, and then completes the send operation.
-If the message is rejected by Service Bus, the rejection contains an error indicator and text with a "tracking-id" inside of it. The rejection also includes information about whether the operation can be retried with any expectation of success. In the client, this information is turned into an exception and raised to the caller of the send operation. If the message has been accepted, the operation silently completes.
+If the message is rejected by Service Bus, the rejection contains an error indicator and text with a **tracking-id** in it. The rejection also includes information about whether the operation can be retried with any expectation of success. In the client, this information is turned into an exception and raised to the caller of the send operation. If the message has been accepted, the operation silently completes.
-When using the AMQP protocol, which is the exclusive protocol for the .NET Standard client and the Java client and [which is an option for the .NET Framework client](service-bus-amqp-dotnet.md), message transfers and settlements are pipelined and completely asynchronous, and it is recommended that you use the asynchronous programming model API variants.
+When using the AMQP protocol, which is the exclusive protocol for the .NET Standard, Java, JavaScript, Python, and Go clients, and [an option for the .NET Framework client](service-bus-amqp-dotnet.md), message transfers and settlements are pipelined and asynchronous. We recommend that you use the asynchronous programming model API variants.
A sender can put several messages on the wire in rapid succession without having to wait for each message to be acknowledged, as would otherwise be the case with the SBMP protocol or with HTTP 1.1. Those asynchronous send operations complete as the respective messages are accepted and stored, on partitioned entities or when send operation to different entities overlap. The completions might also occur out of the original send order.
-The strategy for handling the outcome of send operations can have immediate and significant performance impact for your application. The examples in this section are written in C# and apply equivalently for Java Futures.
+The strategy for handling the outcome of send operations can have immediate and significant performance impact for your application. The examples in this section are written in C# and apply to Java futures, Java monos, JavaScript promises, and equivalent concepts in other languages.
If the application produces bursts of messages, illustrated here with a plain loop, and were to await the completion of each send operation before sending the next message, synchronous or asynchronous API shapes alike, sending 10 messages only completes after 10 sequential full round trips for settlement.
-With an assumed 70 millisecond TCP roundtrip latency distance from an on-premises site to Service Bus and giving just 10 ms for Service Bus to accept and store each message, the following loop takes up at least 8 seconds, not counting payload transfer time or potential route congestion effects:
+With an assumed 70-millisecond TCP roundtrip latency distance from an on-premises site to Service Bus and giving just 10 ms for Service Bus to accept and store each message, the following loop takes up at least 8 seconds, not counting payload transfer time or potential route congestion effects:
```csharp for (int i = 0; i < 100; i++)
for (int i = 0; i < 100; i++)
} ```
-If the application starts the 10 asynchronous send operations in immediate succession and awaits their respective completion separately, the round trip time for those 10 send operations overlaps. The 10 messages are transferred in immediate succession, potentially even sharing TCP frames, and the overall transfer duration largely depends on the network-related time it takes to get the messages transferred to the broker.
+If the application starts the 10 asynchronous send operations in immediate succession and awaits their respective completion separately, the round-trip time for those 10 send operations overlaps. The 10 messages are transferred in immediate succession, potentially even sharing TCP frames, and the overall transfer duration largely depends on the network-related time it takes to get the messages transferred to the broker.
Making the same assumptions as for the prior loop, the total overlapped execution time for the following loop might stay well under one second:
for (int i = 0; i < 100; i++)
await Task.WhenAll(tasks); ```
-It is important to note that all asynchronous programming models use some form of memory-based, hidden work queue that holds pending operations. When [SendAsync](/dotnet/api/microsoft.azure.servicebus.queueclient.sendasync#Microsoft_Azure_ServiceBus_QueueClient_SendAsync_Microsoft_Azure_ServiceBus_Message_) (C#) or **Send** (Java) return, the send task is queued up in that work queue but the protocol gesture only commences once it is the task's turn to run. For code that tends to push bursts of messages and where reliability is a concern, care should be taken that not too many messages are put "in flight" at once, because all sent messages take up memory until they have factually been put onto the wire.
+It is important to note that all asynchronous programming models use some form of memory-based, hidden work queue that holds pending operations. When the send API returns, the send task is queued up in that work queue but the protocol gesture only commences once it is the task's turn to run. For code that tends to push bursts of messages and where reliability is a concern, care should be taken that not too many messages are put "in flight" at once, because all sent messages take up memory until they have factually been put onto the wire.
Semaphores, as shown in the following code snippet in C#, are synchronization objects that enable such application-level throttling when needed. This use of a semaphore allows for at most 10 messages to be in flight at once. One of the 10 available semaphore locks is taken before the send and it is released as the send completes. The 11th pass through the loop waits until at least one of the prior sends has completed, and then makes its lock available:
For receive operations, the Service Bus API clients enable two different explici
### ReceiveAndDelete
-The [Receive-and-Delete](/dotnet/api/microsoft.servicebus.messaging.receivemode) mode tells the broker to consider all messages it sends to the receiving client as settled when sent. That means that the message is considered consumed as soon as the broker has put it onto the wire. If the message transfer fails, the message is lost.
+The **receive-and-delete** mode tells the broker to consider all messages it sends to the receiving client as settled when sent. That means that the message is considered consumed as soon as the broker has put it onto the wire. If the message transfer fails, the message is lost.
The upside of this mode is that the receiver does not need to take further action on the message and is also not slowed by waiting for the outcome of the settlement. If the data contained in the individual messages have low value and/or are only meaningful for a very short time, this mode is a reasonable choice. ### PeekLock
-The [Peek-Lock](/dotnet/api/microsoft.servicebus.messaging.receivemode) mode tells the broker that the receiving client wants to settle received messages explicitly. The message is made available for the receiver to process, while held under an exclusive lock in the service so that other, competing receivers cannot see it. The duration of the lock is initially defined at the queue or subscription level and can be extended by the client owning the lock, via the [RenewLock](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver.renewlockasync#Microsoft_Azure_ServiceBus_Core_MessageReceiver_RenewLockAsync_System_String_) operation.
+The **peek-lock** mode tells the broker that the receiving client wants to settle received messages explicitly. The message is made available for the receiver to process, while held under an exclusive lock in the service so that other, competing receivers cannot see it. The duration of the lock is initially defined at the queue or subscription level and can be extended by the client owning the lock. For details about renewing locks, see the [Renew locks](#renew-locks) section in this article.
When a message is locked, other clients receiving from the same queue or subscription can take on locks and retrieve the next available messages not under active lock. When the lock on a message is explicitly released or when the lock expires, the message pops back up at or near the front of the retrieval order for redelivery.
-When the message is repeatedly released by receivers or they let the lock elapse for a defined number of times ([maxDeliveryCount](/dotnet/api/microsoft.servicebus.messaging.queuedescription.maxdeliverycount#Microsoft_ServiceBus_Messaging_QueueDescription_MaxDeliveryCount)), the message is automatically removed from the queue or subscription and placed into the associated dead-letter queue.
+When the message is repeatedly released by receivers or they let the lock elapse for a defined number of times ([Max Delivery Count](service-bus-dead-letter-queues.md#maximum-delivery-count)), the message is automatically removed from the queue or subscription and placed into the associated dead-letter queue.
-The receiving client initiates settlement of a received message with a positive acknowledgment when it calls [Complete](/dotnet/api/microsoft.servicebus.messaging.queueclient.complete#Microsoft_ServiceBus_Messaging_QueueClient_Complete_System_Guid_) at the API level. This indicates to the broker that the message has been successfully processed and the message is removed from the queue or subscription. The broker replies to the receiver's settlement intent with a reply that indicates whether the settlement could be performed.
+The receiving client initiates settlement of a received message with a positive acknowledgment when it calls the `Complete` API for the message. It indicates to the broker that the message has been successfully processed and the message is removed from the queue or subscription. The broker replies to the receiver's settlement intent with a reply that indicates whether the settlement could be performed.
-When the receiving client fails to process a message but wants the message to be redelivered, it can explicitly ask for the message to be released and unlocked instantly by calling [Abandon](/dotnet/api/microsoft.servicebus.messaging.queueclient.abandon) or it can do nothing and let the lock elapse.
+When the receiving client fails to process a message but wants the message to be redelivered, it can explicitly ask for the message to be released and unlocked instantly by calling the `Abandon` API for the message or it can do nothing and let the lock elapse.
-If a receiving client fails to process a message and knows that redelivering the message and retrying the operation will not help, it can reject the message, which moves it into the dead-letter queue by calling [DeadLetter](/dotnet/api/microsoft.servicebus.messaging.queueclient.deadletter), which also allows setting a custom property including a reason code that can be retrieved with the message from the dead-letter queue.
+If a receiving client fails to process a message and knows that redelivering the message and retrying the operation won't help, it can reject the message by calling the `DeadLetter` API on the message, which moves it into the dead-letter queue. The call also allows setting a custom property, including a reason code, that can be retrieved with the message from the dead-letter queue.
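+As a minimal sketch (not from the article; it uses the `Azure.Messaging.ServiceBus` library, placeholder names, and a deliberately simplistic transient-error check), the three settlement outcomes look like this:
+
+```csharp
+using System;
+using Azure.Messaging.ServiceBus;
+
+await using var client = new ServiceBusClient("<connection-string>");   // placeholder
+ServiceBusReceiver receiver = client.CreateReceiver("<queue-name>",
+    new ServiceBusReceiverOptions { ReceiveMode = ServiceBusReceiveMode.PeekLock });   // PeekLock is also the default
+
+ServiceBusReceivedMessage message = await receiver.ReceiveMessageAsync();
+
+try
+{
+    // ... process the message body here ...
+    await receiver.CompleteMessageAsync(message);        // positive settlement: the message is removed
+}
+catch (Exception ex) when (IsTransient(ex))
+{
+    await receiver.AbandonMessageAsync(message);         // release the lock so the message is redelivered
+}
+catch (Exception ex)
+{
+    // Redelivery won't help: move the message to the dead-letter queue with a reason code.
+    await receiver.DeadLetterMessageAsync(message, "ProcessingError", ex.Message);
+}
+
+static bool IsTransient(Exception e) => e is TimeoutException;   // simplistic example check, assumption only
+```
+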
-A special case of settlement is deferral, which is discussed in a separate article.
+A special case of settlement is deferral, which is discussed in a [separate article](message-deferral.md).
-The **Complete** or **Deadletter** operations as well as the **RenewLock** operations may fail due to network issues, if the held lock has expired, or there are other service-side conditions that prevent settlement. In one of the latter cases, the service sends a negative acknowledgment that surfaces as an exception in the API clients. If the reason is a broken network connection, the lock is dropped since Service Bus does not support recovery of existing AMQP links on a different connection.
+The `Complete`, `Deadletter`, or `RenewLock` operations may fail due to network issues, if the held lock has expired, or there are other service-side conditions that prevent settlement. In one of the latter cases, the service sends a negative acknowledgment that surfaces as an exception in the API clients. If the reason is a broken network connection, the lock is dropped since Service Bus does not support recovery of existing AMQP links on a different connection.
-If **Complete** fails, which occurs typically at the very end of message handling and in some cases after minutes of processing work, the receiving application can decide whether it preserves the state of the work and ignores the same message when it is delivered a second time, or whether it tosses out the work result and retries as the message is redelivered.
+If `Complete` fails, which occurs typically at the very end of message handling and in some cases after minutes of processing work, the receiving application can decide whether it preserves the state of the work and ignores the same message when it is delivered a second time, or whether it tosses out the work result and retries as the message is redelivered.
The typical mechanism for identifying duplicate message deliveries is by checking the message-id, which can and should be set by the sender to a unique value, possibly aligned with an identifier from the originating process. A job scheduler would likely set the message-id to the identifier of the job it is trying to assign to the given worker, and the worker would ignore the second occurrence of the job assignment if that job is already done.
The typical mechanism for identifying duplicate message deliveries is by checkin
> > When the lock is lost, Azure Service Bus will generate a LockLostException which will be surfaced on the client application code. In this case, the client's default retry logic should automatically kick in and retry the operation.
-## Next steps
-
-To learn more about Service Bus messaging, see the following topics:
+## Renew locks
+The default value for the lock duration is **30 seconds**. You can specify a different value for the lock duration at the queue or subscription level. The client owning the lock can renew the message lock by using methods on the receiver object. Alternatively, you can use the automatic lock-renewal feature, where you specify the time duration for which you want the lock to be kept renewed.
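As a rough sketch of the two approaches (manual renewal on the receiver versus automatic renewal through the processor) using the Azure.Messaging.ServiceBus .NET SDK; the connection string, queue name, and renewal duration are placeholder assumptions.

```csharp
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

await using var client = new ServiceBusClient("<connection-string>");   // placeholder

// Manual renewal: extend the lock while a long-running step is still in progress.
ServiceBusReceiver receiver = client.CreateReceiver("<queue-name>");
ServiceBusReceivedMessage message = await receiver.ReceiveMessageAsync();   // assumes a message is available
await receiver.RenewMessageLockAsync(message);   // extends the lock by the entity's lock duration
await receiver.CompleteMessageAsync(message);

// Automatic renewal: the processor keeps renewing the lock for up to the specified duration.
ServiceBusProcessor processor = client.CreateProcessor("<queue-name>", new ServiceBusProcessorOptions
{
    MaxAutoLockRenewalDuration = TimeSpan.FromMinutes(10)   // assumed value
});
processor.ProcessMessageAsync += async args => await args.CompleteMessageAsync(args.Message);
processor.ProcessErrorAsync += args => Task.CompletedTask;  // log errors in a real application
await processor.StartProcessingAsync();
```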
-* [Service Bus queues, topics, and subscriptions](service-bus-queues-topics-subscriptions.md)
-* [Get started with Service Bus queues](service-bus-dotnet-get-started-with-queues.md)
-* [How to use Service Bus topics and subscriptions](service-bus-dotnet-how-to-use-topics-subscriptions.md)
+## Next steps
+- A special case of settlement is deferral. See [Message deferral](message-deferral.md) for details.
+- To learn about dead-lettering, see [Dead-letter queues](service-bus-dead-letter-queues.md).
+- To learn more about Service Bus messaging in general, see [Service Bus queues, topics, and subscriptions](service-bus-queues-topics-subscriptions.md).
service-bus-messaging Service Bus Azure And Service Bus Queues Compared Contrasted https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-azure-and-service-bus-queues-compared-contrasted.md
Title: Compare Azure Storage queues and Service Bus queues description: Analyzes differences and similarities between two types of queues offered by Azure. Previously updated : 11/04/2020 Last updated : 04/12/2021 # Storage queues and Service Bus queues - compared and contrasted
As a solution architect/developer, **you should consider using Service Bus queue
* Your solution needs to receive messages without having to poll the queue. With Service Bus, you can achieve it by using a long-polling receive operation using the TCP-based protocols that Service Bus supports. * Your solution requires the queue to provide a guaranteed first-in-first-out (FIFO) ordered delivery. * Your solution needs to support automatic duplicate detection.
-* You want your application to process messages as parallel long-running streams (messages are associated with a stream using the [SessionId](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.sessionid) property on the message). In this model, each node in the consuming application competes for streams, as opposed to messages. When a stream is given to a consuming node, the node can examine the state of the application stream state using transactions.
+* You want your application to process messages as parallel long-running streams (messages are associated with a stream using the **session ID** property on the message). In this model, each node in the consuming application competes for streams, as opposed to messages. When a stream is given to a consuming node, the node can examine the state of the application stream using transactions.
* Your solution requires transactional behavior and atomicity when sending or receiving multiple messages from a queue. * Your application handles messages that can exceed 64 KB but won't likely approach the 256-KB limit. * You deal with a requirement to provide a role-based access model to the queues, and different rights/permissions for senders and receivers. For more information, see the following articles:
This section compares some of the fundamental queuing capabilities provided by S
| Comparison Criteria | Storage queues | Service Bus queues | | | | |
-| Ordering guarantee |**No** <br/><br>For more information, see the first note in the [Additional Information](#additional-information) section.</br> | **Yes - First-In-First-Out (FIFO)**<br/><br>(through the use of [message sessions](message-sessions.md)) |
+| Ordering guarantee |**No** <br/><br>For more information, see the first note in the [Additional Information](#additional-information) section.</br> | **Yes - First-In-First-Out (FIFO)**<br/><br>(by using [message sessions](message-sessions.md)) |
| Delivery guarantee |**At-Least-Once** |**At-Least-Once** (using PeekLock receive mode. It's the default) <br/><br/>**At-Most-Once** (using ReceiveAndDelete receive mode) <br/> <br/> Learn more about various [Receive modes](service-bus-queues-topics-subscriptions.md#receive-modes) | | Atomic operation support |**No** |**Yes**<br/><br/> |
-| Receive behavior |**Non-blocking**<br/><br/>(completes immediately if no new message is found) |**Blocking with or without a timeout**<br/><br/>(offers long polling, or the ["Comet technique"](https://go.microsoft.com/fwlink/?LinkId=613759))<br/><br/>**Non-blocking**<br/><br/>(through the use of .NET managed API only) |
-| Push-style API |**No** |**Yes**<br/><br/>[QueueClient.OnMessage](/dotnet/api/microsoft.servicebus.messaging.queueclient.onmessage#Microsoft_ServiceBus_Messaging_QueueClient_OnMessage_System_Action_Microsoft_ServiceBus_Messaging_BrokeredMessage__) and [MessageSessionHandler.OnMessage](/dotnet/api/microsoft.servicebus.messaging.messagesessionhandler.onmessage#Microsoft_ServiceBus_Messaging_MessageSessionHandler_OnMessage_Microsoft_ServiceBus_Messaging_MessageSession_Microsoft_ServiceBus_Messaging_BrokeredMessage__) sessions .NET API. |
+| Receive behavior |**Non-blocking**<br/><br/>(completes immediately if no new message is found) |**Blocking with or without a timeout**<br/><br/>(offers long polling, or the ["Comet technique"](https://go.microsoft.com/fwlink/?LinkId=613759))<br/><br/>**Non-blocking**<br/><br/>(using .NET managed API only) |
+| Push-style API |**No** |**Yes**<br/><br/>Our .NET, Java, JavaScript, and Go SDKs provide push-style API. |
| Receive mode |**Peek & Lease** |**Peek & Lock**<br/><br/>**Receive & Delete** | | Exclusive access mode |**Lease-based** |**Lock-based** |
-| Lease/Lock duration |**30 seconds (default)**<br/><br/>**7 days (maximum)** (You can renew or release a message lease using the [UpdateMessage](/dotnet/api/microsoft.azure.storage.queue.cloudqueue.updatemessage) API.) |**60 seconds (default)**<br/><br/>You can renew a message lock using the [RenewLock](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.renewlock#Microsoft_ServiceBus_Messaging_BrokeredMessage_RenewLock) API. |
-| Lease/Lock precision |**Message level**<br/><br/>Each message can have a different timeout value, which you can then update as needed while processing the message, by using the [UpdateMessage](/dotnet/api/microsoft.azure.storage.queue.cloudqueue.updatemessage) API. |**Queue level**<br/><br/>(each queue has a lock precision applied to all of its messages, but you can renew the lock using the [RenewLock](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.renewlock#Microsoft_ServiceBus_Messaging_BrokeredMessage_RenewLock) API.) |
-| Batched receive |**Yes**<br/><br/>(explicitly specifying message count when retrieving messages, up to a maximum of 32 messages) |**Yes**<br/><br/>(implicitly enabling a pre-fetch property or explicitly through the use of transactions) |
-| Batched send |**No** |**Yes**<br/><br/>(through the use of transactions or client-side batching) |
+| Lease/Lock duration |**30 seconds (default)**<br/><br/>**7 days (maximum)** (You can renew or release a message lease using the [UpdateMessage](/dotnet/api/microsoft.azure.storage.queue.cloudqueue.updatemessage) API.) |**30 seconds (default)**<br/><br/>You can renew the message lock for the same lock duration each time manually or use the automatic lock renewal feature where the client manages lock renewal for you. |
+| Lease/Lock precision |**Message level**<br/><br/>Each message can have a different timeout value, which you can then update as needed while processing the message, by using the [UpdateMessage](/dotnet/api/microsoft.azure.storage.queue.cloudqueue.updatemessage) API. |**Queue level**<br/><br/>(each queue has a lock precision applied to all of its messages, but the lock can be renewed as described in the previous row) |
+| Batched receive |**Yes**<br/><br/>(explicitly specifying message count when retrieving messages, up to a maximum of 32 messages) |**Yes**<br/><br/>(implicitly enabling a pre-fetch property or explicitly by using transactions) |
+| Batched send |**No** |**Yes**<br/><br/>(by using transactions or client-side batching) |
### Additional information * Messages in Storage queues are typically first-in-first-out, but sometimes they can be out of order. For example, when the visibility-timeout duration of a message expires because a client application crashed while processing a message. When the visibility timeout expires, the message becomes visible again on the queue for another worker to dequeue it. At that point, the newly visible message might be placed in the queue to be dequeued again.
This section compares some of the fundamental queuing capabilities provided by S
- Decoupling application components to increase scalability and tolerance for failures - Load leveling - Building process workflows.
-* Inconsistencies with regard to message handling in the context of Service Bus sessions can be avoided by using session state to store the application's state relative to the progress of handling the session's message sequence, and by using transactions around settling received messages and updating the session state. This kind of consistency feature is sometimes labeled *exactly once processing* in other vendor's products. Any transaction failures will obviously cause messages to be redelivered and that's why the term isn't exactly adequate.
+* Inconsistencies regarding message handling in the context of Service Bus sessions can be avoided by using session state to store the application's state relative to the progress of handling the session's message sequence, and by using transactions around settling received messages and updating the session state. This kind of consistency feature is sometimes labeled *exactly once processing* in other vendor's products. Any transaction failures will obviously cause messages to be redelivered and that's why the term isn't exactly adequate.
* Storage queues provide a uniform and consistent programming model across queues, tables, and BLOBs ΓÇô both for developers and for operations teams. * Service Bus queues provide support for local transactions in the context of a single queue. * The **Receive and Delete** mode supported by Service Bus provides the ability to reduce the messaging operation count (and associated cost) in exchange for lowered delivery assurance. * Storage queues provide leases with the ability to extend the leases for messages. This feature allows the worker processes to maintain short leases on messages. So, if a worker crashes, the message can be quickly processed again by another worker. Also, a worker can extend the lease on a message if it needs to process it longer than the current lease time.
-* Storage queues offer a visibility timeout that you can set upon the enqueuing or dequeuing of a message. Also, you can update a message with different lease values at run-time, and update different values across messages in the same queue. Service Bus lock timeouts are defined in the queue metadata. However, you can renew the lock by calling the [RenewLock](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.renewlock#Microsoft_ServiceBus_Messaging_BrokeredMessage_RenewLock) method.
+* Storage queues offer a visibility timeout that you can set upon the enqueuing or dequeuing of a message. Also, you can update a message with different lease values at run-time, and update different values across messages in the same queue. Service Bus lock timeouts are defined in the queue metadata. However, you can renew the message lock for the pre-defined lock duration manually or use the automatic lock renewal feature where the client manages lock renewal for you.
* The maximum timeout for a blocking receive operation in Service Bus queues is 24 days. However, REST-based timeouts have a maximum value of 55 seconds. * Client-side batching provided by Service Bus enables a queue client to batch multiple messages into a single send operation. Batching is only available for asynchronous send operations. * Features such as the 200-TB ceiling of Storage queues (more when you virtualize accounts) and unlimited queues make it an ideal platform for SaaS providers.
This section compares advanced capabilities provided by Storage queues and Servi
| Poison message support |**Yes** |**Yes** | | In-place update |**Yes** |**Yes** | | Server-side transaction log |**Yes** |**No** |
-| Storage metrics |**Yes**<br/><br/>**Minute Metrics** provides real-time metrics for availability, TPS, API call counts, error counts, and more. They're all in real time, aggregated per minute and reported within a few minutes from what just happened in production. For more information, see [About Storage Analytics Metrics](/rest/api/storageservices/fileservices/About-Storage-Analytics-Metrics). |**Yes**<br/><br/>(bulk queries by calling [GetQueues](/dotnet/api/microsoft.servicebus.namespacemanager.getqueues#Microsoft_ServiceBus_NamespaceManager_GetQueues)) |
-| State management |**No** |**Yes**<br/><br/>[Microsoft.ServiceBus.Messaging.EntityStatus.Active](/dotnet/api/microsoft.servicebus.messaging.entitystatus), [Microsoft.ServiceBus.Messaging.EntityStatus.Disabled](/dotnet/api/microsoft.servicebus.messaging.entitystatus), [Microsoft.ServiceBus.Messaging.EntityStatus.SendDisabled](/dotnet/api/microsoft.servicebus.messaging.entitystatus), [Microsoft.ServiceBus.Messaging.EntityStatus.ReceiveDisabled](/dotnet/api/microsoft.servicebus.messaging.entitystatus) |
+| Storage metrics |**Yes**<br/><br/>**Minute Metrics** provides real-time metrics for availability, TPS, API call counts, error counts, and more. They're all in real time, aggregated per minute and reported within a few minutes from what just happened in production. For more information, see [About Storage Analytics Metrics](/rest/api/storageservices/fileservices/About-Storage-Analytics-Metrics). |**Yes**<br/><br/>For information about metrics supported by Azure Service Bus, see [Message metrics](service-bus-metrics-azure-monitor.md#message-metrics). |
+| State management |**No** |**Yes** (Active, Disabled, SendDisabled, ReceiveDisabled. For details on these states, see [Queue status](entity-suspend.md#queue-status)) |
| Message autoforwarding |**No** |**Yes** | | Purge queue function |**Yes** |**No** |
-| Message groups |**No** |**Yes**<br/><br/>(through the use of messaging sessions) |
+| Message groups |**No** |**Yes**<br/><br/>(by using messaging sessions) |
| Application state per message group |**No** |**Yes** | | Duplicate detection |**No** |**Yes**<br/><br/>(configurable on the sender side) | | Browsing message groups |**No** |**Yes** |
This section compares advanced capabilities provided by Storage queues and Servi
### Additional information * Both queuing technologies enable a message to be scheduled for delivery at a later time. * Queue autoforwarding enables thousands of queues to autoforward their messages to a single queue, from which the receiving application consumes the message. You can use this mechanism to achieve security, control flow, and isolate storage between each message publisher.
-* Storage queues provide support for updating message content. You can use this functionality for persisting state information and incremental progress updates into the message so that it can be processed from the last known checkpoint, instead of starting from scratch. With Service Bus queues, you can enable the same scenario through the use of message sessions. Sessions enable you to save and retrieve the application processing state (by using [SetState](/dotnet/api/microsoft.servicebus.messaging.messagesession.setstate#Microsoft_ServiceBus_Messaging_MessageSession_SetState_System_IO_Stream_) and [GetState](/dotnet/api/microsoft.servicebus.messaging.messagesession.getstate#Microsoft_ServiceBus_Messaging_MessageSession_GetState)).
+* Storage queues provide support for updating message content. You can use this functionality for persisting state information and incremental progress updates into the message so that it can be processed from the last known checkpoint, instead of starting from scratch. With Service Bus queues, you can enable the same scenario by using message sessions. For more information, see [Message session state](message-sessions.md#message-session-state).
* Service Bus queues support [dead lettering](service-bus-dead-letter-queues.md). It can be useful for isolating messages that meet the following criteria: - Messages can't be processed successfully by the receiving application - Messages can't reach their destination because of an expired time-to-live (TTL) property. The TTL value specifies how long a message remains in the queue. With Service Bus, the message will be moved to a special queue called $DeadLetterQueue when the TTL period expires. * To find "poison" messages in Storage queues, when dequeuing a message the application examines the [DequeueCount](/dotnet/api/microsoft.azure.storage.queue.cloudqueuemessage.dequeuecount) property of the message. If **DequeueCount** is greater than a given threshold, the application moves the message to an application-defined "dead letter" queue. * Storage queues enable you to obtain a detailed log of all of the transactions executed against the queue, and aggregated metrics. Both of these options are useful for debugging and understanding how your application uses Storage queues. They're also useful for performance-tuning your application and reducing the costs of using queues.
-* Message sessions supported by Service Bus enable messages that belong to a logical group to be associated with a receiver. It creates a session-like affinity between messages and their respective receivers. You can enable this advanced functionality in Service Bus by setting the [SessionID](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.sessionid#Microsoft_ServiceBus_Messaging_BrokeredMessage_SessionId) property on a message. Receivers can then listen on a specific session ID and receive messages that share the specified session identifier.
-* The duplication detection feature of Service Bus queues automatically removes duplicate messages sent to a queue or topic, based on the value of the [MessageId](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.messageid#Microsoft_ServiceBus_Messaging_BrokeredMessage_MessageId) property.
+* [Message sessions](message-sessions.md) supported by Service Bus enable messages that belong to a logical group to be associated with a receiver. It creates a session-like affinity between messages and their respective receivers. You can enable this advanced functionality in Service Bus by setting the session ID property on a message. Receivers can then listen on a specific session ID and receive messages that share the specified session identifier.
+* The duplication detection feature of Service Bus queues automatically removes duplicate messages sent to a queue or topic, based on the value of the message ID property.
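A brief sketch of the two capabilities from the last bullets above (session grouping and duplicate detection) with the Azure.Messaging.ServiceBus .NET SDK; the queue is assumed to have sessions and duplicate detection enabled, and all names are placeholders.

```csharp
using Azure.Messaging.ServiceBus;

await using var client = new ServiceBusClient("<connection-string>");   // placeholder

// Sender side: the session ID groups related messages; the message ID drives duplicate detection.
ServiceBusSender sender = client.CreateSender("<session-enabled-queue>");
await sender.SendMessageAsync(new ServiceBusMessage("order accepted")
{
    SessionId = "order-12345",        // assumed logical group identifier
    MessageId = "order-12345-step-1"  // unique, sender-assigned identifier
});

// Receiver side: lock onto one specific session and receive only its messages.
ServiceBusSessionReceiver sessionReceiver =
    await client.AcceptSessionAsync("<session-enabled-queue>", "order-12345");
ServiceBusReceivedMessage message = await sessionReceiver.ReceiveMessageAsync();
await sessionReceiver.CompleteMessageAsync(message);
```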
## Capacity and quotas This section compares Storage queues and Service Bus queues from the perspective of [capacity and quotas](service-bus-quotas.md) that may apply.
This section discusses the authentication and authorization features supported b
| | | | | Authentication |**Symmetric key** |**Symmetric key** | | Security model |Delegated access via SAS tokens. |SAS |
-| Identity provider federation |**No** |**Yes** |
+| Identity provider federation |**Yes** |**Yes** |
### Additional information * Every request to either of the queuing technologies must be authenticated. Public queues with anonymous access aren't supported. Using [SAS](service-bus-sas.md), you can address this scenario by publishing a write-only SAS, read-only SAS, or even a full-access SAS. * The authentication scheme provided by Storage queues involves the use of a symmetric key. This key is a hash-based Message Authentication Code (HMAC), computed with the SHA-256 algorithm and encoded as a **Base64** string. For more information about the respective protocol, see [Authentication for the Azure Storage Services](/rest/api/storageservices/fileservices/Authentication-for-the-Azure-Storage-Services). Service Bus queues support a similar model using symmetric keys. For more information, see [Shared Access Signature Authentication with Service Bus](service-bus-sas.md). ## Conclusion
-By gaining a deeper understanding of the two technologies, you can make a more informed decision on which queue technology to use, and when. The decision on when to use Storage queues or Service Bus queues clearly depends on a number of factors. These factors may depend heavily on the individual needs of your application and its architecture.
+By gaining a deeper understanding of the two technologies, you can make a more informed decision on which queue technology to use, and when. The decision on when to use Storage queues or Service Bus queues clearly depends on many factors. These factors may depend heavily on the individual needs of your application and its architecture.
You may prefer to choose Storage queues for reasons such as the following ones:
You may prefer to choose Storage queues for reasons such as the following ones:
- If you require basic communication and messaging between services - Need queues that can be larger than 80 GB in size
-Service Bus queues provide a number of advanced features such as the following ones. So, they may be a preferred choice if you're building a hybrid application or if your application otherwise requires these features.
+Service Bus queues provide many advanced features such as the following ones. So, they may be a preferred choice if you're building a hybrid application or if your application otherwise requires these features.
- [Sessions](message-sessions.md) - [Transactions](service-bus-transactions.md)
service-bus-messaging Service Bus Dead Letter Queues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-dead-letter-queues.md
Title: Service Bus dead-letter queues | Microsoft Docs description: Describes dead-letter queues in Azure Service Bus. Service Bus queues and topic subscriptions provide a secondary subqueue, called a dead-letter queue. Previously updated : 06/23/2020 Last updated : 04/08/2021 # Overview of Service Bus dead-letter queues
-Azure Service Bus queues and topic subscriptions provide a secondary subqueue, called a *dead-letter queue* (DLQ). The dead-letter queue doesn't need to be explicitly created and can't be deleted or otherwise managed independent of the main entity.
+Azure Service Bus queues and topic subscriptions provide a secondary subqueue, called a *dead-letter queue* (DLQ). The dead-letter queue doesn't need to be explicitly created and can't be deleted or managed independent of the main entity.
-This article describes dead-letter queues in Service Bus. Much of the discussion is illustrated by the [Dead-Letter queues sample](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.ServiceBus.Messaging/DeadletterQueue) on GitHub.
+This article describes dead-letter queues in Service Bus. Much of the discussion is illustrated by the [Dead-Letter queues sample](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/DeadletterQueue) on GitHub.
## The dead-letter queue
The purpose of the dead-letter queue is to hold messages that can't be delivered
From an API and protocol perspective, the DLQ is mostly similar to any other queue, except that messages can only be submitted via the dead-letter operation of the parent entity. In addition, time-to-live isn't observed, and you can't dead-letter a message from a DLQ. The dead-letter queue fully supports peek-lock delivery and transactional operations.
-There's no automatic cleanup of the DLQ. Messages remain in the DLQ until you explicitly retrieve them from the DLQ and call [Complete()](/dotnet/api/microsoft.azure.servicebus.queueclient.completeasync) on the dead-letter message.
+There's no automatic cleanup of the DLQ. Messages remain in the DLQ until you explicitly retrieve them from the DLQ and complete the dead-letter message.
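As a hedged illustration of that cleanup step, the following sketch drains a queue's DLQ with the Azure.Messaging.ServiceBus .NET SDK; the connection string, queue name, and wait time are assumptions.

```csharp
using System;
using Azure.Messaging.ServiceBus;

await using var client = new ServiceBusClient("<connection-string>");   // placeholder

// The DLQ is addressed as a sub-queue of the main entity.
ServiceBusReceiver dlqReceiver = client.CreateReceiver("<queue-name>",
    new ServiceBusReceiverOptions { SubQueue = SubQueue.DeadLetter });

ServiceBusReceivedMessage message;
while ((message = await dlqReceiver.ReceiveMessageAsync(TimeSpan.FromSeconds(5))) != null)
{
    Console.WriteLine($"{message.MessageId}: {message.DeadLetterReason} - {message.DeadLetterErrorDescription}");
    await dlqReceiver.CompleteMessageAsync(message);   // removes the message from the DLQ
}
```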
+ ## DLQ message count It's not possible to obtain the count of messages in the dead-letter queue at the topic level. That's because messages don't sit at the topic level unless Service Bus throws an internal error. Instead, when a sender sends a message to a topic, the message is forwarded to subscriptions for the topic within milliseconds and thus no longer resides at the topic level. So, you can see messages in the DLQ associated with the subscription for the topic. In the following example, **Service Bus Explorer** shows that there are 62 messages currently in the DLQ for the subscription "test1".
It's not possible to obtain count of messages in the dead-letter queue at the to
You can also get the count of DLQ messages by using Azure CLI command: [`az servicebus topic subscription show`](/cli/azure/servicebus/topic/subscription#az-servicebus-topic-subscription-show). ## Moving messages to the DLQ
+There are several activities in Service Bus that cause messages to get pushed to the DLQ from within the messaging engine itself. An application can also explicitly move messages to the DLQ. The following two properties (dead-letter reason and dead-letter description) are added to dead-lettered messages. Applications can define their own codes for the dead-letter reason property, but the system sets the following values.
-There are several activities in Service Bus that cause messages to get pushed to the DLQ from within the messaging engine itself. An application can also explicitly move messages to the DLQ.
-
-As the message gets moved by the broker, two properties are added to the message as the broker calls its internal version of the [DeadLetter](/dotnet/api/microsoft.azure.servicebus.queueclient.deadletterasync) method on the message: `DeadLetterReason` and `DeadLetterErrorDescription`.
-
-Applications can define their own codes for the `DeadLetterReason` property, but the system sets the following values.
-
-| DeadLetterReason | DeadLetterErrorDescription |
+| Dead-letter reason | Dead-letter error description |
| | | |HeaderSizeExceeded |The size quota for this stream has been exceeded. |
-|TTLExpiredException |The message expired and was dead lettered. See the [Exceeding TimeToLive](#exceeding-timetolive) section for details. |
+|TTLExpiredException |The message expired and was dead lettered. See the [Time to live](#time-to-live) section for details. |
|Session ID is null. |Session enabled entity doesn't allow a message whose session identifier is null. | |MaxTransferHopCountExceeded | The maximum number of allowed hops when forwarding between queues. Value is set to 4. |
-| MaxDeliveryCountExceededExceptionMessage | Message could not be consumed after maximum delivery attempts. See the [Exceeding MaxDeliveryCount](#exceeding-maxdeliverycount) section for details. |
-
-## Exceeding MaxDeliveryCount
-
-Queues and subscriptions each have a [QueueDescription.MaxDeliveryCount](/dotnet/api/microsoft.servicebus.messaging.queuedescription.maxdeliverycount) and [SubscriptionDescription.MaxDeliveryCount](/dotnet/api/microsoft.servicebus.messaging.subscriptiondescription.maxdeliverycount) property respectively; the default value is 10. Whenever a message has been delivered under a lock ([ReceiveMode.PeekLock](/dotnet/api/microsoft.azure.servicebus.receivemode)), but has been either explicitly abandoned or the lock has expired, the message [BrokeredMessage.DeliveryCount](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) is incremented. When [DeliveryCount](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) exceeds [MaxDeliveryCount](/dotnet/api/microsoft.servicebus.messaging.queuedescription.maxdeliverycount), the message is moved to the DLQ, specifying the `MaxDeliveryCountExceeded` reason code.
-
-This behavior can't be disabled, but you can set [MaxDeliveryCount](/dotnet/api/microsoft.servicebus.messaging.queuedescription.maxdeliverycount) to a large number.
+| MaxDeliveryCountExceededExceptionMessage | Message couldn't be consumed after maximum delivery attempts. See the [Maximum delivery count](#maximum-delivery-count) section for details. |
-## Exceeding TimeToLive
+## Maximum delivery count
+There is a limit on the number of attempts to deliver messages for Service Bus queues and subscriptions. The default value is 10. Whenever a message has been delivered under a peek-lock, but has been either explicitly abandoned or the lock has expired, the delivery count on the message is incremented. When the delivery count exceeds the limit, the message is moved to the DLQ. The dead-letter reason for the message in the DLQ is set to: MaxDeliveryCountExceeded. This behavior can't be disabled, but you can set the max delivery count to a large number.
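A minimal sketch of setting that limit when creating a queue with the Azure.Messaging.ServiceBus administration client; the queue name and the value of 20 are assumptions.

```csharp
using Azure.Messaging.ServiceBus.Administration;

var adminClient = new ServiceBusAdministrationClient("<connection-string>");   // placeholder

// Allow up to 20 delivery attempts before the message is dead-lettered as MaxDeliveryCountExceeded.
await adminClient.CreateQueueAsync(new CreateQueueOptions("<queue-name>") { MaxDeliveryCount = 20 });
```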
-When the [QueueDescription.EnableDeadLetteringOnMessageExpiration](/dotnet/api/microsoft.servicebus.messaging.queuedescription) or [SubscriptionDescription.EnableDeadLetteringOnMessageExpiration](/dotnet/api/microsoft.servicebus.messaging.subscriptiondescription) property is set to **true** (the default is **false**), all expiring messages are moved to the DLQ, specifying the `TTLExpiredException` reason code.
+## Time to live
+When you enable dead-lettering on queues or subscriptions, all expiring messages are moved to the DLQ. The dead-letter reason code is set to: TTLExpiredException.
-Expired messages are only purged and moved to the DLQ when there is at least one active receiver pulling from the main queue or subscription, and [deferred messages](./message-deferral.md) will also not be purged and moved to the dead-letter queue after they expire. These behaviours are by design.
+Expired messages are only purged and moved to the DLQ when there is at least one active receiver pulling from the main queue or subscription. Deferred messages also won't be purged and moved to the dead-letter queue after they expire. This behavior is by design.
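For illustration, dead-lettering on expiration is a property set when the queue (or subscription) is created; this sketch uses the Azure.Messaging.ServiceBus administration client with an assumed name and TTL.

```csharp
using System;
using Azure.Messaging.ServiceBus.Administration;

var adminClient = new ServiceBusAdministrationClient("<connection-string>");   // placeholder

// Expired messages are moved to the DLQ with the TTLExpiredException reason code.
await adminClient.CreateQueueAsync(new CreateQueueOptions("<queue-name>")
{
    DefaultMessageTimeToLive = TimeSpan.FromHours(1),   // assumed TTL
    DeadLetteringOnMessageExpiration = true
});
```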
## Errors while processing subscription rules-
-When the [SubscriptionDescription.EnableDeadLetteringOnFilterEvaluationExceptions](/dotnet/api/microsoft.servicebus.messaging.subscriptiondescription) property is enabled for a subscription, any errors that occur while a subscription's SQL filter rule executes are captured in the DLQ along with the offending message. Don't use this option in a production environment in which not all message types have subscribers.
+If you enable dead-lettering on filter evaluation exceptions, any errors that occur while a subscription's SQL filter rule executes are captured in the DLQ along with the offending message. Don't use this option in a production environment in which not all message types have subscribers.
## Application-level dead-lettering- In addition to the system-provided dead-lettering features, applications can use the DLQ to explicitly reject unacceptable messages. They can include messages that can't be properly processed because of any sort of system issue, messages that hold malformed payloads, or messages that fail authentication when some message-level security scheme is used. ## Dead-lettering in ForwardTo or SendVia scenarios- Messages will be sent to the transfer dead-letter queue under the following conditions: - A message passes through more than four queues or topics that are [chained together](service-bus-auto-forwarding.md). - The destination queue or topic is disabled or deleted. - The destination queue or topic exceeds the maximum entity size.
-To retrieve these dead-lettered messages, you can create a receiver using the [FormatTransferDeadletterPath](/dotnet/api/microsoft.azure.servicebus.entitynamehelper.formattransferdeadletterpath) utility method.
-
-## Example
-
-The following code snippet creates a message receiver. In the receive loop for the main queue, the code retrieves the message with [Receive(TimeSpan.Zero)](/dotnet/api/microsoft.servicebus.messaging.messagereceiver), which asks the broker to instantly return any message readily available, or to return with no result. If the code receives a message, it immediately abandons it, which increments the `DeliveryCount`. Once the system moves the message to the DLQ, the main queue is empty and the loop exits, as [ReceiveAsync](/dotnet/api/microsoft.servicebus.messaging.messagereceiver) returns **null**.
-
-```csharp
-var receiver = await receiverFactory.CreateMessageReceiverAsync(queueName, ReceiveMode.PeekLock);
-while(true)
-{
- var msg = await receiver.ReceiveAsync(TimeSpan.Zero);
- if (msg != null)
- {
- Console.WriteLine("Picked up message; DeliveryCount {0}", msg.DeliveryCount);
- await msg.AbandonAsync();
- }
- else
- {
- break;
- }
-}
-```
- ## Path to the dead-letter queue You can access the dead-letter queue by using the following syntax:
You can access the dead-letter queue by using the following syntax:
<topic path>/Subscriptions/<subscription path>/$deadletterqueue ```
-If you're using the .NET SDK, you can get the path to the dead-letter queue by using the SubscriptionClient.FormatDeadLetterPath() method. This method takes the topic name/subscription name and suffixes with **/$DeadLetterQueue**.
- ## Next steps
-See the following articles for more information about Service Bus queues:
+For more information about Service Bus queues, see the following articles:
* [Get started with Service Bus queues](service-bus-dotnet-get-started-with-queues.md) * [Azure Queues and Service Bus queues compared](service-bus-azure-and-service-bus-queues-compared-contrasted.md)
service-fabric Service Fabric Health Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-health-introduction.md
The cluster health policy contains:
</FabricSettings> ```
-* [NodeTypeHealthPolicyMap](/dotnet/api/system.fabric.health.clusterhealthpolicy.nodetypehealthpolicymap). The node type health policy map can be used during cluster health evaluation to describe special node types. The node types are evaluated against the percentages associated with their node type name in the map. Setting this value has no effect on the global pool of nodes used for `MaxPercentUnhealthyNodes`. For example, a cluster has hundreds of nodes of different types and a few node types that host important work. No nodes in that type should be down. You can specify global `MaxPercentUnhealthyNodes` to 20% to tolerate some failures for all nodes, but for the node type `SpecialNodeType`, set the `MaxPercentUnhealthyNodes` to 0. This way, if some of the many nodes are unhealthy but below the global unhealthy percentage, the cluster would be evaluated as being in the Warning health state. A Warning health state doesn't affect cluster upgrade or other monitoring triggered by an Error health state. But even one node of type `SpecialNodeType` in an Error health state would make the cluster unhealthy and trigger rollback or pause the cluster upgrade, depending on the upgrade configuration. Conversely, setting the global `MaxPercentUnhealthyNodes` to 0 and setting the `SpecialNodeType` max percent unhealthy nodes to 100 with one node of type `SpecialNodeType` in an error state would still put the cluster in an error state because the global restriction is more strict in this case.
+* `NodeTypeHealthPolicyMap`. The node type health policy map can be used during cluster health evaluation to describe special node types. The node types are evaluated against the percentages associated with their node type name in the map. Setting this value has no effect on the global pool of nodes used for `MaxPercentUnhealthyNodes`. For example, a cluster has hundreds of nodes of different types and a few node types that host important work. No nodes in that type should be down. You can specify global `MaxPercentUnhealthyNodes` to 20% to tolerate some failures for all nodes, but for the node type `SpecialNodeType`, set the `MaxPercentUnhealthyNodes` to 0. This way, if some of the many nodes are unhealthy but below the global unhealthy percentage, the cluster would be evaluated as being in the Warning health state. A Warning health state doesn't affect cluster upgrade or other monitoring triggered by an Error health state. But even one node of type `SpecialNodeType` in an Error health state would make the cluster unhealthy and trigger rollback or pause the cluster upgrade, depending on the upgrade configuration. Conversely, setting the global `MaxPercentUnhealthyNodes` to 0 and setting the `SpecialNodeType` max percent unhealthy nodes to 100 with one node of type `SpecialNodeType` in an error state would still put the cluster in an error state because the global restriction is more strict in this case.
The following example is an excerpt from a cluster manifest. To define entries in the node type map, prefix the parameter name with "NodeTypeMaxPercentUnhealthyNodes-", followed by the node type name.
spring-cloud How To Access Data Plane Azure Ad Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/how-to-access-data-plane-azure-ad-rbac.md
After the Azure Spring Cloud Data Reader role is assigned, customers can access
``` 2. Compose the endpoint. We support default endpoints of the Spring Cloud Config Server and Spring Cloud Service Registry managed by Azure Spring Cloud. For more information, see [Production ready endpoints](https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#production-ready-endpoints). Customers can also get a full list of supported endpoints of the Spring Cloud Config Server and Spring Cloud Service Registry managed by Azure Spring Cloud by accessing endpoints:
- * *https://SERVICE_NAME.svc.azuremicroservices.io/eureka/actuator/*
- * *https://SERVICE_NAME.svc.azuremicroservices.io/config/actuator/*
+ * *'https://SERVICE_NAME.svc.azuremicroservices.io/eureka/actuator/'*
+ * *'https://SERVICE_NAME.svc.azuremicroservices.io/config/actuator/'*
>[!NOTE] > If you are using Azure China, please replace `*.azuremicroservices.io` with `*.microservices.azure.cn`, [learn more](https://docs.microsoft.com/azure/china/resources-developer-guide#check-endpoints-in-azure). 3. Access the composed endpoint with the access token. Put the access token in a header to provide authorization. Only the "GET" method is supported.
- For example, access an endpoint like *https://SERVICE_NAME.svc.azuremicroservices.io/eureka/actuator/health* to see the health status of eureka.
+ For example, access an endpoint like *'https://SERVICE_NAME.svc.azuremicroservices.io/eureka/actuator/health'* to see the health status of eureka.
If the response is *401 Unauthorized*, check to see if the role is successfully assigned; it will take several minutes for the role to take effect. Also verify that the access token has not expired.
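As a rough sketch of that request shape (any HTTP client works; the service name and token here are placeholders), the call is a plain GET with the access token sent as a bearer token:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;

// Placeholder token; only the GET method is supported on these endpoints.
var http = new HttpClient();
http.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", "<access-token>");

var response = await http.GetAsync(
    "https://SERVICE_NAME.svc.azuremicroservices.io/eureka/actuator/health");
Console.WriteLine(await response.Content.ReadAsStringAsync());
```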
storage Quickstart Blobs C Plus Plus https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/quickstart-blobs-c-plus-plus.md
Resources:
- [Azure storage account](../common/storage-account-create.md) - [C++ compiler](https://azure.github.io/azure-sdk/cpp_implementation.html#supported-platforms) - [CMake](https://cmake.org/)-- [Vcpkg - C and C++ package manager](https://github.com/microsoft/vcpkg/blob/master/docs/index.md)
+- [Vcpkg - C and C++ package manager](https://github.com/microsoft/vcpkg/blob/master/docs/README.md)
- [LibCurl](https://curl.haxx.se/libcurl/) - [LibXML2](http://www.xmlsoft.org/)
In this quickstart, you learned how to upload, download, and list blobs using C+
To see a C++ Blob Storage sample, continue to: > [!div class="nextstepaction"]
-> [Azure Blob Storage SDK v12 for C++ sample](https://github.com/Azure/azure-sdk-for-cpp/tree/master/sdk/storage/azure-storage-blobs/sample)
+> [Azure Blob Storage SDK v12 for C++ sample](https://github.com/Azure/azure-sdk-for-cpp/tree/master/sdk/storage/azure-storage-blobs/sample)
synapse-analytics Get Started Knowledge Center https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/get-started-knowledge-center.md
Last updated 04/04/2021
In this tutorial, you'll learn how to use the Synapse Studio Knowledge Center.
-## Getting to the Knowledge Center
+## Introduction to the Knowledge Center
There are two ways of finding the Knowledge Center in Synapse Studio:
There are two ways of finding the Knowledge Center in Synapse Studio:
Pick either method and open the **Knowledge Center**.
-## Overview
-
-The **Knowledge center** allows you to do three things:
+Once it is visible, you will see that the **Knowledge center** allows you to do three things:
* **Use samples immediately**. If you want a quick example of how Synapse works, choose this option. * **Browse gallery**. This option lets you link sample data sets and add sample code in the form of SQL scripts, notebooks, and pipelines. * **Tour Synapse Studio**. This option takes you on a brief tour of the basic parts of Synapse Studio. This is useful if you have never used Synapse Studio before.
-## Exploring blob storage with serverless SQL pool
+## Exploring: Use samples immediately
+
+There are three items in this section:
+* Explore sample data with Spark
+* Query data with SQL
+* Create external table with SQL
-1. Go to the **Knowledge center**, click **Use samples immediately**.
+1. In the **Knowledge center**, click **Use samples immediately**.
1. Select **Query data with SQL**. 1. Click **Use sample**. 1. A new sample SQL script will open.
synapse-analytics Get Started Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/get-started-pipelines.md
Last updated 12/31/2020
In this tutorial, you'll learn how to integrate pipelines and activities using Synapse Studio.
-## Overview
+## Create a pipeline and add a notebook activity
+
-You can integrate a wide variety of tasks in Azure Synapse.
1. In Synapse Studio, go to the **Integrate** hub. 1. Select **+** > **Pipeline** to create a new pipeline. Click on the new pipeline object to open the Pipeline designer. 1. Under **Activities**, expand the **Synapse** folder, and drag a **Notebook** object into the designer.
-1. Select the **Settings** tab of the Notebook activity properties. Use the drop-down list to select any notebook from your current Synapse workspace.
+1. Select the **Settings** tab of the Notebook activity properties. Use the drop-down list to select any notebook from your current Synapse workspace.
+
+## Schedule the pipeline to run every hour
+ 1. In the pipeline, select **Add trigger** > **New/edit**. 1. In **Choose trigger**, select **New**, and set the **Recurrence** to "every 1 hour". 1. Select **OK**.
synapse-analytics Overview Terminology https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/overview-terminology.md
A workspace can contain any number of **Linked service**, essentially connection
**Synapse SQL** is the ability to do T-SQL based analytics in Synapse workspace. Synapse SQL has two consumption models: dedicated and serverless. For the dedicated model, use **dedicated SQL pools**. A workspace can have any number of these pools. To use the serverless model, use the **serverless SQL pools**. Every workspace has one of these pools.
-Inside Synapse Studio, you can work with SQL pools by creating and running **SQL scripts** .
+Inside Synapse Studio, you can work with SQL pools by running **SQL scripts**.
## Apache Spark for Synapse
Pipelines are how Azure Synapse provides Data Integration - allowing you to move
* A **Pipeline** is a logical grouping of activities that perform a task together. * **Activities** define actions within a Pipeline to perform on data, such as copying data, running a Notebook, or running a SQL script.
-* **Data Flows** are a specific kind of activity that provide a no-code experience for doing data transformation that uses Synapse Spark under-the-covers.
+* **Data flows** are a specific kind of activity that provide a no-code experience for doing data transformation that uses Synapse Spark under-the-covers.
* **Trigger** - Executes a pipeline. It can be run manually or automatically (schedule, tumbling window or event-based) * **Integration dataset** - Named view of data that simply points or references the data to be used in an activity as input and output. It belongs to a Linked Service.
synapse-analytics Workspace Connected Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/workspace-connected-create.md
All SQL data warehouse users can now access and use an existing dedicated SQL po
## Prerequisites Before you enable the Synapse workspace features on your data warehouse, you must ensure that you have the following: - Rights to create and manage the SQL resources that are hosted on the SQL logical server.
+- Write permissions on the host SQL Server.
- Rights to create Azure Synapse resources. - An Azure Active Directory admin identified on the logical server
synapse-analytics Develop Openrowset https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/develop-openrowset.md
You can instruct serverless SQL pool to traverse folders by specifying /* at the
`https://sqlondemandstorage.blob.core.windows.net/csv/population/**` > [!NOTE]
-> Unlike Hadoop and PolyBase, serverless SQL pool doesn't return subfolders unless you specify /** at the end of path. Also, unlike Hadoop and PolyBase, serverless SQL pool does return files for which the file name begins with an underline (_) or a period (.).
+> Unlike Hadoop and PolyBase, serverless SQL pool doesn't return subfolders unless you specify /** at the end of path.
-In the example below, if the unstructured_data_path=`https://mystorageaccount.dfs.core.windows.net/webdata/`, a serverless SQL pool query will return rows from mydata.txt and _hidden.txt. It won't return mydata2.txt and mydata3.txt because they're located in a subfolder.
+In the example below, if the unstructured_data_path=`https://mystorageaccount.dfs.core.windows.net/webdata/`, a serverless SQL pool query will return rows from mydata.txt. It won't return mydata2.txt and mydata3.txt because they're located in a subfolder.
![Recursive data for external tables](./media/develop-openrowset/folder-traversal.png)
synapse-analytics Develop Tables External Tables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/develop-tables-external-tables.md
Specifies the folder or the file path and file name for the actual data in Azure
If you specify a folder LOCATION, a serverless SQL pool query will select from the external table and retrieve files from the folder. > [!NOTE]
-> Unlike Hadoop and PolyBase, serverless SQL pool doesn't return subfolders. It returns files for which the file name begins with an underline (_) or a period (.).
+> Unlike Hadoop and PolyBase, serverless SQL pool doesn't return subfolders unless you specify /** at the end of path.
-In this example, if LOCATION='/webdata/', a serverless SQL pool query, will return rows from mydata.txt and _hidden.txt. It won't return mydata2.txt and mydata3.txt because they're located in a subfolder.
+In this example, if LOCATION='/webdata/', a serverless SQL pool query, will return rows from mydata.txt. It won't return mydata2.txt and mydata3.txt because they're located in a subfolder.
![Recursive data for external tables](./media/develop-tables-external-tables/folder-traversal.png)
virtual-machines Nct4 V3 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/nct4-v3-series.md
The NCasT4_v3-series virtual machines are powered by [Nvidia Tesla T4](https://w
[VM Generation Support](generation-2.md): Generation 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported<br> [Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br>
-Nvidia NVLink Interconnect: Supported<br>
+Nvidia NVLink Interconnect: Not Supported<br>
<br> | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | GPU | GPU memory: GiB | Max data disks | Max NICs / Expected network bandwidth (Mbps) |
virtual-machines Troubleshooting Shared Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/troubleshooting-shared-images.md
If you have problems performing any operations on shared image galleries, image
**Cause**: The image definition you used to deploy the virtual machine does not contain any image versions that are included in latest. **Workaround**: Ensure that there is at least one image version that has 'Exclude from latest' set to False.
+**Message**: *The gallery image /subscriptions/<subscriptionID\>/resourceGroups/<resourceGroup\>/providers/Microsoft.Compute/galleries/<galleryName\>/images/<imageName\>/versions/<versionNumber\> is not available in <region\> region. Please contact image owner to replicate to this region, or change your requested region.*
+**Cause**: The version selected for deployment does not exist or does not have a replica in the indicated region.
+**Workaround**: Ensure that the name of the image resource is correct and that there is at least one replica in the indicated region.
+
+**Message**: *The gallery image /subscriptions/<subscriptionID\>/resourceGroups/<resourceGroup\>/providers/Microsoft.Compute/galleries/<galleryName\>/images/<imageName\> is not available in <region\> region. Please contact image owner to replicate to this region, or change your requested region.*
+**Cause**: The image definition selected for deployment does not have any image versions that are included in latest and also in the indicated region.
+**Workaround**: Ensure that there is at least one image version in the region that has 'Exclude from latest' set to False.
+ **Message**: *The client has permission to perform action 'Microsoft.Compute/galleries/images/versions/read' on scope <resourceID\>, however the current tenant <tenantID\> is not authorized to access linked subscription <subscriptionID\>.* **Cause**: The virtual machine or scale set was created through a SIG image in another tenant. You've tried to make a change to the virtual machine or scale set, but you don't have access to the subscription that owns the image. **Workaround**: Contact the owner of the subscription of the image version to grant read access to the image version.
If you have problems performing any operations on shared image galleries, image
**Cause**: The VM is created from a generalized image, and it's missing the admin username, password, or SSH keys. Because generalized images don't retain the admin username, password, or SSH keys, these fields must be specified during creation of a VM or scale set. **Workaround**: Specify the admin username, password, or SSH keys, or use a specialized image version.
-**Message**: *Cannot create Gallery Image Version from: <resourceID\> since the OS State in the parent gallery image ('Specialized') is not 'Generalized'.*
-**Cause**: The image version is created from a generalized source, but its parent definition is specialized.
-**Workaround**: Either create the image version by using a specialized source or use a parent definition that's generalized.
- **Message**: *Cannot update Virtual Machine Scale Set <vmssName\> as the current OS state of the VM Scale Set is Generalized which is different from the updated gallery image OS state which is Specialized.* **Cause**: The current source image for the scale set is a generalized source image, but it's being updated with a source image that is specialized. The current source image and the new source image for a scale set must be of the same state. **Workaround**: To update the scale set, use a generalized image version.
virtual-machines Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/get-started.md
ms.assetid: ad8e5c75-0cf6-4564-ae62-ea1246b4e5f2
vm-linux Previously updated : 04/09/2021 Last updated : 04/12/2021
In this section, you find documents about Microsoft Power BI integration into SA
## Change Log
+- 04/12/2021: Release of [SAP HANA scale-out HSR with Pacemaker on Azure VMs on SLES](./sap-hana-high-availability-scale-out-hsr-suse.md) configuration guide
- 04/07/2021: Clarified support for SQL Server multi-instance and multi-database support in [SQL Server Azure Virtual Machines DBMS deployment for SAP NetWeaver](./dbms_guide_sqlserver.md) - 04/07/2021: Added information related to secondary IP addresses in [Azure Virtual Machines planning and implementation for SAP NetWeaver](./planning-guide.md) - 04/07/2021: added support for Oracle DBMS support on ANF in [Azure Storage types for SAP workload](./planning-guide-storage.md)
virtual-machines Hana Network Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/hana-network-architecture.md
Data transferred between HANA Large Instance and VMs is not encrypted. However,
## Use HANA Large Instance units in multiple regions
-To realize disaster recovery set ups, you need to have SHANA Large Instance units in multiple Azure regions. Even with using Azure [Global Vnet Peering], the transitive routing by default is not working between HANA Large Instance tenants in two different regions. However, Global Reach opens up the communication path between the HANA Large Instance units you have provisioned in two different regions. This usage scenario of ExpressRoute Global Reach enables:
+To realize disaster recovery set ups, you need to have HANA Large Instance units in multiple Azure regions. Even with using Azure [Global Vnet Peering], the transitive routing by default is not working between HANA Large Instance tenants in two different regions. However, Global Reach opens up the communication path between the HANA Large Instance units you have provisioned in two different regions. This usage scenario of ExpressRoute Global Reach enables:
- HANA System Replication without any additional proxies or firewalls - Copying backups between HANA Large Instance units in two different regions to perform system copies or system refreshes
The figure shows how the different virtual networks in both regions are connecte
> If you used multiple ExpressRoute circuits, AS Path prepending and Local Preference BGP settings should be used to ensure proper routing of traffic. **Next steps**-- Refer [SAP HANA (Large Instances) storage architecture](hana-storage-architecture.md)
+- Refer to [SAP HANA (Large Instances) storage architecture](hana-storage-architecture.md)
virtual-machines High Availability Guide Rhel Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/high-availability-guide-rhel-netapp-files.md
vm-windows Previously updated : 01/11/2021 Last updated : 04/12/2021
First you need to create the Azure NetApp Files volumes. Deploy the VMs. Afterwa
1. Enter the name of the new load balancer rule (for example **lb.QAS.ASCS**) 1. Select the frontend IP address for ASCS, backend pool, and health probe you created earlier (for example **frontend.QAS.ASCS**, **backend.QAS** and **health.QAS.ASCS**) 1. Select **HA ports**
- 1. Increase idle timeout to 30 minutes
1. **Make sure to enable Floating IP** 1. Click OK * Repeat the steps above to create load balancing rules for ERS (for example **lb.QAS.ERS**)
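The portal steps summarized above can also be scripted. The following Azure CLI sketch creates an HA-ports load-balancing rule with floating IP enabled; the resource group, load balancer name, and the example rule names are assumptions for illustration only.

```azurecli
# Sketch only: HA-ports load-balancing rule with floating IP (names are placeholders/examples)
az network lb rule create \
  --resource-group <resourceGroup> \
  --lb-name <loadBalancerName> \
  --name lb.QAS.ASCS \
  --protocol All --frontend-port 0 --backend-port 0 \
  --frontend-ip-name frontend.QAS.ASCS \
  --backend-pool-name backend.QAS \
  --probe-name health.QAS.ASCS \
  --floating-ip true
```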
virtual-machines High Availability Guide Rhel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/high-availability-guide-rhel.md
vm-windows Previously updated : 01/11/2021 Last updated : 04/12/2021
You first need to create the virtual machines for this cluster. Afterwards, you
1. Enter the name of the new load balancer rule (for example **nw1-lb-ascs**) 1. Select the frontend IP address, backend pool, and health probe you created earlier (for example **nw1-ascs-frontend**, **nw1-backend** and **nw1-ascs-hp**) 1. Select **HA ports**
- 1. Increase idle timeout to 30 minutes
1. **Make sure to enable Floating IP** 1. Click OK * Repeat the steps above to create load balancing rules for ERS (for example **nw1-lb-ers**)
virtual-machines High Availability Guide Suse Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/high-availability-guide-suse-netapp-files.md
vm-windows Previously updated : 10/22/2020 Last updated : 04/12/2021
First you need to create the Azure NetApp Files volumes. Deploy the VMs. Afterwa
1. Enter the name of the new load balancer rule (for example **lb.QAS.ASCS**) 1. Select the frontend IP address for ASCS, backend pool, and health probe you created earlier (for example **frontend.QAS.ASCS**, **backend.QAS** and **health.QAS.ASCS**) 1. Select **HA ports**
- 1. Increase idle timeout to 30 minutes
1. **Make sure to enable Floating IP** 1. Click OK * Repeat the steps above to create load balancing rules for ERS (for example **lb.QAS.ERS**)
virtual-machines High Availability Guide Suse Nfs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/high-availability-guide-suse-nfs.md
vm-windows Previously updated : 10/16/2020 Last updated : 04/12/2021
You first need to create the virtual machines for this NFS cluster. Afterwards,
1. Enter the name of the new load balancer rule (for example **nw1-lb**) 1. Select the frontend IP address, backend pool, and health probe you created earlier (for example **nw1-frontend**, **nw-backend** and **nw1-hp**) 1. Select **HA Ports**.
- 1. Increase idle timeout to 30 minutes
1. **Make sure to enable Floating IP** 1. Click OK * Repeat the steps above to create load balancing rule for NW2
virtual-machines High Availability Guide Suse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/high-availability-guide-suse.md
vm-windows Previously updated : 10/22/2020 Last updated : 04/12/2021
You first need to create the virtual machines for this NFS cluster. Afterwards,
1. Enter the name of the new load balancer rule (for example **nw1-lb-ascs**) 1. Select the frontend IP address, backend pool, and health probe you created earlier (for example **nw1-ascs-frontend**, **nw1-backend** and **nw1-ascs-hp**) 1. Select **HA ports**
- 1. Increase idle timeout to 30 minutes
1. **Make sure to enable Floating IP** 1. Click OK * Repeat the steps above to create load balancing rules for ERS (for example **nw1-lb-ers**)
virtual-machines Sap Hana Backup File Level https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/sap-hana-backup-file-level.md
- Title: SAP HANA Azure Backup on file level | Microsoft Docs
-description: There are two major backup possibilities for SAP HANA on Azure virtual machines, this article covers SAP HANA Azure Backup on file level
-----
-u vm-linux
- Previously updated : 03/01/2020----
-# SAP HANA Azure Backup on file level
-
-## Introduction
-
-This article is a related article to [Backup guide for SAP HANA on Azure Virtual Machines](./sap-hana-backup-guide.md), which provides an overview and information on getting started and more details on Azure Backup service and storage snapshots.
-
-Different VM types in Azure allow a different number of attached VHDs. The exact details are documented in [Sizes for Linux virtual machines in Azure](../../sizes.md). For the tests referred to in this documentation, we used a GS5 Azure VM, which allows 64 attached data disks. For larger SAP HANA systems, a significant number of disks might already be taken for data and log files, possibly in combination with software striping for optimal disk IO throughput. For more details on suggested disk configurations for SAP HANA deployments on Azure VMs, read the article [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md). The recommendations made also include disk space for local backups.
-
-The standard way to manage backup/restore at the file level is with a file-based backup via SAP HANA Studio or via SAP HANA SQL statements. For more information, read the article [SAP HANA SQL and System Views Reference](https://help.sap.com/viewer/4fe29514fd584807ac9f2a04f6754767/2.0.05/en-US/3859e48180bb4cf8a207e15cf25a7e57.html).
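As a minimal sketch of the SQL-statement route, a file-level data backup can be triggered with hdbsql; the instance number, credentials, and target path below are placeholders, not values from this article.

```bash
# Sketch only: trigger a file-level HANA data backup via hdbsql (placeholders throughout)
hdbsql -i 03 -u SYSTEM -p '<password>' \
  "BACKUP DATA USING FILE ('/hana/backup/COMPLETE_DATA_BACKUP')"
```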
-
-![This figure shows the dialog of the backup menu item in SAP HANA Studio](media/sap-hana-backup-file-level/backup-menue-dialog.png)
-
-This figure shows the dialog of the backup menu item in SAP HANA Studio. When choosing type "file," one has to specify a path in the file system where SAP HANA writes the backup files. Restore works the same way.
-
-While this choice sounds simple and straightforward, there are some considerations. An Azure VM limits the number of data disks that can be attached. There might not be capacity to store SAP HANA backup files on the file systems of the VM, depending on the size of the database and disk throughput requirements, which might involve software striping across multiple data disks. Various options for moving these backup files, and managing file size restrictions and performance when handling terabytes of data, are provided later in this article.
-
-Another option, which offers more freedom regarding total capacity, is Azure blob storage. While a single blob is also restricted to 1 TB, the total capacity of a single blob container is currently 500 TB. Additionally, it gives customers the choice to select so-called "cool" blob storage, which has a cost benefit. See [Azure Blob storage: hot, cool, and archive access tiers](../../../storage/blobs/storage-blob-storage-tiers.md?tabs=azure-portal) for details about cool blob storage.
-
-For additional safety, use a geo-replicated storage account to store the SAP HANA backups. See [Azure Storage redundancy](../../../storage/common/storage-redundancy.md) for details about storage redundancy and storage replication.
-
-One could place dedicated VHDs for SAP HANA backups in a dedicated backup storage account that is geo-replicated. Or else one could copy the VHDs that keep the SAP HANA backups to a geo-replicated storage account, or to a storage account that is in a different region.
-
-## Azure blobxfer utility details
-
-To store directories and files on Azure storage, one could use CLI or PowerShell, or develop a tool using one of the [Azure SDKs](https://azure.microsoft.com/downloads/). There is also a ready-to-use utility, AzCopy, for copying data to Azure storage. (see [Transfer data with the AzCopy Command-Line Utility](../../../storage/common/storage-use-azcopy-v10.md)).
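As a minimal sketch of the AzCopy route (the storage account, container, and SAS token are placeholders), a local directory of backup files can be uploaded to a blob container like this:

```bash
# Sketch only: upload a local backup directory to a blob container with AzCopy v10 (placeholders throughout)
azcopy copy "/hana/backup" \
  "https://<storageaccount>.blob.core.windows.net/<container>?<SAS-token>" \
  --recursive
```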
-
-For the tests in this article, blobxfer was used for copying SAP HANA backup files. It is open source, used by many customers in production environments, and available on [GitHub](https://github.com/Azure/blobxfer). This tool allows you to copy data directly to either Azure blob storage or an Azure file share. It also offers a range of useful features, like MD5 hashing, or automatic parallelism when copying a directory with multiple files.
-
-## SAP HANA backup performance
-In this chapter, performance considerations are discussed. The numbers achieved may not represent the most recent state, because Azure storage throughput is steadily improving. As a result, you should perform individual tests for your configuration and Azure region.
-
-![This screenshot is of the SAP HANA backup console in SAP HANA Studio](media/sap-hana-backup-file-level/backup-console-hana-studio.png)
-
-This screenshot shows the SAP HANA backup console of SAP HANA Studio. It took about 42 minutes to perform a backup of 230 GB on a single Azure Standard HDD storage disk attached to the HANA VM using the XFS file system on the one disk.
-
-![This screenshot is of YaST on the SAP HANA test VM](media/sap-hana-backup-file-level/one-backup-disk.png)
-
-This screenshot is of YaST on the SAP HANA test VM. You can see the single 1-TB disk for SAP HANA backup. It took about 42 minutes to back up 230 GB. In addition, five 200-GB disks were attached and a software RAID md0 was created, with striping on top of these five Azure data disks.
-
-![Repeating the same backup on software RAID with striping across five attached Azure standard storage data disks](media/sap-hana-backup-file-level/five-backup-disks.png)
-
-Repeating the same backup on software RAID with striping across five attached Azure standard storage data disks brought the backup time from 42 minutes down to 10 minutes. The disks were attached without caching to the VM. This exercise demonstrates the importance of disk write throughput for achieving good backup times. You could switch to Azure Standard SSD storage or Azure Premium Storage to further accelerate the process for optimal performance. In general, Azure standard HDD storage is not recommended and was used for demonstration purposes only. The recommendation is to use at least Azure Standard SSD storage or Azure Premium Storage for production systems.
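For illustration, a striped backup target like the one described above could be built roughly as follows. This is a sketch only; the device paths and mount point are placeholders, and the actual disk LUNs depend on your VM.

```bash
# Sketch only: stripe five attached data disks into one backup volume (placeholders throughout)
sudo mdadm --create /dev/md0 --level=0 --raid-devices=5 \
  /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
sudo mkfs.xfs /dev/md0
sudo mkdir -p /hana/backup
sudo mount /dev/md0 /hana/backup
```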
-
-## Copy SAP HANA backup files to Azure blob storage
-The performance numbers, backup duration numbers, and copy duration numbers mentioned might not represent the most recent state of Azure technology. Microsoft is steadily improving Azure storage to deliver more throughput and lower latencies. Therefore, the numbers are for demonstration purposes only. You need to test for your individual needs in the Azure region of your choice to be able to judge which method is best for you.
-
-Another option to quickly store SAP HANA backup files is Azure blob storage. A single blob container has a limit of around 500 TB, enough for SAP HANA systems, using M32ts, M32ls, M64ls, and GS5 VM types of Azure, to keep sufficient SAP HANA backups. Customers have the choice between "hot" and "cool" blob storage (see [Azure Blob storage: hot, cool, and archive access tiers](../../../storage/blobs/storage-blob-storage-tiers.md?tabs=azure-portal)).
-
-With the blobxfer tool, it is easy to copy the SAP HANA backup files directly to Azure blob storage.
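As a hedged sketch of such a copy, assuming the blobxfer 1.x command-line syntax (verify the exact options against the blobxfer documentation; the account, key, container, and paths are placeholders):

```bash
# Sketch only: upload HANA backup files to a blob container with blobxfer (placeholders throughout)
blobxfer upload \
  --storage-account <storageaccount> \
  --storage-account-key "<storage-account-key>" \
  --remote-path <container>/hana-backups \
  --local-path /hana/backup
```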
-
-![Here one can see the files of a full SAP HANA file backup](media/sap-hana-backup-file-level/list-of-backups.png)
-
-You can see the files of a full SAP HANA file backup. Of the four files, the biggest one is roughly 230 GB in size.
-
-![It took roughly 3000 seconds to copy the 230 GB to an Azure standard storage account blob container](media/sap-hana-backup-file-level/copy-duration-blobxfer.png)
-
-Without using an MD5 hash in the initial test, it took roughly 3,000 seconds to copy the 230 GB to an Azure standard storage account blob container.
-
-The HANA Studio backup console allows one to restrict the max file size of HANA backup files. In the sample environment, it improved performance by having multiple smaller backup files, instead of one large 230-GB file.
-
-Setting the backup file size limit on the HANA side doesn't improve the backup time, because the files are written sequentially. The file size limit was set to 60 GB, so the backup created four large data files instead of the single 230-GB file. Using multiple backup files can become a necessity for backing up HANA databases if your backup targets have limitations on file or blob sizes.
-
-![To test parallelism of the blobxfer tool, the max file size for HANA backups was then set to 15 GB](media/sap-hana-backup-file-level/parallel-copy-multiple-backup-files.png)
-
-To test parallelism of the blobxfer tool, the max file size for HANA backups was then set to 15 GB, which resulted in 19 backup files. This configuration brought the time for blobxfer to copy the 230 GB to Azure blob storage from 3000 seconds down to 875 seconds.
-
-As you are exploring copying backups performed against local disks to other locations, like Azure blob storage, keep in mind that the bandwidth used by a parallel copy process counts against the network or storage quota of your individual VM type. As a result, you need to balance the duration of the copy against the network and storage bandwidth that the normal workload running in the VM requires.
--
-## Copy SAP HANA backup files to NFS share
-
-Microsoft Azure offers native NFS shares through [Azure NetApp Files](https://azure.microsoft.com/services/netapp/). You can create volumes of dozens of TBs in capacity to store and manage backups. You can also snapshot those volumes based on NetApp's technology. Azure NetApp Files (ANF) is offered in three different service levels that give different storage throughput. For more details, read the article [Service levels for Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-service-levels.md). You can create and mount an NFS volume from ANF as described in the article [Quickstart: Set up Azure NetApp Files and create an NFS volume](../../../azure-netapp-files/azure-netapp-files-quickstart-set-up-account-create-volumes.md?tabs=azure-portal).
-
-Besides using native NFS volumes of Azure through ANF, there are various possibilities for creating your own deployments that provide NFS shares on Azure. All of them have the disadvantage that you need to deploy and manage those solutions yourself. Some of those possibilities are documented in these articles:
--- [High availability for NFS on Azure VMs on SUSE Linux Enterprise Server](./high-availability-guide-suse-nfs.md)-- [GlusterFS on Azure VMs on Red Hat Enterprise Linux for SAP NetWeaver](./high-availability-guide-rhel-glusterfs.md)-
-NFS shares created as described above can be used either as direct targets for HANA backups, or as a destination for copying backups that were performed against local disks, as in the sketch below.
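A minimal sketch of the copy approach, assuming a hypothetical NFS export; the server address, export path, and directories are placeholders.

```bash
# Sketch only: mount an NFSv4.1 share and copy existing file-level backups to it (placeholders throughout)
sudo mkdir -p /mnt/hanabackup
sudo mount -t nfs -o vers=4.1,hard,timeo=600 <nfs-server-ip>:/<backup-volume> /mnt/hanabackup
cp -a /hana/backup/* /mnt/hanabackup/
```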
-
-> [!NOTE]
-> SAP HANA supports NFS v3 and NFS v4.x. Other formats, like SMB with the CIFS file system, are not supported as targets for writing HANA backups. See also [SAP support note #1820529](https://launchpad.support.sap.com/#/notes/1820529)
-
-## Copy SAP HANA backup files to Azure Files
-
-It is possible to mount an Azure Files share inside an Azure Linux VM. The article [How to use Azure File storage with Linux](../../../storage/files/storage-how-to-use-files-linux.md) provides details on how to perform the configuration. For limitations of Azure Files or Azure Premium Files, read the article [Azure Files scalability and performance targets](../../../storage/files/storage-files-scale-targets.md).
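A minimal sketch of mounting an Azure Files share over SMB (the storage account, share name, and key are placeholders); as the note below explains, such a share can only serve as the final copy destination, not as a direct backup target.

```bash
# Sketch only: mount an Azure Files SMB share and copy backups to it (placeholders throughout)
sudo mkdir -p /mnt/backupshare
sudo mount -t cifs //<storageaccount>.file.core.windows.net/<share> /mnt/backupshare \
  -o vers=3.0,username=<storageaccount>,password='<storage-account-key>',dir_mode=0777,file_mode=0777,serverino
cp -a /hana/backup/* /mnt/backupshare/
```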
-
-> [!NOTE]
-> SMB with the CIFS file system is not supported by SAP HANA as a target for writing HANA backups. See also [SAP support note #1820529](https://launchpad.support.sap.com/#/notes/1820529). As a result, you can only use this solution as the final destination of a HANA database backup that was performed directly against locally attached disks
->
-
-In a test conducted against Azure Files (not Azure Premium Files), it took around 929 seconds to copy 19 backup files with an overall volume of 230 GB. We expect the time using Azure Premium Files to be significantly better. However, you need to keep in mind that you need to balance the interests of a fast copy with the requirements your workload has on network bandwidth. Since every Azure VM type enforces a network bandwidth quota, you need to stay within the range of that quota with your workload plus the copy of the backup files.
-
-![This figure shows that it took about 929 seconds to copy 19 SAP HANA backup files](media/sap-hana-backup-file-level/parallel-copy-to-azure-files.png)
--
-Storing SAP HANA backup files on Azure Files could be an interesting option, especially with the improved latency and throughput of Azure Premium Files.
-
-## Next steps
-* [Backup guide for SAP HANA on Azure Virtual Machines](sap-hana-backup-guide.md) gives an overview and information on getting started.
-* To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure (large instances), see [SAP HANA (large instances) high availability and disaster recovery on Azure](hana-overview-high-availability-disaster-recovery.md).
virtual-machines Sap Hana Backup Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/sap-hana-backup-guide.md
- Title: Backup guide for SAP HANA on Azure Virtual Machines | Microsoft Docs
-description: Backup guide for SAP HANA provides two major backup possibilities for SAP HANA on Azure virtual machines
-----
-u vm-linux
- Previously updated : 03/01/2020----
-# Backup guide for SAP HANA on Azure Virtual Machines
-
-## Getting Started
-
-The backup guide for SAP HANA running on Azure Virtual Machines only describes Azure-specific topics. For general SAP HANA backup-related items, check the SAP HANA documentation. We expect you to be familiar with the principles of database backup strategies, the reasons and motivations for having a sound and valid backup strategy, and the requirements your company has for the backup procedure, backup retention period, and restore procedure.
-
-SAP HANA is officially supported on various Azure VM types, like Azure M-Series. For a complete list of SAP HANA certified Azure VMs and HANA Large Instance units, check out [Find Certified IaaS Platforms](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/iaas.html#categories=Microsoft%20Azure). Microsoft Azure offers a number of units where SAP HANA runs non-virtualized on physical servers. This service is called [HANA Large Instances](hana-overview-architecture.md). This guide doesn't cover backup processes and tools for HANA Large Instances; it is limited to Azure virtual machines. For details about backup/restore processes with HANA Large Instances, read the article [HLI Backup and Restore](./hana-backup-restore.md).
-
-The focus of this article is on three backup possibilities for SAP HANA on Azure virtual machines:
--- HANA backup through [Azure Backup Services](../../../backup/backup-overview.md) -- HANA backup to the file system in an Azure Linux Virtual Machine (see [SAP HANA Azure Backup on file level](sap-hana-backup-file-level.md))-- HANA backup based on storage snapshots using the Azure storage blob snapshot feature manually or Azure Backup service--
-SAP HANA offers a backup API, which allows third-party backup tools to integrate directly with SAP HANA. Products like Azure Backup service, or [Commvault](https://azure.microsoft.com/resources/protecting-sap-hana-in-azure/) are using this proprietary interface to trigger SAP HANA database or redo log backups.
--
-For information about which SAP software is supported on Azure, see the article [What SAP software is supported for Azure deployments](./sap-supported-product-on-azure.md).
-
-## Azure Backup Service
-
-In the first scenario shown, Azure Backup service either uses the SAP HANA `backint` interface to perform a streaming backup from an SAP HANA database, or you use a more generic capability of Azure Backup service to create an application-consistent disk snapshot and have it transferred to the Azure Backup service.
-
-Azure Backup integrates and is certified as backup solution for SAP HANA using the proprietary SAP HANA interface called [backint](https://www.sap.com/dmc/exp/2013_09_adpd/enEN/#/d/solutions?id=8f3fd455-a2d7-4086-aa28-51d8870acaa5). For more details of the solution, its capabilities and the Azure regions where it is available, read the article [Support matrix for backup of SAP HANA databases on Azure VMs](../../../backup/sap-hana-backup-support-matrix.md#scenario-support). For details and principles about Azure Backup service for HANA, read the article [About SAP HANA database backup in Azure VMs](../../../backup/sap-hana-db-about.md).
-
-The second possibility to leverage Azure Backup service is to create an application-consistent backup using disk snapshots of Azure Premium Storage. Other HANA-certified Azure storage types, like [Azure Ultra disk](../../disks-enable-ultra-ssd.md) and [Azure NetApp Files](https://azure.microsoft.com/services/netapp/), don't support this kind of snapshot through Azure Backup service. Reading these articles:
--- [Plan your VM backup infrastructure in Azure](../../../backup/backup-azure-vms-introduction.md)-- [Application-consistent backup of Azure Linux VMs](../../../backup/backup-azure-linux-app-consistent.md) -
-this sequence of activity emerges:
--- Azure Backup needs to execute a pre-snapshot script that puts the application, in this case SAP HANA, in a consistent state-- As this consistent state is confirmed, Azure Backup will execute the disk snapshots-- After finishing the snapshots, Azure Backup will undo the activity it did in the pre-snapshot script-- After successful execution, Azure Backup will stream the data into the Backup vault-
-In case of SAP HANA, most customers are using Azure Write Accelerator for the volumes that contain the SAP HANA redo log. Azure Backup service will automatically exclude these volumes from the snapshots. This exclusion does not harm HANA's ability to restore, though it would block the ability to restore with nearly all other SAP-supported DBMS.
-
-The downside of this possibility is that you need to develop your own pre- and post-snapshot scripts. The pre-snapshot script needs to create a HANA snapshot and handle possible exception cases, whereas the post-snapshot script needs to delete the HANA snapshot again. For more details on the logic required, start with [SAP support note #2039883](https://launchpad.support.sap.com/#/notes/2039883). The considerations of the section 'SAP HANA data consistency when taking storage snapshots' in this article fully apply to this kind of backup.
-
-> [!NOTE]
-> Disk snapshot based backups for SAP HANA in deployments where multiple database containers are used, require a minimum release of HANA 2.0 SP04
->
-
-See details about storage snapshots later in this document.
-
-![This figure shows two possibilities for saving the current VM state](media/sap-hana-backup-guide/azure-backup-service-for-hana.png)
-
-## Other HANA backup methods
-There are three other backup methods or paths that can be considered:
--- Backing up against an NFS share that is based on Azure NetApp Files (ANF). ANF again has the ability to create snapshots of those volumes you store backups on. Given the throughput that you eventually require to write the backups, this solution could become an expensive method, though it is easy to establish since HANA can write the backups directly into the Azure native NFS share-- Executing the HANA backup against VM-attached disks of Standard SSD or Azure Premium Storage. As a next step, you can copy those backup files to Azure Blob storage. This strategy might be attractive price-wise-- Executing the HANA backup against VM-attached disks of Standard SSD or Azure Premium Storage. As a next step, the disk gets snapshotted on a regular basis. After the first snapshot, incremental snapshots can be used to reduce costs-
-![This figure shows options for taking an SAP HANA file backup inside the VM](media/sap-hana-backup-guide/other-hana-backup-paths.png)
-
-This figure shows options for taking an SAP HANA file backup inside the VM, and then storing the HANA backup files somewhere else using different tools. However, all solutions not involving a third-party backup service or Azure Backup service have several hurdles in common, such as retention administration, an automatic restore process, and automatic point-in-time recovery, which Azure Backup service or other specialized third-party backup suites and services provide. Many of those third-party services can run on Azure.
--
-## SAP resources for HANA backup
-
-### SAP HANA backup documentation
--- [Introduction to SAP HANA Administration](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.00/en-US)-- [Planning Your Backup and Recovery Strategy](https://help.sap.com/saphelp_hanaplatform/helpdata/en/ef/085cd5949c40b788bba8fd3c65743e/content.htm)-- [Schedule HANA Backup using ABAP DBACOCKPIT](https://www.hanatutorials.com/p/schedule-hana-backup-using-abap.html)-- [Schedule Data Backups (SAP HANA Cockpit)](https://help.sap.com/saphelp_hanaplatform/helpdata/en/6d/385fa14ef64a6bab2c97a3d3e40292/frameset.htm)-- FAQ about SAP HANA backup in [SAP Note 1642148](https://launchpad.support.sap.com/#/notes/1642148)-- FAQ about SAP HANA database and storage snapshots in [SAP Note 2039883](https://launchpad.support.sap.com/#/notes/2039883)-- Unsuitable network file systems for backup and recovery in [SAP Note 1820529](https://launchpad.support.sap.com/#/notes/1820529)-
-### How to verify correctness of SAP HANA backup
-Independent of your backup method, running a test restore against a different system is an absolute necessity. This approach provides a way to ensure that a backup is correct, and that internal processes for backup and restore work as expected. While restoring backups could be a hurdle on-premises due to its infrastructure requirements, it is much easier to accomplish in the cloud by temporarily providing the necessary resources for this purpose. HANA does provide tools that can check whether backup files can be restored. However, the purpose of frequent restore exercises is to test the process of a database restore and train the operations staff on that process.
-
-Keep in mind that doing a simple restore and checking if HANA is up and running is not sufficient. You should run a table consistency check to be sure that the restored database is fine. SAP HANA offers several kinds of consistency checks described in [SAP Note 1977584](https://launchpad.support.sap.com/#/notes/1977584).
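A minimal sketch of such a check with hdbsql; the instance number, database name, and credentials are placeholders, and the check should run against the restored test system.

```bash
# Sketch only: run a table consistency check on the restored database (placeholders throughout)
hdbsql -i 03 -d <database> -u SYSTEM -p '<password>' \
  "CALL CHECK_TABLE_CONSISTENCY('CHECK', NULL, NULL)"
```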
-
-Information about the table consistency check can also be found on the SAP website at [Table and Catalog Consistency Checks](https://help.sap.com/saphelp_hanaplatform/helpdata/en/25/84ec2e324d44529edc8221956359ea/content.htm#loio9357bf52c7324bee9567dca417ad9f8b).
-
-### Pros and cons of HANA backup versus storage snapshot
-
-SAP doesn't give preference to HANA backup over storage snapshots. It lists their pros and cons, so one can determine which to use depending on the situation and available storage technology (see [Planning Your Backup and Recovery Strategy](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.05/en-US/ef085cd5949c40b788bba8fd3c65743e.html)).
-
-On Azure, be aware of the fact that the Azure blob snapshot feature doesn't provide file system consistency across multiple disks (see [Using blob snapshots with PowerShell](/archive/blogs/cie/using-blob-snapshots-with-powershell)).
-
-In addition, one has to understand the billing implications when working frequently with blob snapshots as described in this article: [Understanding How Snapshots Accrue Charges](/rest/api/storageservices/understanding-how-snapshots-accrue-charges); it isn't as obvious as using Azure virtual disks.
-
-### SAP HANA data consistency when taking storage snapshots
-
-As documented earlier when describing the snapshot backup capabilities of Azure Backup, file system and application consistency is mandatory when taking storage snapshots. The easiest way to avoid problems would be to shut down SAP HANA, or maybe even the whole virtual machine, which is not feasible for a production instance.
-
-> [!NOTE]
-> Disk snapshot based backups for SAP HANA in deployments where multiple database containers are used, require a minimum release of HANA 2.0 SP04
->
-
-Azure storage does not provide file system consistency across multiple disks or volumes that are attached to a VM during the snapshot process. That means the application consistency during the snapshot needs to be delivered by the application, in this case SAP HANA itself. [SAP Note 2039883](https://launchpad.support.sap.com/#/notes/2039883) has important information about SAP HANA backups by storage snapshots. For example, with XFS file systems, it is necessary to run **xfs\_freeze** before starting a storage snapshot to provide application consistency (see [xfs\_freeze(8) - Linux man page](https://linux.die.net/man/8/xfs_freeze) for details on **xfs\_freeze**).
-
-Assuming there is an XFS file system spanning four Azure virtual disks, the following steps provide a consistent snapshot that represents the HANA data area:
-
-1. Create HANA data snapshot prepare
-1. Freeze the file systems of all disks/volumes (for example, use **xfs\_freeze**)
-1. Create all necessary blob snapshots on Azure
-1. Unfreeze the file system
-1. Confirm the HANA data snapshot (will delete the snapshot)
-
-When using Azure Backup's capability to perform application-consistent snapshot backups, step #1 needs to be coded/scripted by you in the pre-snapshot script. Azure Backup service will execute steps #2 and #3. Steps #4 and #5 again need to be provided by your code in the post-snapshot script. If you are not using Azure Backup service, you also need to code/script steps #2 and #3 on your own.
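A hedged sketch of how steps #1, #2, #4, and #5 could look for an XFS-based data area. The SID, instance number, credentials, mount point, and backup ID are placeholders, and the exact snapshot SQL depends on your HANA release (the multi-container form shown requires HANA 2.0 SPS04 or later).

```bash
# Sketch only: pre-/post-snapshot steps around the Azure storage snapshots (placeholders throughout)
# Step 1: prepare the HANA data snapshot
hdbsql -i 03 -d SYSTEMDB -u SYSTEM -p '<password>' \
  "BACKUP DATA FOR FULL SYSTEM CREATE SNAPSHOT COMMENT 'azure-storage-snapshot'"
# Step 2: freeze the XFS file system(s) of the HANA data area
sudo xfs_freeze -f /hana/data/<SID>
# Step 3: Azure Backup (or your own script) creates the storage snapshots here
# Step 4: unfreeze the file system(s)
sudo xfs_freeze -u /hana/data/<SID>
# Step 5: confirm the HANA data snapshot; look up <backup_id> in M_BACKUP_CATALOG
hdbsql -i 03 -d SYSTEMDB -u SYSTEM -p '<password>' \
  "BACKUP DATA FOR FULL SYSTEM CLOSE SNAPSHOT BACKUP_ID <backup_id> SUCCESSFUL 'azure-storage-snapshot'"
```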
-More information on creating HANA data snapshots can be found in these articles:
--- [HANA data snapshots](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.04/en-US/ac114d4b34d542b99bc390b34f8ef375.html)-- More details to perform step #1 can be found in the article [Create a Data Snapshot (Native SQL)](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.04/en-US/9fd1c8bb3b60455caa93b7491ae6d830.html) -- Details to confirm/delete HANA data snapshots as needed in step #5 can be found in the article [Create a Data Snapshot (Native SQL)](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.04/en-US/9fd1c8bb3b60455caa93b7491ae6d830.html) -
-It is important to confirm the HANA snapshot. Due to the "Copy-on-Write," SAP HANA might not require additional disk space while in this snapshot-prepare mode. It's also not possible to start new backups until the SAP HANA snapshot is confirmed.
--
-### SAP HANA backup scheduling strategy
-
-The SAP HANA article [Planning Your Backup and Recovery Strategy](https://help.sap.com/saphelp_hanaplatform/helpdata/en/ef/085cd5949c40b788bba8fd3c65743e/content.htm) states a basic plan to do backups. Rely on SAP documentation around HANA and your experiences with other DBMS in defining the backup/restore strategy and process for SAP HANA. The sequence of different types of backups, and the retention period are highly dependent on the SLAs you need to provide.
--
-### SAP HANA backup encryption
-
-SAP HANA offers encryption of data and log. If SAP HANA data and log are not encrypted, then the backups are not encrypted by default. However, SAP HANA offers a separate backup encryption as documented in [SAP HANA Backup Encryption](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.03/en-US/5f837a57ce5e468d9db21c8683bc84da.html). If you are running older releases of SAP HANA, you might need to check whether backup encryption was part of the functionality provided already.
--
-## Next steps
-* [SAP HANA Azure Backup on file level](sap-hana-backup-file-level.md) describes the file-based backup option.
-* To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure (large instances), see [SAP HANA (large instances) high availability and disaster recovery on Azure](hana-overview-high-availability-disaster-recovery.md).
virtual-machines Sap Hana High Availability Scale Out Hsr Rhel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/sap-hana-high-availability-scale-out-hsr-rhel.md
vm-windows Previously updated : 02/01/2021 Last updated : 04/12/2021
For the configuration presented in this document, deploy seven virtual machines:
1. Enter the name of the new load balancer rule (for example, **hana-lb**). 1. Select the front-end IP address, the back-end pool, and the health probe that you created earlier (for example, **hana-frontend**, **hana-backend** and **hana-hp**). 1. Select **HA Ports**.
- 1. Increase the **idle timeout** to 30 minutes.
1. Make sure to **enable Floating IP**. 1. Select **OK**.
virtual-machines Sap Hana High Availability Scale Out Hsr Suse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/sap-hana-high-availability-scale-out-hsr-suse.md
+
+ Title: SAP HANA scale-out with HSR and Pacemaker on SLES | Microsoft Docs
+description: SAP HANA scale-out with HSR and Pacemaker on SLES.
+
+documentationcenter: saponazure
++
+editor: ''
+tags: azure-resource-manager
+keywords: ''
+ms.assetid: 5e514964-c907-4324-b659-16dd825f6f87
++
+ vm-windows
+ Last updated : 04/12/2021++++
+# High availability for SAP HANA scale-out system with HSR on SUSE Linux Enterprise Server
+
+[dbms-guide]:dbms-guide.md
+[deployment-guide]:deployment-guide.md
+[planning-guide]:planning-guide.md
+
+[anf-azure-doc]:https://docs.microsoft.com/azure/azure-netapp-files/
+[anf-avail-matrix]:https://azure.microsoft.com/global-infrastructure/services/?products=netapp&regions=all
+[anf-register]:https://docs.microsoft.com/azure/azure-netapp-files/azure-netapp-files-register
+[anf-sap-applications-azure]:https://www.netapp.com/us/media/tr-4746.pdf
+
+[2205917]:https://launchpad.support.sap.com/#/notes/2205917
+[1944799]:https://launchpad.support.sap.com/#/notes/1944799
+[1928533]:https://launchpad.support.sap.com/#/notes/1928533
+[2015553]:https://launchpad.support.sap.com/#/notes/2015553
+[2178632]:https://launchpad.support.sap.com/#/notes/2178632
+[2191498]:https://launchpad.support.sap.com/#/notes/2191498
+[2243692]:https://launchpad.support.sap.com/#/notes/2243692
+[1984787]:https://launchpad.support.sap.com/#/notes/1984787
+[1999351]:https://launchpad.support.sap.com/#/notes/1999351
+[1410736]:https://launchpad.support.sap.com/#/notes/1410736
+[1900823]:https://launchpad.support.sap.com/#/notes/1900823
+
+[sap-swcenter]:https://support.sap.com/en/my-support/software-downloads.html
+
+[suse-ha-guide]:https://www.suse.com/products/sles-for-sap/resource-library/sap-best-practices/
+[suse-drbd-guide]:https://www.suse.com/documentation/sle-ha-12/singlehtml/book_sleha_techguides/book_sleha_techguides.html
+[suse-ha-12sp3-relnotes]:https://www.suse.com/releasenotes/x86_64/SLE-HA/12-SP3/
+
+[sap-hana-ha]:sap-hana-high-availability.md
+[nfs-ha]:high-availability-guide-suse-nfs.md
++
+This article describes how to deploy a highly available SAP HANA system in a scale-out configuration with HANA system replication (HSR) and Pacemaker on Azure SUSE Linux Enterprise Server virtual machines (VMs). The shared file systems in the presented architecture are provided by [Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-introduction.md) and are mounted over NFS.
+
+In the example configurations, installation commands, and so on, the HANA instance is **03** and the HANA system ID is **HN1**. The examples are based on HANA 2.0 SP4 and SUSE Linux Enterprise Server 12 SP5.
+
+Before you begin, refer to the following SAP notes and papers:
+
+* [Azure NetApp Files documentation][anf-azure-doc]
+* SAP Note [1928533] includes:
+ * A list of Azure VM sizes that are supported for the deployment of SAP software
+ * Important capacity information for Azure VM sizes
+ * Supported SAP software, and operating system (OS) and database combinations
+ * The required SAP kernel version for Windows and Linux on Microsoft Azure
+* SAP Note [2015553]: Lists prerequisites for SAP-supported SAP software deployments in Azure
+* SAP Note [2205917]: Contains recommended OS settings for SUSE Linux Enterprise Server for SAP Applications
+* SAP Note [1944799]: Contains SAP Guidelines for SUSE Linux Enterprise Server for SAP Applications
+* SAP Note [2178632]: Contains detailed information about all monitoring metrics reported for SAP in Azure
+* SAP Note [2191498]: Contains the required SAP Host Agent version for Linux in Azure
+* SAP Note [2243692]: Contains information about SAP licensing on Linux in Azure
+* SAP Note [1984787]: Contains general information about SUSE Linux Enterprise Server 12
+* SAP Note [1999351]: Contains additional troubleshooting information for the Azure Enhanced Monitoring Extension for SAP
+* SAP Note [1900823]: Contains information about SAP HANA storage requirements
+* [SAP Community Wiki](https://wiki.scn.sap.com/wiki/display/HOME/SAPonLinuxNotes): Contains all required SAP notes for Linux
+* [Azure Virtual Machines planning and implementation for SAP on Linux][planning-guide]
+* [Azure Virtual Machines deployment for SAP on Linux][deployment-guide]
+* [Azure Virtual Machines DBMS deployment for SAP on Linux][dbms-guide]
+* [SUSE SAP HA Best Practice Guides][suse-ha-guide]: Contains all required information to set up NetWeaver High Availability and SAP HANA System Replication on-premises (to be used as a general baseline; they provide much more detailed information)
+* [SUSE High Availability Extension 12 SP5 Release Notes](https://www.suse.com/releasenotes/x86_64/SLE-HA/12-SP5/)
+* [Handling failed NFS share in SUSE HA cluster for HANA system replication](https://www.suse.com/support/kb/doc/?id=000019904)
+* [NetApp SAP Applications on Microsoft Azure using Azure NetApp Files][anf-sap-applications-azure]
+* [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md)
+
+## Overview
+
+One method to achieve HANA high availability for HANA scale-out installations, is to configure HANA system replication and protect the solution with Pacemaker cluster to allow automatic failover. When an active node fails, the cluster fails over the HANA resources to the other site.
+The presented configuration shows three HANA nodes on each site, plus a majority maker node to prevent a split-brain scenario. The instructions can be adapted to include more VMs as HANA DB nodes.
+
+The HANA shared file system `/hana/shared` in each HANA system replication site is deployed on Azure NetApp Files volumes. It is mounted via NFSv4.1 on each HANA node in the same HANA system replication site. File systems `/hana/data` and `/hana/log` are local file systems and are not shared between the HANA DB nodes. SAP HANA will be installed in non-shared mode.
+
+> [!TIP]
+> For recommended SAP HANA storage configurations, see [SAP HANA Azure VMs storage configurations](./hana-vm-operations-storage.md).
+
+[![SAP HANA scale-out with HSR and Pacemaker cluster on SLES](./media/sap-hana-high-availability/sap-hana-high-availability-scale-out-hsr-suse.png)](./media/sap-hana-high-availability/sap-hana-high-availability-scale-out-hsr-suse-detail.png#lightbox)
+
+In the preceding diagram, three subnets are represented within one Azure virtual network, following the SAP HANA network recommendations:
+* for client communication - `client` 10.23.0.0/24
+* for internal HANA inter-node communication - `inter` 10.23.1.128/26
+* for HANA system replication - `hsr` 10.23.1.192/26
+
+As `/hana/data` and `/hana/log` are deployed on local disks, it is not necessary to deploy separate subnet and separate virtual network cards for communication to the storage.
+
+The Azure NetApp volumes are deployed in a separate subnet, [delegated to Azure NetApp Files](https://docs.microsoft.com/azure/azure-netapp-files/azure-netapp-files-delegate-subnet): `anf` 10.23.1.0/26.
+
+> [!IMPORTANT]
+> System replication to a 3rd site is not supported. For details see section "Important prerequisites" in [SLES-SAP HANA System Replication Scale-out Performance Optimized scenario](https://documentation.suse.com/sbp/all/html/SLES4SAP-hana-scaleOut-PerfOpt-12/https://docsupdatetracker.net/index.html#_important_prerequisites).
+
+## Set up the infrastructure
+
+In the instructions that follow, we assume that you've already created the resource group, the Azure virtual network with three Azure network subnets: `client`, `inter` and `hsr`.
+
+### Deploy Linux virtual machines via the Azure portal
+1. Deploy the Azure VMs.
+For the configuration presented in this document, deploy seven virtual machines:
+ - three virtual machines to serve as HANA DB nodes for HANA replication site 1: **hana-s1-db1**, **hana-s1-db2** and **hana-s1-db3**
+ - three virtual machines to serve as HANA DB nodes for HANA replication site 2: **hana-s2-db1**, **hana-s2-db2** and **hana-s2-db3**
+ - a small virtual machine to serve as *majority maker*: **hana-s-mm**
+
+ The VMs deployed as SAP HANA DB nodes should be certified by SAP for HANA, as published in the [SAP HANA Hardware directory](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/iaas.html#categories=Microsoft%20Azure). When deploying the HANA DB nodes, make sure that [Accelerated Networking](../../../virtual-network/create-vm-accelerated-networking-cli.md) is selected.
+
+ For the majority maker node, you can deploy a small VM, as this VM doesn't run any of the SAP HANA resources. The majority maker VM is used in the cluster configuration to achieve an odd number of cluster nodes in a split-brain scenario. The majority maker VM only needs one virtual network interface in the `client` subnet in this example.
+
+ Deploy local managed disks for the `/hana/data` and `/hana/log` file systems on each HANA DB virtual machine.
+
+ Deploy the primary network interface for each VM in the `client` virtual network subnet.
+ When the VM is deployed via Azure portal, the network interface name is automatically generated. In these instructions for simplicity we'll refer to the automatically generated, primary network interfaces, which are attached to the `client` Azure virtual network subnet as **hana-s1-db1-client**, **hana-s1-db2-client**, **hana-s1-db3-client**, and so on.
++
+ > [!IMPORTANT]
+ > Make sure that the OS you select is SAP-certified for SAP HANA on the specific VM types you're using. For a list of SAP HANA certified VM types and OS releases for those types, go to the [SAP HANA certified IaaS platforms](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/iaas.html#categories=Microsoft%20Azure) site. Click into the details of the listed VM type to get the complete list of SAP HANA-supported OS releases for that type.
+
+
+2. Create six network interfaces, one for each HANA DB virtual machine, in the `inter` virtual network subnet (in this example, **hana-s1-db1-inter**, **hana-s1-db2-inter**, **hana-s1-db3-inter**, **hana-s2-db1-inter**, **hana-s2-db2-inter**, and **hana-s2-db3-inter**).
+
+3. Create six network interfaces, one for each HANA DB virtual machine, in the `hsr` virtual network subnet (in this example, **hana-s1-db1-hsr**, **hana-s1-db2-hsr**, **hana-s1-db3-hsr**, **hana-s2-db1-hsr**, **hana-s2-db2-hsr**, and **hana-s2-db3-hsr**).
+
+4. Attach the newly created virtual network interfaces to the corresponding virtual machines:
+
+ a. Go to the virtual machine in the [Azure portal](https://portal.azure.com/#home).
+
+ b. In the left pane, select **Virtual Machines**. Filter on the virtual machine name (for example, **hana-s1-db1**), and then select the virtual machine.
+
+ c. In the **Overview** pane, select **Stop** to deallocate the virtual machine.
+
+ d. Select **Networking**, and then attach the network interface. In the **Attach network interface** drop-down list, select the already created network interfaces for the `inter` and `hsr` subnets.
+
+ e. Select **Save**.
+
+ f. Repeat steps b through e for the remaining virtual machines (in our example, **hana-s1-db2**, **hana-s1-db3**, **hana-s2-db1**, **hana-s2-db2** and **hana-s2-db3**).
+
+ g. Leave the virtual machines in stopped state for now. Next, we'll enable [accelerated networking](../../../virtual-network/create-vm-accelerated-networking-cli.md) for all newly attached network interfaces.
+
+5. Enable accelerated networking for the additional network interfaces for the `inter` and `hsr` subnets by doing the following steps:
+
+ a. Open [Azure Cloud Shell](https://azure.microsoft.com/features/cloud-shell/) in the [Azure portal](https://portal.azure.com/#home).
+
+ b. Execute the following commands to enable accelerated networking for the additional network interfaces, which are attached to the `inter` and `hsr` subnets.
+
+ ```azurecli
+ az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s1-db1-inter --accelerated-networking true
+ az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s1-db2-inter --accelerated-networking true
+ az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s1-db3-inter --accelerated-networking true
+ az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s2-db1-inter --accelerated-networking true
+ az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s2-db2-inter --accelerated-networking true
+ az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s2-db3-inter --accelerated-networking true
+
+ az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s1-db1-hsr --accelerated-networking true
+ az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s1-db2-hsr --accelerated-networking true
+ az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s1-db3-hsr --accelerated-networking true
+ az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s2-db1-hsr --accelerated-networking true
+ az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s2-db2-hsr --accelerated-networking true
+ az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s2-db3-hsr --accelerated-networking true
+ ```
+
+7. Start the HANA DB virtual machines
+
+### Deploy Azure Load Balancer
+
+1. We recommend using a standard load balancer. Follow these configuration steps to deploy it:
+ 1. First, create a front-end IP pool:
+
+ 1. Open the load balancer, select **frontend IP pool**, and select **Add**.
+ 1. Enter the name of the new front-end IP pool (for example, **hana-frontend**).
+ 1. Set the **Assignment** to **Static** and enter the IP address (for example, **10.23.0.27**).
+ 1. Select **OK**.
+ 1. After the new front-end IP pool is created, note the pool IP address.
+
+ 1. Next, create a back-end pool and add all cluster VMs to the backend pool:
+
+ 1. Open the load balancer, select **backend pools**, and select **Add**.
+ 1. Enter the name of the new back-end pool (for example, **hana-backend**).
+ 1. Select **Add a virtual machine**.
+ 1. Select **Virtual machine**.
+ 1. Select the virtual machines of the SAP HANA cluster and their IP addresses for the `client` subnet.
+ 1. Select **Add**.
+
+ 1. Next, create a health probe:
+
+ 1. Open the load balancer, select **health probes**, and select **Add**.
+ 1. Enter the name of the new health probe (for example, **hana-hp**).
+ 1. Select **TCP** as the protocol and port 625**03**. Keep the **Interval** value set to 5, and the **Unhealthy threshold** value set to 2.
+ 1. Select **OK**.
+
+ 1. Next, create the load-balancing rules:
+
+ 1. Open the load balancer, select **load balancing rules**, and select **Add**.
+ 1. Enter the name of the new load balancer rule (for example, **hana-lb**).
+ 1. Select the front-end IP address, the back-end pool, and the health probe that you created earlier (for example, **hana-frontend**, **hana-backend** and **hana-hp**).
+ 1. Select **HA Ports**.
+ 1. Make sure to **enable Floating IP**.
+ 1. Select **OK**.
+
+ > [!IMPORTANT]
+ > Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see [Azure Load balancer Limitations](../../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need additional IP address for the VM, deploy a second NIC.
+
+ > [!Note]
+ > When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow routing to public end points. For details on how to achieve outbound connectivity see [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md).
++
+ > [!IMPORTANT]
+ > Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the health probes to fail. Set parameter **net.ipv4.tcp_timestamps** to **0**. For details see [Load Balancer health probes](../../../load-balancer/load-balancer-custom-probe-overview.md).
+ > See also SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421).
+
+### Deploy the Azure NetApp Files infrastructure
+
+Deploy the Azure NetApp Files volumes for the `/hana/shared` file system, one volume for each HANA system replication site, as described in the guidance on setting up the Azure NetApp Files infrastructure.
+
+In this example, the following Azure NetApp Files volumes were used:
+
+* volume **HN1**-shared-s1 (nfs://10.23.1.7/**HN1**-shared-s1)
+* volume **HN1**-shared-s2 (nfs://10.23.1.7/**HN1**-shared-s2)
++
+## Operating system configuration and preparation
+
+The instructions in the next sections are prefixed with one of the following abbreviations:
+* **[A]**: Applicable to all nodes
+* **[AH]**: Applicable to all HANA DB nodes
+* **[M]**: Applicable to the majority maker node
+* **[AH1]**: Applicable to all HANA DB nodes on SITE 1
+* **[AH2]**: Applicable to all HANA DB nodes on SITE 2
+* **[1]**: Applicable only to HANA DB node 1, SITE 1
+* **[2]**: Applicable only to HANA DB node 1, SITE 2
+
+Configure and prepare your OS by doing the following steps:
+
+1. **[A]** Maintain the host files on the virtual machines. Include entries for all subnets. The following entries were added to `/etc/hosts` for this example.
+
+ ```bash
+ # Client subnet
+ 10.23.0.19 hana-s1-db1
+ 10.23.0.20 hana-s1-db2
+ 10.23.0.21 hana-s1-db3
+ 10.23.0.22 hana-s2-db1
+ 10.23.0.23 hana-s2-db2
+ 10.23.0.24 hana-s2-db3
+ 10.23.0.25 hana-s-mm
+ # Internode subnet
+ 10.23.1.132 hana-s1-db1-inter
+ 10.23.1.133 hana-s1-db2-inter
+ 10.23.1.134 hana-s1-db3-inter
+ 10.23.1.135 hana-s2-db1-inter
+ 10.23.1.136 hana-s2-db2-inter
+ 10.23.1.137 hana-s2-db3-inter
+ # HSR subnet
+ 10.23.1.196 hana-s1-db1-hsr
+ 10.23.1.197 hana-s1-db2-hsr
+ 10.23.1.198 hana-s1-db3-hsr
+ 10.23.1.199 hana-s2-db1-hsr
+ 10.23.1.200 hana-s2-db2-hsr
+ 10.23.1.201 hana-s2-db3-hsr
+ ```
+
+2. **[A]** SUSE delivers special resource agents for SAP HANA, and by default the agents for SAP HANA ScaleUp are installed. Uninstall the ScaleUp packages, if installed, and install the packages for the SAP HANA ScaleOut scenario. This step needs to be performed on all cluster VMs, including the majority maker.
+
+ ```bash
+ # Uninstall ScaleUp packages and patterns
+ zypper remove patterns-sap-hana
+ zypper remove SAPHanaSR
+ zypper remove SAPHanaSR-doc
+ zypper remove yast2-sap-ha
+ # Install the ScaleOut packages and patterns
+ zypper in SAPHanaSR-ScaleOut SAPHanaSR-ScaleOut-doc
+ zypper in -t pattern ha_sles
+ ```
+
+3. **[AH]** Prepare the VMs - apply the recommended settings per SAP note [2205917] for SUSE Linux Enterprise Server for SAP Applications.
+
+## Prepare the file systems
+### Mount the shared file systems
+
+In this example, the shared HANA file systems are deployed on Azure NetApp Files and mounted over NFSv4.1.
+
+1. **[AH]** Create mount points for the HANA database volumes.
+
+ ```bash
+ mkdir -p /hana/shared
+ ```
+
+2. **[AH]** Verify the NFS domain setting. Make sure that the domain is configured as the default Azure NetApp Files domain, that is, **`defaultv4iddomain.com`** and the mapping is set to **nobody**.
+ This step is only needed if you're using Azure NetApp Files NFSv4.1.
+
+ > [!IMPORTANT]
+ > Make sure to set the NFS domain in `/etc/idmapd.conf` on the VM to match the default domain configuration on Azure NetApp Files: **`defaultv4iddomain.com`**. If there's a mismatch between the domain configuration on the NFS client (i.e. the VM) and the NFS server, i.e. the Azure NetApp configuration, then the permissions for files on Azure NetApp volumes that are mounted on the VMs will be displayed as `nobody`.
+
+ ```bash
+ sudo cat /etc/idmapd.conf
+ # Example
+ [General]
+ Domain = defaultv4iddomain.com
+ [Mapping]
+ Nobody-User = nobody
+ Nobody-Group = nobody
+ ```
+
+3. **[AH]** Verify `nfs4_disable_idmapping`. It should be set to **Y**. To create the directory structure where `nfs4_disable_idmapping` is located, execute the mount command. You won't be able to manually create the directory under /sys/modules, because access is reserved for the kernel / drivers.
+ This step is only needed if you're using Azure NetApp Files NFSv4.1.
+
+ ```bash
+ # Check nfs4_disable_idmapping
+ cat /sys/module/nfs/parameters/nfs4_disable_idmapping
+ # If you need to set nfs4_disable_idmapping to Y
+ mkdir /mnt/tmp
+ mount 10.23.1.7:/HN1-shared-s1 /mnt/tmp
+ umount /mnt/tmp
+ echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping
+ # Make the configuration permanent
+ echo "options nfs nfs4_disable_idmapping=Y" >> /etc/modprobe.d/nfs.conf
+ ```
+
+4. **[AH1]** Mount the shared Azure NetApp Files volumes on the SITE1 HANA DB VMs.
+
+ ```bash
+ sudo vi /etc/fstab
+ # Add the following entries
+ 10.23.1.7:/HN1-shared-s1 /hana/shared nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
+ # Mount all volumes
+ sudo mount -a
+ ```
+
+5. **[AH2]** Mount the shared Azure NetApp Files volumes on the SITE2 HANA DB VMs.
+
+ ```bash
+ sudo vi /etc/fstab
+ # Add the following entries
+ 10.23.1.7:/HN1-shared-s2 /hana/shared nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
+ # Mount the volume
+ sudo mount -a
+ ```
++
+10. **[AH]** Verify that the corresponding `/hana/shared/` file systems are mounted on all HANA DB VMs with NFS protocol version **NFSv4**.
+
+ ```bash
+ sudo nfsstat -m
+ # Verify that flag vers is set to 4.1
+ # Example from SITE 1, hana-s1-db1
+ /hana/shared from 10.23.1.7:/HN1-shared-s1
+ Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.11,local_lock=none,addr=10.23.1.7
+ # Example from SITE 2, hana-s2-db1
+ /hana/shared from 10.23.1.7:/HN1-shared-s2
+ Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.14,local_lock=none,addr=10.23.1.7
+ ```
+
+### Prepare the data and log local file systems
+In the presented configuration, file systems `/hana/data` and `/hana/log` are deployed on managed disk and are locally attached to each HANA DB VM.
+You will need to execute the steps to create the local data and log volumes on each HANA DB virtual machine.
+
+Set up the disk layout with **Logical Volume Manager (LVM)**. The following example assumes that each HANA virtual machine has three data disks attached, that are used to create two volumes.
+
+1. **[AH]** List all of the available disks:
+ ```bash
+ ls /dev/disk/azure/scsi1/lun*
+ ```
+
+ Example output:
+
+ ```bash
+ /dev/disk/azure/scsi1/lun0 /dev/disk/azure/scsi1/lun1 /dev/disk/azure/scsi1/lun2
+ ```
+
+2. **[AH]** Create physical volumes for all of the disks that you want to use:
+ ```bash
+ sudo pvcreate /dev/disk/azure/scsi1/lun0
+ sudo pvcreate /dev/disk/azure/scsi1/lun1
+ sudo pvcreate /dev/disk/azure/scsi1/lun2
+ ```
+
+3. **[AH]** Create a volume group for the data files and a separate volume group for the log files:
+ ```bash
+ sudo vgcreate vg_hana_data_HN1 /dev/disk/azure/scsi1/lun0 /dev/disk/azure/scsi1/lun1
+ sudo vgcreate vg_hana_log_HN1 /dev/disk/azure/scsi1/lun2
+ ```
+
+4. **[AH]** Create the logical volumes.
+ A linear volume is created when you use `lvcreate` without the `-i` switch. We suggest that you create a striped volume for better I/O performance, and align the stripe sizes to the values documented in [SAP HANA VM storage configurations](./hana-vm-operations-storage.md). The `-i` argument should be the number of the underlying physical volumes and the `-I` argument is the stripe size. In this document, two physical volumes are used for the data volume, so the `-i` switch argument is set to **2**. The stripe size for the data volume is **256 KiB**. One physical volume is used for the log volume, so no `-i` or `-I` switches are explicitly used for the log volume commands.
+
+ > [!IMPORTANT]
+   > Use the `-i` switch and set it to the number of underlying physical volumes when you use more than one physical volume for a data or log volume. Use the `-I` switch to specify the stripe size when creating a striped volume.
+ > See [SAP HANA VM storage configurations](./hana-vm-operations-storage.md) for recommended storage configurations, including stripe sizes and number of disks.
+
+ ```bash
+ sudo lvcreate -i 2 -I 256 -l 100%FREE -n hana_data vg_hana_data_HN1
+ sudo lvcreate -l 100%FREE -n hana_log vg_hana_log_HN1
+ sudo mkfs.xfs /dev/vg_hana_data_HN1/hana_data
+ sudo mkfs.xfs /dev/vg_hana_log_HN1/hana_log
+ ```
+
+5. **[AH]** Create the mount directories and copy the UUID of all of the logical volumes:
+ ```bash
+ sudo mkdir -p /hana/data/HN1
+ sudo mkdir -p /hana/log/HN1
+ # Write down the ID of /dev/vg_hana_data_HN1/hana_data and /dev/vg_hana_log_HN1/hana_log
+ sudo blkid
+ ```
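+
+   If you prefer to capture only the UUID values, `blkid` can print them directly. For example, a small sketch using the logical volumes created above:
+
+   ```bash
+   # Print only the UUIDs of the two logical volumes
+   sudo blkid -s UUID -o value /dev/vg_hana_data_HN1/hana_data
+   sudo blkid -s UUID -o value /dev/vg_hana_log_HN1/hana_log
+   ```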
+
+6. **[AH]** Create `fstab` entries for the logical volumes and mount:
+ ```bash
+ sudo vi /etc/fstab
+ ```
+
+   Insert the following lines in the `/etc/fstab` file:
+
+ ```bash
+ /dev/disk/by-uuid/UUID of /dev/mapper/vg_hana_data_HN1-hana_data /hana/data/HN1 xfs defaults,nofail 0 2
+ /dev/disk/by-uuid/UUID of /dev/mapper/vg_hana_log_HN1-hana_log /hana/log/HN1 xfs defaults,nofail 0 2
+ ```
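+
+   For illustration only, with hypothetical UUID values, the resulting entries would look similar to the following (replace the UUIDs with the values reported by `blkid`):
+
+   ```bash
+   # Example only - hypothetical UUIDs
+   /dev/disk/by-uuid/1b48a683-90f4-4f46-8a5f-1d2a1e1c0102 /hana/data/HN1 xfs defaults,nofail 0 2
+   /dev/disk/by-uuid/2c59b794-a1f5-5057-9b60-2e3b2f2d0203 /hana/log/HN1 xfs defaults,nofail 0 2
+   ```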
+
+ Mount the new volumes:
+
+ ```bash
+ sudo mount -a
+ ```
+
+## Create a Pacemaker cluster
+
+Follow the steps in [Setting up Pacemaker on SUSE Linux Enterprise Server in Azure](high-availability-guide-suse-pacemaker.md) to create a basic Pacemaker cluster for this HANA server.
+Include all virtual machines, including the majority maker, in the cluster.
+
+> [!IMPORTANT]
+> Don't set `quorum expected-votes` to 2, as this is not a two-node cluster.
+> Make sure that cluster property `concurrent-fencing` is enabled, so that node fencing is deserialized.
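+
+A minimal sketch for checking and, if needed, enabling `concurrent-fencing` with the `crm` shell (a suggested check, not part of the referenced setup guide):
+
+```bash
+# Check whether concurrent fencing is already enabled
+sudo crm configure show | grep concurrent-fencing
+# Enable concurrent fencing, if it isn't set yet
+sudo crm configure property concurrent-fencing=true
+```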
+
+## Installation
+
+In this example for deploying SAP HANA in scale-out configuration with HSR on Azure VMs, we've used HANA 2.0 SP4.
+
+### Prepare for HANA installation
+
+1. **[AH]** Before the HANA installation, set the root password. You can disable the root password after the installation has been completed. As `root`, execute the command `passwd`.
+
+2. **[1,2]** Change the permissions on `/hana/shared`
+ ```bash
+ chmod 775 /hana/shared
+ ```
+
+3. **[1]** Verify that you can log in via SSH to the HANA DB VMs in this site **hana-s1-db2** and **hana-s1-db3**, without being prompted for a password. If that is not the case, exchange ssh keys as described in [Enable SSH Access via Public Key](https://documentation.suse.com/sbp/all/html/SLES4SAP-hana-scaleOut-PerfOpt-12/https://docsupdatetracker.net/index.html#_enable_ssh_access_via_public_key_optional).
+ ```bash
+ ssh root@hana-s1-db2
+ ssh root@hana-s1-db3
+ ```
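+
+   If password-less login isn't working yet, a minimal sketch for exchanging the keys (assuming password-based root SSH login is temporarily allowed; adjust to your security policies):
+
+   ```bash
+   # Generate a key pair on hana-s1-db1, if one doesn't exist yet, and distribute it to the other nodes
+   ssh-keygen -t rsa -b 4096
+   ssh-copy-id root@hana-s1-db2
+   ssh-copy-id root@hana-s1-db3
+   ```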
+
+4. **[2]** Verify that you can log in via SSH to the HANA DB VMs in this site **hana-s2-db2** and **hana-s2-db3**, without being prompted for a password.
+ If that is not the case, exchange ssh keys.
+ ```bash
+ ssh root@hana-s2-db2
+ ssh root@hana-s2-db3
+ ```
+
+5. **[AH]** Install additional packages, which are required for HANA 2.0 SP4. For more information, see SAP Note [2593824](https://launchpad.support.sap.com/#/notes/2593824) for your SLES version.
+
+ ```bash
+ # In this example, using SLES12 SP5
+ sudo zypper install libgcc_s1 libstdc++6 libatomic1
+ ```
+### HANA installation on the first node on each site
+
+1. **[1]** Install SAP HANA by following the instructions in the [SAP HANA 2.0 Installation and Update guide](https://help.sap.com/viewer/2c1988d620e04368aa4103bf26f17727/2.0.04/en-US/7eb0167eb35e4e2885415205b8383584.html). In the instructions that follow, we show the SAP HANA installation on the first node on SITE 1.
+
+   a. Start the **hdblcm** program as `root` from the HANA installation software directory. Use the `internal_network` parameter and pass the address space of the subnet that's used for the internal HANA inter-node communication.
+
+ ```bash
+ ./hdblcm --internal_network=10.23.1.128/26
+ ```
+
+ b. At the prompt, enter the following values:
+
+ * For **Choose an action**: enter **1** (for install)
+ * For **Additional components for installation**: enter **2, 3**
+ * For installation path: press Enter (defaults to /hana/shared)
+ * For **Local Host Name**: press Enter to accept the default
+ * For **Do you want to add hosts to the system?**: enter **n**
+ * For **SAP HANA System ID**: enter **HN1**
+ * For **Instance number** [00]: enter **03**
+ * For **Local Host Worker Group** [default]: press Enter to accept the default
+ * For **Select System Usage / Enter index [4]**: enter **4** (for custom)
+ * For **Location of Data Volumes** [/hana/data/HN1]: press Enter to accept the default
+ * For **Location of Log Volumes** [/hana/log/HN1]: press Enter to accept the default
+ * For **Restrict maximum memory allocation?** [n]: enter **n**
+ * For **Certificate Host Name For Host hana-s1-db1** [hana-s1-db1]: press Enter to accept the default
+ * For **SAP Host Agent User (sapadm) Password**: enter the password
+ * For **Confirm SAP Host Agent User (sapadm) Password**: enter the password
+ * For **System Administrator (hn1adm) Password**: enter the password
+ * For **System Administrator Home Directory** [/usr/sap/HN1/home]: press Enter to accept the default
+ * For **System Administrator Login Shell** [/bin/sh]: press Enter to accept the default
+ * For **System Administrator User ID** [1001]: press Enter to accept the default
+ * For **Enter ID of User Group (sapsys)** [79]: press Enter to accept the default
+ * For **System Database User (system) Password**: enter the system's password
+   * For **Confirm System Database User (system) Password**: enter the system's password
+ * For **Restart system after machine reboot?** [n]: enter **n**
+ * For **Do you want to continue (y/n)**: validate the summary and if everything looks good, enter **y**
+
+2. **[2]** Repeat the preceding step to install SAP HANA on the first node on SITE 2.
+
+3. **[1,2]** Verify global.ini
+
+ Display global.ini, and ensure that the configuration for the internal SAP HANA inter-node communication is in place. Verify the **communication** section. It should have the address space for the `inter` subnet, and `listeninterface` should be set to `.internal`. Verify the **internal_hostname_resolution** section. It should have the IP addresses for the HANA virtual machines that belong to the `inter` subnet.
+
+ ```bash
+ sudo cat /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini
+ # Example from SITE1
+ [communication]
+ internal_network = 10.23.1.128/26
+ listeninterface = .internal
+ [internal_hostname_resolution]
+ 10.23.1.132 = hana-s1-db1
+ 10.23.1.133 = hana-s1-db2
+ 10.23.1.134 = hana-s1-db3
+ ```
+
+4. **[1,2]** Prepare `global.ini` for installation in a non-shared environment, as described in SAP Note [2080991](https://launchpad.support.sap.com/#/notes/0002080991).
+
+ ```bash
+ sudo vi /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini
+ [persistence]
+ basepath_shared = no
+ ```
+
+5. **[1,2]** Restart SAP HANA to activate the changes.
+
+ ```bash
+ sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StopSystem
+ sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StartSystem
+ ```
+
+6. **[1,2]** Verify that the client interface uses the IP addresses from the `client` subnet for communication.
+
+ ```bash
+ # Execute as hn1adm
+ /usr/sap/HN1/HDB03/exe/hdbsql -u SYSTEM -p "password" -i 03 -d SYSTEMDB 'select * from SYS.M_HOST_INFORMATION'|grep net_publicname
+ # Expected result - example from SITE 2
+ "hana-s2-db1","net_publicname","10.23.0.22"
+ ```
+
+ For information about how to verify the configuration, see SAP Note [2183363 - Configuration of SAP HANA internal network](https://launchpad.support.sap.com/#/notes/2183363).
+
+7. **[AH]** Change permissions on the data and log directories to avoid HANA installation errors.
+
+ ```bash
+ sudo chmod o+w -R /hana/data /hana/log
+ ```
+
+8. **[1]** Install the secondary HANA nodes. The example instructions in this step are for SITE 1.
+
+   a. Start the resident **hdblcm** program as `root`.
+ ```bash
+ cd /hana/shared/HN1/hdblcm
+ ./hdblcm
+ ```
+
+ b. At the prompt, enter the following values:
+
+ * For **Choose an action**: enter **2** (for add hosts)
+ * For **Enter comma separated host names to add**: hana-s1-db2, hana-s1-db3
+ * For **Additional components for installation**: enter **2, 3**
+ * For **Enter Root User Name [root]**: press Enter to accept the default
+ * For **Select roles for host 'hana-s1-db2' [1]**: 1 (for worker)
+ * For **Enter Host Failover Group for host 'hana-s1-db2' [default]**: press Enter to accept the default
+ * For **Enter Storage Partition Number for host 'hana-s1-db2' [<<assign automatically>>]**: press Enter to accept the default
+ * For **Enter Worker Group for host 'hana-s1-db2' [default]**: press Enter to accept the default
+ * For **Select roles for host 'hana-s1-db3' [1]**: 1 (for worker)
+ * For **Enter Host Failover Group for host 'hana-s1-db3' [default]**: press Enter to accept the default
+ * For **Enter Storage Partition Number for host 'hana-s1-db3' [<<assign automatically>>]**: press Enter to accept the default
+ * For **Enter Worker Group for host 'hana-s1-db3' [default]**: press Enter to accept the default
+ * For **System Administrator (hn1adm) Password**: enter the password
+ * For **Enter SAP Host Agent User (sapadm) Password**: enter the password
+ * For **Confirm SAP Host Agent User (sapadm) Password**: enter the password
+ * For **Certificate Host Name For Host hana-s1-db2** [hana-s1-db2]: press Enter to accept the default
+ * For **Certificate Host Name For Host hana-s1-db3** [hana-s1-db3]: press Enter to accept the default
+ * For **Do you want to continue (y/n)**: validate the summary and if everything looks good, enter **y**
+
+9. **[2]** Repeat the preceding step to install the secondary SAP HANA nodes on SITE 2.
+
+## Configure SAP HANA 2.0 System Replication
+
+1. **[1]** Configure System Replication on SITE 1:
+
+ Back up the databases as **hn1**adm:
+
+ ```bash
+ hdbsql -d SYSTEMDB -u SYSTEM -p "passwd" -i 03 "BACKUP DATA USING FILE ('initialbackupSYS')"
+ hdbsql -d HN1 -u SYSTEM -p "passwd" -i 03 "BACKUP DATA USING FILE ('initialbackupHN1')"
+ ```
+
+ Copy the system PKI files to the secondary site:
+
+ ```bash
+ scp /usr/sap/HN1/SYS/global/security/rsecssfs/data/SSFS_HN1.DAT hana-s2-db1:/usr/sap/HN1/SYS/global/security/rsecssfs/data/
+ scp /usr/sap/HN1/SYS/global/security/rsecssfs/key/SSFS_HN1.KEY hana-s2-db1:/usr/sap/HN1/SYS/global/security/rsecssfs/key/
+ ```
+
+ Create the primary site:
+
+ ```bash
+ hdbnsutil -sr_enable --name=HANA_S1
+ ```
+
+2. **[2]** Configure System Replication on SITE 2:
+
+ Register the second site to start the system replication. Run the following command as <hanasid\>adm:
+
+ ```bash
+ sapcontrol -nr 03 -function StopWait 600 10
+ hdbnsutil -sr_register --remoteHost=hana-s1-db1 --remoteInstance=03 --replicationMode=sync --name=HANA_S2
+ sapcontrol -nr 03 -function StartSystem
+ ```
+
+3. **[1]** Check replication status
+
+ Check the replication status and wait until all databases are in sync.
+
+ ```bash
+ sudo su - hn1adm -c "python /usr/sap/HN1/HDB03/exe/python_support/systemReplicationStatus.py"
+ # | Database | Host | Port | Service Name | Volume ID | Site ID | Site Name | Secondary | Secondary | Secondary | Secondary | Secondary | Replication | Replication | Replication |
+ # | | | | | | | | Host | Port | Site ID | Site Name | Active Status | Mode | Status | Status Details |
+ # | -- | - | -- | | | - | | - | | | | - | -- | -- | -- |
+ # | HN1 | hana-s1-db3 | 30303 | indexserver | 5 | 1 | HANA_S1 | hana-s2-db3 | 30303 | 2 | HANA_S2 | YES | SYNC | ACTIVE | |
+ # | SYSTEMDB | hana-s1-db1 | 30301 | nameserver | 1 | 1 | HANA_S1 | hana-s2-db1 | 30301 | 2 | HANA_S2 | YES | SYNC | ACTIVE | |
+ # | HN1 | hana-s1-db1 | 30307 | xsengine | 2 | 1 | HANA_S1 | hana-s2-db1 | 30307 | 2 | HANA_S2 | YES | SYNC | ACTIVE | |
+ # | HN1 | hana-s1-db1 | 30303 | indexserver | 3 | 1 | HANA_S1 | hana-s2-db1 | 30303 | 2 | HANA_S2 | YES | SYNC | ACTIVE | |
+ # | HN1 | hana-s1-db2 | 30303 | indexserver | 4 | 1 | HANA_S1 | hana-s2-db2 | 30303 | 2 | HANA_S2 | YES | SYNC | ACTIVE | |
+ #
+ # status system replication site "2": ACTIVE
+ # overall system replication status: ACTIVE
+ #
+ # Local System Replication State
+ #
+ # mode: PRIMARY
+ # site id: 1
+ # site name: HANA_S1
+ ```
+
+4. **[1,2]** Change the HANA configuration so that communication for HANA system replication is directed through the HANA system replication virtual network interfaces.
+ - Stop HANA on both sites
+ ```bash
+ sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StopSystem HDB
+ ```
+
+ - Edit global.ini to add the host mapping for HANA system replication: use the IP addresses from the `hsr` subnet.
+ ```bash
+ sudo vi /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini
+ #Add the section
+ [system_replication_hostname_resolution]
+ 10.23.1.196 = hana-s1-db1
+ 10.23.1.197 = hana-s1-db2
+ 10.23.1.198 = hana-s1-db3
+ 10.23.1.199 = hana-s2-db1
+ 10.23.1.200 = hana-s2-db2
+ 10.23.1.201 = hana-s2-db3
+ ```
+
+ - Start HANA on both sites
+ ```bash
+ sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StartSystem HDB
+ ```
+
+ For more information, see [Host Name resolution for System Replication](https://help.sap.com/viewer/eb3777d5495d46c5b2fa773206bbfb46/1.0.12/en-US/c0cba1cb2ba34ec89f45b48b2157ec7b.html).
+
+## Create file system resources
+
+Create a dummy file system cluster resource that monitors and reports failures if there's a problem accessing the NFS-mounted file system `/hana/shared`. That allows the cluster to trigger a failover if access to `/hana/shared` is lost. For more details, see [Handling failed NFS share in SUSE HA cluster for HANA system replication](https://www.suse.com/support/kb/doc/?id=000019904).
+
+1. **[1]** Place Pacemaker in maintenance mode in preparation for the creation of the HANA cluster resources.
+ ```bash
+ crm configure property maintenance-mode=true
+ ```
+
+2. **[1,2]** Create the directory on the ANF `/hana/shared` volume, which will be used by the special file system monitoring resource. The directories need to be created on both sites.
+ ```bash
+ mkdir -p /hana/shared/HN1/check
+ ```
+
+3. **[AH]** Create the directory that will be used to mount the special file system monitoring resource. The directory needs to be created on all HANA cluster nodes.
+ ```bash
+ mkdir -p /hana/check
+ ```
+
+4. **[1]** Create the file system cluster resources.
+
+ ```bash
+ crm configure primitive fs_HN1_HDB03_fscheck Filesystem \
+ params device="/hana/shared/HN1/check" \
+ directory="/hana/check" fstype=nfs4 \
+ options="bind,defaults,rw,hard,proto=tcp,intr,noatime,vers=4.1,lock" \
+ op monitor interval=120 timeout=120 on-fail=fence \
+ op_params OCF_CHECK_LEVEL=20 \
+ op start interval=0 timeout=120 op stop interval=0 timeout=120
+
+ crm configure clone cln_fs_HN1_HDB03_fscheck fs_HN1_HDB03_fscheck \
+ meta clone-node-max=1 interleave=true
+
+ crm configure location loc_cln_fs_HN1_HDB03_fscheck_not_on_mm \
+ cln_fs_HN1_HDB03_fscheck -inf: hana-s-mm
+ ```
+
+   The `OCF_CHECK_LEVEL=20` attribute is added to the monitor operation, so that monitor operations perform a read/write test on the file system. Without this attribute, the monitor operation only verifies that the file system is mounted. This can be a problem, because when connectivity is lost, the file system may remain mounted despite being inaccessible.
+
+   The `on-fail=fence` attribute is also added to the monitor operation. With this option, if the monitor operation fails on a node, that node is immediately fenced.
+
+## Create SAP HANA cluster resources
+
+1. **[1,2]** Install the HANA "system replication hook". The hook needs to be installed on one HANA DB node on each system replication site.
+
+ 1. Prepare the hook as `root`
+ ```bash
+ mkdir -p /hana/shared/myHooks
+ cp /usr/share/SAPHanaSR-ScaleOut/SAPHanaSR.py /hana/shared/myHooks
+ chown -R hn1adm:sapsys /hana/shared/myHooks
+ ```
+
+ 2. Stop HANA on both system replication sites. Execute as <sid\>adm:
+ ```bash
+ sapcontrol -nr 03 -function StopSystem
+ ```
+
+ 3. Adjust `global.ini`
+ ```bash
+ # add to global.ini
+ [ha_dr_provider_SAPHanaSR]
+ provider = SAPHanaSR
+ path = /hana/shared/myHooks
+ execution_order = 1
+
+ [trace]
+ ha_dr_saphanasr = info
+ ```
+
+2. **[AH]** The cluster requires sudoers configuration on the cluster nodes for <sid\>adm. In this example, that's achieved by creating a new file. Execute the commands as `root`.
+ ```bash
+ cat << EOF > /etc/sudoers.d/20-saphana
+ # SAPHanaSR-ScaleOut needs for srHook
+ Cmnd_Alias SOK = /usr/sbin/crm_attribute -n hana_hn1_glob_srHook -v SOK -t crm_config -s SAPHanaSR
+ Cmnd_Alias SFAIL = /usr/sbin/crm_attribute -n hana_hn1_glob_srHook -v SFAIL -t crm_config -s SAPHanaSR
+ hn1adm ALL=(ALL) NOPASSWD: SOK, SFAIL
+ EOF
+ ```
+
+3. **[1,2]** Start SAP HANA on both replication sites. Execute as <sid\>adm.
+
+ ```bash
+ sapcontrol -nr 03 -function StartSystem
+ ```
+
+4. **[1]** Verify the hook installation. Execute as <sid\>adm on the active HANA system replication site.
+
+ ```bash
+ cdtrace
+ awk '/ha_dr_SAPHanaSR.*crm_attribute/ \
+ { printf "%s %s %s %s\n",$2,$3,$5,$16 }' nameserver_*
+
+ # 2021-03-31 01:02:42.695244 ha_dr_SAPHanaSR SFAIL
+ # 2021-03-31 01:02:58.966856 ha_dr_SAPHanaSR SFAIL
+ # 2021-03-31 01:03:04.453100 ha_dr_SAPHanaSR SFAIL
+ # 2021-03-31 01:03:04.619768 ha_dr_SAPHanaSR SFAIL
+ # 2021-03-31 01:03:04.743444 ha_dr_SAPHanaSR SFAIL
+ # 2021-03-31 01:04:15.062181 ha_dr_SAPHanaSR SOK
+
+ ```
+
+5. **[1]** Create the HANA cluster resources. Execute the following commands as `root`.
+   1. Make sure the cluster is already in maintenance mode.
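+   A quick way to check, for example (a sketch):
+   ```bash
+   sudo crm configure show | grep maintenance-mode
+   # The output should contain maintenance-mode=true
+   ```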
+
+ 2. Next, create the HANA Topology resource.
+ ```bash
+ sudo crm configure primitive rsc_SAPHanaTopology_HN1_HDB03 ocf:suse:SAPHanaTopology \
+ op monitor interval="10" timeout="600" \
+ op start interval="0" timeout="600" \
+ op stop interval="0" timeout="300" \
+ params SID="HN1" InstanceNumber="03"
+
+ sudo crm configure clone cln_SAPHanaTopology_HN1_HDB03 rsc_SAPHanaTopology_HN1_HDB03 \
+ meta clone-node-max="1" target-role="Started" interleave="true"
+ ```
+
+ 3. Next, create the HANA instance resource.
+ > [!NOTE]
+   > This article contains references to the terms *master* and *slave*, terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article.
+
+ ```bash
+ sudo crm configure primitive rsc_SAPHana_HN1_HDB03 ocf:suse:SAPHanaController \
+ op start interval="0" timeout="3600" \
+ op stop interval="0" timeout="3600" \
+ op promote interval="0" timeout="3600" \
+ op monitor interval="60" role="Master" timeout="700" \
+ op monitor interval="61" role="Slave" timeout="700" \
+ params SID="HN1" InstanceNumber="03" PREFER_SITE_TAKEOVER="true" \
+ DUPLICATE_PRIMARY_TIMEOUT="7200" AUTOMATED_REGISTER="false"
+
+ sudo crm configure ms msl_SAPHana_HN1_HDB03 rsc_SAPHana_HN1_HDB03 \
+ meta clone-node-max="1" master-max="1" interleave="true"
+ ```
+ > [!IMPORTANT]
+   > As a best practice, we recommend that you set AUTOMATED_REGISTER to **false** only while performing thorough failover tests, to prevent a failed primary instance from automatically registering as secondary. Once the failover tests have completed successfully, set AUTOMATED_REGISTER to **true**, so that system replication can resume automatically after takeover.
+
+ 4. Create Virtual IP and associated resources.
+ ```bash
+ sudo crm configure primitive rsc_ip_HN1_HDB03 ocf:heartbeat:IPaddr2 \
+ op monitor interval="10s" timeout="20s" \
+ params ip="10.23.0.27"
+
+ sudo crm configure primitive rsc_nc_HN1_HDB03 azure-lb port=62503 \
+ meta resource-stickiness=0
+
+ sudo crm configure group g_ip_HN1_HDB03 rsc_ip_HN1_HDB03 rsc_nc_HN1_HDB03
+ ```
+
+ 5. Create the cluster constraints
+ ```bash
+ # Colocate the IP with HANA master
+ sudo crm configure colocation col_saphana_ip_HN1_HDB03 4000: g_ip_HN1_HDB03:Started \
+ msl_SAPHana_HN1_HDB03:Master
+
+ # Start HANA Topology before HANA instance
+ sudo crm configure order ord_SAPHana_HN1_HDB03 Optional: cln_SAPHanaTopology_HN1_HDB03 \
+ msl_SAPHana_HN1_HDB03
+
+ # HANA resources don't run on the majority maker node
+ sudo crm configure location loc_SAPHanaCon_not_on_majority_maker msl_SAPHana_HN1_HDB03 -inf: hana-s-mm
+ sudo crm configure location loc_SAPHanaTop_not_on_majority_maker cln_SAPHanaTopology_HN1_HDB03 -inf: hana-s-mm
+ ```
+
+6. **[1]** Configure additional cluster properties
+ ```bash
+ sudo crm configure rsc_defaults resource-stickiness=1000
+ sudo crm configure rsc_defaults migration-threshold=50
+ ```
+7. **[1]** Verify the communication between the hook and the cluster.
+ ```bash
+ crm_attribute -G -n hana_hn1_glob_srHook
+ # Expected result
+ # crm_attribute -G -n hana_hn1_glob_srHook
+ # scope=crm_config name=hana_hn1_glob_srHook value=SOK
+ ```
+
+8. **[1]** Place the cluster out of maintenance mode. Make sure that the cluster status is ok and that all of the resources are started.
+ ```bash
+ # Cleanup any failed resources - the following command is example
+ crm resource cleanup rsc_SAPHana_HN1_HDB03
+
+ # Place the cluster out of maintenance mode
+ sudo crm configure property maintenance-mode=false
+ ```
+
+ > [!NOTE]
+ > The timeouts in the above configuration are just examples and may need to be adapted to the specific HANA setup. For instance, you may need to increase the start timeout, if it takes longer to start the SAP HANA database.
+
+
+## Test SAP HANA failover
+
+> [!NOTE]
+> This article contains references to the terms *master* and *slave*, terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article.
+
+1. Before you start a test, check the cluster and SAP HANA system replication status.
+
+ a. Verify that there are no failed cluster actions
+ ```bash
+ #Verify that there are no failed cluster actions
+ crm status
+ # Example
+ #7 nodes configured
+ #24 resource instances configured
+ #
+ #Online: [ hana-s-mm hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
+ #
+ #Full list of resources:
+ #
+ # stonith-sbd (stonith:external/sbd): Started hana-s-mm
+ # Clone Set: cln_fs_HN1_HDB03_fscheck [fs_HN1_HDB03_fscheck]
+ # Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
+ # Stopped: [ hana-s-mm ]
+ # Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]
+ # Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
+ # Stopped: [ hana-s-mm ]
+ # Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
+ # Masters: [ hana-s1-db1 ]
+ # Slaves: [ hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
+ # Stopped: [ hana-s-mm ]
+ # Resource Group: g_ip_HN1_HDB03
+ # rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hana-s1-db1
+ # rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hana-s1-db1
+ ```
+
+ b. Verify that SAP HANA system replication is in sync
+
+ ```bash
+ # Verify HANA HSR is in sync
+ sudo su - hn1adm -c "python /usr/sap/HN1/HDB03/exe/python_support/systemReplicationStatus.py"
+ #| Database | Host | Port | Service Name | Volume ID | Site ID | Site Name | Secondary | Secondary | Secondary | Secondary | Secondary | Replication | Replication | Replication |
+ #| | | | | | | | Host | Port | Site ID | Site Name | Active Status | Mode | Status | Status Details |
+ #| -- | | -- | | | - | | | | | | - | -- | -- | -- |
+ #| SYSTEMDB | hana-s1-db1 | 30301 | nameserver | 1 | 1 | HANA_S1 | hana-s2-db1 | 30301 | 2 | HANA_S2 | YES | SYNC | ACTIVE | |
+ #| HN1 | hana-s1-db1 | 30307 | xsengine | 2 | 1 | HANA_S1 | hana-s2-db1 | 30307 | 2 | HANA_S2 | YES | SYNC | ACTIVE | |
+ #| HN1 | hana-s1-db1 | 30303 | indexserver | 3 | 1 | HANA_S1 | hana-s2-db1 | 30303 | 2 | HANA_S2 | YES | SYNC | ACTIVE | |
+ #| HN1 | hana-s1-db3 | 30303 | indexserver | 4 | 1 | HANA_S1 | hana-s2-db3 | 30303 | 2 | HANA_S2 | YES | SYNC | ACTIVE | |
+ #| HN1 | hana-s1-db2 | 30303 | indexserver | 5 | 1 | HANA_S1 | hana-s2-db2 | 30303 | 2 | HANA_S2 | YES | SYNC | ACTIVE | |
+ #
+ #status system replication site "1": ACTIVE
+ #overall system replication status: ACTIVE
+ #
+ #Local System Replication State
+ #~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ #
+ #mode: PRIMARY
+ #site id: 1
+ #site name: HANA_S1
+ ```
+
+2. We recommend that you thoroughly validate the SAP HANA cluster configuration by performing the tests documented in [HA for SAP HANA on Azure VMs on SLES](./sap-hana-high-availability.md#test-the-cluster-setup) and in the [SLES Replication scale-out Performance Optimized Scenario](https://documentation.suse.com/sbp/all/html/SLES4SAP-hana-scaleOut-PerfOpt-12/https://docsupdatetracker.net/index.html#_testing_the_cluster).
+
+3. Verify the cluster configuration for a failure scenario in which a node loses access to the NFS share (`/hana/shared`).
+
+   The SAP HANA resource agents depend on binaries stored on `/hana/shared` to perform operations during failover. File system `/hana/shared` is mounted over NFS in the presented configuration. A test that can be performed is to create a temporary firewall rule that blocks access to the `/hana/shared` ANF volume on one of the primary site VMs. This approach validates that the cluster fails over if access to `/hana/shared` is lost on the active system replication site.
+
+   **Expected result**: When you block access to the `/hana/shared` ANF volume on one of the primary site VMs, the monitoring operation that performs a read/write test on the file system fails, because it can't access the file system, and triggers a HANA resource failover. The same result is expected when your HANA node loses access to the NFS share.
+
+ You can check the state of the cluster resources by executing `crm_mon` or `crm status`. Resource state before starting the test:
+ ```bash
+ # Output of crm_mon
+ #7 nodes configured
+ #24 resource instances configured
+ #
+ #Online: [ hana-s-mm hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
+ #
+ #Active resources:
+ #
+ #stonith-sbd (stonith:external/sbd): Started hana-s-mm
+ # Clone Set: cln_fs_HN1_HDB03_fscheck [fs_HN1_HDB03_fscheck]
+ # Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
+ # Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]
+ # Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
+ # Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
+ # Masters: [ hana-s1-db1 ]
+ # Slaves: [ hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
+ # Resource Group: g_ip_HN1_HDB03
+ # rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hana-s2-db1
+ # rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hana-s2-db1
+ ```
+
+ To simulate failure for `/hana/shared`, first confirm the IP address for the `/hana/shared` ANF volume on the primary site. You can do that by running `df -kh|grep /hana/shared`.
+ Then, set up a temporary firewall rule to block access to the IP address of the `/hana/shared` ANF volume by executing the following command on one of the primary HANA system replication site VMs.
+ In this example the command was executed on hana-s1-db1.
+
+ ```bash
+ iptables -A INPUT -s 10.23.1.7 -j DROP; iptables -A OUTPUT -d 10.23.1.7 -j DROP
+ ```
+
+ The cluster resources will be migrated to the other HANA system replication site.
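+
+   After the resources have moved, remove the temporary firewall rules again so that the node regains access to `/hana/shared` (a sketch, assuming the same ANF IP address as above):
+
+   ```bash
+   iptables -D INPUT -s 10.23.1.7 -j DROP; iptables -D OUTPUT -d 10.23.1.7 -j DROP
+   ```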
+
+   If you set AUTOMATED_REGISTER="false", you'll need to configure SAP HANA system replication on the secondary site. In this case, you can execute these commands to reconfigure SAP HANA as secondary.
+
+ ```bash
+ # Execute on the secondary
+ su - hn1adm
+ # Make sure HANA is not running on the secondary site. If it is started, stop HANA
+ sapcontrol -nr 03 -function StopWait 600 10
+ # Register the HANA secondary site
+ hdbnsutil -sr_register --name=HANA_S1 --remoteHost=hana-s2-db1 --remoteInstance=03 --replicationMode=sync
+   # Switch back to root and clean up failed resources
+   crm resource cleanup rsc_SAPHana_HN1_HDB03
+ ```
+
+ The state of the resources, after the test:
+
+ ```bash
+ # Output of crm_mon
+ #7 nodes configured
+ #24 resource instances configured
+ #
+ #Online: [ hana-s-mm hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
+ #
+ #Active resources:
+ #
+ #stonith-sbd (stonith:external/sbd): Started hana-s-mm
+ # Clone Set: cln_fs_HN1_HDB03_fscheck [fs_HN1_HDB03_fscheck]
+ # Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
+ # Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]
+    #     Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
+ # Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
+ # Masters: [ hana-s2-db1 ]
+ # Slaves: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db2 hana-s2-db3 ]
+ # Resource Group: g_ip_HN1_HDB03
+ # rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hana-s2-db1
+ # rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hana-s2-db1
+ ```
++
+## Next steps
+
+* [Azure Virtual Machines planning and implementation for SAP][planning-guide]
+* [Azure Virtual Machines deployment for SAP][deployment-guide]
+* [Azure Virtual Machines DBMS deployment for SAP][dbms-guide]
+* [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md)
+* To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see [High Availability of SAP HANA on Azure Virtual Machines (VMs)][sap-hana-ha].
virtual-wan Scenario Secured Hub App Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-wan/scenario-secured-hub-app-gateway.md
+
+ Title: 'Secure traffic between Application Gateway and backend pools'
+
+description: Scenarios for routing - secure traffic traveling through an application gateway deployed in a spoke VNet connected to a secured Virtual WAN hub.
+ Last updated : 04/12/2021
+# Scenario: Secure traffic between Application Gateway and backend pools
+
+When working with Virtual WAN virtual hub routing, there are quite a few available scenarios. In this scenario, a user's traffic enters Azure through an application gateway deployed in a spoke VNet that is connected to a secured Virtual WAN hub (Virtual WAN hub with an Azure Firewall). The goal is to use the Azure Firewall in the secured virtual hub to inspect traffic between the application gateway and the backend pools.
+
+There are two specific design patterns in this scenario, depending on whether the application gateway and backend pools are in the same VNet, or different VNets.
+
+* **Scenario 1:** The application gateway and backend pools are in the same virtual network peered to a Virtual WAN hub (separate subnets).
+* **Scenario 2:** The application gateway and backend pools are in different virtual networks peered to a Virtual WAN hub.
+
+## <a name="scenario-1"></a>Scenario 1 - Same VNet
+
+In this scenario, the application gateway and backend pools are in the same virtual network peered to a Virtual WAN hub (separate subnets).
++
+### Workflow
+
+Currently, routes that are advertised from the Virtual WAN route table to spoke virtual networks are applied to the entire virtual network, and not on the subnets of the spoke VNet. As a result, user-defined routes are necessary to enable this scenario. For information about user-defined routes (UDR), see [Virtual Network custom routes](../virtual-network/virtual-networks-udr-overview.md#user-defined).
++
+1. In Azure Firewall Manager, on the spoke virtual network containing the application gateway and backend pools, select **Enable Secure Internet Traffic** and **Enable Secure Private Traffic**.
+1. Configure user-defined routes (UDRs) on the application gateway subnet (an equivalent Azure CLI sketch follows this list).
+
+ * To ensure the application gateway is able to send traffic directly to the Internet, specify the following UDR:
+
+      * **Address Prefix:** 0.0.0.0/0
+ * **Next Hop:** Internet
+
+ * To ensure the application gateway is able to send traffic to the backend pool via Azure Firewall in the Virtual WAN hub, specify the following UDR:
+
+ * **Address Prefix:** Backend pool subnet (10.2.0.0/24)
+ * **Next Hop:** Azure Firewall private IP
+
+1. Configure a user-defined route (UDR) on the backend pool subnet.
+
+ * **Address Prefix:** Application Gateway subnet
+ * **Next Hop:** Azure Firewall private IP
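+
+These routes are typically created in the Azure portal, but the following Azure CLI sketch shows an equivalent configuration for the application gateway subnet. The resource group, virtual network, subnet, and route table names, and the firewall private IP address, are hypothetical placeholders, not values defined by this scenario:
+
+```azurecli
+# Route table and routes for the Application Gateway subnet (names and IP addresses are placeholders)
+az network route-table create --resource-group MyRG --name rt-appgw
+az network route-table route create --resource-group MyRG --route-table-name rt-appgw \
+  --name to-internet --address-prefix 0.0.0.0/0 --next-hop-type Internet
+az network route-table route create --resource-group MyRG --route-table-name rt-appgw \
+  --name to-backend-pool --address-prefix 10.2.0.0/24 \
+  --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.0.4
+# Associate the route table with the Application Gateway subnet
+az network vnet subnet update --resource-group MyRG --vnet-name spoke-vnet \
+  --name AppGwSubnet --route-table rt-appgw
+```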
+
+## <a name="scenario-2"></a>Scenario 2 - Different VNets
+
+In this scenario, application gateway and backend pools are in different virtual networks that are peered to a Virtual WAN hub.
++
+### Workflow
+
+Currently, routes that are advertised from the Virtual WAN route table to spoke virtual networks are applied to the entire virtual network, and not on the subnets of the spoke VNet. As a result, user-defined routes are necessary to enable this scenario. For information about user-defined routes (UDR), see [Virtual Network custom routes](../virtual-network/virtual-networks-udr-overview.md#user-defined).
+
+1. In **Azure Firewall Manager**, select **Enable Secure Internet Traffic** and **Enable Secure Private Traffic** on both of the spoke virtual networks.
+
+1. Configure user-defined routes (UDRs) on the application gateway subnet. To ensure the application gateway is able to send traffic directly to the Internet, specify the following UDR:
+
+    * **Address Prefix:** 0.0.0.0/0
+ * **Next Hop:** Internet
+
+## Next steps
+
+* For more information about virtual hub routing, see [About virtual hub routing](about-virtual-hub-routing.md).
+* For more information about user-defined routes, see [Virtual Network custom routes](../virtual-network/virtual-networks-udr-overview.md#user-defined).
+* For information about Virtual WAN secured virtual hubs, see [Secured virtual hubs](../firewall-manager/secured-virtual-hub.md).